Space System Acquisition Risks and Keys to Addressing Them
GAO-06-776R: Published: Jun 1, 2006. Publicly Released: Jun 1, 2006.
On April 6, 2006, we testified before Congress on the Department of Defense's (DOD) space acquisitions. In fiscal year 2007, DOD expects to spend nearly $7 billion to acquire space-based capabilities to support current military and other government operations as well as to enable DOD to transform the way it collects and disseminates information, gathers data on its adversaries, and attacks targets. Despite its growing investment in space, however, DOD's space system acquisitions have experienced problems over the past several decades that have driven up costs by hundreds of millions, even billions, of dollars; stretched schedules by years; and increased performance risks. In some cases, capabilities have not been delivered to the warfighter after decades of development. Within this context, Congress requested that we provide additional comments regarding the need for better program management, space acquisition policy, and DOD's Space Radar and Transformational Satellite Communications System acquisitions.
GAO provided information on the top obstacles to achieving program success from the point of view of program managers. We found that the top obstacles are funding instability, requirements instability, staffing problems, excessive oversight, and inexperienced leadership. Estimated costs have been high, and have grown, for DOD's Space Based Infrared System (SBIRS)-High program, the Evolved Expendable Launch Vehicle program, the Advanced Extremely High Frequency (AEHF) Satellite program, the National Polar-orbiting Operational Environmental Satellite System (NPOESS), and the Space Based Infrared System-Low program. Additionally, the Space Based Infrared System-Low and SBIRS-High programs both overpromised capabilities. DOD has been taking actions to improve cost estimating, and we are in the process of assessing those actions. To address the problem of low levels of technological maturity, DOD has committed to delay the development of one new major space program--the Transformational Satellite Communications System (TSAT)--until technology needs are better understood. It has also committed to deliver new space-based capabilities in an incremental fashion so that acquisition efforts can be more executable and the science and technology base can be more engaged in major space programs. DOD has also faced issues regarding the addition of new requirements well into the acquisition phase. Our past reports have pointed to requirements-setting problems in the AEHF, NPOESS, and SBIRS-High programs and noted that DOD could take further steps to strengthen requirements setting by implementing processes and policies, as needed, that stabilize requirements for acquisitions.
GAO and DOD have disagreed on what Technology Readiness Levels (TRL) should be at major decision points for space system acquisitions, and will continue to disagree as long as GAO bases its reviews of space programs on best practices and DOD continues to use the wide leeway it is afforded regarding critical technologies and their maturity levels to initiate product development. We identified the main difference between TRL 6 and TRL 7 as the testing environment: for TRL 6, the testing environment is a laboratory or a simulated operational environment, while for TRL 7, it is an operational environment. Whether to achieve TRL 6 or TRL 7 by the critical design review (CDR) is a matter of risk--if the critical technologies in question are supremely important and have no space-based heritage, then it is warranted to test the technologies in space before proceeding through CDR. To ensure its integration efforts are successful, the TSAT program is planning to demonstrate critical technologies at TRL 6 when key integration tests are conducted in fiscal year 2007; use the results of its first round of integration tests to refine testing during the second, more comprehensive round; conduct a series of independent tests to verify the results of contractor testing; and assess the results of the main integration tests before making a decision to enter the production development phase. According to GAO's prior work on best practices, to ensure successful integration, program managers should ensure that (1) the right validation events occur at the right times, (2) each validation event produces quality results, and (3) the knowledge gained from an event is used to improve the product. Finally, the program manager needs assurance that all testing that has been done is reflective of the capabilities that the program is trying to deliver.