This is the accessible text file for GAO report number GAO-09-3SP 
entitled 'GAO Cost Estimating And Assessment Guide' which was released 
on March 2, 2009. 

This text file was formatted by the U.S. Government Accountability 
Office (GAO) to be accessible to users with visual impairments, as part 
of a longer term project to improve GAO products' accessibility. Every 
attempt has been made to maintain the structural and data integrity of 
the original printed product. Accessibility features, such as text 
descriptions of tables, consecutively numbered footnotes placed at the 
end of the file, and the text of agency comment letters, are provided 
but may not exactly duplicate the presentation or format of the printed 
version. The portable document format (PDF) file is an exact electronic 
replica of the printed version. We welcome your feedback. Please E-mail 
your comments regarding the contents or accessibility features of this 
document to Webmaster@gao.gov. 

This is a work of the U.S. government and is not subject to copyright 
protection in the United States. It may be reproduced and distributed 
in its entirety without further permission from GAO. Because this work 
may contain copyrighted images or other material, permission from the 
copyright holder may be necessary if you wish to reproduce this 
material separately. 

United States Government Accountability Office: 
GAO: 

Applied Research and Methods: 

GAO Cost Estimating And Assessment Guide: 
Best Practices For Developing And Managing Capital Program Costs: 

March 2009: 

GAO-09-3SP: 

Preface: 

The U.S. Government Accountability Office is responsible for, among 
other things, assisting the Congress in its oversight of the federal 
government, including agencies’ stewardship of public funds. To use 
public funds effectively, the government must meet the demands of 
today’s changing world by employing effective management practices and 
processes, including the measurement of government program performance. 
In addition, legislators, government officials, and the public want to 
know whether government programs are achieving their goals and what 
their costs are. To make those evaluations, reliable cost information 
is required and federal standards have been issued for the cost 
accounting that is needed to prepare that information.[Footnote 1] We 
developed the Cost Guide in order to establish a consistent methodology 
that is based on best practices and that can be used across the federal 
government for developing, managing, and evaluating capital program 
cost estimates. 

For the purposes of this guide, a cost estimate is the summation of 
individual cost elements, using established methods and valid data, to 
estimate the future costs of a program, based on what is known today. 
[Footnote 2] The management of a cost estimate involves continually 
updating the estimate with actual data as they become available, 
revising the estimate to reflect changes, and analyzing differences 
between estimated and actual costs—for example, using data from a 
reliable earned value management (EVM) system.[Footnote 3] 
 
The ability to generate reliable cost estimates is a critical function, 
necessary to support the Office of Management and Budget’s (OMB) 
capital programming process.[Footnote 4] Without this ability, agencies 
are at risk of experiencing cost overruns, missed deadlines, and 
performance shortfalls—all recurring problems that our program 
assessments too often reveal. Furthermore, cost increases often mean 
that the government cannot fund as many programs as intended or deliver 
them when promised. The methodology outlined in this guide is a 
compilation of best practices that federal cost estimating 
organizations and industry use to develop and maintain reliable cost 
estimates throughout the life of a government acquisition program. The 
guide will also serve as a guiding principle for our auditors as they 
evaluate the economy, efficiency, and effectiveness of government 
programs. 

The U.S. Government Accountability Office, the Congressional Budget 
Office (CBO), and others have shown through budget simulations that the 
nation is facing a large and growing structural deficit in the long 
term, primarily because the population is aging and health care costs 
are rising. As Comptroller General David Walker noted, “Continuing on 
this unsustainable path will gradually erode, if not suddenly damage, 
our economy, our standard of living and ultimately our national 
security.”[Footnote 5] New budgetary demands and demographic trends 
will place serious budgetary pressures on federal discretionary 
spending, as well as on other federal policies and programs, in the 
coming years. 

As resources become scarce, competition for them will increase. It is 
imperative, therefore, that government acquisition programs deliver as 
promised, not only because of their value to their users but also 
because every dollar spent on one program will mean one less dollar 
available to fund other efforts. To get better results, programs will 
need 
higher levels of knowledge when they start and standardized monitoring 
metrics such as EVM so that better estimates can be made of total 
program costs at completion. 

[End of Preface] 

Contents: 

Preface: 

Contents: 

Abbreviations: 

Introduction: 
The Guide’s Case Studies: 
The Cost Guide in Relation to Established Standards: 
The Guide’s Readers: 
Acknowledgments: 

Chapter 1: 
The Characteristics of Credible Cost Estimates and a Reliable Process 
for Creating Them: 
Basic Characteristics of Credible Cost Estimates: 
A Reliable Process for Developing Credible Cost Estimates: 

Chapter 2: 
Why Government Programs Need Cost Estimates and the Challenges in 
Developing Them: 
Cost Estimating Challenges: 
Earned Value Management Challenges: 

Chapter 3: 
Criteria for Cost Estimating, EVM, and Data Reliability: 

Chapter 4: 
Cost Analysis Overview: 
Differentiating Cost Analysis and Cost Estimating: 
Main Cost Estimate Categories: 
The Overall Significance of Cost Estimates: 
The Importance of Cost Estimates in Establishing Budgets: 
Cost Estimates and Affordability: 
Evolutionary Acquisition and Cost Estimation: 

Chapter 5: 
The Cost Estimate’s Purpose, Scope, and Schedule: 
Purpose: 
Scope: 

Chapter 6: 
The Cost Assessment Team: 
Team Composition and Organization: 
Cost Estimating Team Best Practices: 
Certification and Training for Cost Estimating and EVM Analysis: 

Chapter 7: 
Technical Baseline Description: 
Definition and Purpose: 
Process: 
Schedule: 
Contents: 
Key System Characteristics and Performance Parameters: 

Chapter 8: 
Work Breakdown Structure: 
Best Practice: Product-Oriented WBS: 
Common WBS Elements: 
WBS Development: 
Standardized WBS: 
WBS and Scheduling: 
WBS and EVM: 
WBS and Risk Management: 
WBS Benefits: 

Chapter 9: 
Ground Rules and Assumptions: 
Assumptions: 
Global and Element-Specific Ground Rules and Assumptions: 
Assumptions, Sensitivity, and Risk Analysis: 

Chapter 10: 
Data: 
Data Collection: 
Types of Data: 
Sources of Data: 
Data Applicability: 
Validating and Analyzing the Data: 
EVM Data Reliability: 
Data Normalization: 
Recurring and Nonrecurring Costs: 
Inflation Adjustments: 
Selecting the Proper Indexes: 
Data Documentation: 

Chapter 11: 
Developing a Point Estimate: 
Cost Estimating Methods: 
Production Rate Effects on Learning: 
Pulling the Point Estimate Together: 

Chapter 12: 
Estimating Software Costs: 
Unique Components of Software Estimation: 
Estimating Software Size: 
Estimating Software Development Effort: 
Software Maintenance: 
Parametric Software Estimation: 
Commercial Off-the-Shelf Software: 
Enterprise Resource Planning Software:
Software Costs Must Also Account for Information Technology 
Infrastructure and Services: 
Unique Components of IT Estimation: 

Chapter 13: 
Sensitivity Analysis: 
Sensitivity Factors: 
Steps in Performing a Sensitivity Analysis: 
Sensitivity Analysis Benefits: 

Chapter 14: 
Cost Risk and Uncertainty: 
The Difference Between Risk and Uncertainty: 
Point Estimates Alone Are Insufficient for Good Decisions: 
Budgeting to a Realistic Point Estimate: 
Developing A Credible S Curve of Potential Program Costs: 
Risk Management: 

Chapter 15: 
Validating the Estimate: 
The Cost Estimating Community’s Best Practices for Validating 
Estimates: 

Chapter 16: 
Documenting the Estimate: 
Elements of Cost Estimate Documentation: 
Other Considerations: 

Chapter 17: 
Presenting the Estimate to Management: 

Chapter 18: 
Managing Program Costs: Planning: 
The Nature and History of EVM: 
Implementing EVM: 
Federal and Industry Guidelines for Implementing EVM: 
The Thirteen Steps in the EVM Process: 
Integrated Baseline Reviews: 
Award Fees: 
Progress and Performance-Based Payments Under Fixed-Price Contracts: 

Chapter 19: 
Managing Program Costs: 
Execution: 
Contract Performance Reports: 
Monthly EVM Analysis: 
Project Future Performance: 
Provide Analysis to Management: 
Continue EVM Until the Program is Complete: 

Chapter 20: 
Managing Program Costs: 
Updating: 
Incorporating Authorized Changes into the Performance Measurement 
Baseline: 
Using EVM System Surveillance to Keep the Performance Measurement
Baseline Current: 
Overtarget Baselines and Schedules: 
Update the Program Cost Estimate with Actual Costs: 
Keep Management Updated: 

Appendixes: 
 
Appendix 1: Auditing Agencies and Their Web Sites: 

Appendix 2: Case Study Backgrounds: 

Appendix 3: Experts Who Helped Develop This Guide: 

Appendix 4: The Federal Budget Process: 

Appendix 5: Federal Cost Estimating and EVM Legislation, Regulations, 
Policies, and Guidance: 

Appendix 6: Data Collection Instrument: 

Appendix 7: 
Data Collection Instrument: Data Request Rationale: 

Appendix 8: SEI Checklist: 

Appendix 9: Examples of Work Breakdown Structures: 

Appendix 10: Schedule Risk Analysis: 

Appendix 11: Learning Curve Analysis: 

Appendix 12: Technology Readiness Levels: 

Appendix 13: EVM-Related Award Fee Criteria: 

Appendix 14: Integrated Baseline Review Case Study and Other 
Supplemental Tools: 
Exhibit A: 
Exhibit B: 
Exhibit C: 
Exhibit D: 

Appendix 15: Common Risks to Consider in Software Cost Estimating: 

Appendix 16: Contacts and Acknowledgments: 

References: 

Image Sources: 

List of figures: 

Figure 1: The Cost Estimating Process: 

Figure 2: Challenges Cost Estimators Typically Face: 

Figure 3: Life-Cycle Cost Estimate for a Space System: 

Figure 4: Cone of Uncertainty: 

Figure 5: An Affordability Assessment: 

Figure 6: Typical Capital Asset Acquisition Funding Profiles by Phase: 

Figure 7: Evolutionary and Big Bang Acquisition Compared: 

Figure 8: Incremental Development: 

Figure 9: Disciplines and Concepts in Cost Analysis: 

Figure 10: A Product-Oriented Work Breakdown Structure: 

Figure 11: A Work Breakdown Structure with Common Elements: 

Figure 12: A Contract Work Breakdown Structure: 

Figure 13: A Learning Curve: 

Figure 14: A Sensitivity Analysis That Creates a Range around a Point 
Estimate: 

Figure 16: A Cumulative Probability Distribution, or S Curve: 

Figure 17: A Risk Cube Two-Dimensional Matrix: 

Figure 18: The Distribution of Sums from Rolling Two Dice: 

Figure 19: A Point Estimate Probability Distribution Driven by WBS 
Distributions: 

Figure 20: Integrating Cost Estimation, Systems Development Oversight, 
and Risk Management: 

Figure 21: Integrating EVM and Risk Management: 

Figure 22: Inputs and Outputs for Tracking Earned Value: 

Figure 23: WBS Integration of Cost, Schedule, and Technical 
Information: 

Figure 24: Identifying Responsibility for Managing Work at the Control 
Account: 

Figure 25: An Activity Network: 

Figure 26: Activity Durations as a Gantt Chart: 

Figure 27: Earned Value, Using the Percent Complete Method, Compared to
Planned Costs: 

Figure 28: The Genesis of the Performance Measurement Baseline: 

Figure 29: The Time-Phased Cumulative Performance Measurement Baseline: 

Figure 30: A Performance-Based Payments Structured Contract: 

Figure 31: The EVM System Acceptance Process: 

Figure 32: IBR Control Account Manager Discussion Template: 

Figure 33: Monthly Program Assessment Using Earned Value: 

Figure 34: Overall Program View of EVM Data: 

Figure 35: A Contract Performance Report’s Five Formats: 

Figure 36: Understanding Program Cost Growth by Plotting Budget at 
Completion Trends: 

Figure 37: Understanding Program Performance by Plotting Cost and 
Schedule Variances: 

Figure 38: Understanding Expected Cost Overruns by Plotting Estimate at
Completion: 

Figure 39: Rolling Wave Planning: 

Figure 40: The Effect on a Contract of Implementing an Overtarget 
Budget: 

Figure 41: Steps Typically Associated with Implementing an Overtarget 
Budget: 

Figure 42: Establishing a New Baseline with a Single Point Adjustment: 

Figure 43: MasterFormat™ Work Breakdown Structure: 

Figure 44: Network Diagram of a Simple Schedule: 

Figure 45: Example Project Schedule: 

Figure 46: Estimated Durations for Remaining WBS Areas in the Schedule: 

Figure 47: Cumulative Distribution of Project Schedule, Including Risk: 

Figure 48: Identified Risks on a Spacecraft Schedule: An Example: 

Figure 49: A Risk Register for a Spacecraft Schedule: 

Figure 50: Spacecraft Schedule Results from a Monte Carlo Simulation: 

Figure 51: A Schedule Showing Critical Path through Unit 2: 

Figure 52: Results of a Monte Carlo Simulation for a Schedule Showing 
Critical Path through Unit 2: 

Figure 53: Sensitivity Index for Spacecraft Schedule: 

Figure 54: Evaluation of Correlation in Spacecraft Schedule: 

Figure 55: An Example of Probabilistic Branching Contained in the 
Schedule: 

Figure 56: Probability Distribution Results for Probabilistic Branching 
in Test Unit: 

Figure 57: A Project Schedule Highlighting Correlation Effects: 

Figure 58: Risk Results Assuming No Correlation between Activity 
Durations: 

Figure 59: Risk Results Assuming 90 Percent Correlation between 
Activity Durations: 

Figure 60: Schedule Analysis Results with and without Correlation: 

Figure 61: IBR Team’s Program Summary Assessment Results for Program X: 

Figure 62: Program X IBR Team’s Assessment Results by Program Area: 

Figure 63: Program X IBR Team’s Detailed Assessment Results for an 
Individual Program Area: 

List of Tables: 

Table 1: GAO’s 1972 Version of the Basic Characteristics of Credible 
Cost Estimates: 

Table 2: The Twelve Steps of a High-Quality Cost Estimating Process: 

Table 3: Cost Estimating and EVM Criteria for Federal Agencies: 
Legislation, Regulations, Policies, and Guidance: 

Table 4: Life-Cycle Cost Estimates, Types of Business Case Analyses, 
and Other Types of Cost Estimates: 

Table 5: Certification Standards in Business, Cost Estimating, and 
Financial Management in the Defense Acquisition Education, Training, 
and Career Development Program: 

Table 6: Typical Technical Baseline Elements: 

Table 7: General System Characteristics: 

Table 8: Common Elements in Work Breakdown Structures: 

Table 9: Cost Element Structure for a Standard DOD Automated 
Information System: 

Table 10: Basic Primary and Secondary Data Sources: 

Table 11: Three Cost Estimating Methods Compared: 

Table 12: An Example of the Analogy Cost Estimating Method: 

Table 13: An Example of the Engineering Build-Up Cost Estimating 
Method: 

Table 14: An Example of the Parametric Cost Estimating Method: 

Table 15: The Twelve Steps of High-Quality Cost Estimating Summarized: 

Table 16: Sizing Metrics and Commonly Associated Issues: 

Table 17: Common Software Risks That Affect Cost and Schedule: 

Table 18: Best Practices Associated with Risks in Implementing ERP: 

Table 19: Common IT Infrastructure Risks: 

Table 20: Common Labor Categories Described: 

Table 21: Potential Sources of Program Cost Estimate Uncertainty: 

Table 22: A Hardware Risk Scoring Matrix: 

Table 23: A Software Risk Scoring Matrix: 

Table 24: Eight Common Probability Distributions: 

Table 25: The Twelve Steps of High-Quality Cost Estimating, Mapped to 
the Characteristics of a High-Quality Cost Estimate: 

Table 26: Questions for Checking the Accuracy of Estimating Techniques: 

Table 27: Eight Types of Independent Cost Estimate Reviews: 

Table 28: What Cost Estimate Documentation Includes: 

Table 29: Key Benefits of Implementing EVM: 

Table 30: Ten Common Concerns about EVM: 

Table 31: ANSI Guidelines for EVM Systems: 

Table 32: EVM Implementation Guides: 

Table 33: Typical Methods for Measuring Earned Value Performance: 

Table 34: Integrated Baseline Review Risk Categories: 

Table 35: Contract Performance Report Data Elements: Format 1: 

Table 36: EVM Performance Indexes: 

Table 37: Best Predictive EAC Efficiency Factors by Program Completion 
Status: 

Table 38: Basic Program Management Questions That EVM Data Help Answer: 

Table 39: ANSI Guidelines Related to Incorporating Changes in an EVM 
System: 

Table 40: Elements of an Effective Surveillance Organization: 

Table 41: Key EVM Processes across ANSI Guidelines for Surveillance: 

Table 42: Risk Factors That Warrant EVM Surveillance: 

Table 43: A Program Surveillance Selection Matrix: 

Table 44: A Color-Category Rating System for Summarizing Program 
Findings: 

Table 45: Overtarget Budget Funding Implications by Contract Type: 

Table 46: Common Indicators of Poor Program Performance: 

Table 47: Options for Treating Variances in Performing a Single Point 
Adjustment: 

Table 48: Case Studies Drawn from GAO Reports Illustrating This Guide: 

Table 49: Phases of the Budget Process: 

Table 50: The Budget Process: Major Steps and Time Periods: 

Table 51: Aircraft System Work Breakdown Structure: 

Table 52: Electronic/Automated Software System Work Breakdown 
Structure: 

Table 53: Ground Software Work Breakdown Structure: 

Table 54: Missile System Work Breakdown Structure: 

Table 55: Ordnance System Work Breakdown Structure: 

Table 56: Sea System Work Breakdown Structure: 

Table 57: Space System Work Breakdown Structure: 

Table 58: Surface Vehicle System Work Breakdown Structure: 

Table 59: Unmanned Air Vehicle System Work Breakdown Structure: 

Table 60: Department of Energy Project Work Breakdown Structure: 

Table 61: General Services Administration Construction Work Breakdown 
Structure: 

Table 62: Automated Information System: Enterprise Resource Planning 
Program Level Work Breakdown Structure: 

Table 63: Environmental Management Work Breakdown Structure: 

Table 64: Pharmaceutical Work Breakdown Structure: 

Table 65: Process Plant Construction Work Breakdown Structure: 

Table 66: Telecom Work Breakdown Structure: 

Table 67: Software Implementation Project Work Breakdown Structure: 

Table 68: Major Renovation Project Work Breakdown Structure: 

Table 69: Sample IT Infrastructure and Service Work Breakdown 
Structure: 

Table 70: CSI MasterFormat™ 2004 Structure Example: Construction 
Phase: 

Table 71: The Anderlohr Method for the Learning Lost Factor: 

Table 72: IBR Leadership Roles and Responsibilities: 

Case studies: 

Case Study 1: Basic Estimate Characteristics, from NASA, GAO-04-642: 

Case Study 2: Basic Estimate Characteristics, from Customs Service 
Modernization, GAO/AIMD-99-41: 

Case Study 3: Following Cost Estimating Steps, from NASA, GAO-04-642: 

Case Study 4: Cost Analysts’ Skills, from NASA, GAO-04-642: 

Case Study 5: Recognizing Uncertainty, from Customs Service 
Modernization, GAO/AIMD-99-41: 

Case Study 6: Using Realistic Assumptions, from Space Acquisitions, GAO-
07-96: 

Case Study 7: Program Stability Issues, from Combating Nuclear 
Smuggling, GAO-06-389: 

Case Study 8: Program Stability Issues, from Defense Acquisitions, GAO-
05-183: 

Case Study 9: Development Schedules, from Defense Acquisitions, GAO-06-
327: 

Case Study 10: Risk Analysis, from Defense Acquisitions, GAO-05-183: 

Case Study 11: Risk Analysis, from NASA, GAO-04-642: 

Case Study 12: Applying EVM, from Cooperative Threat Reduction, GAO-06-
692: 

Case Study 13: Rebaselining, from NASA, GAO-04-642: 

Case Study 14: Realistic Estimates, from Defense Acquisitions, GAO-05-
183: 

Case Study 15: Importance of Realistic LCCEs, from Combating Nuclear 
Smuggling, GAO-07-133R: 

Case Study 16: Importance of Realistic LCCEs, from Space Acquisitions, 
GAO-07-96: 

Case Study 17: Evolutionary Acquisition and Cost Estimates, from Best 
Practices, GAO-03-645T: 

Case Study 18: Incremental Development, from Customs Service 
Modernization, GAO/AIMD-99-41: 

Case Study 19: The Estimate’s Context, from DOD Systems Modernization, 
GAO-06-215: 

Case Study 20: Defining Requirement, from United States Coast Guard, 
GAO-06-623: 

Case Study 21: Managing Requirements, from DOD Systems Modernization, 
GAO-06-215: 

Case Study 22: Product-Oriented Work Breakdown Structure, from 
Air Traffic Control, GAO-08-756: 

Case Study 23: Developing Work Breakdown Structure, from NASA, 
GAO-04-642: 

Case Study 24: Developing Work Breakdown Structure, from Homeland 
Security, GAO-06-296: 

Case Study 25: The Importance of Assumptions, from Space Acquisitions, 
GAO-07-96: 

Case Study 26: Testing Ground Rules for Risk, from Space Acquisitions, 
GAO-07-96: 

Case Study 27: The Industrial Base, from Defense Acquisition, GAO-05-
183: 

Case Study 28: Technology Maturity, from Defense Acquisitions, GAO-05-
183: 

Case Study 29: Technology Maturity, from Space Acquisitions, GAO-07-96: 

Case Study 30: Informing Management of Changed Assumptions, from 
Customs Service Modernization, GAO/AIMD-99-41: 

Case Study 31: Fitting the Estimating Approach to the Data, from Space 
Acquisitions, GAO-07-96: 

Case Study 32: Data Anomalies, from Cooperative Threat Reduction, GAO-
06-692: 

Case Study 33: Inflation, from Defense Acquisitions, GAO-05-183: 

Case Study 34: Cost Estimating Methods, from Space Acquisitions, GAO-07-
96: 

Case Study 35: Expert Opinion, from Customs Service Modernization, 
GAO/AIMD-99-41: 

Case Study 36: Production Rate, from Defense Acquisitions, GAO-05-183: 

Case Study 37: Underestimating Software, from Space Acquisitions, GAO-
07-96: 

Case Study 38: Sensitivity Analysis, from Defense Acquisitions, GAO-05-
183: 

Case Study 39: Point Estimates, from Space Acquisitions, GAO-07-96: 
 
Case Study 40: Point Estimates, from Defense Acquisitions, GAO-05-183: 

Case Study 41: Validating the Estimate, from Chemical Demilitarization, 
GAO-07-240R: 

Case Study 42: Independent Cost Estimates, from Space Acquisitions, GAO-
07-96: 

Case Study 43: Documenting the Estimate, from Telecommunications, GAO-
07-268: 

Case Study 44: Validating the EVM System, from Cooperative Threat 
Reduction, GAO-06-692: 

Case Study 45: Validating the EVM System, from DOD Systems 
Modernization, GAO-06-215: 

Case Study 46: Cost Performance Reports, from Defense Acquisitions, GAO-
05-183: 

Case Study 47: Maintaining Performance Measurement Baseline Data, from 
National Airspace System, GAO-03-343: 

Case Study 48: Maintaining Realistic Baselines, from Uncertainties 
Remain, GAO-04-643R: 

Best Practices Checklist: 

1. The Estimate: 

2. Purpose, Scope, and Schedule: 

3. Cost Assessment Team: 

4. Technical Baseline Description: 

5. Work Breakdown Structure: 

6. Ground Rules and Assumptions: 

7. Data: 

8. Developing a Point Estimate: 

9. Estimating Software Costs: 

10. Sensitivity Analysis: 

11. Cost Risk and Uncertainty: 

12. Validating the Estimate: 

13. Documenting the Estimate: 

14. Presenting the Estimate to Management: 

15. Managing Program Costs: Planning: 

16. Managing Program Costs: Execution: 

17. Managing Program Costs: Updating: 

Abbreviations: 

ACWP: actual cost of work performed: 

ANSI: American National Standards Institute: 

AOA: analysis of alternatives: 

BAC: budget at completion: 

BCA: business case analysis: 

BCWP: budgeted cost for work performed: 

BCWS: budgeted cost for work scheduled: 

CAIG: Cost Analysis Improvement Group: 

CBO: Congressional Budget Office: 

CEA: cost-effectiveness analysis: 

CER: cost estimating relationship: 

COSMIC: Common Software Measurement International Consortium: 

CPI: cost performance index: 

CPR: contract performance report: 

C/SCSC: Cost/Schedule Control Systems Criteria: 

CSDR: cost and software data report: 

DAU: Defense Acquisition University: 

DCAA: Defense Contract Audit Agency: 

DCMA: Defense Contract Management Agency: 

DOD: Department of Defense: 

EA: economic analysis: 

EAC: estimate at completion: 

EIA: Electronic Industries Alliance: 

ERP: enterprise resource planning: 

EVM: earned value management: 

FAR: Federal Acquisition Regulation: 

GR&A: ground rules and assumptions: 

IBR: integrated baseline review: 

ICA: independent cost assessment: 

ICE: independent cost estimate: 

IGCE: independent government cost estimate: 

IMS: integrated master schedule: 

IT: information technology: 

LCCE: life-cycle cost estimate: 

NAR: nonadvocate review: 

NASA: National Aeronautics and Space Administration: 

NDIA: National Defense Industrial Association: 

OMB: Office of Management and Budget: 

OTB: overtarget baseline: 

OTS: overtarget schedule: 

PMB: performance measurement baseline: 

PMI: Project Management Institute: 

SCEA: Society of Cost Estimating and Analysis: 

SEI: Software Engineering Institute: 

SLOC: source line of code: 

SPI: schedule performance index: 

TCPI: to complete performance index: 

WBS: work breakdown structure: 

[End of section] 

Introduction: 

Because federal guidelines are limited on processes, procedures, and 
practices for ensuring credible cost estimates, the Cost Guide is 
intended to fill that gap. Its purpose is twofold—to address generally 
accepted best practices for ensuring credible program cost estimates 
(applicable across government and industry) and to provide a detailed 
link between cost estimating and EVM. Providing that link is especially 
critical, because it demonstrates how both elements are needed for 
setting realistic program baselines and managing risk. 

As a result, government managers and auditors should find in the Cost 
Guide principles to guide them as they assess (1) the credibility of a 
program’s cost estimate for budget and decision making purposes and (2) 
the program’s status using EVM. Throughout this guide, we refer to 
program cost estimates that encompass major system acquisitions, as 
well as government in-house development efforts for which a cost 
estimate must be developed to support a budget request. 

The basic information in the Cost Guide includes the purpose, scope, 
and schedule of a cost estimate; a technical baseline description; a 
work breakdown structure (WBS); ground rules and assumptions; how to 
collect data; estimation methodologies; software cost estimating; 
sensitivity and risk analysis; validating a cost estimate; documenting 
and briefing results; updating estimates with actual costs; EVM; and 
the composition of a competent cost estimating team.[Footnote 6] The 
guide discusses pitfalls associated with cost estimating and EVM that 
can lead government agencies to accept unrealistic budget requests—as 
when risks are embedded in an otherwise logical approach to estimating 
costs. Since the Department of Defense (DOD) is considered the leader 
in government cost estimating, the guide relies heavily on DOD for 
terminology and examples that may not be used by, or even apply to, 
other federal agencies. 

Chapters 1–17 of the Cost Guide discuss the importance of cost 
estimating and best practices associated with creating credible cost 
estimates. They describe how cost estimates predict, analyze, and 
evaluate a program’s cost and schedule and serve as a critical program 
control planning tool. Once cost estimates have been presented to and 
approved by management, they also establish the basis for measuring 
actual performance against the approved baseline plan using an EVM 
system. 

Those chapters explain how EVM, if it is to work, must have a cost 
estimate that identifies the effort that is needed—the work breakdown 
structure—and the period of time over which the work is to be 
performed—the program schedule.[Footnote 7] In essence, the cost 
estimate is the basis for establishing the program’s detailed schedule, 
and it identifies the bounds for how much program costs can be expected 
to vary, depending on the uncertainty analysis. When all these tasks 
are complete, the cost estimate can be used to lay the foundation for 
the performance measurement baseline (PMB), which will measure actual 
program performance. 
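
To make that relationship concrete, the short Python sketch below is 
an illustration only: the WBS elements, costs, start months, and 
durations are invented. It spreads a cost estimate's WBS elements 
across a simple program schedule and accumulates them into the kind of 
time-phased, cumulative budget that underlies a PMB. 

from itertools import accumulate

# (WBS element, estimated cost in $K, start month, duration in months);
# names and values are hypothetical, for illustration only.
wbs_estimate = [
    ("Air vehicle",        1200, 1, 12),
    ("Training",            300, 7,  6),
    ("Program management",  240, 1, 18),
]

months = 18
monthly_budget = [0.0] * months
for element, cost, start, duration in wbs_estimate:
    per_month = cost / duration              # assume level monthly spreading
    for m in range(start - 1, start - 1 + duration):
        monthly_budget[m] += per_month

pmb = list(accumulate(monthly_budget))       # cumulative budgeted cost
for m, value in enumerate(pmb, start=1):
    print(f"Month {m:2d}: cumulative baseline = {value:7.1f} $K")

[End of example] 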

Since sound acquisition management requires more than just a reliable 
cost estimate at a project’s outset, chapters 18–20 provide guidance on 
converting the cost estimate into an executable program and a means for 
managing program costs. Our program assessments have too often revealed 
that not integrating cost estimation, system development oversight, and 
risk management—three key disciplines, interrelated and essential to 
effective acquisition management—has resulted in programs costing more 
than planned and delivering less than promised. Therefore, chapters 
18–20 address best practices in implementing and integrating these 
disciplines and using them to manage costs throughout the life of a 
program. 

OMB has set the expectation that programs will maintain current 
estimates of cost. This requires rigorous performance-based program 
management, which can be satisfied with EVM. Chapters 18–20 address the 
details of EVM, which is designed to integrate cost estimation, system 
development oversight, and risk management. Additionally, for programs 
classified as major acquisitions—regardless of whether the development 
work is completed in-house or under contract—the use of EVM is a 
requirement for development, as specified by OMB.[Footnote 8] The 
government may also require the use of EVM for other acquisitions, in 
accordance with agency procedures. 

Since linking cost estimating and EVM results in a better view of a 
program and allows for greater understanding of program risks, cost 
estimators and EVM analysts who join forces can use each other’s data 
to update program costs and examine differences between estimated and 
actual costs. This way, scope changes, risks, and other opportunities 
can be presented to management in time to plan for and mitigate their 
impact. In addition, program status can be compared to historical data 
to better understand variances. Finally, cost estimators can help EVM 
analysts calculate a cumulative probability distribution to determine 
the level of confidence in the baseline. 
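
As a simple illustration of that last point, the Python sketch below 
is not GAO's model; the element names and cost ranges are invented. It 
runs a Monte Carlo simulation over a few WBS elements with triangular 
cost distributions and reads the point estimate's confidence level off 
the resulting cumulative probability distribution, or S curve. A 
complete analysis would also account for correlation among the 
elements. 

import random

random.seed(1)
# (low, most likely, high) cost in $M for each element; values invented.
elements = {
    "Hardware": (40, 50, 75),
    "Software": (20, 30, 60),
    "Support":  (10, 12, 20),
}
point_estimate = sum(ml for lo, ml, hi in elements.values())

trials = 10_000
totals = sorted(
    sum(random.triangular(lo, hi, ml) for lo, ml, hi in elements.values())
    for _ in range(trials)
)

# The fraction of trials at or below the point estimate is its
# confidence level on the S curve.
confidence = sum(t <= point_estimate for t in totals) / trials
print(f"Point estimate of ${point_estimate}M is at about the "
      f"{confidence:.0%} confidence level")
print("80th percentile (risk-adjusted) cost: "
      f"${totals[int(0.8 * trials)]:.1f}M")

[End of example] 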

But bringing a program to successful completion requires knowing 
potential risks and identifying ways to respond to them before they 
happen—using risk management to identify, mitigate, and assign 
resources to manage risks so that their impact can be minimized. This 
requires the support of many program management and engineering staff, 
and it results in better performance and more reliable predictions of 
program outcomes. By integrating EVM data and risk management, program 
managers can develop current estimates at completion (EAC) for all 
levels of management, including OMB reporting requirements. Therefore, 
chapters 18–20 expand on these concepts by examining program cost 
planning, execution, and updating. 
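
The arithmetic behind such an EAC rests on the standard earned value 
quantities defined in the abbreviations list. The Python sketch below 
applies commonly used formulas to invented figures: the cost and 
schedule performance indexes come from cumulative BCWP, ACWP, and 
BCWS, and one widely used EAC projects the remaining work at the 
current cost efficiency. 

# Cumulative values of the kind reported in a contract performance
# report; the dollar figures are invented for illustration.
bac  = 1_000.0   # budget at completion, $K
bcws = 400.0     # budgeted cost for work scheduled (planned value)
bcwp = 360.0     # budgeted cost for work performed (earned value)
acwp = 450.0     # actual cost of work performed

cpi = bcwp / acwp                    # cost performance index
spi = bcwp / bcws                    # schedule performance index
cost_variance     = bcwp - acwp      # negative means a cost overrun
schedule_variance = bcwp - bcws      # negative means behind schedule

# One commonly used EAC: remaining work performed at current efficiency.
eac  = acwp + (bac - bcwp) / cpi
tcpi = (bac - bcwp) / (bac - acwp)   # efficiency needed to finish on budget

print(f"CPI = {cpi:.2f}, SPI = {spi:.2f}")
print(f"CV = {cost_variance:.0f} $K, SV = {schedule_variance:.0f} $K")
print(f"EAC = {eac:.0f} $K against a BAC of {bac:.0f} $K; TCPI = {tcpi:.2f}")

[End of example] 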

The Guide’s Case Studies: 

The Cost Guide contains a number of case studies drawn from GAO program 
reviews. The case studies highlight problems typically associated with 
cost estimates and augment the key points and lessons learned that the 
chapters discuss. For example, GAO has found that in many programs cost 
growth results from optimistic assumptions about technological 
enhancements. Experts on cost estimating have also found that many 
program managers believe they can deliver state-of-the-art technology 
upgrades within a constrained budget before proof is available that the 
requirements are feasible. Studies have shown that it costs more to 
develop technology from scratch than to develop it incrementally over 
time.[Footnote 9] Appendix II gives some background information for 
each program used in the case studies. (Appendix I is a list of 
auditing agencies.) 

The Cost Guide In Relation To Established Standards: 

Our intent is to use this Cost Guide in conjunction with Government 
Auditing Standards and Standards for Internal Control in the Federal 
Government, commonly referred to as the yellow book and the green book, 
respectively.[Footnote 10] If auditors cite compliance with these 
standards and internal controls and find inconsistencies between them 
and the Cost Guide, they should defer to the yellow and green books for 
the prevailing rules. 

This guide’s reference list identifies cost estimating guides and 
sources available from other government agencies and organizations that 
we relied on to determine the processes, practices, and procedures most 
commonly recommended in the cost estimating community. Users of the 
guide may wish to refer to those references for more information. In 
addition, we relied on information from the Society of Cost Estimating 
and Analysis (SCEA), which provides standards for cost estimating, and 
the Project Management Institute (PMI), which provides EVM standards. 
[Footnote 11] 
 
The Guide’s Readers: 

The federal audit community is the primary audience for this guide. In 
addition, agencies that do not have a formal policy for conducting or 
reviewing cost estimates will benefit from it, because it will inform 
them of the criteria GAO uses in assessing a cost estimate’s 
credibility. Besides GAO, auditing agencies include Inspectors General 
and audit services such as the Naval Audit Service and the Army Audit 
Agency. Appendix I lists other auditing agencies that GAO may contact 
at the start of an audit. The list may help ease the burden on agencies 
as they work to meet the needs of various oversight offices and should 
help speed up delivery of data request items. 

We intend to periodically update the Cost Guide. Comments and 
suggestions from experienced users are always welcome, as are 
recommendations from experts in the cost estimating and EVM 
disciplines. 

Acknowledgments: 

The Cost Guide team thanks the many members of the cost community who 
helped make the guide a reality. After we discussed our plans for 
developing the guide with members of the cost community, several 
experts expressed interest in working with us. The number of experts 
who helped us create this guide grew over time, beginning with our 
first meeting in June 2005. Their contributions were invaluable. 

Together with these experts, GAO has developed a guide that clearly 
outlines its criteria for assessing cost estimates and EVM data during 
audits and that we believe will benefit all agencies in the federal 
government. We would like to thank everyone who gave their time by 
attending meetings, giving us valuable documentation, and providing 
comments. Those who worked with us on this guide are listed in appendix 
III. Additional acknowledgments are in appendix XVI. 

Chapter 1: The Characteristics Of Credible Cost Estimates And A 
Reliable Process For Creating Them: 

More than 30 years ago, we reported that realistic cost estimating was 
imperative when making wise decisions in acquiring new systems. In 
1972, we published a report called Theory and Practice of Cost 
Estimating for Major Acquisitions, in which we stated that estimates of 
the cost to develop and produce weapon systems were frequently 
understated, with cost increases on the order of $15.6 billion from 
early development estimates.[Footnote 12] In that report, we identified 
factors in the cost estimating function that were causing this problem 
and offered suggestions for solving or abating the problem of 
unexpected cost growth. 

We found that uniform guidance on cost estimating practices and 
procedures that would be the basis for formulating valid, consistent, 
and comparable estimates was lacking within DOD. In fact, evidence 
showed that each military service issued its own guidance for creating 
cost estimates and that the guidance ranged from a detailed estimating 
manual to a few general statements. In addition, we reported that cost 
estimators often ignored this guidance.[Footnote 13] 

In that 1972 report, we also stated that cost estimates for specific 
systems were frequently revisions of previously developed estimates and 
that accurate revisions of both the original and updated cost estimates 
required documentation showing data sources, assumptions, methods, and 
decisions basic to the estimates. However, we discovered that in 
virtually every system we reviewed for the report, documentation 
supplying such information was inaccurate or lacking. Among the 
resulting difficulties were that: 
 
* known costs had been excluded without adequate or valid 
justification; 
 
* historical cost data used for computing estimates were sometimes 
invalid, unreliable, or unrepresentative;
 
* inflation was not always included or was not uniformly treated when 
it was included; and 

* understanding the proper use of the estimates was hindered because 
the estimates were too low.[Footnote 14] 
 
Another finding was that readily retrievable cost data that could serve 
in computing cost estimates for new weapon systems were generally 
lacking. Additionally, organized and systematic efforts were not made 
to gather actual cost information to achieve comparability between data 
collected on various weapon systems or to see whether the cost data the 
contractors reported were accurate and consistent.[Footnote 15] 

Our conclusion was that without realism and objectivity in the cost 
estimating process, bias and overoptimism creep into estimates that 
advocates of weapon systems prepare, and the estimates tend to be too 
low. Therefore, staff not influenced by the military organization’s 
determination to field a weapon system, or by the contractor’s 
intention to develop and produce the system, should review every weapon 
system at major decision points in the acquisition.[Footnote 16] 
 
Basic Characteristics Of Credible Cost Estimates: 
 
The basic characteristics of effective estimating have been studied and 
highlighted many times. Their summary, in table 1, is from our 1972 
report, Theory and Practice of Cost Estimating for Major Acquisitions. 
These characteristics are still valid today and should be found in all 
sound cost analyses. 

Table 1: GAO’s 1972 Version of the Basic Characteristics of Credible 
Cost Estimates: 
 
Characteristic: Clear identification of task; 
Description: Estimator must be provided with the system description, 
ground rules and assumptions, and technical and performance 
characteristics; Estimate’s constraints and conditions must be clearly 
identified to ensure the preparation of a well-documented estimate. 
 
Characteristic: Broad participation in preparing estimates; 
Description: All stakeholders should be involved in deciding mission 
need and requirements and in defining system parameters and other 
characteristics; Data should be independently verified for accuracy, 
completeness, and reliability. 
 
Characteristic: Availability of valid data; 
Description: Numerous sources of suitable, relevant, and available data 
should be used; Relevant, historical data should be used from similar 
systems to project costs of new systems; these data should be directly 
related to the system’s performance characteristics. 
 
Characteristic: Standardized structure for the estimate; 
Description: A standard work breakdown structure, as detailed as 
possible, should be used, refining it as the cost estimate matures and 
the system becomes more defined; The work breakdown structure ensures 
that no portions of the estimate are omitted and makes it easier to 
make comparisons to similar systems and programs. 

Characteristic: Provision for program uncertainties; 
Description: Uncertainties should be identified and allowance developed 
to cover the cost effect; Known costs should be included and unknown 
costs should be allowed for. 
 
Characteristic: Recognition of inflation; 
Description: The estimator should ensure that economic changes, such as 
inflation, are properly and realistically reflected in the life-cycle 
cost estimate. 

Characteristic: Recognition of excluded costs; 
Description: All costs associated with a system should be included; any 
excluded costs should be disclosed and given a rationale. 
 
Characteristic: Independent review of estimates; 
Description: Conducting an independent review of an estimate is crucial 
to establishing confidence in the estimate; the independent reviewer 
should verify, modify, and correct an estimate to ensure realism, 
completeness, and consistency. 
 
Characteristic: Revision of estimates for significant program changes; 
Description: Estimates should be updated to reflect changes in a 
system’s design requirements. Large changes that affect costs can 
significantly influence program decisions. 
 
Source: GAO. 

[End of table] 

In a 2006 survey to identify the characteristics of a good estimate, 
participants from a wide variety of industries—aerospace, automotive, 
energy—as well as consulting firms and the U.S. Navy and Marine Corps 
corroborated the continuing validity of the characteristics in table 1. 
Despite the fact that these basic characteristics have been published 
and known for decades, we find that many agencies still lack the 
ability to develop cost estimates that can satisfy them. Case studies 1 
and 2, drawn from GAO reports, show the kind of cross-cutting findings 
we have reported in the past. Because of findings like those in case 
studies 1 and 2, the Cost Guide provides best practice processes, 
standards, and procedures for developing, implementing, and evaluating 
cost estimates and EVM systems and data. By satisfying these criteria, 
agencies should be able to better manage their programs and inform 
decision makers of the risks involved. 

Case Study 1: Basic Estimate Characteristics, from NASA, GAO-04-642. 
GAO found that the National Aeronautics and Space Administration’s 
(NASA) basic cost estimating processes—an important tool for managing 
programs—lacked the discipline needed to ensure that program estimates 
were reasonable. Specifically, none of the 10 NASA programs GAO 
reviewed in detail met all GAO’s cost estimating criteria, which are 
based on criteria Carnegie Mellon University’s Software Engineering 
Institute developed. Moreover, none of the 10 programs fully met 
certain key criteria—including clearly defining the program’s life 
cycle to establish program commitment and manage program costs, as 
required by NASA. 

In addition, only 3 programs provided a breakdown of the work to be 
performed. Without this knowledge, the programs’ estimated costs could 
be understated and thereby subject to underfunding and cost overruns, 
putting programs at risk of being reduced in scope or requiring 
additional funding to meet their objectives. Finally, only 2 programs 
had a process in place for measuring cost and performance to identify 
risks. 

Source: GAO, NASA: Lack of Disciplined Cost-Estimating Processes 
Hinders Effective Program Management, [hyperlink, 
http://www.gao.gov/cgi-bin/getrpt?GAO-04-642] (Washington, D.C.: May 
28, 2004). 

[End of case study] 

Case Study 2: Basic Estimate Characteristics, from Customs Service 
Modernization, GAO/AIMD-99-41. GAO analyzed the U.S. Customs Service's 
approach to deriving its $1.05 billion Automated Commercial Environment 
life-cycle cost estimate against Software Engineering Institute (SEI) 
criteria. SEI had seven questions for decision makers to use in 
assessing the reliability of a project’s cost estimate and detailed 
criteria to help evaluate how well a project satisfies each question. 
Among the criteria were several very significant and closely 
intertwined requirements that are at the core of effective cost 
estimating. Specifically, embedded in several of the questions were 
requirements for using (1) formal cost models; (2) structured and 
documented processes for determining the software size and reuse inputs 
to the models; and (3) relevant, measured, and normalized historical 
cost data (estimated and actual) to calibrate the models. 

GAO found that Customs did not satisfy any of these requirements. 
Instead of using a cost model, it used an unsophisticated spreadsheet 
to extrapolate the cost of each Automated Commercial Environment 
increment. Its approach to determining software size and reuse was not 
documented and was not well supported or convincing. Customs had no 
historical project cost data when it developed the $1.05 billion 
estimate and did not account for relevant, measured, and normalized 
differences in the increments. Clearly, such fundamental changes can 
dramatically affect system costs and should have been addressed 
explicitly in Customs’ cost estimates. 

Source: GAO, Customs Service Modernization: Serious Management and 
Technical Weaknesses Must Be Corrected, [hyperlink, 
http://www.gao.gov/cgi-bin/getrpt?GAO/AIMD-99-41] (Washington, D.C.: 
Feb. 26, 1999). 

[End of case study] 

A Reliable Process For Developing Credible Cost Estimates: 
 
Certain best practices should be followed if accurate and credible cost 
estimates are to be developed. These best practices represent an 
overall process of established, repeatable methods that result in high-
quality cost estimates that are comprehensive and accurate and that can 
be easily and clearly traced, replicated, and updated. Figure 1 shows 
the cost estimating process. 

Figure 1: The Cost Estimating Process: 

[Refer to PDF for image: illustration] 

Initiation and research: 
Your audience, what you are estimating, and why you are estimating it 
are of the utmost importance. 

Assessment: 
Cost assessment steps are iterative and can be accomplished in varying 
order or concurrently. 

Analysis: 
The confidence in the point or range of the estimate is crucial to the 
decision maker. 

Presentation: 
Documentation and presentation make or break a cost estimating decision 
outcome. 

Define the estimate's purpose; 
Develop the estimating plan; 
- Define the program; 
- Determine the estimating structure; 
- Identify ground rules and assumptions; 
- Obtain the data; 
- Develop the point estimate and compare it to an independent cost 
estimate; 
Conduct sensitivity analysis; 
Conduct a risk and uncertainty analysis; 
Document the estimate; 
Present estimate to management for approval; 
Update the estimate to reflect actual costs/changes. 

Analysis, presentation, and updating the estimate steps can lead to 
repeating previous assessment steps. 

Source: GAO. 

[End of figure] 

We have identified 12 steps that, followed correctly, should result in 
reliable and valid cost estimates that management can use for making 
informed decisions. Table 2 identifies all 12 steps and links each one 
to the chapter in this guide where it is discussed. 

Table 2: The Twelve Steps of a High-Quality Cost Estimating Process: 
 
Step: 1; 
Description: Define estimate’s purpose; 
Associated task: 
* Determine estimate’s purpose, required level of detail, and overall 
scope; 
* Determine who will receive the estimate; 
Chapter: 5. 

Step: 2; 
Description: Develop estimating plan; 
Associated task: 
* Determine the cost estimating team and develop its master schedule; 
* Determine who will do the independent cost estimate; 
* Outline the cost estimating approach; 
* Develop the estimate timeline; 
Chapter: 5 and 6. 

Step: 3; 
Description: Define program characteristics; 
Associated task: 
* In a technical baseline description document, identify the program’s 
purpose and its system and performance characteristics and all system 
configurations; 
* Any technology implications; 
* Its program acquisition schedule and acquisition strategy; 
* Its relationship to other existing systems, including predecessor or 
similar legacy systems; 
* Support (manpower, training, etc.) and security needs and risk items; 
* System quantities for development, test, and production; 
* Deployment and maintenance plans; 
Chapter: 7. 

Step: 4; 
Description: Determine estimating structure; 
Associated task: 
* Define a work breakdown structure (WBS) and describe each element in 
a WBS dictionary (a major automated information system may have only a 
cost element structure); 
* Choose the best estimating method for each WBS element; 
* Identify potential cross-checks for likely cost and schedule drivers; 
* Develop a cost estimating checklist; 
Chapter: 8.

Step: 5; 
Description: Identify ground rules and assumptions; 
Associated task: 
* Clearly define what the estimate includes and excludes; 
* Identify global and program-specific assumptions, such as: 
- the estimate’s base year, including time-phasing and life cycle; 
* Identify program schedule information by phase and program 
acquisition strategy; 
* Identify any schedule or budget constraints, inflation assumptions, 
and travel costs; 
* Specify equipment the government is to furnish as well as the use of 
existing facilities or new modification or development; 
* Identify prime contractor and major subcontractors; 
* Determine technology refresh cycles, technology assumptions, and new 
technology to be developed; 
* Define commonality with legacy systems and assumed heritage savings; 
* Describe effects of new ways of doing business; 
Chapter: 9. 

Step: 6; 
Description: Obtain data; 
Associated task: 
* Create a data collection plan with emphasis on collecting current and 
relevant technical, programmatic, cost, and risk data; 
* Investigate possible data sources; 
* Collect data and normalize them for cost accounting, inflation, 
learning, and quantity adjustments; 
* Analyze the data for cost drivers, trends, and outliers and compare 
results against rules of thumb and standard factors derived from 
historical data; 
* Interview data sources and document all pertinent information, 
including an assessment of data reliability and accuracy; 
* Store data for future estimates; 
Chapter: 10. 

Step: 7; 
Description: Develop point estimate and compare it to an independent 
cost estimate; 
Associated task: 
* Develop the cost model, estimating each WBS element, using the best 
methodology from the data collected, and including all estimating 
assumptions[A]; 
* Express costs in constant year dollars; 
* Time-phase the results by spreading costs in the years they are 
expected to occur, based on the program schedule; 
* Sum the WBS elements to develop the overall point estimate; 
* Validate the estimate by looking for errors like double counting and 
omitted costs; 
* Compare estimate against the independent cost estimate and examine 
where and why there are differences; 
* Perform cross-checks on cost drivers to see if results are similar; 
* Update the model as more data become available or as changes occur 
and compare results against previous estimates; 
Chapter: 11, 12, and 15. 

Step: 8; 
Description: Conduct sensitivity analysis; 
Associated task: 
* Test the sensitivity of cost elements to changes in estimating input 
values and key assumptions; 
* Identify effects on the overall estimate of changing the program 
schedule or quantities; 
* Determine which assumptions are key cost drivers and which cost 
elements are affected most by changes; 
Chapter: 13. 

Step: 9; 
Description: Conduct risk and uncertainty analysis; 
Associated task: 
* Determine and discuss with technical experts the level of cost, 
schedule, and technical risk associated with each WBS element; 
* Analyze each risk for its severity and probability; 
* Develop minimum, most likely, and maximum ranges for each risk 
element; 
* Determine type of risk distributions and reason for their use; 
* Ensure that risks are correlated; 
* Use an acceptable statistical analysis method (e.g., Monte Carlo 
simulation) to develop a confidence interval around the point estimate; 
* Identify the confidence level of the point estimate; 
* Identify the amount of contingency funding and add this to the point 
estimate to determine the risk-adjusted cost estimate; 
* Recommend that the project or program office develop a risk 
management plan to track and mitigate risks; 
Chapter: 14.

Step: 10; 
Description: Document the estimate; 
Associated task: 
* Document all steps used to develop the estimate so that a cost 
analyst unfamiliar with the program can recreate it quickly and produce 
the same result; 
* Document the purpose of the estimate, the team that prepared it, and 
who approved the estimate and on what date; 
* Describe the program, its schedule, and the technical baseline used 
to create the estimate; 
* Present the program’s time-phased life-cycle cost; 
* Discuss all ground rules and assumptions; 
* Include auditable and traceable data sources for each cost element 
and document for all data sources how the data were normalized; 
* Describe in detail the estimating methodology and rationale used to 
derive each WBS element’s cost (prefer more detail over less); 
* Describe the results of the risk, uncertainty, and sensitivity 
analyses and whether any contingency funds were identified; 
* Document how the estimate compares to the funding profile; 
* Track how this estimate compares to any previous estimates; 
Chapter: 16.

Step: 11; 
Description: Present estimate to management for approval; 
Associated task: 
* Develop a briefing that presents the documented life-cycle cost 
estimate; 
* Include an explanation of the technical and programmatic 
baseline and any uncertainties; 
* Compare the estimate to an independent cost estimate (ICE) and 
explain any differences; 
* Compare the estimate (life-cycle cost estimate (LCCE)) or independent 
cost estimate to the budget with enough detail to easily defend it by 
showing how it is accurate, complete, and high in quality; 
* Focus in a logical manner on the largest cost elements and cost 
drivers; 
* Make the content clear and complete so that those who are unfamiliar 
with it can easily comprehend the competence that underlies the 
estimate results; 
* Make backup slides available for more probing questions; 
* Act on and document feedback from management; 
* Request acceptance of the estimate; 
Chapter: 17. 

Step: 12; 
Description: Update the estimate to reflect actual costs and changes 
Associated task: 
* Update the estimate to reflect changes in technical or program 
assumptions or keep it current as the program passes through new phases 
or milestones; 
* Replace estimates with EVM and independent estimate at completion 
(EAC) from the integrated EVM system; 
* Report progress on meeting cost and schedule estimates; 
* Perform a post mortem and document lessons learned for elements whose 
actual costs or schedules differ from the estimate; 
* Document all changes to the program and how they affect the cost 
estimate 
Chapter: 16, 18, 19, and 20. 

Source: GAO, DHS, DOD, DOE, NASA, SCEA, and industry. 

[A] In a data-rich environment, the estimating approach should precede 
the investigation of data sources; in reality, a lack of data often 
determines the approach. 

[End of table] 
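
As a simplified illustration of steps 7 and 8, the Python sketch below 
uses a hypothetical WBS with invented hours, quotes, and factors. It 
rolls individual element estimates up into a point estimate and then 
varies one key assumption, a labor rate, to show how a basic 
sensitivity check reveals the effect of that assumption on the total. 

def point_estimate(labor_rate):
    """Roll up a hypothetical WBS (invented hours, quotes, and factors)."""
    wbs = {
        "Prime mission equipment": 2_500 * labor_rate,   # engineering build-up
        "Systems engineering":     1_100 * labor_rate,
        "Purchased hardware":      180_000.0,            # vendor quote
        "Program management":       90_000.0,            # factor-based
    }
    return sum(wbs.values())

baseline_rate = 120.0                # dollars per hour, a key assumption
baseline = point_estimate(baseline_rate)
print(f"Point estimate at ${baseline_rate:.0f}/hr: ${baseline:,.0f}")

# Sensitivity: vary the labor-rate assumption plus or minus 15 percent.
for factor in (0.85, 1.15):
    total = point_estimate(baseline_rate * factor)
    change = (total - baseline) / baseline
    print(f"  Labor rate x {factor:.2f}: ${total:,.0f} ({change:+.1%})")

[End of example] 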

Each of the 12 steps is important for ensuring that high-quality cost 
estimates are developed and delivered in time to support important 
decisions.[Footnote 17] Unfortunately, we have found that some agencies 
do not incorporate all the steps and, as a result, their estimates are 
unreliable. For example, in 2003, we completed a cross-cutting review 
at the National Aeronautics and Space Administration (NASA) that showed 
that the lack of an overall process affected NASA’s ability to create 
credible cost estimates (case study 3). 

Case Study 3: Following Cost Estimating Steps, from NASA, GAO-04-642: 
NASA’s lack of a quality estimating process resulted in unreliable cost 
estimates throughout each program’s life cycle. As of April 2003, the 
baseline development cost estimates for 27 NASA programs varied 
considerably from their initial baseline estimates. More than half 
the programs’ development cost estimates increased. For some of these 
programs, the increase was as much as 94 percent. In addition, the 
baseline development estimates for 10 programs that GAO reviewed in 
detail were rebaselined—some as many as four times. 

The Checkout and Launch Control System (CLCS) program—whose baseline 
had increased from $206 million in fiscal year 1998 to $399 million by 
fiscal year 2003—was ultimately terminated. CLCS’ cost increases 
resulted from poorly defined requirements and design and fundamental 
changes in the contractors’ approach to the work. GAO also found that 
 
* the description of the program objectives and overview in the program 
commitment agreement was not the description used to generate the cost 
estimate; 

* the total life cycle and WBS were not defined in the program’s life-
cycle cost estimate; 

* the 1997 nonadvocate review identified the analogy to be used as well 
as six different projects for parametric estimating, but no details on 
the cost model parameters were documented; and; 

* no evidence was given to explain how the schedule slip, from June 
2001 to June 2005, affected the cost estimate. 

GAO recommended that NASA establish a framework for developing life-
cycle cost estimates that would require each program to base its cost 
estimates on a WBS that encompassed both in-house and contractor 
efforts and also to prepare a description of cost analysis 
requirements. NASA concurred with the recommendation; it intended 
to revise its processes and its procedural requirements document and 
cost estimating handbook accordingly. 

Source: GAO, NASA: Lack of Disciplined Cost-Estimating Processes 
Hinders Effective Program Management, [hyperlink, 
http://www.gao.gov/products/GAO-04-642], Washington, D.C.: May 28, 
2004. 

[End of case study] 

NASA has since developed a cost estimating handbook that reflects a 
“renewed appreciation within the Agency for the importance of cost 
estimating as a critical part of project formulation and execution.” It 
has also stated that “There are newly formed or regenerated cost 
organizations at NASA Headquarters. The field centers’ cost 
organizations have been strengthened, reversing a discouraging trend of 
decline.” 

Finally, NASA reported in its cost handbook that “Agency management, 
from the Administrator and Comptroller on down, is visibly supportive 
of the cost estimating function.”[Footnote 18]

While these are admirable improvements, even an estimate that meets all 
these steps may be of little use or may be overcome by events if it is 
not ready when needed. Timeliness is just as important as quality. In 
fact, the quality of a cost estimate may be hampered if the time to 
develop it is compressed. When this happens, there may not be enough 
time to collect historical data. Since data are the key drivers of an 
estimate’s quality, their lack increases the risk that the estimate may 
not be reliable. In addition, when time is a factor, an independent 
cost estimate (ICE) may not be developed, further adding to the risk 
that the estimate may be overly optimistic. This is not an issue for 
DOD’s major defense acquisition programs, because an ICE is required 
for certain milestones. 

Relying on a standard process that emphasizes pinning down the 
technical scope of the work, communicating the basis on which the 
estimate is built, identifying the quality of the data, determining the 
level of risk, and thoroughly documenting the effort should result in 
cost estimates that are defensible, consistent, and trustworthy. 
Furthermore, this process emphasizes the idea that a cost estimate 
should be a “living document,” meaning that it will be continually 
updated as actual costs begin to replace the original estimates. This 
last step links cost estimating with data that are collected by an EVM 
system, so that lessons learned can be examined for differences and 
their reasons. It also provides valuable information for strengthening 
the credibility of future cost estimates, allowing for continuous 
process improvement. 

[End of chapter 1] 

Chapter 2: Why Government Programs Need Cost Estimates And The 
Challenges In Developing Them: 

Cost estimates are necessary for government acquisition programs for 
many reasons: to support decisions about funding one program over 
another, to develop annual budget requests, to evaluate resource 
requirements at key decision points, and to develop performance 
measurement baselines. Moreover, having a realistic estimate of 
projected costs makes for effective resource allocation, and it 
increases the probability of a program’s success. Government programs, 
as identified here, include both in-house and contract efforts. 

For capital acquisitions, OMB’s Capital Programming Guide helps 
agencies use funds wisely in achieving their missions and serving the 
public. The Capital Programming Guide stresses the need for agencies to 
develop processes for making investment decisions that deliver the 
right amount of funds to the right projects. It also highlights the 
need for agencies to identify risks associated with acquiring capital 
assets that can lead to cost overruns, schedule delays, and assets that 
fail to perform as expected. 

OMB’s guide has made developing accurate life-cycle cost estimates a 
priority for agencies in properly managing their portfolios of capital 
assets that have an estimated life of 2 years or more. Examples of 
capital assets are land; structures such as office buildings, 
laboratories, dams, and power plants; equipment like motor vehicles, 
airplanes, ships, satellites, and information technology hardware; and 
intellectual property, including software. 

Developing reliable cost estimates has been difficult for agencies 
across the federal government. Too often, programs cost more than 
expected and deliver results that do not satisfy all requirements. 
According to the 2002 President’s Management Agenda: 

Everyone agrees that scarce federal resources should be allocated to 
programs and managers that deliver results. Yet in practice, this is 
seldom done because agencies rarely offer convincing accounts of the 
results their allocations will purchase. There is little reward, in 
budgets or in compensation, for running programs efficiently. And once 
money is allocated to a program, there is no requirement to revisit the 
question of whether the results obtained are solving problems the 
American people care about.[Footnote 19] 

The need for reliable cost estimates is at the heart of two of the five 
governmentwide initiatives in that agenda: improved financial 
performance and budget and performance integration. These initiatives 
are aimed at ensuring that federal financial systems produce accurate 
and timely information to support operating, budget, and policy 
decisions and that budgets are based on performance. With respect to 
these initiatives, President Bush called for changes to the budget 
process to better measure the real cost and performance of programs. 

In response to the 2002 President’s Management Agenda, OMB’s Capital 
Programming Guide requires agencies to have a disciplined capital 
programming process that sets priorities between new and existing 
assets.[Footnote 20] It also requires agencies to perform risk 
management and develop cost estimates to improve the accuracy of cost, 
schedule, and performance management. These activities should help 
mitigate difficult challenges associated with asset management and 
acquisition. In addition, the Capital Programming Guide requires an 
agency to develop a baseline assessment for each major program it plans 
to acquire. As part of this baseline, a full accounting of life-cycle 
cost estimates, including all direct and indirect costs for planning, 
procurement, operations and maintenance, and disposal is expected. 

The capital programming process, as promulgated in OMB’s Capital 
Programming Guide, outlines how agencies should use long-range planning 
and a disciplined budget process to effectively manage a portfolio of 
capital assets that achieves program goals with the least life-cycle 
costs and risks. It outlines three phases: (1) planning and budgeting, 
(2) acquisition, and (3) management in use, often referred to as 
operations and maintenance. For each phase, reliable cost estimates are 
essential and necessary to establish realistic baselines from which to 
measure future progress. 

Regarding the planning and budgeting phase, the federal budget process 
is a cyclical event. Each year in January or early February, the 
president submits budget proposals for the year that begins October 1. 
They include data for the most recently completed year, the current 
year, the budget year, and at least the 4 years following the budget 
year. The budget process has four phases: 

1. executive budget formulation, 
2. congressional budget process, 
3. budget execution and control, and, 
4. audit and evaluation. 

Budget cycles overlap—the formulation of one budget begins before 
action has been completed on the previous one. (Appendix IV gives an 
overview of the federal budget process, describing its phases and the 
major steps and time periods for each phase.) 

For the acquisition and management in use phases, reliable cost 
estimates are also important for program approval and for the continued 
receipt of annual funding. However, cost estimating is difficult. To 
develop a sound cost estimate, estimators must possess a variety of 
skills and have access to high-quality data. Moreover, credible cost 
estimates take time to develop; they cannot be rushed. The many 
challenges of cost estimating increase the possibility that programs 
will fall short of cost, schedule, and performance goals. Cost analysts 
who recognize these challenges and plan for them early can help their 
organizations mitigate the associated risks. 

Cost Estimating Challenges: 

Developing a good cost estimate requires stable program requirements, 
access to detailed documentation and historical data, well-trained and 
experienced cost analysts, a risk and uncertainty analysis, the 
identification of a range of confidence levels, and adequate 
contingency and management reserves.[Footnote 21] Even with the best of 
these circumstances, cost estimating is difficult. It requires both 
science and judgment. And, since answers are seldom if ever precise, 
the goal is to find a “reasonable” answer. However, the cost estimator 
typically faces many challenges. These challenges often lead to bad 
estimates—that is, estimates that contain poorly defined assumptions, 
have no supporting documentation, are accompanied by no comparisons to 
similar programs, are characterized by inadequate data collection and 
inappropriate estimating methodologies, are sustained by irrelevant or 
out-of-date data, provide no basis or rationale for the estimate, and 
can show no defined process for generating the estimate. Figure 2 
illustrates some of the challenges a cost estimator faces and some of 
the ways to mitigate them. 

Figure 2: Challenges Cost Estimators Typically Face: 

[Refer to PDF for image: illustration] 

Detailed documentation available; 
Adequate cost reserve; 
Well defined; 
Risk analysis conducted; 
Stable program; 
Adequate budget; 
Historical data available; 
Well trained and experienced analysts. 

Versus: 

Inexperienced analyst; 
Unreliable data; 
Unrealistic assumptions; 
Historical cost databases not available; 
Data not normalized; 
Unreasonable program baselines; 
Overoptimism; 
New processes; 
First-time integration; 
Cutting edge technology; 
Obtaining data; 
Program instability; 
Complex technology; 
Diminishing industrial base; 
Unrealistic projected savings. 

Source: GAO. 

[End of figure] 

Some cost estimating challenges are widespread. Deriving high-quality 
cost estimates depends on the quality of, for example, historical 
databases. It is often not possible for the cost analyst to collect the 
kinds of data needed to develop cost estimating relationships (CERs), 
software development cost analyses, engineering build-up estimates, and 
other estimating products. In most cases, the better the data are, the 
better the 
resulting estimate will be. Since much of a cost analyst’s time is 
spent obtaining and normalizing data, experienced and well-trained cost 
analysts are necessary. Too often, individuals without these skills are 
thrown into performing a cost analysis to meet a pressing need (see 
case study 4). In addition, limited program resources (funds and time) 
often constrain broad participation in cost estimation processes and 
force the analyst (or cost team) to reduce the extent to which trade-
off, sensitivity, and even uncertainty analyses are performed. 

Case Study 4: Cost Analysts’ Skills, from NASA, GAO-04-642: 
GAO found that NASA’s efforts to improve its cost estimating processes 
were undermined by ineffective use of its limited number of cost-
estimating analysts. For example, headquarters officials stated that as 
projects entered the formulation phase, they typically relied on 
program control and budget specialists—not cost analysts—to provide the 
financial services to manage projects. Yet budget specialists were 
generally responsible for obligating and spending funds—not for 
conducting cost analyses that underlay the budget or ensuring that 
budgets were based on reasonable cost estimates—and, therefore, they 
tended to assume that the budget was realistic. 

Source: GAO, NASA: Lack of Disciplined Cost-Estimating Processes 
Hinders Effective Program Management, [hyperlink, 
http://www.gao.gov/products/GAO-04-642], Washington, D.C.: May 28, 
2004. 

[End of case study] 

Many cost estimating challenges can be traced to overoptimism. Cost 
analysts typically develop their estimates from technical baselines 
that program offices provide. Since program technical baselines come 
with uncertainty, recognizing this uncertainty can help form a better 
understanding of where problems will occur in the execution phase. For 
example, if a program baseline states that its total source lines of 
code will be 100,000 but the eventual total is 200,000, the cost will 
be underestimated. Or if the baseline states that the new program will 
reuse 80,000 from a legacy system but can eventually reuse only 10,000, 
the cost will be underestimated. This is illustrated in case study 5. 

Case Study 5: Recognizing Uncertainty, from Customs Service 
Modernization, GAO/AIMD-99-41: 
 
Software and systems development experts agree that early project 
estimates are imprecise by definition and that their inherent 
imprecision decreases during a project’s life cycle as more information 
becomes known. The experts emphasize that to be useful, each cost 
estimate should indicate its degree of uncertainty, possibly as an 
estimated range or qualified by some factor of confidence. The U.S. 
Customs Service did not reveal the degree of uncertainty of its cost 
estimate for the Automated Commercial Environment (ACE) program to 
managers involved in investment decisions. For example, Customs did not 
disclose that it made the estimate before fully defining ACE 
functionality. Instead, Customs presented its $1.05 billion ACE life-
cycle cost estimate as an unqualified point estimate. This suggests an 
element of precision that cannot exist for such an undefined system, 
and it obscures the investment risk remaining in the project. 
 
Source: GAO, Customs Service Modernization: Serious Management and 
Technical Weaknesses Must Be Corrected, [hyperlink, 
http://www.gao.gov/products/GAO/AMD-99-41], Washington, D.C.: Feb. 26, 
1999. 

[End of case study] 
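
The sizing sensitivity described before case study 5 can be made 
concrete with a minimal sketch. The simple linear productivity model, 
reuse factor, productivity, and labor rate below are hypothetical 
illustrations rather than recommended values; the sketch combines the 
two sizing errors from the example and shows only how optimistic size 
and reuse assumptions compound into a large cost underestimate. 

# Minimal sketch: sensitivity of a software cost estimate to sizing
# assumptions. All parameter values are hypothetical.

def software_cost(new_sloc, reused_sloc, reuse_factor=0.3,
                  sloc_per_hour=2.0, labor_rate=120.0):
    # Equivalent size counts reused code at a fraction of the effort of
    # new code; effort is equivalent SLOC divided by productivity.
    equivalent_sloc = new_sloc + reuse_factor * reused_sloc
    hours = equivalent_sloc / sloc_per_hour
    return hours * labor_rate

# Baseline assumption: 100,000 total SLOC, of which 80,000 are reused.
baseline = software_cost(new_sloc=20_000, reused_sloc=80_000)
# Realized outcome: 200,000 total SLOC, of which only 10,000 are reused.
actual = software_cost(new_sloc=190_000, reused_sloc=10_000)

print(f"Baseline estimate:         ${baseline:,.0f}")
print(f"Cost with realized sizing: ${actual:,.0f}")
print(f"Growth factor:             {actual / baseline:.1f}x")

[End of example] 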

Program proponents often postulate the availability of a new 
technology, only to discover that it is not ready when needed and 
program costs have increased. Proponents also often make assumptions 
about the complexity or difficulty of new processes, such as first-time 
integration efforts, which may turn out to be unrealistic. More time and 
effort lead directly to greater costs, as case study 6 demonstrates. 

Case Study 6: Using Realistic Assumptions, from Space Acquisitions, GAO-
07-96: 
 
In five of six space system acquisition programs GAO reviewed, program 
officials and cost estimators assumed when cost estimates were 
developed that critical technologies would be mature and available. 
They made this assumption even though the programs had begun without 
complete understanding of how long they would run or how much it would 
cost to ensure that the technologies could work as intended. After the 
programs began, and as their development continued, the technology 
issues ended up being more complex than initially believed. 

For example, for the National Polar-orbiting Operational Satellite 
System (NPOESS), DOD and the U.S. Department of Commerce committed 
funds for developing and producing satellites before the technology was 
mature. Only 1 of 14 critical technologies was mature at program 
initiation, and 1 technology was found to be less mature after the 
contractor conducted more verification testing. 

GAO found that the program was later beset by significant cost 
increases and schedule delays, partly because of technical problems 
such as the development of key sensors. 

Source: GAO, Space Acquisitions: DOD Needs to Take More Action to 
Address Unrealistic Initial Cost Estimates of Space Systems, 
[hyperlink, http://www.gao.gov/products/GAO-07-96], Washington, D.C.: 
Nov. 17, 2006. 

[End of case study] 

Collecting historical data and dedicating the time needed to do this 
continuously is another challenge facing cost estimators. Certain 
acquisition policy changes and pressured scheduling have had the 
unintended consequence of curtailing the generation of a great deal of 
historical data used for cost estimating. Outside of highly specific 
technology areas, it is often difficult for the cost analyst to collect 
the kinds of data needed to develop software cost estimates, valid 
CERs, and detailed engineering build-up estimates. 

In addition, limited program resources in terms of both funds and time 
often constrain broad participation in cost estimation processes and 
force the analyst or cost team to reduce the extent to which trade-off, 
sensitivity, and even uncertainty analyses are performed. Addressing 
these critical shortfalls is important and requires policy and cultural 
adjustments to fix. 

Program stability presents another serious challenge to cost analysts. 
A risk to the program also arises when the contractor knows the 
program’s budget. The contractor is pressured into presenting a cost 
estimate that fits the budget instead of a realistic estimate. Budget 
decisions drive program schedules and procurement quantities. If 
development funding is reduced, the schedule can stretch and costs can 
increase; if production funding is reduced, the quantity to be bought 
will typically decrease, causing unit procurement costs to 
increase. For example, projected savings from initiatives such as 
multiyear procurement—contracting for purchase of supplies or services 
for more than one program year—may disappear, as can be seen in case 
study 7. 

Case Study 7: Program Stability Issues, from Combating Nuclear 
Smuggling, GAO-06-389: 
 
According to officials of Customs and Border Protection (CBP) and the 
Pacific Northwest National Laboratory (PNNL), recurrent difficulties 
with project funding were the most important explanations of schedule 
delays. Specifically, according to Department of Homeland Security and 
PNNL officials, CBP had been chronically late in providing appropriated 
funds to PNNL, hindering its ability to meet program deployment goals. 
For example, PNNL did not receive its fiscal year 2005 funding until 
September 2005, the last month of the fiscal year. According to PNNL 
officials, because of this delay, some contracting activities in all 
deployment phases had had to be delayed or halted; the adverse effects 
on seaports were especially severe. For example, PNNL reported in 
August 2005 that site preparation work at 13 seaports had ceased 
because PNNL had not received its fiscal year 2005 funding allocation. 

Source: GAO, Combating Nuclear Smuggling: DHS Has Made Progress 
Deploying Radiation Detection Equipment at U.S. Ports-of-Entry, but 
Concerns Remain, [hyperlink, http://www.gao.gov/products/GAO-06-389], 
Washington, D.C.: Mar. 22, 2006. 

[End of case study] 

Stability issues can also arise when expected funding is cut. For 
example, if budget pressures cause breaks in production, highly 
specialized vendors may no longer be available or may have to 
restructure their prices to cover their risks. When this happens, 
unexpected schedule delays and cost increases usually result. A 
quantity change, even if it does not result in a production break, is a 
stability issue that can increase costs by affecting workload. Case 
study 8, from a GAO report on Navy shipbuilding, illustrates this 
point. 

Case Study 8: Program Stability Issues, from Defense Acquisitions, GAO-
05-183: 
 
Price increases contributed to growth in materials costs. For example, 
the price of array equipment on Virginia class submarines rose by $33 
million above the original price estimate. In addition to inflation, a 
limited supplier base for highly specialized and unique materials made 
ship materials susceptible to price increases. According to the 
shipbuilders, the low rate of ship production affected the stability of 
the supplier base. Some businesses closed or merged, leading to reduced 
competition for their services and higher prices. In some cases, the 
Navy lost its position as a preferred customer and the shipbuilder had 
to wait longer to receive materials. With a declining number of 
suppliers, more ship materials contracts went to single and sole source 
vendors. Over 75 percent of the materials for Virginia class 
submarines—reduced from 14 ships to 9 over a 10-year period—were 
produced by single source vendors. 
 
Source: GAO, Defense Acquisitions: Improved Management Practices Could 
Minimize Cost Growth in Navy Shipbuilding Programs, [hyperlink, 
http://www.gao.gov/products/GAO-05-183], Washington, D.C.: Feb. 28, 
2005. 

[End of case study] 
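
A minimal sketch can illustrate part of the quantity effect noted above 
and in case study 8: when a planned buy shrinks, the cheaper later 
units on the learning curve are never bought, so average unit cost 
rises. The first-unit cost and the 90 percent unit learning curve below 
are hypothetical, and the sketch ignores other effects, such as fixed 
overhead spread over fewer units and changes in the supplier base. 

# Minimal sketch: effect of a quantity cut on average unit cost under a
# notional 90 percent unit-theory learning curve. Values are hypothetical.
import math

def unit_cost(n, first_unit_cost=1_000.0, slope=0.90):
    # Unit-theory learning curve: cost of the nth unit is T1 * n**b,
    # where b = log2(slope).
    return first_unit_cost * n ** math.log(slope, 2)

def average_unit_cost(quantity):
    return sum(unit_cost(n) for n in range(1, quantity + 1)) / quantity

planned = average_unit_cost(14)   # planned quantity
reduced = average_unit_cost(9)    # reduced quantity
print(f"Average unit cost for 14 units: {planned:,.0f}")
print(f"Average unit cost for  9 units: {reduced:,.0f}")
print(f"Increase from the quantity cut: {reduced / planned - 1:.1%}")

[End of example] 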

Significantly accelerating (sometimes called crashing) development 
schedules also present risks. In such cases, technology tends to be 
incorporated before it is ready, tests are reduced or eliminated, or 
logistics support is not in place. As case study 9 shows, the result 
can be a reduction in costs in the short term but significantly 
increased long-term costs as problems are discovered, technology is 
back-fit, or logistics support is developed after the system is in the 
field. 

Case Study 9: Development Schedules, from Defense Acquisitions, GAO-06-
327: 

Time pressures caused the Missile Defense Agency (MDA) to stray from a 
knowledge-based acquisition strategy. Key aspects of product knowledge, 
such as technology maturity, are proven in a knowledge-based strategy 
before committing to more development. MDA followed a knowledge-based 
strategy for elements that were not being fielded, such as the Airborne 
Laser and Kinetic Energy Interceptor. But it allowed the Ground-Based 
Midcourse Defense program to concurrently mature its technology, 
complete design activities, and produce and field assets before end-to- 
end system testing—all at the expense of cost, quantity, and 
performance goals. For example, the performance of some program 
interceptors was questionable because the program was inattentive to 
quality assurance. If the block approach continued to feature 
concurrent activity as a means of acceleration, MDA’s approach might 
not be affordable for the considerable amount of capability that 
was yet to be developed and fielded. 

Source: GAO, Defense Acquisitions: Missile Defense Agency Fields 
Initial Capability but Falls Short of Original Goals, [hyperlink, 
http://www.gao.gov/products/GAO-06-327], Washington, D.C.: Mar. 15, 
2006. 

[End of case study] 

In developing cost estimates, analysts often fail to adequately address 
risk, especially risks that are outside the estimator’s control or that 
were never conceived to be possible. This can result in point estimates 
that give decision makers no information about their likelihood of 
success or give them meaningless confidence intervals. A risk analysis 
should be part of every cost estimate, but it should be performed by 
experienced analysts who understand the process and know how to use the 
appropriate tools. On numerous occasions, GAO has encountered cost 
estimates with meaningless confidence intervals because the analysts 
did not understand the underlying mathematics or tools. An example is 
given in case study 10. 

Case Study 10: Risk Analysis, from Defense Acquisitions, GAO-05-183: 
 
In developing cost estimates for eight case study ships, U.S. Navy cost 
analysts did not conduct uncertainty analyses to measure the 
probability of cost growth. Uncertainty analyses are particularly 
important, given uncertainties inherent in ship acquisition, such as 
the introduction of new technologies and the volatility of overhead 
rates. Despite the uncertainties, the Navy did not test the validity of 
the cost analysts’ assumptions in estimating construction costs for the 
eight case study ships, and it did not identify a confidence level for 
estimates. 

Specifically, it did not conduct uncertainty analyses, which generate 
values, within specified ranges, for parameters that are not precisely 
known. For example, if the number of hours to integrate a component 
into a ship is not precisely known, analysts may put in low and high 
values. The analysis will then generate costs for these 
variables, along with other variables such as weight, experience, and 
degree of rework. The result will be a range of estimates that enables 
cost analysts to make better decisions on likely costs. Instead, the 
Navy presented its cost estimates as unqualified point estimates, 
suggesting an element of precision that cannot exist early in the 
process. Other military services qualify their cost estimates by 
determining a confidence level of 50 percent. 

Source: GAO, Defense Acquisitions: Improved Management Practices Could 
Minimize Cost Growth in Navy Shipbuilding Programs, [hyperlink, 
http://www.gao.gov/products/GAO-05-183], Washington, D.C.: Feb. 28, 2005. 

[End of case study] 
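
The kind of uncertainty analysis described in case study 10 can be 
sketched minimally as follows. The triangular distributions, input 
ranges, and 80 percent confidence level are hypothetical choices made 
only for illustration; the difference between a selected confidence 
level and the point estimate is also one common way to size the 
contingency funding discussed next. 

# Minimal sketch of a cost uncertainty (Monte Carlo) analysis.
# Distributions, ranges, and the confidence target are hypothetical.
import random
import statistics

random.seed(1)

def one_trial():
    # Imprecisely known inputs, each drawn from a triangular distribution
    # (random.triangular takes low, high, mode).
    integration_hours = random.triangular(40_000, 90_000, 55_000)
    labor_rate = random.triangular(95, 140, 110)         # dollars per hour
    material_cost = random.triangular(8e6, 15e6, 10e6)   # dollars
    rework_factor = random.triangular(1.00, 1.30, 1.10)  # multiplier
    return (integration_hours * labor_rate + material_cost) * rework_factor

trials = sorted(one_trial() for _ in range(10_000))
median = trials[len(trials) // 2]        # roughly the 50 percent level
p80 = trials[int(0.80 * len(trials))]    # roughly the 80 percent level

print(f"Mean cost:             ${statistics.mean(trials):,.0f}")
print(f"50 percent confidence: ${median:,.0f}")
print(f"80 percent confidence: ${p80:,.0f}")
print(f"Contingency to reach 80 percent: ${p80 - median:,.0f}")

[End of example] 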

A risk analysis should be used to determine a program’s contingency 
funding. All development programs should have contingency funding 
because it is simply unreasonable to expect a program not to encounter 
problems. Problems always occur, and program managers need ready access 
to funding in order to resolve them without adversely affecting 
programs (for example, stretching the schedule). Unfortunately, budget 
cuts often target contingency funding, and in some cases such funding 
is not allowed by policy. Decision makers and budget analysts should 
understand that eliminating contingency funding is counterproductive. 
(See case study 11.) 

Case Study 11: Risk Analysis, from NASA, GAO-04-642: 

Only by quantifying cost risk can management make informed decisions 
about risk mitigation strategies. Quantifying cost risk also provides a 
benchmark for measuring future progress. Without this knowledge, NASA 
may have little specific basis for determining adequate financial 
reserves, schedule margins, and technical performance margins. Managers 
may thus not have the flexibility they need to address program, 
technical, cost, and schedule risks, as NASA policy requires. 

Source: GAO, NASA: Lack of Disciplined Cost-Estimating Processes 
Hinders Effective Program Management, [hyperlink, 
http://www.gao.gov/products/GAO-04-642], Washington, D.C.: May 28, 
2004. 

[End of case study] 

Too often, organizations commit to goals that are unattainable because 
they are overoptimistic about their ability to reach them. These 
decisions follow a thought process that accentuates the positive 
without truly understanding the pitfalls being faced; in other words, 
the decision makers avoid confronting risk. Recognizing and understanding 
risk is an important program management discipline, but most program 
managers believe they are dealing with risks when in fact they have 
created risk by their assumptions. History shows that program managers 
tend to be too optimistic. They believe that lessons learned from past 
programs will apply to their program and everything will work out fine. 
But a plan is by its nature meant to be optimistic, to ensure that the 
results will be successful. While program managers believe they build 
risk into their plan, they often do not put in enough. This is because 
they believe in the original estimates for the plan without allowing 
for additional changes in scope, schedule delays, or other elements of 
risk. In addition, in today’s competitive environment, contractor 
program managers may overestimate what their company can do compared to 
their competition, since they want to win. 

Since most organizations have a limited amount of money for addressing 
these issues, optimism is prevalent. To properly overcome this 
optimism, it is important to have an independent view. Through the 
program planning process, overoptimism can be tempered by challenging 
the assumptions the plan was based on. This can be done by 
independently assessing the likely outcomes, using comparative data or 
experts familiar with the planned effort. While this function can 
be performed by either inside or outside analysts, if the organization 
is not willing to address and understand the risks its program faces, 
it will have little hope of effectively managing and mitigating them. 
Having this “honest broker” approach helps bring to light issues that 
could limit the organization’s ability to succeed. Therefore, program 
managers and their organizations 
must understand the value and need for risk management by addressing 
risk proactively and having a plan should risks be realized. Doing so 
will enable the program management team to use this information to 
succeed in the future. 

Earned Value Management Challenges: 

OMB recommends that programs manage risk by applying EVM, among other 
ways. Reliable EVM data usually indicate monthly how well a program is 
performing in terms of cost, schedule, and technical matters. This 
information is necessary for proactive program management and risk 
mitigation. Such systems represent a best practice if implemented 
correctly, but qualified analytic staff are needed to validate and 
interpret the data. (See case study 12.) 
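
The monthly EVM data referred to here reduce to three cumulative 
quantities (planned value, earned value, and actual cost), from which 
the standard variances, performance indices, and an estimate at 
completion are computed. A minimal sketch using hypothetical values 
follows; case study 12 then illustrates that implementing an EVM system 
is considerably harder than the arithmetic. 

# Minimal sketch of standard EVM calculations from hypothetical data.
bac = 1_000_000.0   # budget at completion
bcws = 400_000.0    # planned value (budgeted cost of work scheduled) to date
bcwp = 350_000.0    # earned value (budgeted cost of work performed) to date
acwp = 420_000.0    # actual cost of work performed to date

cost_variance = bcwp - acwp        # negative means over cost
schedule_variance = bcwp - bcws    # negative means behind schedule
cpi = bcwp / acwp                  # cost performance index
spi = bcwp / bcws                  # schedule performance index
eac = bac / cpi                    # estimate at completion if CPI holds

print(f"Cost variance:     {cost_variance:,.0f}")
print(f"Schedule variance: {schedule_variance:,.0f}")
print(f"CPI: {cpi:.2f}   SPI: {spi:.2f}")
print(f"EAC (BAC / CPI):   {eac:,.0f}")

[End of example] 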

Case Study 12: Applying EVM, from Cooperative Threat Reduction, GAO-06-
692: 

In December 2005, a contractor’s self-evaluation stated that the EVM 
system for the chemical weapons destruction facility at Shchuch’ye, 
Russia, was fully implemented. DOD characterized the contractor’s EVM 
implementation as a “management failure,” citing a lack of experienced 
and qualified contractor staff. DOD withheld approximately $162,000 of 
the contractor’s award fee because of its concern about the EVM system. 
In March 2006, DOD officials stated that EVM was not yet a usable tool 
in managing the Shchuch’ye project. They stated that the contractor 
needed to demonstrate that it had incorporated EVM into project 
management rather than simply fulfilling contractual requirements. DOD 
expected the contractor to use EVM to estimate cost and schedule 
effects and their causes and, most importantly, to help eliminate or 
mitigate identified risks. The contractor’s EVM staff stated that they 
underestimated the effort needed to incorporate EVM data into the 
system, train staff, and develop EVM procedures. The contractor’s 
officials were also surprised by the number of man-hours required to 
accomplish these tasks, citing high staff turnover as contributing to 
the problem. According to the officials, working in a remote and 
isolated area caused many of the non-Russian employees to leave the 
program rather than extend their initial tour of duty. 

Source: GAO, Cooperative Threat Reduction: DOD Needs More Reliable Data 
to Better Estimate the Cost and Schedule of the Shchuch’ye Facility, 
[hyperlink, http://www.gao.gov/products/GAO-06-692], Washington, D.C.: 
May 31, 2006. 

[End of case study] 

Case study 12 shows that using EVM requires a cultural change. As with 
any initiative, an agency’s management must show an interest in EVM if 
its use is to be sustained. Executive personnel should understand EVM 
terms and analysis products if they expect program managers and teams 
to use them. Additionally, at the program level, EVM requires qualified 
staff to independently assess what was accomplished. EVM training 
should be provided and tracked at all levels of personnel. This does 
not always happen, and government agencies struggle with how to obtain 
qualified and experienced personnel. Perhaps the biggest challenge in 
using EVM is the trend to rebaseline programs. This happens when the 
current baseline is not adequate to complete all the work, causing a 
program to fall behind schedule or run over cost (see case study 13). A 
new baseline serves an important management control purpose when 
program goals can no longer be achieved: it gives perspective on the 
program’s current status. However, auditors should be aware that 
comparing the latest cost estimate with the most recent approved 
baseline provides an incomplete perspective on a program’s performance, 
because a rebaseline shortens the period of performance reported and 
resets the measurement of cost growth to zero. 

Case Study 13: Rebaselining, from NASA, GAO-04-642: 

Baseline development cost estimates for the programs GAO reviewed 
varied considerably from the programs’ initial baseline estimates. 
Development cost estimates of more than half the programs increased; 
for some programs, the increase was significant. The baseline 
development cost estimates for the 10 programs GAO reviewed in detail 
were rebaselined—that is, recalculated to reflect new costs, time 
periods, or resources associated with changes in program objectives, 
deliverables, or scope and plans. Although NASA provided specific 
reasons for the increased cost estimates and rebaselinings—such as 
delays in development or delivery of key system components and funding 
shortages—it did not have guidance for determining when rebaselinings 
were justified. Such criteria are important for instilling discipline 
in the cost estimating process. 

Source: GAO, NASA: Lack of Disciplined Cost-Estimating Processes 
Hinders Effective Program Management, [hyperlink, 
http://www.gao.gov/products/GAO-04-642], Washington, D.C.: May 28, 
2004. 

[End of case study] 

These challenges make it difficult for cost estimators to develop 
accurate estimates. Therefore, it is very important that agencies’ cost 
estimators have adequate guidance and training to help mitigate these 
challenges. In chapter 3, we discuss audit criteria related to cost 
estimating and EVM. We also identify some of the guidance we relied on 
to develop this guide. 

[End of Chapter 2] 

Chapter 3: Criteria For Cost Estimating, EVM, And Data Reliability: 

Government auditors use criteria as benchmarks for how well a program 
is performing. Criteria provide auditors with a context for what is 
required, what the program’s state should be, or what it was expected 
to accomplish. Criteria are the laws, regulations, policies, 
procedures, standards, measures, expert opinions, or expectations that 
define what should exist. When auditors conduct an audit, they should 
select criteria by whether they are reasonable, attainable, and 
relevant to the program’s objectives. 

Criteria include the: 

* purpose or goals that statutes or regulations have prescribed or that 
the audited entity’s officials have set, 

* policies and procedures the audited entity’s officials have 
established, 

* technically developed norms or standards, 

* expert opinions, 

* earlier performance, 

* performance in the private sector, and, 
 
* leading organizations’ best practices. 

In developing this guide, we researched legislation, regulations, 
policy, and guidance for the criteria that most pertained to cost 
estimating and EVM. Our research showed that while DOD has by far the 
most guidance on cost estimating and EVM, civilian agencies are 
starting to develop their own policies and guidance. 
Therefore, we intend this guide as a starting point for auditors to 
identify criteria. 

For each new engagement, however, GAO auditors should exercise 
diligence to see what, if any, new legislation, regulation, policy, and 
guidance exists. Auditors also need to decide whether criteria are 
valid. Circumstances may have changed since the criteria were 
established, and the criteria may no longer conform to sound management 
principles or reflect current 
conditions. In such cases, GAO needs to select or develop criteria that 
are appropriate for the engagement’s objectives. Table 3 lists criteria 
related to cost estimating and EVM. Each criterion is described in more 
detail in appendix V. 

Table 3: Cost Estimating and EVM Criteria for Federal Agencies: 
Legislation, Regulations, Policies, and Guidance: 

Type: Legislation or regulation: 

Date: 1968; 
Title: SAR: Selected Acquisition Reports, 10 U.S.C. § 2432 (2006); 
Applicable agency: DOD; 
Notes: Became permanent law in 1982; applies only to DOD’s major 
defense acquisition programs. 

Date: 1982; 
Title: Unit Cost Reports (“Nunn-McCurdy”); 10 U.S.C. § 2433 (2006); 
Applicable agency: DOD; 
Notes: Applies only to DOD’s major defense acquisition programs. 

Date: 1983; 
Title: Independent Cost Estimates; Operational Manpower Requirements, 
10 U.S.C. § 2434 (2006); 
Applicable agency: DOD; 
Notes: Applies only to DOD’s major defense acquisition programs. 

Date: 1993; 
Title: GPRA: Government Performance and Results Act, Pub. L. No. 103-62 
(1993); 
Applicable agency: All; 
Notes: Requires agencies to prepare (1) multiyear strategic plans 
describing mission goals and methods for reaching them and (2) annual 
program performance reports to review progress toward annual 
performance goals. 

Date: 1994; 
Title: The Federal Acquisition Streamlining Act of 1994, § 5051(a), 41 
U.S.C. § 263 (2000); 
Applicable agency: All civilian agencies; 
Notes: Established congressional policy that agencies should achieve, 
on average, 90 percent of cost, performance, and schedule goals 
established for their major acquisition programs; requires an agency to 
approve or define cost, performance, and schedule goals and to 
determine whether there is a continuing need for programs that are 
significantly behind schedule, over budget, or not in compliance with 
performance or capability requirements and to identify suitable 
actions to be taken. 

Date: 1996; 
Title: CCA: Clinger-Cohen Act of 1996, 40 U.S.C. §§ 11101–11704 (Supp. 
V 2005); 
Applicable agency: All; 
Notes: Requires agencies to base decisions about information technology 
investments on quantitative and qualitative factors associated with 
their costs, benefits, and risks and to use performance data to 
demonstrate how well expenditures support program improvements. 

Date: 2006; 
Title: Major Automated Information System Programs, 10 U.S.C. §§ 2445a 
– 2445d (2006); 
Applicable agency: DOD; 
Notes: Oversight requirements for DOD’s major automated information 
system (MAIS) programs, including estimates of development costs and 
full life-cycle costs as well as program baseline and variance 
reporting requirements. 

Date: 2006; 
Title: Federal Acquisition Regulation (FAR), Major Systems Acquisition, 
48 C.F.R. part 34, subpart 34.2, Earned Value Management System; 
Applicable agency: All; 
Notes: Earned Value Management System policy was added by Federal 
Acquisition Circular 2005-11, July 5, 2006, Item I—Earned Value 
Management System (EVMS) (FAR Case 2004-019). 

Date: 2008; 
Title: Defense Federal Acquisition Regulation Supplement; Earned Value 
Management Systems (DFARS Case 2005–D006), 73 Fed. Reg. 21,846 (April 
23, 2008), primarily codified at 48 C.F.R. subpart 234.2, and part 252 
(sections 252.234-7001 and 7002); 
Applicable agency: DOD; 
Notes: DOD’s final rule (1) amending the Defense Federal Acquisition 
Regulation Supplement (DFARS) to update requirements for DOD 
contractors to establish and maintain EVM systems and (2) eliminating 
requirements for DOD contractors to submit cost/schedule status 
reports. 

Policy: 

Date: 1976; 
Title: OMB, Major Systems Acquisitions, Circular A-109 (Washington, 
D.C.: Apr. 5, 1976); 
Applicable agency: All; 
Notes: [Empty]. 

Date: 1992; 
Title: OMB, Guidelines and Discount Rates for Benefit-Cost Analysis of 
Federal Programs, Circular No. A-94 Revised (Washington, D.C.: Oct. 29, 
1992); 
Applicable agency: All; 
Notes: [Empty]. 

Date: 1995; 
Title: DOD, Economic Analysis for Decisionmaking, Instruction No. 
7041.3 (Washington, D.C.: USD, Nov. 7, 1995); 
Applicable agency: DOD; 
Notes: [Empty]. 

Date: 2003; 
Title: DOD, The Defense Acquisition System, Directive No. 5000.1 
(Washington, D.C.: USD, May 12, 2003). Redesignated 5000.01 and 
certified current as of Nov. 20, 2007. 
Applicable agency: DOD; 
Notes: States that every program manager must establish program goals 
for the minimum number of cost, schedule, and performance parameters 
that describe the program over its life cycle and identify any 
deviations. 

Date: 2003; 
Title: DOD, Operation of the Defense Acquisition System, Instruction 
No. 5000.2 (Washington, D.C.: USD, May 12, 2003). Cancelled and 
reissued by Instruction No. 5000.02 on Dec. 8, 2008. 
Applicable agency: DOD; 
Notes: Describes the standard framework for defense acquisition 
systems: defining the concept, analyzing alternatives, developing 
technology, developing the system and demonstrating that it works, 
producing and deploying the system, and operating and supporting it 
throughout its useful life. 

Date: 2004; 
Title: National Security Space Acquisition Policy, Number 03-01, 
Guidance for DOD Space System Acquisition Process (Washington, D.C.: 
revised Dec. 27, 2004); 
Applicable agency: DOD; 
Notes: [Empty]. 

Date: 2005; 
Title: DOD, “Revision to DOD Earned Value Management Policy,” 
memorandum, Under Secretary of Defense, Acquisition, Technology, and 
Logistics (Washington, D.C.: Mar. 7, 2005); 
Applicable agency: DOD; 
Notes: [Empty]. 

Date: 2005 
Title: OMB, “Improving Information Technology (IT) Project Planning and 
Execution,” memorandum for Chief Information Officers No. M-05-23 
(Washington, D.C.: Aug. 4, 2005); 
Applicable agency: All; 
Notes: [Empty]. 

Date: 2006; 
Title: OMB, Capital Programming Guide, Supplement to Circular A-11, 
Part 7, Preparation, Submission, and Execution of the Budget 
(Washington, D.C.: Executive Office of the President, June 2006); 
Applicable agency: All; 
Notes: [Empty]. 

Date: 2006 
Title: DOD, Cost Analysis Improvement Group (CAIG), Directive No. 
5000.04 (Washington, D.C.: Aug. 16, 2006) 
Applicable agency: DOD; 
Notes: [Empty]. 

Guidance: 

Date: 1992; 
Title: DOD, Cost Analysis Improvement Group (CAIG), Operating and 
Support Cost-Estimating Guide (Washington, D.C.: Office of the 
Secretary of Defense, May 1992); 
Applicable agency: DOD; 
Notes: [Empty]. 

Date: 1992; 
Title: DOD, Cost Analysis Guidance and Procedures, DOD Directive 5000.4-
M (Washington, D.C.: OSD, Dec. 11, 1992); 
Applicable agency: DOD; 
Notes: [Empty]. 

Date: 2003; 
Title: DOD, The Program Manager’s Guide to the Integrated Baseline 
Review Process (Washington, D.C.: OSD, April 2003); 
Applicable agency: DOD; 
Notes: [Empty]. 

Date: 2004; 
Title: National Defense Industrial Association (NDIA), Program 
Management Systems Committee (PMSC), Surveillance Guide (Arlington, 
Va.: October 2004); 
Applicable agency: All; 
Notes: [Empty]. 

Date: 2005; 
Title: National Defense Industrial Association (NDIA), Program 
Management Systems Committee (PMSC), Earned Value Management Systems 
Intent Guide (Arlington, Va.: January 2005); 
Applicable agency: All; 
Notes: [Empty]. 

Date: 2006; 
Title: Defense Contract Management Agency, Department of Defense Earned 
Value Management Implementation Guide (Alexandria, Va.: October 2006); 
Applicable agency: DOD, FAA, NASA; 
Notes: [Empty]. 

Date: 2006; 
Title: National Defense Industrial Association, Program Management 
Systems Committee, “NDIA PMSC ANSI/EIA 748 Earned Value Management 
System Acceptance Guide,” draft, working release for user comment 
(Arlington, Va.: November 2006); 
Applicable agency: All; 
Notes: [Empty]. 

Date: 2007 
Title: American National Standards Institute, Information Technology 
Association of America, Earned Value Management Systems (ANSI/EIA 748-
B) (Arlington, Va.: July 9, 2007); 
Applicable agency: All; 
Notes: [Empty]. 

Date: 2007 
Title: National Defense Industrial Association, Program Management 
Systems Committee, “NDIA PMSC Earned Value Management Systems 
Application Guide,” draft, working release for user comment (Arlington, 
Va.: March 2007); 
Applicable agency: All; 
Notes: [Empty]. 

Source: GAO, DOD, and OMB. 

[End of table] 

Determining Data Reliability: 
 
Auditors need to collect data produced from both a program’s cost 
estimate and its EVM system. They can collect these data by 
questionnaires, structured interviews, direct observations, or 
computations, among other methods. (Appendix VI is a sample data 
collection instrument; appendix VII gives reasons why auditors need the 
information.) After auditors have collected their data, they must judge 
the data for integrity as well as for quality in terms of validity, 
reliability, and consistency with fact. 

For cost estimates, auditors must confirm that, at minimum, internal 
quality control checks show that the data are reliable and valid. To do 
this, they must have the source data and understand the rationale for 
each cost element, to verify (as the sketch following this list 
illustrates) that: 

* the parameters (or input data) used to create the estimate are valid 
and applicable,[Footnote 22] 

* labor costs include a time-phased breakdown of labor hours and rates, 

* the calculations for each cost element are correct and the results 
make sense, 

* the program cost estimate is an accurate total of subelement costs, 
and; 

* escalation was properly applied to account for differences in the 
price of goods and services over time. 
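
Several of these checks, such as confirming that subelement costs sum 
to the program total and that escalation was applied consistently, lend 
themselves to simple recomputation. In the minimal sketch below, the 
WBS elements, hours, rates, and escalation rate are hypothetical. 

# Minimal sketch: recomputing a cost estimate roll-up and escalation.
# WBS elements, hours, rates, and the escalation rate are hypothetical.
base_year = 2009
escalation_rate = 0.02   # assumed annual escalation

# (element, labor hours, labor rate, material cost, year of expenditure)
wbs = [
    ("Air vehicle",         120_000,  95.0, 4_500_000, 2011),
    ("Systems engineering",  40_000, 110.0,         0, 2010),
    ("Training",             15_000,  80.0,   600_000, 2012),
]

def then_year_cost(hours, rate, material, year):
    # Base-year cost escalated to the year of expenditure.
    base_cost = hours * rate + material
    return base_cost * (1 + escalation_rate) ** (year - base_year)

total = 0.0
for name, hours, rate, material, year in wbs:
    cost = then_year_cost(hours, rate, material, year)
    total += cost
    print(f"{name:<22} ${cost:,.0f}")
print(f"{'Program total':<22} ${total:,.0f}")
# An auditor would compare this recomputed total with the documented
# estimate and question any differences.

[End of example] 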

Auditors should clarify with cost estimators issues about data and 
methodology. For example, they might ask what adjustments were made to 
account for differences between the new and existing systems with 
respect to design, manufacturing processes, and types of materials. In 
addition, auditors should look for multiple sources of data that 
converge toward the same number, in order to gain confidence in the 
data used to create the estimate. 

It is particularly important that auditors understand problems 
associated with the historical data—such as program redesign, schedule 
slips, and budget cuts—and whether the cost estimators “cleansed the 
data” to remove their effects. According to experts in the cost 
community, program inefficiencies should not be removed from historical 
data, since the development of most complex systems usually encounters 
problems. The experts stress that removing the effects of past 
problems assumes the new program will encounter none, which is naïve 
and introduces unnecessary risk. (This topic is 
discussed in chapter 10.) 

With regard to EVM, auditors should request a copy of the system 
compliance or validation letter that shows the contractor’s ability to 
satisfy the 32 EVM guidelines (discussed in chapter 18).[Footnote 23] 
These guidelines are test points to determine the quality of a 
contractor’s EVM system. Contract performance reports (CPR) formally 
submitted to the agency should be examined for reasonableness, 
accuracy, and consistency with other program status reports as a 
continuous measure of the EVM system quality and robustness. Auditors 
should also request a copy of the integrated baseline review (IBR) 
results (also discussed in chapter 18) to see what risks were 
identified and whether they were mitigated. Auditors should request 
copies of internal management documents or reports that use EVM data to 
ensure that EVM is being used for management, not just for external 
reporting. Finally, to ensure that EVM data are valid and accurate, 
auditors should look for evidence that EVM analysis and surveillance 
are performed regularly by staff trained in this specialty. 

[End of Chapter 3] 

Chapter 4: Cost Analysis Overview: 

Although “cost estimating” and “cost analysis” are often used 
interchangeably, cost estimating is a specific activity within cost 
analysis. Cost analysis is a powerful tool, because it requires a 
rigorous and systematic analysis that results in a better understanding 
of the program being acquired. This understanding, in turn, leads to 
improved program management in applying resources and mitigating 
program risks. 

Differentiating Cost Analysis And Cost Estimating: 
 
Cost analysis, used to develop cost estimates for such things as 
hardware systems, automated information systems, civil projects, 
manpower, and training, can be defined as: 
 
* the effort to develop, analyze, and document cost estimates with 
analytical approaches and techniques;

* the process of analyzing, interpreting, and estimating the 
incremental and total resources required to support past, present, and 
future systems—an integral step in selecting alternatives; and; 

* a tool for evaluating resource requirements at key milestones and 
decision points in the acquisition process. 

Cost estimating involves collecting and analyzing historical data and 
applying quantitative models, techniques, tools, and databases to 
predict a program’s future cost. More simply, cost estimating combines 
science and art to predict the future cost of something based on known 
historical data that are adjusted to reflect new materials, technology, 
software languages, and development teams. 
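
For example, a simple cost estimating relationship can be fit to a 
handful of historical data points and then applied to the new program. 
The historical programs, the weight-based cost driver, and the linear 
form in the sketch below are hypothetical; they illustrate only the 
mechanics of turning historical data into a prediction. 

# Minimal sketch: fitting and applying a simple cost estimating
# relationship (CER). The data points and cost driver are hypothetical.

# Historical programs: (weight in pounds, cost in millions, base-year dollars)
history = [(1_200, 95.0), (1_800, 130.0), (2_500, 180.0), (3_100, 215.0)]

n = len(history)
mean_w = sum(w for w, _ in history) / n
mean_c = sum(c for _, c in history) / n
slope = (sum((w - mean_w) * (c - mean_c) for w, c in history)
         / sum((w - mean_w) ** 2 for w, _ in history))
intercept = mean_c - slope * mean_w
print(f"CER: cost = {intercept:.1f} + {slope:.4f} * weight")

# Apply the CER to the new program, adjusted for what is known today.
new_weight = 2_800
print(f"Estimated cost at {new_weight} lb: "
      f"{intercept + slope * new_weight:.0f} million")

[End of example] 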

Because cost estimating is complex, sophisticated cost analysts should 
combine concepts from such disciplines as accounting, budgeting, 
computer science, economics, engineering, mathematics, and statistics 
and should even employ concepts from marketing and public affairs. And 
because cost estimating requires such a wide range of disciplines, it 
is important that the cost analyst either be familiar with these 
disciplines or have access to an expert in these fields. 

Main Cost Estimate Categories: 

Auditors are likely to encounter two main cost estimate categories: 

* a life-cycle cost estimate (LCCE) that may include independent cost 
estimates, independent cost assessments, or total ownership costs, and, 
 
* a business case analysis (BCA) that may include an analysis of 
alternatives or economic analyses. 

Auditors may also review other types of cost estimates, such as 
independent cost assessments (ICA), nonadvocate reviews (NAR), and 
independent government cost estimates (IGCE). These types of estimates 
are commonly developed by civilian agencies. 

Life-Cycle Cost Estimate: 
 
A life-cycle cost estimate provides an exhaustive and structured 
accounting of all resources and associated cost elements required to 
develop, produce, deploy, and sustain a particular program. Life cycle 
can be thought of as a “cradle to grave” approach to managing a program 
throughout its useful life. This entails identifying all cost elements 
that pertain to the program from initial concept all the way through 
operations, support, and disposal. An LCCE encompasses all past (or 
sunk), present, and future costs for every aspect of the program, 
regardless of funding source. 

Life-cycle costing enhances decision making, especially in early 
planning and concept formulation of acquisition. Design trade-off 
studies conducted in this period can be evaluated on a total cost 
basis, as well as on a performance and technical basis. A life-cycle 
cost estimate can support budgetary decisions, key decision points, 
milestone reviews, and investment decisions. 

The LCCE usually becomes the program’s budget baseline. Using the LCCE 
to determine the budget helps to ensure that all costs are fully 
accounted for so that resources are adequate to support the program. 
DOD identifies four phases that an LCCE must address: research and 
development, procurement and investment, operations and support, and 
disposal. Civilian agencies may refer to the first two as development, 
modernization, and enhancement and may include in them acquisition 
planning and funding. Similarly, civilian agencies may refer to 
operations and support as “steady state” and include them in operations 
and maintenance activities. Although these terms mean essentially the 
same thing, they can differ from agency to agency. DOD’s four phases 
are described below. 

1. Research and development include development and design costs for 
system engineering and design, test and evaluation, and other costs for 
system design features. They include costs for development, design, 
startup, initial vehicles, software, test and evaluation, special 
tooling and test equipment, and facility changes. 

2. Procurement and investment include total production and deployment 
costs (e.g., site activation, training) of the prime system and its 
related support equipment and facilities. Also included are any related 
equipment and material furnished by the government, initial spare and 
repair parts, interim contractor support, and other efforts. 

3. Operations and support are all direct and indirect costs incurred in 
using the prime system—manpower, fuel, maintenance, and support—through 
the entire life cycle. Also included are sustaining engineering and 
other collateral activities. 

4. Disposal, or inactivation, includes the costs of disposing of the 
prime equipment after its useful life. 

Because they encompass all possible costs, LCCEs provide a wealth of 
information about how much programs are expected to cost over time. 
This information can be displayed visually to show what funding is 
needed at a particular time and when the program is expected to move 
from one phase to another. For example, figure 3 is a life-cycle cost 
profile for a hypothetical space system. 

Figure 3: Life-Cycle Cost Estimate for a Space System: 

[Refer to PDF for image: line graph] 

Space system life cycle: 
Phase B ATP; 
Final design; 
O&M Support start; 
Launch 1; 
Launch 2; 
IOC; 
Launch 3; 
Launch N; 
FOC. 

RDT&E: Includes development and production of first two vehicles; 
Follow-on buys occur after final design verification; 
Procurement: Includes production of follow-on buys (typically lots of 2 
or 3 SVs); 
O&M staff in place before launch 1; 
O&M: Operators and controllers through system EOL. 

Source: DOD. 

Note: O&M = operations and maintenance; 
RDT&E = research, development, test, and evaluation; 
SV = space vehicle; 
EOL = end of life; 
IOC = initial operational capacity; 
FOC = full operational capacity. 

[End of figure] 

Figure 3 illustrates how space systems must invest heavily in research 
and development because once a system is launched into space, it cannot 
be retrieved for maintenance. Other systems such as aircraft, ships, 
and information technology systems typically incur hefty operations 
costs in relation to development and production costs. Such mission 
operations costs are very large because the systems can be retrieved 
and maintained and therefore require sophisticated logistics support 
and recurring broad-based training for large user populations. Thus, 
having full life-cycle costs is important for successfully planning 
program resource requirements and making wise decisions. 
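
A life-cycle cost profile such as the one in figure 3 is, at bottom, a 
roll-up of phase costs by fiscal year. The phases, years, and amounts 
in the minimal sketch below are hypothetical. 

# Minimal sketch: rolling a life-cycle cost estimate into an annual
# funding profile by phase. Phases, years, and amounts are hypothetical.
from collections import defaultdict

# (phase, fiscal year, cost in millions of then-year dollars)
lcce = [
    ("RDT&E", 2010, 120), ("RDT&E", 2011, 150), ("RDT&E", 2012, 90),
    ("Procurement", 2012, 200), ("Procurement", 2013, 260),
    ("Procurement", 2014, 240),
    ("O&S", 2013, 40), ("O&S", 2014, 70), ("O&S", 2015, 110),
    ("Disposal", 2025, 30),
]

by_year = defaultdict(float)
by_phase = defaultdict(float)
for phase, year, cost in lcce:
    by_year[year] += cost
    by_phase[phase] += cost

for year in sorted(by_year):
    print(f"FY{year}: {by_year[year]:>6,.0f}")
for phase, cost in by_phase.items():
    print(f"{phase:<12} {cost:,.0f}")
print(f"Life-cycle total: {sum(by_phase.values()):,.0f}")

[End of example] 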

Business Case Analysis: 
 
A business case analysis, sometimes referred to as a cost benefit 
analysis, is a comparative analysis that presents facts and supporting 
details among competing alternatives. A BCA considers not only all the 
life-cycle costs that an LCCE identifies but also quantifiable and 
nonquantifiable benefits. It should be unbiased by considering all 
possible alternatives and should not be developed solely for supporting 
a predetermined solution. Moreover, a BCA should be rigorous enough 
that independent auditors can review it and clearly understand why a 
particular alternative was chosen. 

A BCA seeks to find the best value solution by linking each alternative 
to how it satisfies a strategic objective. Each alternative should 
identify the: 
 
* relative life-cycle costs and benefits; 

* methods and rationale for quantifying the life-cycle costs and 
benefits; 

* effect and value of cost, schedule, and performance tradeoffs; 

* sensitivity to changes in assumptions; and; 

* risk factors. 

On the basis of this information, the BCA then recommends the best 
alternative. In addition to supporting an investment decision, the BCA 
should be considered a living document and should be updated often to 
reflect changes in scope, schedule, or budget. In this way, the BCA is 
a valuable tool for validating decisions to sustain or enhance the 
program. 

Auditors may encounter other estimates that fall into one of the two 
main categories of cost estimates. For example, an auditor may examine 
an independent cost estimate, independent cost assessment, independent 
government cost estimate, total ownership cost, or rough order of 
magnitude estimate—all variations of a life-cycle cost estimate. 
Similarly, instead of reviewing a business case analysis, an auditor 
may review an analysis of alternatives (AOA), a cost-effectiveness 
analysis (CEA), or an economic analysis (EA). Each of these analyses is 
a variation, in one form or another, of a BCA. Table 4 looks more 
closely at the different types of cost estimates that can be developed. 

Table 4: Life-Cycle Cost Estimates, Types of Business Case Analyses, 
and Other Types of Cost Estimates: 
 
Life-cycle cost estimate: 

Estimate type: Independent cost estimate; 
Level of effort: Usually requires a large team, may take many months to 
accomplish, 
and addresses the full LCCE; 
Description: An ICE, conducted by an organization independent of 
the acquisition chain of command, is based on the same 
detailed technical and procurement information used 
to make the baseline estimate—usually the program or 
project LCCE. ICEs are developed to support new programs 
or conversion, activation, modernization, or service life 
extensions and to support DOD milestone decisions for 
major defense acquisition programs.[A] 
 
An estimate might cover a program’s entire life cycle, 
one program phase, or one high-value, highly visible, or 
high-interest item within a phase. ICEs are used primarily 
to validate program or project LCCEs and are typically 
reconciled with them. 

Because the team performing the ICE is independent, it 
provides an unbiased test of whether the program office 
cost estimate is reasonable. It is also used to identify risks 
related to budget shortfalls or excesses. 

Estimate type: Total ownership cost estimate; 
Level of effort: Requires a large team, may take many months to 
accomplish, and addresses the full LCCE; 
Description: Related to LCCE but broader in scope, a total ownership 
cost estimate consists of the elements of life-cycle cost plus some 
infrastructure and business process costs not necessarily 
attributable to a program. 

Infrastructure includes acquisition and central logistics 
activities; nonunit central training; personnel administration 
and benefits; medical care; and installation, communications, 
and information infrastructure to support military bases. It is 
normally found in DOD programs. 

Business case analysis: 

Estimate type: Analysis of alternatives and cost effectiveness 
analysis; 
Level of effort: Requires a large team, may take many months to 
accomplish, and addresses the full LCCE; 
Description: AOA compares the operational effectiveness, suitability, 
and LCCE of alternatives that appear to satisfy established capability 
needs. Its major components are a CEA and cost analysis. 

AOAs try to identify the most promising of several conceptual 
alternatives; analysis and conclusions are typically used to 
justify initiating an acquisition program. An AOA also looks at 
mission threat and dependencies on other programs. 

When an AOA cannot quantify benefits, a CEA is more 
appropriate. A CEA is conducted whenever it is unnecessary 
or impractical to consider the dollar value of benefits, as when 
various alternatives have the same annual monetary benefits. 
Both the AOA and CEA should address each alternative’s 
advantages, disadvantages, associated risks, and uncertainties 
and how they might influence the comparison. 
 
Estimate type: Economic analysis and cost benefit analysis; 
Level of effort: Requires a large team, may take many months to 
accomplish, and addresses the full LCCE; 
Description: EA is a conceptual framework for systematically 
investigating problems of choice. Posing various alternatives for 
reaching an objective, it analyzes the LCCE and benefits of each one, 
usually with a return on investment analysis. 

Present value is also an important concept: Since an LCCE 
does not consider the time value of money, it is necessary to 
determine when expenditures for alternatives will be made. 

EA expands cost analysis by examining the effects of the time 
value of money on investment decisions. After cost estimates 
have been generated, they must be time-phased to allow for 
alternative expenditure patterns. Assuming equal benefits, 
the alternative with the least present value cost is the most 
desirable: it implies a more efficient allocation of resources. 

Other: 

Estimate type: Rough order of magnitude; 
Level of effort: May be done by a small group or one person; can be 
done in hours, days, or weeks; and may cover only a portion of the 
LCCE; 
Description: Developed when a quick estimate is needed and few details 
are available. Usually based on historical ratio information, it is 
typically developed to support what-if analyses and can be developed 
for a particular phase or portion of an estimate or for the entire 
cost estimate, depending on available data. It is helpful for examining 
differences in high-level alternatives to see which are the most 
feasible. Because it is developed from limited data and in a short 
time, a rough order of magnitude analysis should never be considered a 
budget-quality cost estimate. 

Estimate type: Independent cost assessment; 
Level of effort: Requires a small group; may take months to accomplish, 
depending on how much of the LCCE is being reviewed; 
Description: An ICA is an outside, nonadvocate’s evaluation of a cost 
estimate’s quality and accuracy, looking specifically at a program’s 
technical approach, risk, and acquisition strategy to ensure that the 
program’s cost estimate captures all requirements. 

Typically requested by a program manager or outside source, it may be 
used to determine whether the cost estimate reflects the program of 
record. It is not as formal as an ICE and does not have to be performed 
by an organization independent of the acquisition chain of command, 
although it usually is. 

An ICA usually does not address a program’s entire life cycle. 
 
Estimate type: Independent government cost estimate; 
Level of effort: Requires a small group, may take months to accomplish, 
and covers only the LCCE phase under contract; 
Description: An IGCE is conducted to check the reasonableness of a 
contractor’s cost proposal and to make sure that the offered prices are 
within the budget range for a particular program. 

The program manager submits it as part of a request for contract 
funding. It documents the government’s assessment of the program’s most 
probable cost and ensures that enough funds are available to execute 
it. It is also helpful in assessing the feasibility of individual tasks 
to determine if the associated costs are reasonable. 
 
Estimate type: Estimate at completion; 
Level of effort: Requires nominal effort once all EVM data are on hand 
and have been determined reliable; covers only the LCCE phase under 
contract; 
Description: An EAC is an independent assessment of the cost to 
complete authorized work based on a contractor’s historical EVM 
performance. 

It uses various EVM metrics to forecast the expected final cost: 
EAC = actual costs incurred + (budgeted cost for work remaining / EVM 
performance factor). 

The performance factor can be based on many different EVM metrics that 
capture cost and schedule status to date. (A brief illustrative sketch 
of this computation follows the table.) 
 
Source: GAO, DOD, NIH, OMB, and SCEA. 

[A] For more detail, see app. V, ICEs, 10 U.S.C. § 2434. 

[End of table] 
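
The estimate-at-completion formula in table 4 can be illustrated with 
the brief sketch below, written in Python. All values are hypothetical, 
and the cumulative cost performance index used as the performance 
factor is only one of the many possible EVM metrics the table mentions. 

# Minimal sketch of the EAC formula from table 4:
#   EAC = actual costs incurred + (budgeted cost for work remaining /
#         EVM performance factor)
# All values are hypothetical; the performance factor here is a simple
# cumulative cost performance index (CPI), one of several possible
# choices.

acwp = 450.0    # actual cost of work performed to date ($M)
bcwp = 400.0    # budgeted cost of work performed (earned value, $M)
bac = 1000.0    # budget at completion ($M)

cpi = bcwp / acwp                  # cumulative cost efficiency to date
work_remaining = bac - bcwp        # budgeted cost for work remaining
eac = acwp + work_remaining / cpi  # estimate at completion

print(f"CPI = {cpi:.2f}; EAC = ${eac:,.1f}M versus BAC = ${bac:,.1f}M")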

The Overall Significance Of Cost Estimates: 

Not an end in itself, cost estimating is part of a total systems 
analysis. It is a critical element in any acquisition process and helps 
decision makers evaluate resource requirements at milestones and other 
important decision points. 

Cost estimates: 
 
* establish and defend budgets; and 

* drive affordability analysis. 

Cost estimates are integral to determining and communicating a 
realistic view of likely cost and schedule outcomes that can be used to 
plan the work necessary to develop, produce, install, and support a 
program. 

Cost estimating also provides valuable information to help determine 
whether a program is feasible, how it should be designed, and the 
resources needed to support it. Further, cost estimating is necessary 
for making program, technical, and schedule analyses and to support 
other processes such as: 
 
* selecting sources; 

* assessing technology changes, analyzing alternatives, and performing 
design trade-offs; and 

* satisfying statutory and oversight requirements. 

Cost Estimates In Acquisition: 
 
An acquisition program focuses on the cost of developing and procuring 
an end item and whether enough resources and funding are available. The 
end product of the acquisition process is a program capability that 
meets its users’ needs at a reasonable price. During the acquisition 
process, decisions must be made on how best to consume labor, capital, 
equipment, and other finite resources. A realistic cost estimate allows 
better decision making, in that an adequate budget can accomplish the 
tasks that ultimately increase a program’s probability of success. 

Acquisition is an event-driven process, in that programs must typically 
pass through various milestones or investment reviews in which they are 
held accountable for their accomplishments. Cost estimates play an 
important role in these milestone or investment decisions. For example, 
in government programs, a cost estimate should be validated if a major 
program is to continue through its many acquisition reviews and other 
key decision points. 

Validation involves testing an estimate to see if it is reasonable and 
includes all necessary costs. Testing can be as simple as comparing 
results with historical data from similar programs or using another 
estimating method to see if results are similar. Industry requires 
similar scrutiny throughout development, in what is commonly referred 
to as passing through specific gates. 

Once a cost estimate has been accepted and approved, it should be 
updated periodically as the program matures and as schedules and 
requirements change. Updated estimates help give management control 
over a project’s resources when new requirements are called for under 
tight budget conditions. This is especially important early in a 
project, when less is known about requirements and the opportunity for 
change (and cost growth) is greater. As more knowledge is gained, 
programs can retire some risk and reduce the potential for unexpected 
cost and schedule growth. 

Cost estimates tend to become more certain as actual costs begin to 
replace earlier estimates. This happens when risks are either mitigated 
or realized. If risks actually occur, the resulting cost growth becomes 
absorbed by the cost estimate. 

For this reason, it is important to continually update estimates with 
actual costs, so that management has the best information available for 
making informed decisions. In addition, narrow risk ranges should be 
viewed as suspect, because more cost estimates tend to overrun than 
underrun. These processes are illustrated in what is commonly called 
the “cone of uncertainty,” depicted in figure 4. 

Figure 4: Cone of Uncertainty: 

[Refer to PDF for image: illustration] 

Cost estimate baseline; 

Concept refinement gate: 
Technology development gate: 
Start of program and start of system integration gate: 

Uncertainty about cost estimate is high; 
Estimate becomes more certain as program progresses; 
Estimate tends to grow over time as risks are realized; 
Uncertainty is low. 

Source: GAO. 

[End of figure] 

It is important to have a track record of the estimate so one can 
measure growth from what the estimate should have been. Therefore, 
tying growth and risk together is critical because the risk 
distribution identifies the range of anticipated growth. 

The Importance Of Cost Estimates In Establishing Budgets: 

A program’s approved cost estimate is often used to create the budget 
spending plan. This plan outlines how and at what rate the program 
funding will be spent over time. Since resources are not infinite, 
budgeting requires a delicate balancing act to ensure that the rate of 
spending closely mirrors available resources and funding. And because 
cost estimates are based on assumptions that certain tasks will happen 
at specific times, it is imperative that funding be available when 
needed so as to not disrupt the program schedule. 

Because a reasonable and supportable budget is essential to a program’s 
efficient and timely execution, a competent estimate is the key 
foundation of a good budget. For a government agency, accurate 
estimates help in assessing the reasonableness of a contractor’s 
proposals and program budgets. Credible cost estimates also help 
program offices justify budgets to the Congress, OMB, department 
secretaries, and others. Moreover, cost estimates are often used to 
help determine how budget cuts may hinder a program’s progress or 
effectiveness. 

Outside the government, contractors need accurate estimates of the 
costs required to complete a task in order to ensure maximum 
productivity and profitability. Estimates that are too low can reduce 
profits if the contract is firm fixed price, and estimates that are too 
high will diminish a contractor’s ability to compete in the 
marketplace. 

While contractors occasionally propose unrealistically low cost 
estimates for strategic purposes—for example, “buying-in”—such outcomes 
can also be attributed to poor cost estimating. This sometimes happens 
when 
contractors are highly optimistic in estimating potential risks. As a 
program whose budget is based on such estimates is developed, it 
becomes apparent sooner or later that either the developer or the 
customer must pay for a cost overrun, as case study 14 indicates. 

Case Study 14: Realistic Estimates, from Defense Acquisitions, GAO-05-
183: 
 
In negotiating the contract for the first four Virginia class ships, 
program officials stated that they were constrained in negotiating the 
target price to the amount funded for the program, risking cost growth 
at the outset. The shipbuilders said that they accepted a challenge to 
design and construct the ships for $748 million less than their 
estimated costs, because the contract protected their financial risk. 
Despite the significant risk of cost growth, the Navy did not identify 
any funding for probable cost growth, given available guidance at the 
time. The fiscal year 2005 President’s Budget showed that budgets for 
the two Virginia class case study ships had increased by $734 million. 
However, on the basis of July 2004 data, GAO projected that additional 
cost growth on contracts for the two ships would be likely to reach 
$840 million, perhaps higher. In the fiscal year 2006 budget, the Navy 
requested funds to cover cost increases expected to reach 
approximately $1 billion. 
 
Source: GAO, Defense Acquisitions: Improved Management Practices Could 
Minimize Cost Growth in Navy Shipbuilding Programs, [hyperlink, 
http://www.gao.gov/products/GAO-05-183], Washington, D.C.: Feb. 28, 
2005. 

[End of case study] 

Cost Estimates And Affordability: 
 
Affordability is the degree to which an acquisition program’s funding 
requirements fit within the agency’s overall portfolio plan. Whether a 
program is affordable depends a great deal on the quality of its cost 
estimate. Therefore, agencies can follow the 12-step estimating process 
we outlined in chapter 1 to ensure that they create credible cost 
estimates and make decisions based on them. The 12-step process 
addresses best practices, including defining the program’s purpose, 
developing the estimating plan, defining the program’s characteristics, 
determining the estimating approach, identifying ground rules and 
assumptions, obtaining data, developing the point estimate, conducting 
sensitivity analysis, performing a risk or uncertainty analysis, 
documenting the estimate, presenting it to management for approval, and 
updating it to reflect actual costs and changes. Following these steps 
ensures that realistic cost estimates are developed and presented to 
management, enabling them to make informed decisions about whether 
the program is affordable within the portfolio plan. 

Decision makers should consider affordability at each decision point in 
a program’s life cycle. It is important to know the program’s cost at 
particular intervals, in order to ensure that adequate funding is 
available to execute the program according to plan. Affordability 
analysis validates that the program’s acquisition strategy has an 
adequate budget for its planned resources (see figure 5). 

Figure 5: An Affordability Assessment: 

[Refer to PDF for image: combined line graph] 

Source: DOD. 

[End of figure] 

In figure 5, seven programs A–G are plotted against time, along with 
the resources they will need to support their goals. Plotting the 
programs together gives decision makers a high-level view of their 
portfolio and of the resources it will require in the future. In this 
example, it appears that funding needs are relatively 
stable in fiscal years 1–12, but from fiscal year 12 to fiscal year 16, 
an increasing need for additional funding is readily apparent. This is 
commonly referred to as a bow-wave, meaning there is an impending spike 
in the requirement for additional funds. Whether these funds will be 
available will determine which programs remain within the portfolio. 
Because the programs must compete against one another for limited 
funds, it is considered a best practice to perform the affordability 
assessment at the agency level, not program by program. 
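
A minimal sketch of this kind of agency-level affordability check 
appears below, written in Python: each program's annual funding needs 
are summed by fiscal year, and any year in which the portfolio total 
jumps sharply is flagged as a possible bow wave. The programs, dollar 
figures, and 20 percent threshold are illustrative assumptions only. 

# Minimal sketch of an agency-level affordability check: sum each
# program's annual funding needs ($M) and flag years where the
# portfolio total spikes. All names and figures are hypothetical.

portfolio = {
    "Program A": {2012: 100, 2013: 105, 2014: 110, 2015: 115, 2016: 120},
    "Program B": {2012: 80,  2013: 85,  2014: 140, 2015: 210, 2016: 260},
    "Program C": {2012: 60,  2013: 60,  2014: 65,  2015: 150, 2016: 220},
}

years = sorted({year for needs in portfolio.values() for year in needs})
totals = {year: sum(needs.get(year, 0) for needs in portfolio.values())
          for year in years}

previous = None
for year in years:
    note = ""
    if previous is not None and totals[year] > 1.2 * previous:
        note = "  <- possible bow wave (more than a 20 percent jump)"
    print(f"FY{year}: ${totals[year]}M{note}")
    previous = totals[year]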

While approaches may vary, an affordability assessment should address 
requirements at least through the programming period and, preferably, 
several years beyond. Thus, LCCEs give decision makers important 
information in that not all programs require the same type of funding 
profile. In fact, different commodities require various outlays of 
funding and are affected by different cost drivers. Figure 6 
illustrates this point with typical funding curves by program phase. It 
shows that while some programs may cost less to develop—for example, 
research and development in construction programs differs from that in 
fixed-wing aircraft programs—they may require more or less funding for 
investment, operations, and support in the out-years. 

Figure 6: Typical Capital Asset Acquisition Funding Profiles by Phase: 

[Refer to PDF for image: 5 vertical bar graphs] 

The bar graphs depict the percent of project cost for R&D, Investment, 
and O&S/Disposal for the following program types: 
Construction; 
Space; 
Ships; 
Surface vehicles; 
Fixed-wing aircraft. 

Source: GAO and DOD. 

[End of figure] 

Line graphs or sand charts like those in figure 5, therefore, are often 
used to show how a program fits within the organizational plan, both 
overall and by individual program components. Such charts allow 
decision makers to determine how and if the program fits within the 
overall budget. It is very important for LCCEs to be both realistic and 
timely, available to decision makers as early as possible. Case studies 
15 and 16 show how this often does not happen. 

Case Study 15: Importance of Realistic LCCEs, from Combating Nuclear 
Smuggling, GAO-07-133R: 
 
The Department of Homeland Security’s (DHS) Domestic Nuclear Detection 
Office (DNDO) had underestimated life-cycle costs for plastic 
scintillators and advanced spectroscopic portal monitors. Although 
DNDO’s analysis assumed a 5-year life cycle for both, DNDO officials 
told GAO that a 10-year life cycle was more reasonable. DNDO’s analysis 
had assumed annual maintenance costs at 10 percent of their procurement 
costs: maintenance costs for the scintillators would be about $5,500 
per year per unit, based on a $55,000 purchase price, and maintenance 
costs for the monitors would be about $38,000 per year per unit, based 
on a $377,000 purchase price. DNDO’s analysis had not accounted for 
about $181 million in potential maintenance costs for the monitors 
alone. With the much higher maintenance costs, and doubling the life 
cycle, the long-term implications would be magnified. 

Source: GAO, Combating Nuclear Smuggling: DHS’s Cost-Benefit Analysis 
to Support the Purchase of New Radiation Detection Portal Monitors Was 
Not Based on Available Performance Data and Did Not Fully Evaluate All 
the Monitors’ Costs and Benefits, GAO-07-133R (Washington, D.C.: Oct. 
17, 2006). 

[End of case study] 
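
The per-unit maintenance arithmetic in case study 15 can be reproduced 
with the short sketch below, written in Python. The purchase prices, 
the 10 percent annual maintenance assumption, and the 5- versus 
10-year life cycles come from the case study; the sketch is 
illustrative only and does not attempt to reconstruct the $181 million 
figure, which depends on unit quantities the case study does not give. 

# Per-unit maintenance arithmetic from case study 15. Prices and the
# 10 percent factor come from the case study; all else is illustrative.

maintenance_rate = 0.10  # annual maintenance as a share of purchase price

unit_prices = {
    "plastic scintillator portal monitor": 55_000,      # $ per unit
    "advanced spectroscopic portal monitor": 377_000,   # $ per unit
}

for name, price in unit_prices.items():
    annual = maintenance_rate * price
    for life_years in (5, 10):
        total = annual * life_years
        print(f"{name}: ${annual:,.0f} per year; "
              f"${total:,.0f} over a {life_years}-year life cycle")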

Case Study 16: Importance of Realistic LCCEs, from Space Acquisitions, 
GAO-07-96: 

GAO has in the past identified a number of causes behind cost growth 
and related problems in DOD’s major space acquisition programs, but 
several consistently stand out. On a broad scale, DOD starts more 
weapons programs than it can afford, creating competition for funding 
that encourages low-cost estimating and optimistic scheduling, 
overpromising, suppressing bad news, and for space programs, forsaking 
the opportunity to identify and assess potentially better alternatives. 
Programs focus on advocacy at the expense of realism and sound 
management. 

With too many programs in its portfolio, DOD is invariably forced to 
shift funds to and from programs—particularly as programs experience 
problems that require more time and money. Such shifts, in turn, have 
had costly, reverberating effects. In previous testimony and reports, 
GAO has stressed that DOD could avoid costly funding shifts. 

It could do this by developing an overall investment strategy to 
prioritize systems in its space portfolio with an eye toward balancing 
investments between legacy systems and new programs, as well as between 
science and technology programs and acquisition investments. Such 
prioritizing would also reduce incentives to produce low estimates. 

Source: GAO, Space Acquisitions: DOD Needs to Take More Action to 
Address Unrealistic Initial Cost Estimates of Space Systems, GAO-07-96, 
Washington, D.C.: Nov. 17, 2006. 

[End of case study] 

Evolutionary Acquisition And Cost Estimation: 

GAO has reported that evolutionary acquisition is in line with 
commercial best practices.[Footnote 24] In evolutionary acquisition, a 
program evolves to its ultimate capabilities on the basis of mature 
technologies and available resources. This approach allows commercial 
companies to develop and produce more sophisticated products faster and 
less expensively than their predecessors. 

Commercial companies have found that trying to capture the knowledge 
required to stabilize a product design that entails significant new 
technical content is an unmanageable task, especially if the goal is to 
reduce development cycle times and get the product to the marketplace 
as quickly as possible. Therefore, product features and capabilities 
that cannot be achieved in the initial development are planned for 
development in the product’s future generations, when the technology 
has proven mature and other resources are available. 

Figure 7 compares evolutionary to single-step acquisition, commonly 
called the big bang approach. An evolutionary environment for 
developing and delivering new products reduces risk and makes cost more 
predictable. While a customer may not initially receive an ultimate 
capability, the product is available sooner, with higher quality and 
reliability and at a lower and more predictable cost. With this 
approach, improvements can be planned for the product’s future 
generations. (See case study 17.) 

Figure 7: Evolutionary and Big Bang Acquisition Compared: 

[Refer to PDF for image: illustration] 

Evolutionary acquisition approach: 

Beginning: 

1st generation (5 years): 
* Basic stealth platform; 
Needed technologies are mature. 

2nd generation (10 years): 
* Basic stealth platform; 
* Advanced avionics; 
Needed technologies are mature. 

3rd generation (15 years): 
* Basic stealth platform; 
* Advanced avionics; 
* Advanced intelligence and communications. 

Single-step acquisition approach: 

1st generation (15 years): 
* Basic stealth platform; 
* Advanced avionics; 
* Advanced intelligence and communications. 

Source: GAO. 

[End of figure] 
 
Case Study 17: Evolutionary Acquisition and Cost Estimates, from Best 
Practices, GAO-03-645T: 

The U.S. Air Force F/A-22 tactical fighter acquisition strategy was, at 
the outset, to achieve full capability in a big bang approach. By not 
using an evolutionary approach, the F/A-22 took on significant risk and 
onerous technological challenges. While the big bang approach might 
have allowed the Air Force to compete more successfully for early 
funding, it hamstrung the program with many new, undemonstrated 
technologies, preventing the program from knowing cost and schedule 
ramifications throughout development. Cost, schedule, and performance 
problems resulted. 
 
Source: GAO, Best Practices: Better Acquisition Outcomes Are Possible 
If DOD Can Apply Lessons from F/A-22 Program, GAO-03-645T, Washington, 
D.C.: Apr. 11, 2003. 

[End of case study] 

Two development processes support evolutionary acquisition: incremental 
development and spiral development. Both processes are based on 
maturing technology over time instead of trying to do it all at once, 
as in the big bang approach. Both processes allow for developing 
hardware and software in manageable pieces by inserting new technology 
and capability over time. This usually results in fielding an initial 
hardware or software increment (or block) of capability with steady 
improvements over less time than is possible with a full development 
effort. 

In incremental development, a desired capability is known at the 
beginning of the program and is met over time by developing several 
increments, each dependent on available mature technology. A core set 
of functions is identified and released in the first increment. Each 
new increment adds more functionality, and this process continues until 
all requirements are met. This assumes that the requirements are known 
up front and that lessons learned can be incorporated as the program 
matures. (See figure 8.) 

Figure 8: Incremental Development: 

[Refer to PDF for image: line graphs] 

Single step: 
Capability is plotted against time, with the following depicted: 
Technology base; 
Requirements; 
Capability; 
IOC; 
FOC. 
No capability. 

Incremental: 
Capability is plotted against time, with the following depicted: 
Technology base; 
Requirements; 
Capability. 
Initial operationally useful capability. 

Source: GAO. 

Note: 
IOC = initial operational capability; 
FOC = final operational capability. 

[End of figure] 

The advantages of incremental development are that a working product is 
available after the first increment and that each cycle results in 
greater capability. In addition, the program can be stopped when an 
increment is completed and still provide a usable product. Project 
management and testing can be easier, because the program is broken 
into smaller pieces. Its disadvantages are that the majority of the 
requirements must be known early, which is sometimes not feasible. In 
addition, cost and schedule overruns may result in an incomplete system 
if the program is terminated, because each increment only delivers a 
small part of the system at a time. Finally, operations and support for 
the program are often less efficient because of the need for additional 
learning for each increment release. (See case study 18.) 

Case Study 18: Incremental Development, from Customs Service 
Modernization, GAO/AIMD-99-41: 

The U.S. Customs Service was developing and acquiring the Automated 
Commercial Environment (ACE) program in 21 increments. At the time of 
GAO’s review, Customs defined the functionality of only the first 2 
increments, intending to define more later. Customs had nonetheless 
estimated costs and benefits for and had committed to investing in all 
21 increments. It had not estimated costs and benefits for each 
increment and did not know whether each increment would produce a 
reasonable return on investment. Furthermore, once it had deployed an 
increment at a pilot site for evaluation, Customs was not validating 
that estimated benefits had actually been achieved. It did not even 
know whether the program’s first increment, being piloted at three 
sites, was producing expected benefits or was cost-effective. Customs 
could determine only whether the first increment was performing at a 
level “equal to or better than” the legacy system. 

Source: GAO, Customs Service Modernization: Serious Management and 
Technical Weaknesses Must Be Corrected, GAO/AIMD-99-41 (Washington, 
D.C.: Feb. 26, 1999). 

[End of case study] 

Spiral Development: 

In spiral development, a desired capability is identified but the end-
state requirements are not yet known. These requirements are refined 
through demonstration and risk management, based on continuous user 
feedback. This approach allows each increment to provide the best 
possible capability. Spiral development is often used in the commercial 
market, because it significantly reduces technical risk while 
incorporating new technology. The approach can, however, lead to 
increased cost and schedule risks. Spiral development can also present 
contract challenges due to repeating phases, trading requirements, and 
redefining deliverables. 

The advantage of spiral development is that it provides better risk 
management, because user needs and requirements are better defined. Its 
disadvantage is that the process is much harder to manage and usually 
results in increased cost and a longer schedule. 

While both incremental and spiral development have advantages and 
disadvantages, their major difference is the knowledge of the final 
product available to the program from the outset. With incremental 
development, the program office is aware of the final product to be 
delivered but develops it in stages. With spiral development, the final 
version of the product remains undetermined until the final stage has 
been completed—that is, the final product design is not known while the 
system is being built. 

Even though it is a best practice to follow evolutionary development 
rather than the big bang approach, it often makes cost estimating more 
difficult, because it requires that cost estimates be developed more 
frequently. In some cases, cost estimates made for programs are valid 
only for the initial increment or spiral, because future increments and 
spirals are not the product they were at the outset. Nevertheless, this 
approach is considered a best practice because it helps avoid 
unrealistic cost estimates, resulting in more realistic long-range 
investment funding and more effective resource allocation. Moreover, 
realistic cost estimates help management decide between competing 
options and increase the probability that the programs will succeed. 

1. Best Practices Checklist: The Estimate: 

* The cost estimate type is clearly defined and is appropriate for its 
purpose. 

* The cost estimate contains all elements suitable to its type—ICA, 
ICE, IGCE, LCCE, rough order of magnitude, total ownership cost: 
development, procurement, operating and support, disposal costs, and 
all sunk costs. 
- AOA, CEA, EA, cost-benefit analysis: consistently evaluate all 
alternatives. 
- EA, cost-benefit analysis: portray estimates as present values. 

* All program costs have been estimated, including all life-cycle 
costs. 

* The cost estimate is independent of funding source and 
appropriations. 

* An affordability analysis has been performed at the agency level to 
see how the program fits within the overall portfolio. 
- The agency has a process for developing cost estimates that includes 
the 12-step best practice process outlined in chapter 1. 
- An overall agency portfolio sand chart displays all costs for every 
program. 

* The estimate is updated as actual costs become available from the EVM 
system or requirements change. 

* Post mortems and lessons learned are continually documented. 

[End of Chapter 4] 

Chapter 5: The Cost Estimate’s Purpose, Scope, And Schedule: 

A cost estimate is much more than just a single number. It is a 
compilation of many lower-level cost element estimates that span 
several years, based on the program schedule. Credible cost estimates 
are produced by following the rigorous 12 steps outlined in chapter 1 
and are accompanied by detailed documentation. The documentation 
addresses the purpose of the estimate, the program background and 
system description, its schedule, the scope of the estimate (in terms 
of time and what is and is not included), the ground rules and 
assumptions, all data sources, estimating methodology and rationale, 
the results of the risk analysis, and a conclusion about whether the 
cost estimate is reasonable. Therefore, a good cost estimate—while 
taking the form of a single number—is supported by detailed 
documentation that describes how it was derived and how the expected 
funding will be spent in order to achieve a given objective. 

Purpose: 

The purpose of a cost estimate is determined by its intended use, and 
its intended use determines its scope and detail. Cost estimates have 
two general purposes: (1) to help managers evaluate affordability and 
performance against plans, as well as the selection of alternative 
systems and solutions, and (2) to support the budget process by 
providing estimates of the funding required to efficiently execute a 
program. 

More specific applications include providing data for trade studies, 
independent reviews, and baseline changes. Regardless of why the cost 
estimate is being developed, it is important that the program’s purpose 
link to the agency’s missions, goals, and strategic objectives. The 
purpose of the program should also address the benefits it intends to 
deliver, along with the appropriate performance measures for 
benchmarking progress. 

Scope: 

To determine an estimate’s scope, cost analysts must identify the 
customer’s needs. That is, the cost estimator must determine if the 
estimate is required by law or policy or is requested. For example, 10 
U.S.C. § 2434 requires an independent cost estimate before a major 
defense acquisition program can advance into system development and 
demonstration or production and deployment. The statute specifies 
that the full life-cycle cost—all costs of development, procurement, 
military construction, and operations and support, without regard to 
funding source or management control—must be provided to the decision 
maker for consideration. 

In other cases, a program manager might want initially to address 
development and procurement, with estimates of operations and support 
to follow. However, if an estimate is to support the comparative 
analysis of alternatives, all cost elements of each alternative should 
be estimated to make each alternative’s cost transparent in relation to 
the others. 

Where appropriate, the program manager and the cost estimating team 
should work together to determine the scope of the cost estimate. The 
scope will be determined by such issues as the time involved, what 
elements of work need to be estimated, who will develop the cost 
estimates, and how much cost estimating detail will be included. Where 
the program is in its life cycle will influence the quantity of detail 
for the cost estimate as well as the amount of data to be collected. 
For example, early in the life cycle the project may have a concept 
with no solid definition of the work involved. A cost estimate at this 
point in the life cycle will probably not require extensive detail. As 
the program becomes better defined, more detailed estimates should be 
prepared. 

Once the cost analysts know the context of the estimate or the 
customer’s needs, they can determine the estimate’s scope by its 
intended use and the availability of data. For example, an independent 
cost analyst is typically given the time and other resources needed to 
conduct a thorough analysis, so the analysis is expected to be more 
detailed than a what-if exercise. For either, 
however, more data are likely to be available for a system in 
production than for one that is in the early stages of development. 

More detail, though, does not necessarily mean greater accuracy. 
Pursuing too much detail too early may be detrimental to an estimate’s 
quality. If a detailed technical description of the system being 
analyzed is lacking, along with detailed cost data, analysts will find 
it difficult to identify and estimate all the cost elements. It may be 
better to develop the estimate at a relatively high system level to 
ensure capturing all the lower-level elements. This is the value of 
so-called parametric estimating tools, which operate at a higher, more 
aggregated system level and are used when a system lacks detailed 
technical definition and cost data. These techniques also allow the 
analyst to link cost and 
schedule to measures of system size, functionality, or complexity in 
advance of detailed design definition. 
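
As one illustration of the kind of parametric technique described 
above, the sketch below, written in Python, fits a simple cost 
estimating relationship of the form cost = a * size^b to a few 
historical data points and applies it to a new system. The historical 
values, the size measure, and the resulting coefficients are purely 
hypothetical and are not drawn from any actual model. 

import math

# Minimal sketch of a parametric cost estimating relationship (CER) of
# the form cost = a * size**b, fit by least squares in log space.
# The historical (size, cost $M) pairs below are purely illustrative.

history = [(100, 220.0), (150, 310.0), (220, 430.0), (300, 560.0)]

xs = [math.log(size) for size, _ in history]
ys = [math.log(cost) for _, cost in history]
n = len(history)
x_bar, y_bar = sum(xs) / n, sum(ys) / n

sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
sxx = sum((x - x_bar) ** 2 for x in xs)
b = sxy / sxx                    # exponent of the CER
a = math.exp(y_bar - b * x_bar)  # coefficient of the CER

new_size = 250  # size measure (for example, weight or software size)
estimate = a * new_size ** b
print(f"CER: cost = {a:.2f} * size^{b:.2f}; "
      f"estimate for size {new_size}: ${estimate:,.1f}M")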

Analysts should develop, and tailor, an estimate plan whose scope 
coincides with data availability and the estimate’s ultimate use. For a 
program in development, which is estimated primarily with parametric 
techniques and factors, the scope might be at a higher level of the 
WBS. (WBS is discussed in chapter 8.) As the program enters production, 
a lower level of detail would be expected. 

As the analysts develop and revise the estimating plan, they should 
keep management informed of the initial approach and any changes in 
direction or method.[Footnote 25] Since the plan serves as an agreement 
between the customer and cost estimating team, it must clearly reflect 
the approved approach and should be distributed formally to all 
participants and organizations involved. 

Schedule: 

Regardless of an estimate’s ultimate use and its data availability, 
time can become an overriding constraint on its detail. When defining 
the elements to be estimated and when developing the plan, the cost 
estimating team must consider its time constraints relative to team 
staffing. Without adequate time to develop a competent estimate, the 
team may be unable to deliver a product of sufficiently high quality. 
For example, a rough-order-of-magnitude estimate could be developed in 
days, but a first-time budget-quality estimate would likely require 
many months. If, however, that budget estimate were simply an update to 
a previous estimate, it could be done faster. The more detail required, 
the more time and staff the estimate will require. It is important, 
therefore, that auditors understand the context of the cost 
estimate—why and how it was developed and whether it was an initial or 
follow-on estimate. (See case study 19.) 

Case Study 19: The Estimate’s Context, from DOD Systems 
Modernization, GAO-06-215: 

Program officials told GAO that they had not developed the 2004 cost 
estimate in accordance with all SEI’s cost estimating criteria, because 
they had only a month to complete the economic analysis. By not 
following practices associated with reliable estimates—by not making a 
reliable estimate of system life-cycle costs—the Navy had decided on a 
course of action not based on sound and prudent decision making. This 
meant that the Navy’s investment decision was not adequately justified 
and that to the extent that program budgets were based on cost 
estimates, the likelihood of funding shortfalls and inadequate funding 
reserves was increased. 

Source: GAO, DOD Systems Modernization: Planned Investment in the Naval 
Tactical Command Support System Needs to Be Reassessed, GAO-06-215, 
Washington, D.C.: Dec. 5, 2005. 

[End of case study] 

After the customer has defined the task, the cost estimating team 
should create a detailed schedule that includes realistic key decision 
points or milestones and that provides margins for unforeseen, but not 
unexpected, delays. The team must ensure that the schedule is not 
overly optimistic. If the team wants or needs to compress the schedule 
to meet a due date, compression is acceptable as long as additional 
resources are available to complete the work that a smaller team would 
otherwise have accomplished over the longer period. If additional 
resources are not available, the estimate’s scope must be reduced. 

The essential point is that the team must attempt to ensure that the 
schedule is reasonable. When this is not possible, the schedule must be 
highlighted as having curtailed the team’s depth of analysis and the 
estimate’s resulting confidence level. 

2. Best Practices Checklist: Purpose, Scope, and Schedule: 

* The estimate’s purpose is clearly defined. 

* Its scope is clearly defined. 

* The estimate’s level of detail is consistent with the level of 
detail available for the program. For example, an 
engineering buildup estimate should be conducted only on a well-defined 
program. 

* The team has been allotted adequate time and resources to develop the 
estimate. 

[End of Chapter 5] 

Chapter 6: 

The Cost Assessment Team: 

Cost estimates are developed with an inexact knowledge of what the 
final technical solution will be. Therefore, the cost assessment team 
must manage a great deal of risk—especially for programs that are 
highly complex or on technology’s cutting edge. Since cost estimates 
seek to define what a given solution will ultimately cost, the estimate 
must be bound by a multitude of assumptions and an interpretation of 
what the historical data represent. This tends to be a subjective 
effort, and these important decisions are often left to a cost 
analyst’s judgment. A cost analyst must possess a variety of skills to 
develop a high-quality cost estimate that satisfies the 12 steps 
identified in chapter 1, as figure 9 illustrates. 

Figure 9: Disciplines and Concepts in Cost Analysis: 

[Refer to PDF for image: illustration] 

Cost Analysis: 
 
Economics: 
* Break-even analysis; 
* Foreign exchange rates; 
* Industrial base analysis; 
* Inflation;
* Labor agreements; 
* Present value analysis.

Budgeting: 
* Budget appropriations; 
* Internal company (industry); 
* Program specific. 

Engineering: 
* Design; 
* Materials; 
* Performance parameters; 
* Production engineering; 
* Production process; 
* Program development test; 
* Scheduling; 
* System integration. 

Computer science/mathematics: 
* Analysis of commercial models; 
* Analysis of proposals; 
* Development of cost estimating relationship; 
* Model development; 
* Programming. 

Statistics: 
* Forecasting; 
* Learning curve applications; 
* Regression analysis; 
* Risk/uncertainty analysis; 
* Sensitivity analysis. 

Accounting: 
* Cost data analysis; 
* Financial analysis; 
* Overhead analysis; 
* Proposal analysis. 

Interpersonal skills: 
* Approach; 
* Estimate; 
* Knowledge. 
 
Public and government affairs: 
* Appropriations process; 
* Auditors; 
* Legislative issues; 
* Outside factors. 

Source: GAO. 

[End of figure] 

Each discipline in figure 9 applies to cost estimating in its own 
unique way. For example, having an understanding of economics and 
accounting will help the cost estimator better understand the 
importance of inflation effects and how different accounting systems 
capture costs. Budgeting knowledge is important for knowing how to 
properly allocate resources over time so that funds are available when 
needed. Because cost estimates are often needed to justify enhancing 
older systems, having an awareness of engineering, computer science, 
mathematics, and statistics will help identify cost drivers and the 
type of data needed to develop the estimate. It also helps for the cost 
estimator to have adequate technical knowledge when meeting with 
functional experts so that credibility and a common understanding of 
the technical aspects of the program can be quickly established. 
Finally, cost estimators who are able to “sell” and present their 
estimate by defending it with solid facts and reliable data stand a 
better chance of its being used as a basis for program funding. In 
addition, cost estimators need to have solid interpersonal skills, 
because working and communicating with subject matter experts is vital 
for understanding program requirements. 

Team Composition And Organization: 

Program office cost estimates are normally prepared by a 
multidisciplinary team whose members have functional skills in 
financial management, engineering, acquisition and logistics, 
scheduling, and mathematics, in addition to communications.[Footnote 
26] The team should also include participants or reviewers from the 
program’s operating command, product support center, maintenance depot, 
and other units affected in a major way by the estimate.[Footnote 27] 
Team members might also be drawn from other organizations. In the best 
case, the estimating team is composed of persons who have experience in 
estimating all cost elements of the program. Since this is seldom 
possible, the team leader should be familiar with the team members’ 
capabilities and assign tasks accordingly. If some are experienced in 
several areas, while others are relatively inexperienced in all areas, 
the team leader should assign the experienced analysts responsibility 
for major sections of the estimate while the less experienced analysts 
work under their supervision. 

An analytic approach to cost estimates typically entails a written 
study plan detailing a master schedule of specific tasks, responsible 
parties, and due dates. For complex efforts, the estimating team might 
be organized as a formal, integrated product team. For independent 
estimates, the team might be smaller and less formal. In either case, 
the analysis should be coordinated with all stakeholders, and the study 
plan should reflect each team member’s responsibilities. 

What is required of a cost estimating team depends on the type and 
purpose of the estimate and the quantity and quality of the data. More 
detailed estimates generally require larger teams, more time and 
effort, and more rigorous techniques. For example, a rough-order-of-
magnitude estimate—a quick, high-level cost estimate—generally requires 
less time and effort than a budget-quality estimate. In addition, the 
estimating team must be given adequate time to develop the estimate. 
Following the 12 steps takes time and cannot be rushed—rushing would 
significantly risk the quality of the results. 

One of the most time-consuming steps in the cost estimating process is 
step 6: obtaining the data. Enough time should be scheduled to collect 
the data, including visiting contractor sites to further understand the 
strengths and limitations of the data that have been collected. If 
there is not enough time to develop the estimate, then the schedule 
constraint should be clearly identified in the ground rules and 
assumptions, so that management understands the effect on the 
estimate’s quality and confidence. 

Cost estimating requires good organizational skills, in order to pull 
together disparate data for each cost element and to package it in a 
meaningful way. It also requires engineering and mathematical skills, 
to fully understand the quality of the data available. Excellent 
communication skills are also important for clarifying the technical 
aspects of a program with technical specialists. If the program has no 
technical baseline description, or if the cost estimating team must 
develop one, it is essential that the team have access to the subject 
matter experts—program managers, system and software engineers, test 
and evaluation analysts—who are familiar with the program or a program 
like it. Moreover, team members need good communication skills to 
interact with these experts in ways that are meaningful and productive. 

Cost Estimating Team Best Practices: 

Centralizing the cost estimating team and process—cost analysts working 
in one group but supporting many programs—represents a best practice, 
according to the experts we interviewed. Centralization facilitates the 
use of standardized processes, the identification of resident experts, 
a better sharing of resources, commonality and consistency of tools and 
training, more independence, and a career path with more opportunities 
for advancement. Centralizing cost estimators and other technical and 
business experts also allows for more effective deployment of technical 
and business skills while ensuring some measure of independence. 

A good example is the Cost Analysis Improvement Group (CAIG) in the 
Office of the Secretary of Defense. Its cost estimates are produced by 
a centralized group of civilian government personnel to ensure long-
term institutional knowledge and no bias toward results. Some in the 
cost estimating community consider a centralized cost department that 
provides cost support to multiple program offices, with a strong 
organizational structure and support from its leadership, to be a 
model. 

In contrast, decentralization often results in ad hoc processes, 
limited government resources (requiring contractor support to fill the 
gaps), and decreased independence, since program offices typically fund 
an effort and since program management personnel typically rate the 
analysts’ performance. The major advantage of a decentralized process 
is that analysts have better access to technical experts. Under a 
centralized process, analysts should thus make every effort to 
establish contacts with appropriate technical experts. 

Finally, organizations that establish their own centralized cost 
estimating function outside the acquiring program office represent the 
best practice over organizations that develop their cost estimates in a 
decentralized or ad hoc manner under the direct control of a program 
office. One of the many benefits of a centralized structure is the 
ability to resist pressure to lower the cost estimate when it is higher 
than the allotted budget. Furthermore, reliance on support contractors 
raises questions from the cost estimating community about whether the 
numbers and qualifications of government personnel are sufficient to 
provide oversight of and insight into contractor cost estimates. Other 
experts in cost estimating suggested that reliance on support 
contractors can be a problem if the government cannot evaluate how good 
a cost estimate is or if the ability to track it is lacking. Studies 
have also raised the concern that relying on support contractors makes 
it more difficult to retain institutional knowledge and instill 
accountability. Therefore, to mitigate any bias in the cost estimate, 
government customers of contractor-produced cost estimates must have a 
high enough level of experience to determine whether the cost estimate 
conforms to the best practices outlined in this Guide. 

Certification And Training for Cost Estimating And EVM Analysis: 

Since the experience and skills of the members of a cost estimating 
team are important, various organizations have established training 
programs and certification procedures. For example, SCEA’s 
certification program provides a professional credential to both 
members and nonmembers, based on education, training, and work 
experience and a written examination on basic concepts and methods for 
cost estimating. Another example is the earned value professional 
certification offered by the Association for the Advancement of Cost 
Engineering International that PMI’s College of Performance Management 
endorses; it requires candidates to have the requisite experience and 
the ability to pass a rigorous written exam. 

Under the Defense Acquisition Workforce Improvement Act, DOD 
established a variety of certification programs through the Defense 
Acquisition University (DAU).[Footnote 28] DAU provides a full range of 
basic, intermediate, and advanced certification training; assignment-
specific training; performance support; job-relevant applied research; 
and continuous learning opportunities. Although DAU’s primary mission 
is to train DOD employees, all federal employees are eligible to attend 
as space is available. One career field is in business, cost 
estimating, and financial management. Certification levels are based on 
education, experience, and training. Since this certification is 
available to all federal employees, it is considered a minimum training 
requirement for cost estimators. 

In addition to the mandatory courses in table 5, DAU encourages 
analysts to be trained in courses identified in its Core Plus 
Development Guide. These courses cover a wide range of cost estimating 
and earned value topics, such as acquisition reporting concepts and 
policy requirements, analysis of alternatives, baseline maintenance, 
basic software acquisition management, business case analysis, business 
management modernization, contract source selection, cost as an 
independent variable, economic analysis, EVM system validation and 
surveillance, integrated acquisition for decision makers, operating 
and support cost analysis, principles of schedule management, program 
management tools, and risk management. The standards for the business, 
cost estimating, and financial management levels of certification are 
shown in table 5. 

Table 5: Certification Standards in Business, Cost Estimating, and 
Financial Management in the Defense Acquisition Education, Training, 
and Career Development Program: 
 
Level: I, Desired; 
Education: Baccalaureate. 
 
Level: I, Mandatory; 
Experience: 1 year of acquisition in business, cost estimating, or 
financial management; 
Training: 
ACQ 101: Fundamentals of Systems Acquisition Management and 2 of the 
following: 
BCF 101: Fundamentals of Cost Analysis; 
BCF 102: Fundamentals of Earned Value; 
BCF 103: Fundamentals of Business Financial Management. 
 
Level: II, Desired: 
Education: Baccalaureate; 
Experience: 2 additional years in business, cost estimating, or 
financial management. 
 
Level: II, Mandatory; 
Experience: 2 years of acquisition in business, cost estimating, or 
financial management; 
Training: 
ACQ 201: (Parts A & B) Intermediate Systems Acquisition and; 
BCF 205: Contractor Business Strategies and, if not taken at Level I, 
BCF 101: Fundamentals of Cost Analysis or, 
BCF 102: Fundamentals of Earned Value Management or, 
BCF 103: Fundamentals of Business Financial Management and one of the 
following: 
BCF 203: Intermediate Earned Value Management or, 
BCF 204: Intermediate Cost Analysis or, 
BCF 211: Acquisition Business Management. 
 
Level: III, Desired; 
Education: Baccalaureate or 24 semester hours among 10 courses[A] or 
Master’s; 
Experience: 4 additional years of acquisition in business, cost 
estimating, or financial management. 
 
Level: III, Mandatory; 
Training: BCF 301: Business, Cost Estimating, and Financial Management 
Workshop. 

Source: DAU. 

[A] The 10 courses are accounting, business finance, contracts, 
economics, industrial management, law, marketing, organization and 
management, purchasing, and quantitative methods. 

[End of table] 

When reviewing an agency’s cost estimate, an auditor should question 
the cost estimators about whether they have both the requisite formal 
training and substantial on-the-job training to develop cost estimates 
and keep those estimates updated with EVM analysis. Continuous learning 
by participating in cost estimating and EVM conferences is important 
for keeping abreast of the latest techniques and maximizing lessons 
learned. Agency cost estimators and EVM analysts, as well as GAO’s 
auditors, should attend such conferences to keep their skills current. 
Maintaining skills is essential if subject matter experts are to be 
relied on to apply best practices in their roles. 

While formal training is important, so are on-the-job training and 
firsthand knowledge from participating in plant and site visits. On-site 
visits to see what is being developed and how engineering and 
manufacturing are executed are invaluable to cost estimators and 
auditors. To understand the complexity of the tasks necessary to 
deliver a product, site visits should always be included in the audit 
plan. 

SEI’s Checklists and Criteria for Evaluating the Cost and Schedule 
Estimating Capabilities of Software Organizations lists six requisites 
for reliable estimating and gives examples of evidence needed to 
satisfy them. It also contains a checklist for assessing whether an 
organization provides commitment and support to its estimators. 
SEI’s criteria are helpful for determining whether cost estimators have 
the skills and training to effectively develop credible cost estimates. 
(See appendix VIII for a link to SEI’s material.) 

While much of this Cost Guide’s focus is on cost estimating, in chapter 
18 we focus on EVM and how it follows the cost estimate through its 
various phases and determines where there are cost and schedule 
variances and why. This information is vital for keeping the estimate 
updated and for staying abreast of program risks. Because of 
performance measurement requirements (including the use of EVM), OMB 
issued policy guidance in August 2005 to agency chief information 
officers on improving information technology projects. OMB stated that 
the Federal Acquisition Institute (co-located with DAU) was expanding 
EVM system training to the program management and contracting 
communities and instructed agencies to refer to DAU’s Web site for a 
community of practice that includes the following resources:[Footnote 
29] 
 
* 6 hours of narrated EVM tutorials (Training Center), 

* descriptions and links to EVM tools (Tools), 

* additional EVM-related references and guides (Community Connection), 

* DOD policy and contracting guidance (Contract Documents and DOD 
Policy and Guidance), 

* a discussion forum (Note Board), and 

* an on-line reference library (Research Library). 

Such resources are important for agencies and auditors in understanding 
what an EVM system can offer for improving program management. 

3. Best Practices Checklist: Cost Assessment Team: 

* The estimating team’s composition is commensurate with the assignment 
(see SEI’s checklists for more details). 
- The team has the proper number and mix of resources. 
- Team members are from a centralized cost estimating organization. 
- The team includes experienced and trained cost analysts. 
- The team includes, or has direct access to, analysts experienced in 
the program’s major areas. 
- Team members’ responsibilities are clearly defined. 
- Team members’ experience, qualifications, certifications, and 
training are identified. 
- The team participated in on-the-job training, including plant and 
site visits. 

* A master schedule with a written study plan has been developed. 

* The team has access to the necessary subject matter experts. 

[End of Chapter 6] 

Chapter 7: 

Technical Baseline Description Definition And Purpose: 

Key to developing a credible estimate is having an adequate 
understanding of the acquisition program—the acquisition strategy, 
technical definition, characteristics, system design features, and 
technologies to be included in its design. The cost estimator can use 
this information to identify the technical and program parameters that 
will bind the cost estimate. The amount of information gathered 
directly affects the overall quality and flexibility of the estimate. 
Less information means more assumptions must be made, increasing the 
risk associated with the estimate. Therefore, the importance of this 
step must be emphasized, because the final accuracy of the cost 
estimate depends on how well the program is defined. 

The objective of the technical baseline is to provide, in a single 
document, a common definition of the program—including a detailed 
technical, program, and schedule description of the system—from which 
all LCCEs (that is, both program and independent cost estimates) will 
be derived. At times, the information in the technical baseline will 
drive or facilitate the use of a particular estimating approach. 
However, the technical baseline should be flexible enough to 
accommodate a variety of estimating methodologies. It is also critical 
that the technical baseline contain no cost data, so that it can be 
used as the common baseline for independently developed estimates. 
[Footnote 30] 
 
In addition to providing a comprehensive program description, the 
technical baseline is used to benchmark life-cycle costs and identify 
specific technical and program risks. In this way, it helps the 
estimator focus on areas or issues that could have a major cost effect. 

Process: 

In general, program offices are responsible for developing and 
maintaining the technical baseline throughout the life cycle, since 
they know the most about their program. A best practice is to assign an 
integrated team of various experts—system engineers, design experts, 
schedulers, test and evaluation experts, financial managers, and cost 
estimators—to develop the technical baseline at the beginning of the 
project. The program manager and the senior executive oversight 
committee approve the technical baseline to ensure that it contains all 
information necessary to define the program’s systems and develop 
the cost estimate. 

Furthermore, the technical baseline should be updated in preparation 
for program reviews, milestone decisions, and major program changes. 
The credibility of the cost estimate will suffer if the technical 
baseline is not maintained. Without explicit documentation of the basis 
of a program’s estimates, it is difficult to update the cost estimate 
and provide a verifiable trace to a new cost baseline as key 
assumptions change during the course of the program’s life. 

It is normal and expected that early program technical baselines will 
be imprecise or incomplete and that they will evolve as more 
information becomes known. However, it is essential that the technical 
baseline provide the best available information at any point in time. 
To create as complete a view of the program as possible, assumptions 
should be made about the unknowns and agreed on by management. These 
assumptions and their corresponding justifications should be documented 
in the technical baseline, so their risks are known from the beginning. 

Schedule: 

The technical baseline must be available in time for all cost 
estimating activities to proceed on schedule. This often means that it 
is submitted as a draft before being made final. The necessary lead 
time will vary by organization. One example is the CAIG in the Office 
of the Secretary of Defense, which requires that the Cost Analysis 
Requirements Description be submitted in draft 180 days before the 
Defense Acquisition Board milestone and in final form 45 days before 
the milestone review. 

Contents: 
 
Since the technical baseline is intended to serve as the baseline for 
developing LCCEs, it must provide information on development, testing, 
procurement, installation and replacement, operations and support, 
planned upgrades, and disposal. In general, a separate technical 
baseline should be prepared for each alternative; as the program 
matures, the number of alternatives and, therefore, technical baselines 
decreases. Although technical baseline content varies by program (and 
possibly even by alternative), it always entails a number of sections, 
each focusing on a particular aspect of the program being assessed. 
Table 6 describes typical technical baseline elements. 

Table 6: Typical Technical Baseline Elements: 
 
Element: System purpose; 
Description: Describes the system’s mission and how it fits into the 
program; should give the estimator a concept of its complexity and 
cost. 

Element: Detailed technical system and performance characteristics; 
Description: Includes key functional requirements and performance 
characteristics; the replaced system (if applicable); who will develop, 
operate, and maintain the system; descriptions of hardware and software 
components (including interactions, technical maturity of critical 
components, and standards); system architecture and equipment 
configurations (including how the program will interface with other 
systems); key performance parameters; information assurance; 
operational concept; reliability analysis; security and safety 
requirements; test and evaluation concepts and plans. 

Element: Work breakdown structure; 
Description: Identifies the cost and technical data needed to develop 
the estimate. 

Element: Description of legacy or similar systems; 
Description: A legacy (or heritage or predecessor) system has 
characteristics similar to the system being estimated; often the new 
program is replacing it. The technical baseline includes a detailed 
description of the legacy hardware and software components; technical 
protocols or standards; key performance parameters; operational and 
maintenance logistics plan; training plan; phase-out plan; and the 
justification for replacing the system. 

Element: Acquisition plan or strategy; 
Description: Includes the competition strategy, whether multiyear 
procurement will be used, and whether the program will lease or buy 
certain items; it should identify the type of contract awarded or to be 
awarded and, if known, the contractor responsible for developing and 
implementing the system. 

Element: Development, test, and production quantities and program 
schedule; 
Description: Includes quantities required for development, test (e.g., 
test assets), and production; lays out an overall development and 
production schedule that identifies the years of its phases—the 
schedule should include a standard Gantt chart with major events such 
as milestone reviews, design reviews, and major tests—and that 
addresses, at a high level, major program activities, their duration 
and sequence, and the critical path. 

Element: System test and evaluation plan; 
Description: Includes the number of tests and test assets, criteria for 
entering into testing, exit criteria for passing the test, and where 
the test will be conducted. 

Element: Deployment details; 
Description: Includes standard platform and site configurations for all 
scenarios (peacetime, contingency, war) and a transition plan between 
legacy and new systems. 

Element: Safety plan; 
Description: Includes any special or unique system safety considerations 
that may relate to specific safety goals established through standards, 
laws, regulations, and lessons learned from similar systems. 
 
Element: Training plan; 
Description: Includes training for users and maintenance personnel, any 
special certifications required, who will provide the training, where 
it will be held, and how often it will be offered or required. 

Element: Disposal and environmental effect; 
Description: Includes identification of environmental impact, mitigation 
plan, and disposal concept. 
 
Element: Operational concept; 
Description: Includes program management details, such as how, where, 
and when the system will be operated; the platforms on which it will be 
installed; and the installation schedule. 

Element: Personnel requirements; 
Description: Includes comparisons to the legacy system (if possible) in 
salary levels, skill-level quantity requirements, and where staff will 
be housed. 

Element: Logistics support details; 
Description: Includes maintenance and sparing plans, as well as planned 
upgrades. 

Element: Changes from the previous technical baseline; 
Description: Includes a tracking of changes, with a summary of what 
changed and why. 

Source: DOD, DOE, and SCEA. 

[End of table] 

Programs following an incremental development approach should have a 
technical baseline that clearly states system characteristics for the 
entire program. In addition, the technical baseline should define the 
characteristics to be included in each increment, so that a rigorous 
LCCE can be developed. For programs with a spiral development approach, 
the technical baseline tends to evolve as requirements become better 
defined. In earlier versions of a spiral development program, the 
technical baseline should clearly state the requirements that are 
included and those that have been excluded. This is important, since a 
lack of defined requirements can lead to cost increases and delays in 
delivering services, as case study 20 illustrates. 

Case Study 20: Defining Requirements, from United States Coast Guard, 
GAO-06-623: 

The U.S. Coast Guard contracted in September 2002 to replace its search 
and rescue communications system, installed in the 1970s, with a new 
system known as Rescue 21. The acquisition and initial implementation 
of Rescue 21, however, resulted in significant cost overruns and 
schedule delays. By 2005, its estimated total acquisition cost had 
increased to $710.5 million from 1999’s $250 million, and the schedule 
for achieving full operating capability had been delayed from 2006 to 
2011. GAO reported in May 2006 on key factors contributing to the cost 
overruns and schedule delays, including requirements management. 
Specifically, GAO found that the Coast Guard did not have a rigorous 
requirements management process. 

Although the Coast Guard had developed high-level requirements, it 
relied solely on the contractor to manage them. According to Coast 
Guard acquisition officials, they had taken this approach because of 
the performance-based contract vehicle. GAO’s experience in reviewing 
major systems acquisitions has shown that it is important for 
government organizations to exercise strong leadership in managing 
requirements, regardless of the contracting vehicle. 

In addition to the Coast Guard's not effectively managing requirements, 
Rescue 21 testing revealed numerous problems linked to incomplete and 
poorly defined user requirements. For example, a Coast Guard usability 
and operability 
assessment of Rescue 21 stated that most of the operational 
advancements envisioned for the system had not been achieved, 
concluding that these problems could have been avoided if the contract 
had contained user requirements. 

A key requirement was to “provide a consolidated regional geographic 
display.” The contractor provided a capability based on this 
requirement but, during testing, the Coast Guard operators believed 
that the maps did not display sufficient detail. Such discrepancies led 
to an additional statement of work that defined required enhancements 
to the system interface, such as screen displays. 

GAO reported that if deploying Rescue 21 were to be further delayed, 
Coast Guard sites and services would be affected in several ways. Key 
functionality, such as improved direction finding and improved coverage 
of coastal areas, would not be available as planned. Coast Guard 
personnel at those sites would continue to use outdated legacy 
communications systems for search and rescue operations, and coverage 
of coastal regions would remain limited. In addition, delays could 
result in costly upgrades to the legacy system in order to address 
communications coverage gaps, as well as other operational concerns. 

Source: GAO, United States Coast Guard: Improvements Needed in 
Management and Oversight of Rescue System Acquisition, GAO-06-623, 
Washington, D.C.: May 31, 2006. 

[End of case study] 

Fully understanding requirements up front helps increase the accuracy 
of the cost estimate. While each program should have a technical 
baseline that addresses each element in table 6, each program’s aspects 
are unique. In the next section, we give examples of system 
characteristics and performance parameters typically found in 
government cost estimates, including military weapon systems and 
civilian construction and information systems. 

Key System Characteristics and Performance Parameters: 

Since systems differ, each one has unique physical and performance 
characteristics. Analysts need specific knowledge about them before 
they can develop a cost estimate for a weapon system, an information 
system, or a construction program. 

While the specific physical and performance characteristics for a 
system being estimated will be dictated by the system and the 
methodology used to perform the estimate, several general 
characteristics have been identified in the various guides we reviewed. 
Table 7 lists general characteristics shared within several system 
types. 

Table 7: General System Characteristics: 

System: Aircraft; 
Characteristics: 
* Breakdown of airframe unit weight by material type; 
* Combat ceiling and speed; 
* Internal fuel capacity; 
* Length; 
* Load factor; 
* Maximum altitude; 
* Maximum speed (knots at sea level); 
* Mission and profile; 
* Weight; 
- Type: Airframe unit weight, combat, empty, maximum gross, 
payload, structure; 
* Wetted area; 
* Wing; 
- Type: Wingspan, wing area, wing loading. 

System: Automated information systems; 
Characteristics: 
* Architecture; 
* Commercial off-the-shelf software used; 
* Customization of commercial off-the-shelf software; 
* Expansion factors; 
* Memory size; 
* Processor type; 
* Proficiency of programmers; 
* Programming language used; 
* Software sizing metric. 

System: Construction; 
Characteristics: 
* Changeover; 
* Environmental impact; 
* Geography; 
* Geology; 
* Liability; 
* Location: 
- Type: Land value, proximity to major roads, relocation expenses; 
* Material type: 
- Type: Composite, masonry, metal, tile, wood shake; 
* Number of stories; 
* Permits; 
* Public acceptance; 
* Square feet; 
* Systemization. 

System: Missiles; 
Characteristics: 
* Height; 
* Length; 
* Payload; 
* Propulsion type; 
* Range; 
* Sensors; 
* Weight; 
* Width. 
 
System: Ships; 
Characteristics: 
* Acoustic signature; 
* Full displacement; 
* Full load weight; 
* Length overall; 
* Lift capacity; 
* Light ship weight; 
* Margin; 
* Maximum beam; 
* Number of screws; 
* Payload; 
* Propulsion type; 
* Shaft horsepower. 

System: Space; 
Characteristics: 
* Attitude; 
* Design life and reliability; 
* Launch vehicle; 
* Mission and duration; 
* Orbit type; 
* Pointing accuracy; 
* Satellite type; 
* Thrust; 
* Weight and volume. 

System: Tanks and trucks; 
Characteristics: 
* Engine; 
* Height; 
* Horsepower; 
* Length; 
* Weight; 
* Width; 
* Payload. 
 
Source: DOD and GAO. 

[End of table] 

Once a system’s unique requirements have been defined, they must be 
managed and tracked continually throughout the program’s development. 
If requirements change, both the technical baseline and cost estimate 
should be updated so that users and management can understand the 
effects of the change. When requirements are not well managed, users 
tend to become disillusioned, and costs and schedules can spin out of 
control, as case study 21 demonstrates. 

Case Study 21: Managing Requirements, from DOD Systems Modernization, 
GAO-06-215: 
 
The Naval Tactical Command Support System (NTCSS) was started in 1995 
to help U.S. Navy personnel manage ship, submarine, and aircraft 
support activities. At the time of GAO’s review, about $1 billion had 
been spent to partially deploy NTCSS to about half its intended sites. 
In December 2005, GAO reported that the Navy had not adequately 
conducted requirements management and testing activities for the 
system. For example, requirements had not been prioritized or traced to 
related documentation to ensure that the system’s capabilities would 
meet users’ needs. As a result, failures in developmental testing had 
prevented NTCSS’s latest component from passing operational testing 
twice over the preceding 4 years. From the Navy’s data, the recent 
trend in key indicators of system maturity, such as the number and 
nature of reported system problems and change proposals, showed that 
problems with NTCSS had persisted and that they could involve costly 
rework. In addition, the Navy did not know the extent to which NTCSS’s 
optimized applications were meeting expectations—even though the 
applications had been deployed to 229 user sites since 1998—because 
metrics to demonstrate that the expectations had been met had not been 
defined and collected. 

Source: GAO, DOD Systems Modernization: Planned Investment in the Naval 
Tactical Command Support System Needs to Be Reassessed, GAO-06-215, 
Washington, D.C.: Dec. 5, 2005. 

[End of case study] 

Case study 21 shows that an inability to manage requirements leads to 
additional costs and inefficient management of resources. To manage 
requirements, they must first be identified. The bottom line is that 
the technical baseline should document the underlying technical and 
program assumptions necessary to develop a cost estimate and update 
changes as they occur. Moreover, the technical baseline should also 
identify the level of risk associated with the assumptions so that the 
estimate’s credibility can be determined. As we stated previously, the 
technical baseline should mature as the program evolves. Because it is 
evolutionary, earlier versions of the technical 
baseline will necessarily include more assumptions and, therefore, more 
risk, but these should decline as risks become either realized or 
retired. 

4. Best Practices Checklist: Technical Baseline Description: 

* There is a technical baseline: 
- The technical baseline has been developed by qualified personnel such 
as system engineers. 
- It has been updated with technical, program, and schedule changes, 
and it contains sufficient detail of the best available information at 
any given time. 
- The information in the technical baseline generally drives the cost 
estimate and the cost estimating methodology. 
- The cost estimate is based on information in the technical baseline 
and has been approved by management. 

* The technical baseline answers the following: 
- What the program is supposed to do—requirements; 
- How the program will fulfill its mission—purpose; 
- What it will look like—technical characteristics; 
- Where and how the program will be built—development plan; 
- How the program will be acquired—acquisition strategy; 
- How the program will operate—operational plan; 
- Which characteristics affect cost the most—risk. 

[End of Chapter 7] 

Chapter 8: Work Breakdown Structure: 

A work breakdown structure is the cornerstone of every program because 
it defines in detail the work necessary to accomplish a program’s 
objectives. For example, a typical WBS reflects the requirements (what 
must be accomplished to develop a program) and provides a basis for 
identifying the resources and tasks needed to develop a program cost 
estimate. A WBS is also a valuable communication tool among systems 
engineering, program management, and other functional organizations 
because it provides a clear picture of what needs to be accomplished 
and how the work will be done. Accordingly, it is an essential element 
for identifying activities in a program’s integrated master schedule. 
In addition, it provides a consistent framework for planning and 
assigning responsibility for the work. Initially set up when the 
program is established, the WBS becomes successively detailed over time 
as more information becomes known about the program. 

A WBS is a necessary program management tool because it provides a 
basic framework for a variety of related activities like estimating 
costs, developing schedules, identifying resources, determining where 
risks may occur, and providing the means for measuring program status 
using EVM. Furthermore, a well-structured WBS helps promote 
accountability by identifying work products that are independent of one 
another. It also provides the framework to develop a schedule and cost 
plan that can easily track technical accomplishments—in terms of 
resources spent in relation to the plan as well as completion of 
activities and tasks—enabling quick identification of cost and schedule 
variances. 

Best Practice: Product-Oriented WBS: 
 
A WBS deconstructs a program’s end product into successive levels with 
smaller specific elements until the work is subdivided to a level 
suitable for management control. By breaking work down into smaller 
elements, management can more easily plan and schedule the program’s 
activities and assign responsibility for the work. It also facilitates 
establishing a schedule, cost, and EVM baseline. Establishing a product-
oriented WBS is a best practice because it allows a program to track 
cost and schedule by defined deliverables, such as a hardware or 
software component. This allows a program manager to more precisely 
identify which components are causing cost or schedule overruns and to 
more effectively mitigate the root cause of the overruns. 

A WBS breaks down product-oriented elements into a hierarchical 
structure that shows how elements relate to one another as well as to 
the overall end product. A 100 percent rule is followed that states 
that “the next level of decomposition of a WBS element (child level) 
must represent 100 percent of the work applicable to the next higher 
(parent) element.”[Footnote 31] This is considered a best practice by 
many experts in cost estimating, because a product-oriented WBS 
following the 100 percent rule ensures that all costs for all 
deliverables are identified. Failing to include all work for all 
deliverables can lead to schedule delays and subsequent cost increases. 
It can also result in confusion among team members. To avoid these 
problems, standardizing the WBS is a best practice in organizations 
where there is a set of program types that are standard and typical. 
This enables an organization to simplify the development of the top-
level program work breakdown structures by publishing the standard. It 
also facilitates an organization’s ability to collect and share data 
from common WBS elements among many programs. The more data that are 
available for creating the cost estimate, the higher the confidence 
level will be. 

Its hierarchical nature allows the WBS to logically sum the lower-level 
elements that support the measuring of cost, schedule, and technical 
analysis in an EVM system. A good WBS clearly defines the logical 
relationship of all program elements and provides a systematic and 
standardized way for collecting data across all programs. Therefore, a 
WBS is an essential part of developing a program’s cost estimate and 
enhancing an agency’s ability to collect data necessary to support 
future cost estimates. Moreover, when appropriately integrated with 
systems engineering, cost estimating, EVM, and risk management, a WBS 
provides the basis to allow program managers to have a better view into 
a program’s status, facilitating continual improvement. 

A WBS is developed and maintained by a systems engineering process that 
produces a product-oriented family tree of hardware, software, 
services, data, and facilities. It can be thought of as an illustration 
of what work will be accomplished to satisfy a program’s requirements. 
The WBS diagrams the effort in small discrete pieces, or elements, to 
show how each one relates to the others and to the program as a whole. 
These elements such as hardware, software, and data are further broken 
down into specific lower-level elements. The lowest level of the WBS is 
defined as the work package level. 

The number of levels for a WBS varies from program to program and 
depends on a program’s complexity and risk. Work breakdown structures 
need to be expanded to a level of detail that is sufficient for 
planning and successfully managing the full scope of work. However, 
each WBS should, at the very least, include three levels. The first 
level represents the program as a whole and therefore contains only one 
element—the program’s name. The second level contains the major program 
segments, and level three contains the lower-level components or 
subsystems for each segment. These relationships are illustrated in 
figure 10, which depicts a very simple automobile system WBS. 

Figure 10: A Product-Oriented Work Breakdown Structure: 

[Refer to PDF for image: illustration] 
 
Level 1: 
Automobile system. 

Level 2: 
Chassis;
Shell;
Interior; 
Exterior; 
Powertrain. 

Level 3: 
Subcomponent; 
Subcomponent; 
Subcomponent. 

Source: © 2005 MCR, LLC, “Developing a Work Breakdown Structure.” 

[End of figure] 

In figure 10, all level 2 elements would also have level 3 
subcomponents; chassis is the example in the figure. For some level 2 
elements, level 3 would be the lowest level of breakdown; for others, 
still lower levels would be required. The elements at each lower level 
of breakdown are called “children” of the next higher level, which are 
the “parents.” The parent–child relationship allows for logical 
connections and relationships to emerge and a better understanding of 
the technical effort involved. It also helps improve the ability to 
trace relationships within the cost estimate and EVM system. 

In the example in figure 10, the chassis would be a child of the 
automobile system but the parent of subcomponents 1–3. In constructing 
a WBS, the 100 percent rule always applies. That is, the sum of a 
parent’s children must always equal the parent. Thus, in figure 10, the 
sum of chassis, shell, interior, and so on must equal the automobile 
system. In this way, the WBS makes sure that each element is defined 
and related to only one work effort, so that all activities are 
included and accounted for. It also helps identify the specialists who 
are needed to complete the work and who will be responsible so that 
effort is not duplicated. 
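
The parent-child arithmetic described above can be illustrated with a 
short script. The following is a minimal sketch, not part of any GAO or 
DOD methodology, that represents the automobile WBS from figure 10 as a 
simple tree in Python. The element names follow the figure, but the 
dollar values and the rolled_up_cost helper are hypothetical 
illustrations of the 100 percent rule, under which a parent's cost is 
simply the sum of its children's costs. 

from dataclasses import dataclass, field
from typing import List

@dataclass
class WBSElement:
    name: str
    level: int
    cost: float = 0.0                      # entered only at the work package (leaf) level
    children: List["WBSElement"] = field(default_factory=list)

    def add(self, child: "WBSElement") -> "WBSElement":
        self.children.append(child)
        return child

    def rolled_up_cost(self) -> float:
        # 100 percent rule: a parent's cost is exactly the sum of its children
        if not self.children:
            return self.cost
        return sum(c.rolled_up_cost() for c in self.children)

# Level 1: the program as a whole (one element only)
auto = WBSElement("Automobile system", 1)

# Level 2: major program segments
chassis = auto.add(WBSElement("Chassis", 2))
auto.add(WBSElement("Shell", 2, cost=40_000))
auto.add(WBSElement("Interior", 2, cost=25_000))
auto.add(WBSElement("Exterior", 2, cost=20_000))
auto.add(WBSElement("Powertrain", 2, cost=75_000))

# Level 3: subcomponents (children) of one level 2 element
chassis.add(WBSElement("Subcomponent 1", 3, cost=10_000))
chassis.add(WBSElement("Subcomponent 2", 3, cost=15_000))
chassis.add(WBSElement("Subcomponent 3", 3, cost=5_000))

print(auto.rolled_up_cost())   # 190000 -- the children always sum to the parent

Because the roll-up is computed rather than entered separately, any work 
added at the child level automatically flows to its parent, which is the 
property that keeps the WBS free of omitted or double-counted effort. 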

It is important to note that a product-oriented WBS reflects cost, 
schedule, and technical performance on specific portions of a program, 
while a functional WBS does not provide that level of detail. For 
example, an overrun on a specific item in figure 10 (for example, 
powertrain) might cause program management to change a specification, 
shift funds, or modify the design. If the WBS were functionally based 
(for example, in manufacturing, engineering, or quality control), then 
management would not have the right information to get to the root 
cause of the problem. Therefore, since only a product-oriented WBS 
relates costs to specific hardware elements—the basis of most cost 
estimates—it represents a cost estimating best practice. Case study 22 
highlights problems that can occur by not following this best practice. 


Case Study 22: Product-Oriented Work Breakdown Structure, from Air 
Traffic Control, GAO-08-756: 

The Federal Aviation Administration (FAA) required the use of EVM on 
its major information technology investments, but GAO found that key 
components were not fully consistent with best practices. We reported 
that leading organizations establish EVM policies that require programs 
to use a product-oriented structure for defining work products. FAA's 
policy and guidance are not consistent with best practices because they 
require its programs to establish a standard WBS using a function-
oriented structure. FAA work is thus delineated by functional 
activities, such 
as design engineering, requirements analysis, and quality control. A 
product-oriented WBS would reflect cost, schedule, and technical 
performance on specific deliverables. 

Without a product-oriented approach, program managers may not have the 
detailed information needed to make decisions on specific program 
components. For example, cost overruns associated with a specific radar 
component could be quickly identified and addressed using a product-
oriented structure. If a function-oriented structure were used, these 
costs could be spread out over design, engineering, etc. 

FAA program managers using a product-oriented WBS need to transfer 
their data to FAA’s required function-oriented WBS when reporting to 
management. EVM experts agree that such mapping efforts are time-
consuming, subject to error, and not always consistent. Until FAA 
establishes a standard product-oriented WBS, program officials may not 
be obtaining the information they need. 

Source: GAO, Air Traffic Control: FAA Uses Earned Value Techniques to 
Help Manage Information Technology Acquisitions, but Needs to Clarify 
Policy and Strengthen Oversight, GAO-08-756, Washington, D.C.: July 18, 
2008. 

[End of case study] 

Since best practice is for the WBS prime mission elements to be product-
oriented, the WBS should not be structured or organized at the second 
or third level according to any element that is not a product or is not 
in itself a deliverable, such as: 

* design engineering, requirements analysis, logistics, risk, quality 
assurance, and test engineering (all functional engineering efforts), 
aluminum stock (a material resource), and direct costs (an accounting 
classification);[Footnote 32] 
 
* program acquisition phases (for example, development and procurement) 
and types of funds used in those phases (for example, research, 
development, test, and evaluation); 
 
* rework, retesting, and refurbishing, which should be treated as 
activities of the WBS element; 
 
* nonrecurring and recurring classifications, for which reporting 
requirements should be structured to ensure that they are segregated; 
 
* cost saving efforts—such as total quality management initiatives and 
acquisition reform initiatives—included in the elements they affect, 
not captured separately;

* the organizational structure of the program office or contractor; 

* the program schedule—instead the WBS will drive the necessary 
schedule activities; 
 
* meetings, travel, and computer support, which should be included in 
the WBS elements they are associated with; 
 
* generic terms (terms for WBS elements should be as specific as 
possible); and; 

* tooling, which should be included with the equipment being produced. 

While functional activities are necessary for supporting a product’s 
development, the WBS should not be organized around them. Only products 
should drive the WBS, not common support activities. Moreover, the WBS 
dictionary should state where the functional elements fall within the 
products and how the statement of work elements come together to make 
specific products. 

Common WBS Elements: 

In addition to including product-oriented elements, every WBS includes 
program management as a level 2 element and other common elements like 
integration and assembly, government furnished equipment, and 
government testing. Table 8 lists and describes common elements that 
support the program. For instance, systems engineering, program 
management, integration, and testing are necessary support functions 
for developing, testing, producing, and fielding hardware or software 
elements. 

Table 8: Common Elements in Work Breakdown Structures: 

Common element: Integration, assembly, test, and checkout; 
Description: All effort of technical and functional activities 
associated with the design, development, and production of mating 
surfaces, structures, equipment, parts, materials, and software 
required to assemble level 3 equipment (hardware and software) elements 
into level 2 mission equipment (hardware and software). 

Common element: System engineering; 
Description: The technical and management efforts of directing and 
controlling a totally integrated engineering effort of a system or 
program. 

Common element: Program management; 
Description: The business and administrative planning, organizing, 
directing, coordinating, controlling, and approval actions designated 
to accomplish overall program objectives not associated with specific 
hardware elements and not included in systems engineering. 

Common element: Training; 
Description: Deliverable training services, devices, accessories, aids, 
equipment, and parts used to facilitate instruction in which personnel 
will learn to operate and maintain the system with maximum efficiency. 

Common element: Data; 
Description: The deliverable data that must be on a contract data 
requirements list, including technical publications, engineering data, 
support data, and management data needed for configuration management, 
cost, schedule, contractual data management, and program management. 

Common element: System test and evaluation; 
Description: The use of prototype, production, or specifically 
fabricated hardware and software to obtain or validate engineering data 
on the performance of the system under development (in DOD, 
normally funded from research, development, test, and evaluation 
appropriations); also includes all effort associated with design and 
production of models, specimens, fixtures, and instrumentation in 
support of the system-level test program. 

Common element: Peculiar support equipment; 
Description: Equipment uniquely needed to support the program: 
vehicles, equipment, tools, and the like to fuel, service, transport, 
hoist, repair, overhaul, assemble and disassemble, test, inspect, or 
otherwise maintain mission equipment, as well as equipment or software 
required to maintain or modify the software portions of the system. 

Common element: Common support equipment;
Description: Equipment not unique to the program and available in 
inventory for use by many programs. 

Common element: Operational and site activation; 
Description: Installation of mission and support equipment in the 
operations or support facilities and complete system checkout or 
shakedown to ensure operational status; may include real estate, 
construction, conversion, utilities, and equipment to provide all 
facilities needed to house, service, and launch prime mission 
equipment. 

Common element: Facilities; 
Description: Includes construction, conversion, or expansion of 
existing industrial facilities for production, inventory, and 
contractor depot maintenance required as a result of the specific 
system. 

Common element: Initial spares and repair parts; 
Description: Includes the deliverable spare components, assemblies, and 
subassemblies used for initial replacement purposes in the materiel 
system equipment end item. 

Source: DOD. 

[End of table] 
 
Therefore, in addition to having a product-oriented WBS for the prime 
mission equipment that breaks down the physical pieces of, for example, 
an aircraft, information technology system, or satellite, the WBS 
should include these common elements to ensure that all effort is 
identified at the outset. This, in turn, will facilitate planning and 
managing the overall effort, since the WBS should be the starting point 
for developing the detailed schedule. Figure 11 shows a program WBS, 
including common elements, for an aircraft system. 

Figure 11: A Work Breakdown Structure with Common Elements: 

[Refer to PDF for image: illustration] 

Level 1: 
Aircraft system; 

Level 2: 
Air vehicle; 
System engineering/Program management; 
System test and evaluation; 
Data; 
Training. 

Level 3: 
Airframe; 
Propulsion;
Fire control.
 
Source: © 2005 MCR, LLC, “Developing a Work Breakdown Structure.” 

[End of figure] 

While the top-level WBS encompasses the whole program, the contractor 
must also develop a contract WBS that extends the lower-level 
components to reflect its responsibilities. See figure 12. 

Figure 12: A Contract Work Breakdown Structure: 

[Refer to PDF for image: illustration] 

Source: DOD. 

[End of figure] 

Figure 12 shows how a prime contractor may require its subcontractor to 
use the WBS to report work progress. In this example, the fire control 
effort (a level 3 element in the prime contractor’s WBS) is the first 
level for the subcontractor. Thus, all fire control expenditures at 
level 1 of the subcontractor’s contract WBS would map to the fire 
control element at level 3 in the program WBS. This shows how a 
subcontractor would break a level 3 item down to lower levels to 
accomplish the work, which, when rolled up to the prime WBS, would show 
effort at levels 4–7. Always keep in mind that the structure provided 
by the prime contractor’s WBS will identify the work packages that are 
the responsibility of the subcontractor. The subcontractor will also 
need to decompose the work further in its own WBS. 

WBS Development: 

A WBS should be developed early to provide a conceptual idea of 
program size and scope. As the program matures, so should the WBS. Like 
the technical baseline, the WBS should be considered a living document. 
Therefore, as the technical baseline becomes further defined with time, 
the WBS will also reflect more detail. For example, as specification 
requirements become better known and the statement of work is updated, 
the WBS will include more elements. As more elements are added to the 
WBS, the schedule can be defined in greater detail, giving more insight 
into the program’s cost, schedule, and technical relationships. 

It is important that each WBS be accompanied by a dictionary of the 
various WBS elements and their hierarchical relationships. A WBS 
dictionary is simply a document that describes in brief narrative 
format what work is to be performed in each WBS element. Each element 
is presented in an outline to show how it relates to the next higher 
element and what is included to ensure clear relationships. With minor 
changes and additions the WBS dictionary can be converted into a 
statement of work. Although not the normal approach, the dictionary may 
also be expanded by the program manager to describe the resources and 
processes necessary for producing each element in cost, technical, and 
schedule terms. Also, since the WBS is product oriented, it is closely 
related to, and structured much like, an indented bill of materials for 
the primary product. Like the WBS, its dictionary should 
be updated when changes occur. After the program is baselined, updating 
the WBS should be part of a formal process, as in configuration 
management. 

Standardized WBS: 

Standardizing the WBS is considered a best practice because it enables 
an organization to collect and share data among programs. Standardizing 
work breakdown structures results in more consistent cost estimates, 
allows data to be shared across organizations, and leads to more 
efficient program execution. WBS standardization also facilitates cost 
estimating relationship development and allows for common cost measures 
across multiple contractors and programs. Not standardizing WBSs causes 
extreme difficulty in comparing costs from one contractor or program to 
another, resulting in substantial expense to government estimating 
agencies when collecting and reconciling contractor cost and technical 
data in a consistent format. 

The standardized WBS logic should support the engineering perspective 
on how the program is being built. The WBS should be a communication 
tool that can be used across all functions within the program. To 
foster flexibility, WBS standardization should occur at a high 
level—such as WBS level 3—so that lower levels can be customized to 
reflect how the specific program’s work will be managed. For high-risk 
or costly elements, however, management can make decisions to 
standardize the WBS to whatever level is necessary to properly gain 
insight. Thus, the WBS should be standard at a high level, with 
flexibility in the lower levels to allow detailed planning once the 
schedule is laid out. Furthermore, the same standard WBS should be used 
for developing the cost estimate and the program schedule and setting 
up the EVM performance measurement baseline. Relying on a standard WBS 
can enable program managers to better plan and manage their work and 
can help in updating the cost estimate with actual costs—the final 
critical step in our twelve steps to a high-quality cost estimate. 

A standardized product-oriented WBS can help define high-level 
milestones and cost driver relationships that can be repeated in future 
applications. In addition to helping the cost community, standard WBSs 
can result in better portfolio management. Programs reporting to a 
standard WBS enable leadership to make better decisions about where to 
apply contingency reserve and where systemic problems are occurring, 
such as in integration and test. Using this information, management can 
take 
action by adjusting investment and obtaining lessons learned. As a 
result, it is easier to manage programs if they are reporting in the 
same format. 

Besides the common elements shown in table 8, DOD has identified, for 
each defense system, a standard combination of hardware and software 
that defines the end product for that system. In its 2005 updated WBS 
handbook, DOD defined and described the WBS, provided instructions on 
how to develop one, and defined specific defense items.[Footnote 33] 
The primary purpose of the handbook is to develop the top levels of the 
WBS with uniform definitions and a consistent approach. Developed 
through the cooperation of the military services, with assistance from 
industry associations, its benefit is improved communication throughout 
the acquisition process. 

In addition to defining a standard WBS for its weapon systems, DOD has 
developed a common cost element structure that, while not a product-
oriented WBS, standardizes the vocabulary for cost elements for 
automated information systems undergoing DOD review.[Footnote 34] The 
cost element structure is also designed to standardize the systems, 
facilitating the validation process. Furthermore, DOD requires that all 
the cost elements be included in LCCEs for automated information 
systems submitted for review. Table 9 gives an example of the cost 
element structure for an automated information system. 

Table 9: Cost Element Structure for a Standard DOD Automated 
Information System: 

Element 1 and subelements: 
1.0 Investment: 
1.1 Program management;
1.1.1 Personnel;
1.1.2 Travel;
1.1.3 Other government support;
1.1.4 Other;
1.2 Concept exploration; 
1.2.1 Engineering analysis investment & specification;
1.2.2 Concept exploration hardware;
1.2.3 Concept exploration software;
1.2.4 Concept exploration data; 
1.2.5 Exploration documentation; 
1.2.6 Concept exploration testing; 
1.2.7 Facilities;
1.2.8 Other;
1.3 System development;
1.3.1 System design & specification; 
1.3.2 Prototype & test site investment; 
1.4 System procurement; 
1.4.1 Deployment hardware; 
1.4.2 System deployment software; 
1.4.3 Initial documentation; 
1.4.4 Logistics support equipment; 
1.4.5 Initial spares; 
1.4.6 Warranties; 
1.5 Outsource investment; 
1.5.1 Capital investment; 
1.5.2 Software development; 
1.5.3 System user investment; 
1.6 System implementation; 
1.6.1 Training; 
1.6.2 Integration, test, acceptance;
1.6.3 Common support equipment;
1.6.4 Site activation & facilities;
1.6.5 Initial supplies;
1.6.6 Engineering change;
1.6.7 Initial logistics support;
1.6.8 Office furniture & furnishings;
1.6.9 Data upload & transition;
1.6.10 Communications;
1.6.11 Other;
1.7 Upgrades;
1.7.1 Upgrade development;
1.7.2 Life cycle upgrades;
1.7.3 Central mega center upgrades;
1.8 Disposal & reuse;
1.8.1 Capital recoupment;
1.8.2 Retirement;
1.8.3 Environmental & hazardous.

Element 2 and subelements: 
2.0 System operations & support; 
2.1 System management;
2.1.1 Personnel;
2.1.2 Travel;
2.1.3 Other government support;
2.1.4 Other; 
2.2 Annual operations; 
2.2.1 Maintenance investment; 
2.2.2 Replenishment spares; 
2.2.3 Replenishment supplies; 
2.3 Hardware maintenance; 
2.3.1 Hardware maintenance; 
2.3.2 Maintenance support; 
2.3.3 Other hardware maintenance; 
2.4 Software maintenance; 
2.4.1 Commercial off-the-shelf software; 
2.4.2 Application & mission software;
2.4.3 Communication software; 
2.5 Megacenter maintenance; 
2.6 Data maintenance; 
2.6.1 Mission application data; 
2.6.2 Standard administrative data; 
2.7 Site operations; 
2.7.1 System operational personnel; 
2.7.2 Utility requirement; 
2.7.3 Fuel; 
2.7.4 Facilities lease & maintenance; 
2.7.5 Communications; 
2.7.6 Base operating & support; 
2.7.7 Recurring training & fielding; 
2.7.8 Miscellaneous support; 
2.8 Environmental & acceptance hazardous; 
2.9 Contract leasing.

Element 3 and subelements: 
3.0 Legacy system phase-out; 
3.1 System management; 
3.1.1 Personnel; 
3.1.2 Travel; 
3.1.3 Other government support;
3.2 Phase-out investment; 
3.2.1 Hardware; 
3.2.2 Software; 
3.2.3 Hazardous material handling; 
3.3 Phase-out operations & support; 
3.3.1 Hardware maintenance;
3.3.2 Software maintenance; 
3.3.3 Unit & subunit operations; 
3.3.4 Megacenter operations; 
3.3.5 Phase-out contracts. 

Source: DOD. 

[End of table] 

This standard WBS should be tailored to fit each program. In some 
cases, the cost element structure contains built-in redundancies that 
provide flexibility in accounting for costs. For example, logistics 
support costs could occur in either investment or operations and 
support. However, it is important that the cost element structure of 
the automated information system not double count costs that could be 
included in more than one cost element. While the structure is 
flexible, the same rules as those of a WBS apply, in that children are 
assigned to only one parent. (Appendix IX contains numerous examples 
of standard work breakdown structures for, among others, surface, sea, 
and air transportation systems; military systems; communications 
systems; and systems for construction and utilities.) 

WBS And Scheduling: 

The WBS should be used as the outline for the integrated master 
schedule, using the levels of indenture down to the work package level. 
Since the WBS defines the work in lower levels of detail, its framework 
provides the starting point for defining all activities and tasks that 
will be used to develop the program schedule. 

The lowest level of the WBS is the work package. Within the work 
packages, the activities are defined and scheduled. When developing the 
program schedule, the WBS—in outline form—should simply be cut and 
pasted into the scheduling software. From there, the lower-level work 
packages and 
subsequent activities and tasks are defined. 

Accordingly, the WBS provides a logical and orderly way to begin 
preparing the detailed schedule, determining the relationships between 
activities, and identifying resources required to accomplish the 
tasks. Therefore, high-level summary tasks and all the detailed tasks 
in the schedule should map directly to the WBS to ensure that the 
schedule encompasses the entire work effort. 
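
One way to apply this mapping in practice is a simple traceability 
check. The sketch below is illustrative only, with hypothetical WBS 
codes and task names: it verifies that every detailed schedule task 
traces to a WBS work package and that every work package appears 
somewhere in the schedule, so that the schedule encompasses the entire 
work effort. 

# Hypothetical WBS work package codes and schedule tasks used only to illustrate the check
wbs_work_packages = {"1.1.1", "1.1.2", "1.2.1", "1.2.2"}

schedule_tasks = [
    {"task": "Design airframe structure", "wbs": "1.1.1"},
    {"task": "Fabricate test fixtures",   "wbs": "1.2.1"},
    {"task": "Integration dry run",       "wbs": "9.9.9"},   # does not trace to the WBS
]

tasks_without_wbs = [t["task"] for t in schedule_tasks
                     if t["wbs"] not in wbs_work_packages]
wbs_without_tasks = sorted(wbs_work_packages - {t["wbs"] for t in schedule_tasks})

print("Schedule tasks not traceable to the WBS:", tasks_without_wbs)
print("WBS work packages missing from the schedule:", wbs_without_tasks)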

WBS and EVM: 

By breaking the work into smaller, more manageable work elements, a WBS 
can be used to integrate the scheduled activities and costs for 
accomplishing each work package at the lowest level of the WBS. This is 
essential for developing the resource-loaded schedule that forms the 
foundation for the EVM performance measurement baseline. Thus, a WBS is 
an essential part of EVM cost, schedule, and technical monitoring, 
because it provides a consistent framework from which to measure 
progress. This framework can be used to monitor and control costs based 
on the original baseline and to track where and why there were 
differences. In this way, the WBS serves as the common framework for 
analyzing the original cost estimate and the final cost outcome. 

When analysts use cost, schedule, and technical information organized 
by the WBS hierarchical structure, they can summarize data to provide 
management valuable information at any phase of the program. 
Furthermore, because a WBS addresses the entire program, managers at 
any level can assess their progress against the cost estimate plan. 
This helps keep program status current and visible so that risks can be 
managed or mitigated quickly. Without a WBS, it would be much more 
difficult to analyze the root cause of cost, schedule, and technical 
problems and to choose the optimum solution to fix them. 
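
As an illustration of the monitoring described above, the short sketch 
below applies the standard EVM measures of cost variance (earned value 
minus actual cost) and schedule variance (earned value minus planned 
value) to data organized by WBS element; the WBS element names and 
dollar values are hypothetical. 

# Hypothetical EVM data (planned value, earned value, actual cost) by WBS element
evm_by_wbs = {
    "1.1 Airframe":     {"PV": 500_000, "EV": 450_000, "AC": 520_000},
    "1.2 Propulsion":   {"PV": 300_000, "EV": 310_000, "AC": 295_000},
    "1.3 Fire control": {"PV": 200_000, "EV": 180_000, "AC": 205_000},
}

for element, d in evm_by_wbs.items():
    cv = d["EV"] - d["AC"]   # cost variance: negative means a cost overrun on this element
    sv = d["EV"] - d["PV"]   # schedule variance: negative means the element is behind plan
    print(f"{element}: CV = {cv:+,}, SV = {sv:+,}")

# Because the data are organized by WBS element, the same figures roll up
# to any higher level of the structure for a program-wide status.
total_cv = sum(d["EV"] - d["AC"] for d in evm_by_wbs.values())
total_sv = sum(d["EV"] - d["PV"] for d in evm_by_wbs.values())
print(f"Program level: CV = {total_cv:+,}, SV = {total_sv:+,}")

Because every element reports against the same structure, an overrun can 
be traced to the specific deliverable causing it rather than being lost 
in a program-wide total. 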

The WBS also provides a common thread between EVM and the integrated 
master schedule (IMS)—the time-phased schedule DOD and other agencies 
use for assessing technical performance. This link to the WBS can allow 
for further understanding of program cost and schedule variances. When 
the work is broken down into small pieces, progress can be linked to 
the IMS for better assessments of cost, technical, schedule, and 
performance issues. The WBS also enhances project control by tying the 
contractual work scope to the IMS, which DOD commonly uses to develop a 
program’s technical goals and plans. 

WBS And Risk Management: 

The WBS is also valuable for identifying and monitoring risks. During 
the cost estimating phase, the WBS is used to flag elements likely to 
encounter risks, allowing for better contingency planning. During 
program execution, the WBS is used to monitor risks using the EVM 
system, which details plans to a level that is needed to accomplish all 
tasks.

In scheduling the work, the WBS can help identify activities in the 
schedule that are at risk because resources are lacking or because too 
many activities are planned in parallel with one another. In addition, 
risk items can be mapped to activities in the schedule and the results 
can be examined through a schedule risk analysis (more detail is in 
appendix X). 
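
As a simple illustration of the kind of schedule risk analysis 
referenced above (appendix X covers the method itself), the sketch 
below runs a small Monte Carlo simulation over three sequential 
activities with hypothetical three-point duration estimates and reports 
how often the total duration exceeds the planned value; the activity 
names, durations, and trial count are all assumptions for illustration. 

import random

# Hypothetical three-point duration estimates (optimistic, most likely, pessimistic), in days
activities = {
    "Design":      (20, 30, 50),
    "Fabrication": (40, 60, 100),
    "Testing":     (15, 25, 45),
}
planned_total = 115   # hypothetical deterministic plan (sum of the most likely values)

trials = 10_000
totals = []
for _ in range(trials):
    total = sum(random.triangular(low, high, mode)
                for (low, mode, high) in activities.values())
    totals.append(total)

prob_overrun = sum(t > planned_total for t in totals) / trials
totals.sort()
p80 = totals[int(0.8 * trials)]   # total duration at roughly 80 percent confidence

print(f"Probability the schedule exceeds the plan: {prob_overrun:.0%}")
print(f"80th percentile total duration: {p80:.0f} days")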

WBS Benefits: 

Elements of a WBS may vary by phase, since different activities are 
required for development, production, operations, and support. 
Establishing a master WBS as soon as possible for the program’s life 
cycle that details the WBS for each phase provides many program 
benefits: 
 
* segregating work elements into their component parts; 

* clarifying relationships between the parts, the end product, and the 
tasks to be completed; 

* facilitating effective planning and assignment of management and 
technical responsibilities; 

* helping track the status of technical efforts, risks, resource 
allocations, expenditures, and the cost and schedule of technical 
performance within the appropriate phases, since the work in phases 
frequently overlaps; 

* helping ensure that contractors are not unnecessarily constrained in 
meeting item requirements; and; 

* providing a common basis and framework for the EVM system and the 
IMS, facilitating consistency in understanding program cost and 
schedule performance and in assigning work to the appropriate phase. 
Since the link between the requirements, the WBS, the statement of 
work, the IMS, and the integrated master plan provides specific 
insights into the relationship between cost, schedule, and performance, 
all items can be tracked to the same WBS elements. 

As the program or system matures, engineering efforts should focus on 
system-level performance requirements—validating critical technologies 
and processes and developing top-level specifications. As the 
specifications are further defined, the WBS will better define the 
system in terms of its specifications. After the system concept has 
been determined, major subsystems can be identified and lower-level 
functions determined, so that lower-level system elements can be 
defined, eventually completing the total system definition. The same 
WBS can be used throughout, updating and revising it as the program or 
system development proceeds and as the work in each phase progresses. 
One of the outputs of each phase is an updated WBS covering the 
succeeding phases. 

In summary, a well-developed WBS is essential to the success of all 
acquisition programs. A comprehensive WBS provides a consistent and 
visible framework that improves communication; helps in the planning 
and assignment of management and technical responsibilities; and 
facilitates tracking engineering efforts, resource allocations, cost 
estimates, expenditures, and cost and technical performance. Without 
one, a program is more likely to encounter problems, as case studies 23 
and 24 illustrate. 

Case Study 23: Developing Work Breakdown Structure, from NASA, GAO-04-
642: 
 
For more than a decade, GAO had identified NASA’s contract management 
as a high-risk area. NASA had been unable to collect, maintain, and 
report the full cost of its programs and projects. Because of 
persistent cost growth in a number of NASA programs, GAO was asked to 
assess 27 programs—10 in detail. GAO found that only 3 of the 10 had 
provided a complete breakdown of the work to be performed, despite 
agency guidance calling for projects to break down the work into 
smaller units to facilitate cost estimating and program management and 
to help ensure that relevant costs were not omitted. Underestimating 
full life-cycle costs creates the risk that a program may be 
underfunded and subject to major cost overruns. It may be reduced in 
scope, or additional funding may have to be appropriated to meet 
objectives. Overestimating life-cycle costs creates the risk that a 
program will be thought unaffordable and could go unfunded. Without 
a complete WBS, NASA’s programs cannot ensure that their LCCEs capture 
all relevant costs, which can mean cost overruns. Inconsistent WBS 
estimates across programs can cause double counting or, worse, costs 
can be underestimated when historical program costs are used for 
projecting future costs for similar programs. Among its multiple 
recommendations, GAO recommended that NASA base its cost estimates for 
each program on a WBS that encompassed both in-house and contractor 
efforts and develop procedures that would prohibit proposed projects 
from proceeding through review and approval if they did not address the 
elements of recommended cost estimating practices. 

Source: GAO, NASA: Lack of Disciplined Cost-Estimating Processes 
Hinders Effective Program Management, GAO-04-642, Washington, D.C.: 
May 28, 2004. 

[End of case study] 

Case Study 24: Developing Work Breakdown Structure, from 
Homeland Security, GAO-06-296: 

The Department of Homeland Security (DHS) established U.S. Visitor and 
Immigrant Status Indicator Technology (US–VISIT) to collect, maintain, 
and share information, including biometric identifiers, on selected 
foreign nationals entering and exiting the United States. Having 
reported that the program had not followed effective cost estimating 
practices, GAO recommended that DHS follow effective practices for 
estimating future increments. 

GAO then reported on the cost estimates for the latest increment in 
February 2006, finding US–VISIT’s cost estimates still insufficient. 
For example, they did not include a detailed WBS and they omitted 
important cost elements such as system testing. The uncertainties 
associated with the latest system increment cost estimate were not 
identified. Uncertainty analysis provides the basis for adjusting 
estimates to reflect unknown facts and circumstances that could affect 
costs, and it identifies risk associated with the cost estimate. 

Program officials stated that they recognized the importance of 
developing reliable cost estimates and initiated actions to more 
reliably estimate the costs of future system increments. For example, 
US–VISIT chartered a cost analysis process action team to develop, 
document, and implement a cost analysis policy, process, and plan for 
the program. Program officials had also hired additional contracting 
staff with cost estimating experience. 

Source: GAO, Homeland Security: Recommendations to Improve Management 
of Key Border Security Program Need to Be Implemented, GAO-06-296, 
Washington, D.C.: Feb. 14, 2006. 

[End of case study] 

5. Best Practices Checklist: Work Breakdown Structure: 

* A product-oriented WBS represents best practice. 

* It reflects the program work that needs to be done. It: 
- clearly outlines the end product and major work for the program; 
- contains at least 3 levels of indenture; 
- is flexible and tailored to the program. 

* The 100 percent rule applies: the sum of the children equals the 
parent. 
- The WBS defines all work packages, which in turn include all cost 
elements and deliverables. 
- In addition to hardware and software elements, the WBS contains 
program management and other common elements to make sure all the work 
is covered. 

* Each system has one program WBS but may have several contract WBSs 
that are extended from the program WBS, depending on the number of 
subcontractors. 

* The WBS is standardized so that cost data can be collected and used 
for estimating future programs. It: 
- facilitates portfolio management, including lessons learned; 
- matches schedule, cost estimate, and EVM at a high level; 
- is updated as changes occur and the program becomes better defined; 
- includes functional activities within each element that are needed to 
support each product deliverable; 
- is the starting point for developing the program’s detailed schedule; 
- provides a framework for identifying and monitoring risks and the 
effectiveness of contingency plans; 
- provides for a common language between the government program 
management office, technical specialists, prime contractors, and 
subcontractors. 

* The WBS has a dictionary that: 
- defines each element and how it relates to others in the hierarchy; 
- clearly describes what is included in each element; 
- describes resources and functional activities needed to produce the 
element product;
- links each element to other relevant technical documents. 

[End of Chapter 8] 

Chapter 9: Ground Rules and Assumptions: 

Cost estimates are typically based on limited information and therefore 
need to be bound by the constraints that make estimating possible. 
These constraints usually take the form of assumptions that bind the 
estimate’s scope, establishing baseline conditions the estimate will be 
built from. Because of the many unknowns, cost analysts must create a 
series of statements that define the conditions the estimate is to be 
based on. These statements are usually made in the form of ground rules 
and assumptions (GR&A). By reviewing the technical baseline and 
discussing the GR&As with customers early in the cost estimating 
process, analysts can flush out any potential misunderstandings. GR&As: 

* satisfy requirements for key program decision points, 

* answer detailed and probing questions from oversight groups, 

* help make the estimate complete and professional, 

* present a convincing picture to people who might be skeptical, 

* provide useful estimating data and techniques to other cost 
estimators, 

* provide for reconstruction of the estimate when the original 
estimators are no longer available, and 

* provide a basis for the cost estimate that documents areas of 
potential risk to be resolved. 

Ground Rules: 
 
Ground rules and assumptions, often grouped together, are distinct. 
Ground rules represent a common set of agreed on estimating standards 
that provide guidance and minimize conflicts in definitions. When 
conditions are directed, they become the ground rules by which the team 
will conduct the estimate. The technical baseline requirements 
discussed in chapter 7 represent cost estimate ground rules. Therefore, 
a comprehensive technical baseline provides the analyst with all the 
necessary ground rules for conducting the estimate. 

Assumptions: 

Without firm ground rules, the analyst is responsible for making 
assumptions that allow the estimate to proceed. In other words, 
assumptions are required only where no ground rules have been provided. 
Assumptions represent a set of judgments about past, present, or future 
conditions postulated as true in the absence of positive proof. The 
analyst must ensure that assumptions are not arbitrary but are 
founded on expert judgments rendered by experienced program and 
technical personnel. Many assumptions profoundly influence cost; the 
subsequent rejection of even a single assumption by management could 
invalidate many aspects of the estimate. Therefore, it is imperative 
that cost estimators brief management and document all assumptions 
well, so that management fully understands the conditions the estimate 
was structured on. Failing to do so can lead to overly optimistic 
assumptions that heavily influence the overall cost estimate, to cost 
overruns, and to inaccurate estimates and budgets. (See case study 25.) 
 
Case Study 25: The Importance of Assumptions, from Space 
Acquisitions, GAO-07-96: 

Estimated costs for DOD’s major space acquisition programs increased 
about $12.2 billion, nearly 44 percent, above initial estimates for 
fiscal years 2006 through 2011. Such growth has had a dramatic effect 
on DOD’s overall space portfolio. To cover the added costs of poorly 
performing programs, DOD shifted scarce resources from other programs, 
creating a cascade of cost and schedule inefficiencies. 

GAO’s case study analyses found that program office cost estimates—
specifically, assumptions they were based on—were unrealistic in eight 
areas, many interrelated. In some cases, such as assumptions regarding 
weight growth and the ability to gain leverage from legacy systems, 
past experiences or contrary data were ignored. In others, such as when 
contractors were given more program management responsibility or when 
growth in the commercial market was predicted, estimators assumed that 
promises of reduced cost and schedule would be borne out but did not 
have the benefit of experience to factor them into their work. 

GAO also identified flawed assumptions that reflected deeper flaws in 
acquisition strategies or development approaches. For example, five of 
six programs GAO reviewed assumed that technologies would be 
sufficiently mature when needed, even though they began without a 
complete understanding of how long it would take or how much it would 
cost to ensure that they could work as intended. In four programs, 
estimators assumed few delays, even though the programs adopted highly 
aggressive schedules while attempting to make ambitious leaps in 
capability. In four programs, estimators assumed funding would stay 
constant, even though space and weapons programs frequently experienced 
funding shifts and the Air Force was in the midst of starting a number 
of costly new space programs to replenish older ones. 

Source: GAO, Space Acquisitions: DOD Needs to Take More Action to 
Address Unrealistic Initial Cost Estimates of Space Systems, GAO-07-96, 
Washington, D.C.: Nov. 17, 2006. 

[End of case study] 

Global And Element-Specific Ground Rules And Assumptions: 

GR&As are either global or element specific. Global GR&As apply to the 
entire estimate; element-specific GR&As are driven by each WBS 
element’s detailed requirements. GR&As are more pronounced for 
estimates in the development phase, where there are more unknowns; they 
become less prominent as the program moves through development into 
production. 

While each program has a unique set of GR&As, some are general enough 
that each estimate should address them. For example, each estimate 
should at a minimum define the following global GR&As: program 
schedule, cost limitations (for example, unstable funding stream or 
staff constraints), high-level time phasing, base year, labor rates, 
inflation indexes, participating agency support, and government 
furnished equipment.[Footnote 35] 

One of the most important GR&As is defining a realistic schedule. It 
may be difficult to perform an in-depth schedule assessment early 
enough to uncover the optimism frequently found in initial program 
schedules. Ideally, 
members from manufacturing and the technical community should be 
involved in developing the program schedule, but often information is 
insufficient and assumptions must be made. In this case, it is 
important that this GR&A outline the confidence the team has in the 
ability to achieve the schedule so that it can be documented and 
presented to management. 

One major challenge in setting realistic schedules is that the 
completion date is often set by external factors outside the control of 
the program office before any analysis has been performed to determine 
whether it is feasible. Another predominant problem is that schedule 
risk is often ignored or not analyzed—or when it is analyzed, the 
analysis is biased. This can occur on the government (customer) or 
contractor side or both. Risk analysis conducted by a group independent 
of the program manager has a better chance of being unbiased than one 
conducted by the program manager. However, it should also be noted that 
many organizations are not mature enough to acknowledge or to apply 
program schedule or cost risk realism because of the possible 
repercussions. For example, a contractor may be less likely to identify 
schedule or cost risk if it fears negative reaction from the customer. 
Likewise, the customer may be unwilling to report cost or schedule risk 
from fear that the program could be canceled. 

Sometimes, management imposes cost limitations because of budget 
constraints. The GR&A should then clearly explain the limitation and 
how it affects the estimate. Usually, cost limitations are handled by 
delaying program content or by accepting a funding shortfall if program content 
cannot be delayed. In many cases, such actions will both delay the 
program and increase its final delivered cost. Either way, management 
needs to be fully apprised of how this GR&A affects the estimate. 

Estimates are time phased because program costs usually span many 
years. Time phasing spreads a program’s expected costs over the years 
in which they are anticipated to aid in developing a proper budget. 
Depending on the activities in the schedule for each year, some years 
may have more costs than others. Great peaks or valleys in annual 
funding should be investigated and explained, however, since staffing 
is difficult to manage with such variations from one year to another. 
Anomalies are easily discovered when the estimate is time phased. Cost 
limitations can also affect an estimate’s time phasing, if there are 
budget constraints for a given fiscal year. Additionally, changes in 
program priority will affect funding and timing—often a program starts 
with high priority but that priority erodes as it proceeds, causing 
original plans to be modified and resulting in later delivery and 
higher cost to the government. These conditions should be addressed by 
the estimate and their effects adequately explained. 
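
To illustrate, the following is a minimal sketch in Python of time 
phasing a cost estimate; the total cost and phasing percentages are 
hypothetical and are not drawn from any program. 

# Illustrative sketch: spreading a total cost estimate across fiscal years.
# All figures are hypothetical.

total_cost = 500.0  # total estimated cost in base year dollars (millions)

# Assumed share of effort by fiscal year; the shares must sum to 1.0
phasing = {2010: 0.10, 2011: 0.25, 2012: 0.30, 2013: 0.25, 2014: 0.10}

annual_cost = {year: total_cost * share for year, share in phasing.items()}

for year, cost in sorted(annual_cost.items()):
    print(f"FY{year}: ${cost:,.1f} million")

# Large year-to-year swings in these annual amounts should be investigated
# and explained, since staffing is difficult to manage with such variation.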

The base year is used as a constant dollar reference point to track 
program cost growth. Expressing an estimate in base year dollars 
removes the effects of economic inflation and allows for comparing 
separate estimates “apples to apples.” Thus, a global ground rule is to 
define the base year dollars that the estimate will be presented in and 
the inflation index that will be used to convert the base year costs 
into then-year dollars that include inflation. At a minimum, the 
inflation index, source, and approval authority should be clearly 
explained in the estimate documentation. Escalation rates should be 
standardized across similar programs, since they are all conducted in 
the same economic environment, and priority choices between them should 
not hinge on different assumptions about what is essentially an 
economic scenario common to all programs. 
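
As a simple illustration, the following Python sketch converts base 
year dollars to then-year dollars with an assumed constant 2 percent 
escalation rate; the rate and the annual amounts are hypothetical, not 
an approved index. 

# Illustrative sketch: converting base year dollars to then-year dollars.
# The 2 percent escalation rate and the annual amounts are hypothetical;
# an actual estimate would use the approved inflation indexes documented
# with the estimate.

base_year = 2009
escalation_rate = 0.02  # assumed constant annual rate

# Time-phased costs in constant base year 2009 dollars (millions)
base_year_costs = {2009: 50.0, 2010: 120.0, 2011: 150.0, 2012: 80.0}

then_year_costs = {
    year: cost * (1 + escalation_rate) ** (year - base_year)
    for year, cost in base_year_costs.items()
}

for year in sorted(then_year_costs):
    print(f"FY{year}: BY2009 ${base_year_costs[year]:.1f}M -> "
          f"TY ${then_year_costs[year]:.1f}M")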

Some programs result from two or more agencies joining together to 
achieve common program goals. When this happens, agreements should lay 
out each agency’s area of responsibility. An agency’s failing to meet 
its responsibility could affect the program’s cost and schedule. In the 
GR&A section, these conditions should be highlighted to ensure that 
management is firmly aware that the success of the estimate depends on 
the participation of other agencies. 

Equipment that the government agrees to provide to a contractor can 
range from common supply items to complex electronic components to 
newly developed engines for aircraft. Because the estimator cannot 
predict whether deliveries of such equipment will be timely, 
assumptions are usually made that it will be available when needed. It 
is important that the estimate reflect the items that it assumes 
government will furnish, so that the risk to the estimate if items are 
delayed can be modeled and presented to management. In general, 
schedules represent the delivery of material from external sources, 
including the government, with date-constrained milestones. A better 
approach is to represent the supplier's work to produce the product as 
a summary activity in the schedule, examine the possibility of delayed 
delivery, include that risk in a schedule risk analysis, and monitor 
the supplier's work as the delivery date approaches. 

In addition to global GR&As, estimate-specific GR&As should be tailored 
for each program, including: 

* life-cycle phases and operations concept; 

* maintenance concepts; 

* acquisition strategy, including competition, single or dual sourcing, 
and contract or incentive type; 

* industrial base viability; 

* quantities for development, production, and spare and repair parts; 

* use of existing facilities, including any modifications or new 
construction; 

* savings for new ways of doing business; 

* commonality or design inheritance assumptions; 

* technology assumptions and new technology to be developed; 

* technology refresh cycles; 

* security considerations that may affect cost; and 

* items specifically excluded from the estimate. 

The cost estimator should work with members from the technical 
community to tailor these specific GR&As to the program. Information 
from the technical baseline and WBS dictionary helps determine some of 
these GR&As, like quantities and technology assumptions. The element-
specific GR&As carry the most risk and therefore should be checked for 
realism and should be well documented in order for the estimate to be 
considered credible. 

Assumptions, Sensitivity, And Risk Analysis: 

Every estimate is uncertain because of the assumptions that must be 
made about future projections. Sensitivity analysis that examines how 
changes to key assumptions and inputs affect the estimate helps 
mitigate uncertainty. Best practice cost models incorporate the ability 
to perform sensitivity analyses without altering the model so that the 
effect of varying inputs can be quickly determined (more information is 
in chapters 13 and 14). For example, suppose a decision maker 
challenges the assumption that 5 percent of the installed equipment 
will be needed for spares, asking that the factor be raised to 10 
percent. A sensitivity analysis would show the cost impact of this 
change. Because of the implications that GR&As can have when 
assumptions change, the cost estimator should always perform a 
sensitivity analysis that portrays the effects on the cost and schedule 
of an invalid assumption. Such analysis often provides management with 
an invaluable perspective on its decision making. 
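
A minimal sketch in Python of the spares example above, using a 
hypothetical installed equipment cost: 

# Illustrative sketch of the spares sensitivity described above.
# The installed equipment cost is hypothetical.

installed_equipment_cost = 200.0  # millions of dollars

def spares_cost(factor):
    # Spares estimated as a factor applied to installed equipment cost
    return installed_equipment_cost * factor

baseline = spares_cost(0.05)   # original assumption: 5 percent
excursion = spares_cost(0.10)  # challenged assumption: 10 percent

print(f"Baseline spares (5 percent): ${baseline:.1f} million")
print(f"Excursion spares (10 percent): ${excursion:.1f} million")
print(f"Cost impact of the changed assumption: ${excursion - baseline:.1f} million")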

In addition to sensitivity analysis, factors that will affect the 
program’s cost, schedule, or technical status should be clearly 
identified, including political, organizational, or business issues. 
Because assumptions themselves can vary, they should always be inputs 
to program risk analyses of cost and schedule. A typical approach to 
risk analysis emphasizes the breadth of factors that may be uncertain. 
In a risk identification exercise, the goal is to identify all 
potential risks stemming from a broad range of sources. A good starting 
point would be to examine the program’s risk management database to 
determine which WBS elements these risks could affect. Another option 
would be to examine risks identified during a program’s integrated 
baseline review—a risk-based assessment of the program plan to see 
whether the requirements can be met within cost and schedule 
assumptions. Regardless of what method is used to identify risk, it is 
important that more than just cost, schedule, and technical risks are 
examined. For example, budget and funding risks, as well as risks 
associated with start-up activities, staffing, and organizational 
issues, should also be considered. In short, risks from all sources, 
such as external, organizational, and even project management 
practices, need to be addressed in addition to the technical 
challenges. 

Well-supported assumptions should include documentation of an 
assumption’s source and should discuss any weaknesses or risks. Solid 
assumptions are measurable and specific. For example, an assumption 
that states “transaction volume will average 500,000 per month and is 
expected to grow at an annual rate of 5 percent” is measurable and 
specific, while “transaction volumes will grow greatly over the next 5 
years” is not as helpful. By providing more detail, cost estimators can 
perform risk and sensitivity analysis to quantify the effects of 
changes in assumptions. 
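
Because the transaction-volume assumption above is measurable, its 
effect can be quantified directly, as in the following Python sketch; 
the cost per transaction is hypothetical. 

# Illustrative sketch: quantifying a measurable assumption.
# The cost per transaction is hypothetical.

monthly_volume = 500_000     # assumed transactions per month
annual_growth = 0.05         # assumed annual growth rate
cost_per_transaction = 0.02  # hypothetical, in dollars

for year in range(1, 6):
    annual_transactions = monthly_volume * 12 * (1 + annual_growth) ** (year - 1)
    annual_cost = annual_transactions * cost_per_transaction
    print(f"Year {year}: {annual_transactions:,.0f} transactions, ${annual_cost:,.0f}")

# A vague assumption such as "volumes will grow greatly" provides nothing
# concrete to vary in a risk or sensitivity analysis.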

Assumptions should be realistic and valid. This means that historical 
data should back them up to minimize uncertainty and risk. 
Understanding the level of certainty around an estimate is imperative 
to knowing whether to keep or discard an assumption. Assumptions tend 
to be less certain early in a program and become more reliable as more 
information is known about them. A best practice is to place all 
assumptions in a single spreadsheet tab so that risk and sensitivity 
analysis can be performed quickly and efficiently. Assumptions should 
be made explicit wherever possible, and implicit assumptions should be 
identified and documented as well. 

Certain ground rules should always be tested for risk. For example, the 
effects of the program schedule’s slipping on both cost and schedule 
should always be modeled and the results presented to management. This 
is especially important if the schedule was known to be aggressive or 
was not assessed for realism. Too often, we have found that when 
schedules are compressed, for instance, to fill a potential 
requirements gap, the optimism in the schedule does not hold and the 
result is greater costs and schedule delays. Case study 26 gives 
examples of what happens in such situations. 

Case Study 26: Testing Ground Rules for Risk, from Space Acquisitions, 
GAO-07-96: 

Advanced Extremely High Frequency Satellite Program. The first AEHF 
launch was originally scheduled for June 2006. In response to a 
potential gap in satellite coverage because of the launch failure of 
the third Milstar satellite, DOD accelerated the schedule by 18 months, 
aiming for December 2004. An unsolicited contractor proposal stated 
that the contractor could meet this date, even though not all AEHF's requirements 
had been fully determined. The program office thus knew that the 
proposed schedule was overly optimistic, but the decision was made at 
high levels in DOD to award the contract. DOD did not, however, commit 
the funding to support the activities and manpower needed to design and 
build the satellites more quickly. Funding issues further hampered 
development efforts, increased schedule delays, and contributed to cost 
increases. 

National Polar-orbiting Operational Environmental Satellite System. 
When the NPOESS estimate was developed, the system was expected to be 
heavier, require more power, and have more than twice as many sensors 
as heritage satellites. Yet the program office estimated that the new 
satellites would be developed, integrated, and tested in less time than 
heritage satellites. Independent cost estimators highlighted to the 
NPOESS program office that the proposed integration schedule was 
unrealistic, compared to historical satellite programs. Later, the CAIG 
cautioned the program office that the system integration assembly and 
test schedule were unrealistic and the assumptions used to develop the 
estimate were not credible. 

Space Based Infrared System High Program. The SBIRS schedule proposed 
in 1996 did not allow enough time for geosynchronous Earth orbit system 
integration. And it did not anticipate the program design and 
workmanship flaws that eventually cost the program considerable delays. 
The schedule was also optimistic with regard to ground software 
productivity and time needed to calibrate and assess satellite health. 
Delivery of highly elliptical orbit sensors was delayed by almost 3 
years, the launch of the first geosynchronous Earth orbit satellite by 
6 years. 

Wideband Gapfiller Satellites. The request for proposals specified that 
the available WGS budget was $750 million for three satellites and that 
the ground control system was to be delivered within 36 months. 
Competing contractors were asked to offer maximum capacity, coverage, 
and connectivity in a contract that would use existing commercial 
practices and technologies. However, greater design complexity and 
supplier quality issues caused the WGS schedule to stretch to 78 months 
for the first expected launch. DOD's history had been 55 to 79 months 
to develop satellites similar to WGS; the 78-month result was thus 
within the historical range, and it was the original 36-month schedule 
that was unrealistic. 

Source: GAO, Space Acquisitions: DOD Needs to Take More Action to 
Address Unrealistic Initial Cost Estimates of Space Systems, GAO-07-96, 
Washington, D.C.: Nov. 17, 2006. 

[End of case study] 

Above and beyond the program schedule, some programs can be affected by 
the viability of the industrial base. Case study 27 illustrates. 

Case Study 27: The Industrial Base, from Defense Acquisitions, GAO-05-
183: 

For the eight case study ships GAO examined, cost analysts relied on 
the actual cost of previously constructed ships, without adequately 
accounting for changes in the industrial base, ship design, or 
construction methods. Cost data available to Navy cost analysts were 
based on higher ship construction rates from the 1980s. These data 
reflected lower costs resulting from economies of scale and did not 
reflect the lower procurement rates after 1989. 

According to the shipbuilder, material cost increases on the CVN 76 and 
CVN 77 in the Nimitz class of aircraft carriers could be attributed to 
a declining supplier base and commodity price increases. Both carriers’ 
material costs had been affected by more than a 15 percent increase in 
metals costs that in turn increased costs for associated components. 

Moreover, many of the materials used in the construction of aircraft 
carriers are highly specialized and unique—often produced by only one 
manufacturer. With fewer manufacturers competing in the market, the 
materials were highly susceptible to cost increases. 

After the Seawolf submarine program was canceled and, over a period of 
6 years, submarine production had decreased from three or four 
submarines per year to one, many vendors left the nuclear submarine 
business to focus on more lucrative commercial product development. 
Prices for highly specialized material increased, since competition and 
business had diminished. 

For example, many vendors were reluctant to support the Virginia class 
submarine contract because costs associated with producing small 
quantities of highly specialized materials were not considered worth 
the investment—especially for equipment with no other military or 
commercial applications. 

Source: GAO, Defense Acquisitions: Improved Management Practices Could 
Minimize Cost Growth in Navy Shipbuilding Programs, GAO-05-183, 
Washington, D.C.: Feb. 28, 2005. 

[End of case study] 

Another area in which assumptions tend to be optimistic is technology 
maturity. Having reviewed the experiences of DOD and commercial 
technology development, GAO has found that programs that relied on 
technologies that demonstrated a high level of maturity were in a 
better position to succeed than those that did not. Simply put, the 
more mature technology is at the start of a program, the more likely it 
is that the program will meet its objectives. 

Technologies that are not fully developed represent a significant 
challenge and add a high degree of risk to a program’s schedule and 
cost. Programs typically assume that the technology required will 
arrive on schedule and be available to support the effort. While this 
assumption allows the program to continue, the risk that it will prove 
inaccurate can greatly affect cost and schedule. Case studies 28 and 29 
provide examples. 

Case Study 28: Technology Maturity, from Defense Acquisitions, GAO-05-
183: 

The lack of design and technology maturity led to rework, increasing 
the number of labor hours for most of the case study ships. For 
example, the design of the LPD 17, in the San Antonio class of 
transports, continued to evolve even as construction proceeded. When 
construction began on the DDG 91 and DDG 92, in the Arleigh Burke class 
of destroyers—the first ships to incorporate the remote mine hunting 
system—the technology was still being developed. As a result, workers 
were required to rebuild completed ship areas to accommodate design 
changes. 

Source: GAO, Defense Acquisitions: Improved Management Practices Could 
Minimize Cost Growth in Navy Shipbuilding Programs, GAO-05-183, 
Washington, D.C.: Feb. 28, 2005. 

[End of case study] 

Case Study 29: Technology Maturity, from Space Acquisitions, GAO-07-96: 

The Advanced Extremely High Frequency (AEHF) program of communications 
satellites faced several problems of technology maturity. They included 
developing a digital processing system that would support 10 times the 
capacity of Milstar’s medium data rate, the predecessor satellite, 
without self-interference and using phased array antennas at extremely 
high frequencies, which had never been done before. In addition, the 
change from a physical process to an electronic process for crypto 
rekeys had not been expected at the start of AEHF. Milstar had required 
approximately 2,400 crypto rekeys per month, which had been processed 
physically. AEHF's proposed capability was approximately 100,000—too 
large for physical processing. Changing the rekeys to electronic 
processing was revolutionary and led to unexpected cost and schedule 
growth. 

Source: GAO, Space Acquisitions: DOD Needs to Take More Action to 
Address Unrealistic Initial Cost Estimates of Space Systems, GAO-07-96, 
Washington, D.C.: Nov. 17, 2006. 

[End of case study] 

Cost estimators and auditors should not get trapped by overly 
optimistic technology forecasts. It is well known that program 
advocates tend to underestimate the technical challenge facing the 
development of a new system. Estimators and auditors alike should 
always seek to uncover the real risk by performing an uncertainty 
analysis. In doing so, it is imperative that cost estimators and 
auditors meet with engineers familiar with the program and its new 
technology to discuss the level of risk associated with the technical 
assumptions. Only then can they realistically model risk distributions 
using an uncertainty analysis and analyze how the results affect the 
overall cost estimate. 

Once the risk, uncertainty, and sensitivity analyses are complete, the 
cost estimator should formally convey the results of changing 
assumptions to management as early and as far up the line as possible. 
The estimator should also document all assumptions to help management 
understand the conditions the estimate was based on. When possible, the 
analyst should request an updated technical baseline in which the new 
assumptions have been incorporated as ground rules. Case study 30 
illustrates an instance of management’s not knowing the effects of 
changing assumptions. 

Case Study 30: Informing Management of Changed Assumptions, 
from Customs Service Modernization, GAO/AIMD-99-41: 

The Automated Commercial Environment (ACE) was a major U.S. Customs 
Service information technology system modernization effort. In November 
1997, it was estimated that ACE would cost $1.05 billion to develop, 
operate, and maintain between 1994 and 2008. GAO found that the agency 
lacked a reliable estimate of what ACE would cost to build, deploy, and 
maintain. 

The cost estimates were understated, benefit estimates were overstated, 
and both were unreliable. Customs’ August 1997 cost-benefit analysis 
estimated that ACE would produce cumulative savings of $1.9 billion 
over a 10-year period. The analysis identified $644 million in 
savings—33 percent of the total estimated savings—resulting from 
increased productivity. Because this estimate was driven by Customs’ 
assumption that every minute “saved” by processing transactions or 
analyzing data faster using ACE rather than its predecessor system 
would be productively used by all workers, it was viewed as a 
best-case upper limit on estimated productivity improvements. 

Given the magnitude of the potential savings, even a small change in 
the assumption translated into a large reduction in benefits. For 
example, conservatively assuming that three-fourths of each minute 
saved would be used productively by three-fourths of all workers, the 
expected benefits would be reduced by about $282 million. Additionally, 
the analysis excluded costs for hardware and systems software upgrades 
at each port office. Using Customs’ estimate for acquiring the initial 
suite of port office hardware and systems software, and assuming a 
technology refreshment cycle of every 3 to 5 years, GAO estimated this 
cost at $72.9 million to $171.8 million. 

Because Customs did not have reliable information on ACE costs and 
benefits and had not analyzed viable alternatives, it did not have 
adequate assurance that ACE was the optimal approach. In fact, it had 
no assurance at all that ACE would be cost-effective. Furthermore, it 
had not justified the return on its investment in each ACE increment 
and therefore would not be able to demonstrate whether ACE would be 
cost-effective until it had spent hundreds of millions of dollars to 
acquire the entire system. 

GAO recommended that Customs rigorously analyze alternative approaches 
to building ACE and, for each increment, use disciplined processes to 
prepare a robust LCCE, prepare realistic and supportable benefit 
expectations, and validate actual costs and benefits once an increment 
had been piloted. 

Source: GAO, Customs Service Modernization: Serious Management and 
Technical Weaknesses Must Be Corrected, GAO/AIMD-99-41, Washington, 
D.C.: Feb. 26, 1999. 

[End of case study] 

6. Best Practices Checklist: Ground Rules and Assumptions: 

* All ground rules and assumptions have been: 
- Developed by estimators with input from the technical community. 
- Based on information in the technical baseline and WBS dictionary. 
- Vetted and approved by upper management. 
- Documented to include the rationale behind the assumptions and 
historical data to back up any claims. 
- Accompanied by an assessment of the risk of each assumption's 
failing and its effect on the estimate. 

* To mitigate risk: 
- All GR&As have been placed in a single spreadsheet tab so that risk 
and sensitivity analysis can be performed quickly and efficiently. 
- All potential risks, including cost, schedule, technical, and 
programmatic (e.g., risks associated with budget and funding, start-up 
activities, staffing, and organizational issues) have been identified 
and traced to specific WBS elements. 
-- A schedule risk analysis has been performed to determine the 
program schedule’s realism. 
-- A cost risk analysis, incorporating the results of the schedule risk 
analysis, has been performed to determine the program’s cost estimate 
realism. 

* Budget constraints, as well as the effect of delaying program 
content, have been defined. 
- Peaks and valleys in time-phased budgets have been explained. 
- Inflation index, source, and approval authority have been identified. 
- Dependence on participating agencies, the availability of government 
furnished equipment, and the effects if these assumptions do not hold 
have been identified. 
- Items excluded from the estimate have been documented and 
explained.
- Technology was mature before it was included; if its maturity was 
assumed, the estimate addresses the effect of the assumption’s failure 
on cost and schedule. 

* Cost estimators and auditors met with technical staff to determine 
risk distributions for all assumptions; the distributions were used in 
sensitivity and uncertainty analyses of the effects of invalid 
assumptions. Management has been briefed, and the results have been 
documented. 

[End of Chapter 9] 

Chapter 10: Data: 

Data are the foundation of every cost estimate. How good the data are 
affects the estimate’s overall credibility. Depending on the data 
quality, an estimate can range anywhere from a mere guess to a highly 
defensible cost position. Credible cost estimates are rooted in 
historical data. Rather than starting from scratch, estimators usually 
develop estimates for new programs by relying on data from programs 
that already exist and adjusting for any differences. Thus, collecting 
valid and useful historical data is a key step in developing a sound 
cost estimate. The challenge in doing this is obtaining the most 
applicable historical data to ensure that the new estimate is as 
accurate as possible. One way of ensuring that the data are applicable 
is to perform checks of reasonableness to see if the results are 
similar. When different data sets converge toward one value, there is 
a high degree of confidence in the data. 

Performing quality checks takes time and requires access to large 
quantities of data. This is often the most difficult, time-consuming, 
and costly activity in cost estimating. It can be exacerbated by a 
poorly defined technical baseline or WBS. However, by gathering 
sufficient data, cost estimators can analyze cost trends on a variety 
of related programs, which gives insight into cost estimating 
relationships that can be used to develop parametric models. 

Before collecting data, the estimator must fully understand what needs 
to be estimated. This understanding comes from the purpose and scope of 
the estimate, the technical baseline description, the WBS, and the 
ground rules and assumptions. Once the boundaries of the estimate are 
known, the next step is to establish an idea of what estimating 
methodology will be used. Only after these tasks have been performed 
should the estimator begin to develop an initial data collection plan. 

Data Collection: 

Data collection is a lengthy process and continues throughout the 
development of a cost estimate and through the program execution 
itself. Many types of data need to be collected—technical, schedule, 
program, and cost data. Once collected, the data need to be normalized. 
Data can be collected in a variety of ways, such as from databases of 
past projects, engineering build-up estimating analysis, interviews, 
surveys, data collection instruments, and focus groups. After the 
estimate is complete, the data need to be well documented, protected, 
and stored for future use in retrievable databases. Cost estimating 
requires a continual influx of current and relevant cost data to remain 
credible. The cost data should be managed by estimating professionals 
who understand what the historical data are based on, can determine 
whether the data have value in future projections, and can make the 
data part of the corporate history. 

Cost data should be continually supplemented with written vendor 
quotes, contract data, and actual cost data for each new program. 
Moreover, cost estimators should know the program acquisition plans, 
contracting processes, and marketplace conditions, all of which can 
affect the data. This knowledge provides the basis for credibly using, 
modifying, or rejecting the data in future cost estimates.

Knowing the factors that influence a program’s cost is essential for 
capturing the right data. Examples are equivalent source lines of code, 
number of interfaces for software development, number of square feet 
for construction, and the quantity of aircraft to be produced. To 
properly identify cost drivers, it is imperative that cost estimators 
meet with the engineers and other technical experts. In addition, by 
studying historical data, cost estimators can determine through 
statistical analysis the factors that tend to influence overall cost. 
Furthermore, seeking input from schedule analysts can provide valuable 
knowledge about how aggressive a program’s schedule may be. 

Cost estimates must be based on realistic schedule information. Some 
costs such as labor, quality, supervision, rented space and equipment, 
and other time-related overheads depend on the duration of the 
activities they support. Often the cost estimators are in synch with 
the baseline schedule with the early estimates, but they also have to 
keep in touch with changes in the schedule, since schedule changes can 
lead to cost changes. 

In addition to data for the estimate, backup data should be collected 
for performing cross-checks. This takes time and usually requires 
travel to meet with technical experts. It is important to plan ahead 
and schedule the time for these activities. Scheduling insufficient 
time can affect the estimator’s ability to collect and understand the 
data, which can then result in a less confident cost estimate. 

Common issues in data collection include data definitions in historical 
programs that are inconsistent with those of the new program. Understanding what 
the historical data include is vital to data reliability. For example, 
are the data skewed because they are for a program that followed an 
aggressive schedule and therefore instituted second and third shifts to 
complete the work faster? Or was a new manufacturing process 
implemented that was supposed to generate savings but resulted in more 
costs because of initial learning curve problems? Knowing the history 
behind the data will allow for their proper application in future 
estimates. 

Another issue is whether the data are even available. Some agencies may 
not have any cost databases. Data may be accessible at higher levels 
but there may not be sufficient information to break them down to the 
lower levels needed to estimate various WBS elements. Data may be 
incomplete. For instance, they may be available for the cost to build a 
component, but the cost to integrate the component may be missing. 
Similarly, if data are in the wrong format, they may be difficult to 
use. For example, if the data are only in dollars and not hours, they 
may not be as useful if the labor and overhead rates are not available. 

Sometimes data are available, but the cost estimator cannot gain access 
to them. This can happen when the data are highly classified or 
considered competition sensitive. When this is the case, the cost 
estimator may have to change the estimating approach to fit the data 
that are available. Case study 31 gives an example. 

Case Study 31: Fitting the Estimating Approach to the Data, from 
Space Acquisitions, GAO-07-96: 

The lack of reliable technical source data hampers cost estimating. 
Officials GAO spoke with believed that cost estimation data and 
databases on which to base cost estimates were incomplete, 
insufficient, and outdated. They cited the lack of reliable historical 
and current cost, technical, and program data and expressed concern 
that available cost, schedule, technical, and risk data were not 
similar to the systems they were developing cost estimates for. In 
addition, some expressed concern that relevant classified and 
proprietary commercial data might exist but were not usually available 
to the cost estimating community working on unclassified programs. Some 
believed that Air Force cost estimators needed to be able to use all 
relevant data, including those contained in National Reconnaissance 
Office cost databases, since the agency builds highly complex, 
classified satellites in comparable time and at comparable costs per 
pound. 

Source: GAO, Space Acquisitions: DOD Needs to Take More Action to 
Address Unrealistic Initial Cost Estimates of Space Systems, GAO-07-96, 
Washington, D.C.: Nov. 17, 2006. 

[End of case study] 

Types Of Data: 
 
In general, the three main types of data are cost data, schedule or 
program data, and technical data. Cost data generally include labor 
dollars (with supporting labor hours and direct costs and overhead 
rates), material and its overhead dollars, facilities capital cost of 
money, and profit associated with various activities. Program cost 
estimators often do not have insight into specific dollar amounts, so 
they tend to focus mostly on the hours of resources needed by skill 
level. These estimates of hours are often input to specialized 
databases that convert them to cost estimates in dollars. 

Schedule or program data provide parameters that directly affect the 
overall cost. For example, lead-time schedules, start and duration of 
effort, delivery dates, outfitting, testing, initial operational 
capability dates, operating profiles, contract type, multiyear 
procurement, and sole source or competitive awards must all 
be considered in developing a cost estimate. 

Technical data define the requirements for the equipment being 
estimated, based on physical and performance attributes, such as 
length, width, weight, horsepower, and size. When technical data are 
collected, care must be taken to relate the types of technologies and 
development or production methodologies to be used. These change over 
time and require adjustments when estimating relationships are being 
developed. 

Cost data must often be derived from program and technical data. 
Moreover, program and technical data provide context for cost data, 
which by themselves may be meaningless. Consider the difference between 
these two examples: 
 
* Operations and maintenance utilities cost $36,500. 

* The Navy consumes 50,000 barrels of fuel per day per ship. 

In the operations and maintenance example, the technical and program 
descriptors are missing, requiring follow-up questions: What specific 
utilities cost $36,500 (gas, electricity, or telephone)? What time 
period does the cost represent (a month or a year)? When were these 
costs accrued (in the current year or 5 years ago)? In the Navy example, 
a cost estimator would need to investigate what type of ship consumes 
50,000 barrels per day—aircraft carrier? destroyer?—and what type of 
fuel is consumed.[Footnote 36] 

It is essential that cost estimators plan for and gain access, where 
feasible, to cost, technical, and program data in order to develop a 
complete understanding of what the data represent. Without this 
understanding, a cost estimator may not be able to correctly interpret 
the data, leading to greater risk that the data can be misapplied. 

Sources Of Data: 

Since all cost estimating methods are data-driven, analysts must know 
the best data sources. Table 10 lists some basic sources. Analysts 
should use primary data sources whenever possible. Primary data are 
obtained from the original source, can usually be traced to an audited 
document, are considered the best in quality, and are ultimately the 
most useful. Secondary data are derived rather than obtained directly 
from a primary source. Since they were derived, and thus changed, from 
the original data, their overall quality is lower and they are less useful. In 
many cases, secondary data are actual data that have been “sanitized” 
to obscure their proprietary nature. Without knowing the details, 
analysts will find such data of little use. 

Table 10: Basic Primary and Secondary Data Sources: 
Data type; Source (Primary, Secondary, or both): 

Data type: Basic accounting records; 
Source: Primary. 

Data type: Data collection input forms; 
Source: Primary. 

Data type: Cost reports; 
Source: Primary, Secondary.

Data type: Historical databases; 
Source: Primary, Secondary.

Data type: Interviews; 
Source: Primary, Secondary.

Data type: Program briefs; 
Source: Primary, Secondary.

Data type: Subject matter experts; 
Source: Primary, Secondary.

Data type: Technical databases; 
Source: Primary, Secondary.

Data type: Other organizations; 
Source: Primary, Secondary.

Data type: Contracts or contractor estimates; 
Source: Secondary.

Data type: Cost proposals; 
Source: Secondary.

Data type: Cost studies; 
Source: Secondary.

Data type: Focus groups; 
Source: Secondary.

Data type: Research papers; 
Source: Secondary.

Data type: Surveys; 
Source: Secondary. 

Source: DOD and NASA. 

[End of table] 

Cost estimators must understand whether and how data were changed 
before deciding whether they will be useful. Furthermore, it is always 
better to use actual costs rather than estimates as data sources, since 
actual costs represent the most accurate data available. 

While secondary data should not be the first choice, they may be all 
that is available. Therefore, the cost estimator must seek to 
understand how the data were normalized, what the data represent, how 
old they are, and whether they are complete. If these questions can be 
answered, the secondary data may be useful for estimating and would 
certainly be helpful for cross-checking the estimate for reasonableness.

Sources of historical data include business plans, catalog prices, 
contract performance reports, contract funds status reports, cost and 
software data reports, forward pricing rate agreements, historical cost 
databases, market research, program budget and accounting data from 
prior programs, supplier cost information, historical or current vendor 
quotes, and weight reports. In the operating and support area, common 
data sources include DOD’s Visibility and Management of Operating and 
Support Costs management information system. Cost estimators should 
collect actual cost data from a list of similar and legacy programs. 
Since most new programs are improvements over existing ones, data 
should be available that share common characteristics with the new 
program. 

Historical data provide the cost estimator insight into actual costs on 
similar programs, including any cost growth since the original 
estimate. As a result, historical data can be used to challenge 
optimistic assumptions. For example, a review of the average labor 
rates for similar tasks on other programs could be a powerful reality 
check against assumptions of skill mixes and overall effort. In 
addition, historical data from a variety of contractors can be used to 
establish generic program costs or they can be used to establish cost 
trends of a specific contractor across a variety of programs. 

Historical data also provide contractor cost trends relative to 
proposal values, allowing the cost estimator to establish adjustment 
factors if relying on proposal data for estimating purposes. 
Additionally, insights can be obtained on cost accounting structures to 
allow an understanding of how a certain contractor charges things like 
other direct costs and overhead. 

However, historical cost data also contain information from past 
technologies, so it is essential that appropriate adjustments are made 
to account for differences between the new system and the existing 
system with respect to such things as design characteristics, 
manufacturing processes (automation versus hands-on labor), and types 
of material used. This is where statistical methods, like regression, 
that analyze cost against time and performance characteristics can 
reveal the appropriate technology-based adjustment. 
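
As a simple sketch of this technique, the following Python example fits 
a cost estimating relationship by ordinary least squares to entirely 
hypothetical historical data (unit cost as a function of weight); in 
practice, additional variables such as technology vintage or 
manufacturing process would also be examined. 

# Illustrative sketch, with hypothetical data, of a regression-based cost
# estimating relationship that could be used to adjust historical data.
import numpy as np

# Hypothetical historical observations: weight (pounds) and unit cost ($M)
weight = np.array([1200.0, 1500.0, 1800.0, 2100.0, 2500.0])
cost = np.array([14.0, 16.5, 17.8, 19.5, 21.0])

# Fit cost = b0 + b1 * weight by ordinary least squares
b1, b0 = np.polyfit(weight, cost, deg=1)

new_weight = 2300.0  # hypothetical new system
print(f"Estimated unit cost: ${b0 + b1 * new_weight:.1f} million")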

CPRs and cost and software data reports are excellent sources of 
historical cost data for DOD programs. The CPR is the primary report of 
cost and schedule progress on contracts containing EVM compliance 
requirements. It contains the time-phased budget, the actual cost, and 
earned value, which is the budgeted value of completed work. 

By reviewing CPR data, the cost analyst can gain valuable insights into 
performance issues that may be relevant to future procurements. For 
instance, CPR data can provide information about changes to the 
estimate to complete (or the total expected cost of the program) and 
the performance measurement baseline, and they explain the reasons for 
any variances. Before beginning any analysis of such reports, the 
analyst should perform a cursory assessment to ensure that the 
contractor has prepared them properly. 

The several ways of analyzing cost data reports all use three basic 
elements in various combinations (a simple combination of the three is 
sketched after the list): 

* budgeted cost for work scheduled (BCWS), or the amount of budget 
allocated to complete a specific amount of work at a particular time;

* budgeted cost for work performed (BCWP), also known as earned value, 
which represents the budgeted value of work accomplished; and 

* actual cost of work performed (ACWP), or actual costs incurred for 
work accomplished.[Footnote 37] 
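
A minimal sketch, with hypothetical values, of how these three elements 
combine into the standard variance and index measures: 

# Illustrative sketch of standard EVM measures; the dollar values are
# hypothetical.

bcws = 1_000_000  # budgeted cost for work scheduled
bcwp = 900_000    # budgeted cost for work performed (earned value)
acwp = 1_100_000  # actual cost of work performed

cost_variance = bcwp - acwp      # negative indicates a cost overrun
schedule_variance = bcwp - bcws  # negative indicates work behind schedule
cpi = bcwp / acwp                # cost performance index
spi = bcwp / bcws                # schedule performance index

print(f"Cost variance: ${cost_variance:,} (CPI = {cpi:.2f})")
print(f"Schedule variance: ${schedule_variance:,} (SPI = {spi:.2f})")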

Cost data reports are often used in estimating analogous programs, based 
on the assumption that it is reasonable to expect similar programs at 
similar contractors’ plants to incur similar costs. This analogy may 
not hold for the costs of hardware or software but may hold in the 
peripheral WBS areas of data, program management, or systems 
engineering. If the analyst can then establish costs for the major 
deliverables, such as hardware or software, a factor may be applied for 
each peripheral area of the WBS, based on historical data available 
from cost reports. Sometimes, the data listed in the WBS include 
elements that the analyst may not be using in the present 
estimate—spares, training, support equipment. In such cases, these 
elements should be removed before the data are analyzed. 

Rate and factor agreements contain rates and factors agreed to by the 
contractor and the appropriate government negotiator. Because the 
contractor’s business base may be fluid, with direct effect on these 
rates and factors, such agreements do not always exist. Information in 
them represents negotiated direct labor, overhead, general and 
administrative data, and facilities capital cost of money. These 
agreements could cover myriad factors, depending on each contractor’s 
accounting and cost estimating structure. Typical factors are material 
scrap, material handling, quality control, sustaining tooling, and 
miscellaneous engineering support factors. 

The scope of the estimate often dictates the need to consult with other 
organizations for raw data. Once government test facilities have been 
identified, for example, those organizations can be contacted for 
current cost data, support cost data, and the like. Other government 
agencies could also be involved with the development of similar 
programs and can be potential sources of data. Additionally, a number 
of government agencies and industry trade associations publish cost 
data that are useful in cost estimating. 

The Defense Contract Management Agency (DCMA) and the Defense Contract 
Audit Agency (DCAA) help DOD cost analysts obtain validated data. Both 
agencies have on-site representatives at most major defense contractor 
facilities. Navy contractor resident supervisors of shipbuilding, for 
example, help obtain validated data. Before a contract is awarded, DCMA 
provides advice and services to help construct effective solicitations, 
identify potential risks, select the most capable contractors, and 
write contracts that meet customers’ needs. In evaluating contract 
proposals, DCMA assists in the review of the proposal assumptions to 
identify how tightly scope was constrained to reduce risk premiums in 
the proposed cost. After a contract is awarded, DCMA monitors 
contractors’ performance and management systems to ensure that cost, 
product performance, and delivery schedules comply with the contract’s 
terms and schedule. It is common for DCMA auditors to be members of 
teams assembled to review elements of proposals, especially in areas of 
labor and overhead rates, cost, and supervision of man-hour 
percentages. 

DCMA analysts often provide independent estimates at completion for 
programs; they are another potential source of information for cost 
analysts.

DCAA performs necessary contract audits for DOD. It provides accounting 
and advisory services for contracts and subcontracts to all DOD 
components responsible for procurement and contract administration. 
Cost analysts should establish and nurture contacts with these 
activities, so that a continual flow of current cost-related 
information can be maintained. Although civil agencies have no 
comparable organizations, DCMA and DCAA occasionally provide support to 
them. 

Another source of potential cost data is contractor proposals. Analysts 
should remember that a contractor proposal is just that—a document that 
represents the contractor's best estimate of cost. Proposals also tend 
to be influenced by the amount the customer has to spend. When this is 
the case, the proposal data should be viewed as suspect, and care 
should be taken to determine whether they are supportable. For these 
reasons, an estimate contained in a contractor's proposal should be 
viewed with some caution. During source 
selection in a competitive environment, for instance, lower proposed 
costs may increase the chances of receiving a contract award. This 
being so, it is very important to analyze the cost data for realism. A 
proposal can nonetheless provide much useful information and should be 
reviewed, when available, for the following: 

* structure and content of the contractor’s WBS; 

* contractor’s actual cost history on the same or other programs; 

* negotiated bills of material; 

* subcontracted items; 

* government-furnished equipment compared to contractor-furnished 
equipment lists; 

* contractor rate and factor data, based on geography and makeup of 
workforce; 

* a self-check to ensure that all pertinent cost elements are included; 

* top-level test of reasonableness; 

* technological state-of-the-art assumptions; and 

* estimates of management reserve and level of risk. 

Because of the potential for bias in proposal data, the estimator must 
test the data to see whether they deviate from other similar data 
before deciding whether they are useful for estimating. This can be 
done through a plant visit, where the cost estimator visits the 
contractor to discuss the basis for the proposal data. As with any 
potential source of data, it is critical to ensure that the data apply 
to the estimating task and are valid for use. In the next two sections, 
we address how a cost estimator should perform these important 
activities. 

Data Applicability: 

Because cost estimates are usually developed with data from past 
programs, it is important to examine whether the historical data apply 
to the program being estimated. Over time, modifications may have 
changed the historical program so that it is no longer similar to the 
new program. For example, it does not make sense to use data from an 
information system that relied on old mainframe technology when the new 
program will rely on server technology that can process data at much 
higher speeds. Having good descriptive requirements of the data is 
imperative in determining whether the data available apply to what 
is being estimated. 

To determine the applicability of data to a given estimating task, the 
analyst must scrutinize them in light of the following issues: 

* Do the data require normalization to account for differences in base 
years, inflation rates (contractor compared to government), or calendar 
year rather than fiscal year accounting systems? 

* Is the work content of the current cost element consistent with the 
historical cost element? 
 
* Have the data been analyzed for performance variation over time (such 
as technological advances)? Are there unambiguous trends between cost 
and performance over time? 
 
* Do the data reflect actual costs, proposal values, or negotiated 
prices and has the type of contract been considered? 

Proposal values are usually extremely optimistic and can lead to overly 
optimistic cost estimates and budgets. Furthermore, negotiated prices 
do not necessarily equate to less optimistic cost estimates. 
 
* Are sufficient cost data available at the appropriate level of detail 
to use in statistical measurements? 

* Are cost segregations clear, so that recurring data are separable 
from nonrecurring data and functional elements (manufacturing, 
engineering) are visible?

* Have risk and uncertainty for each data element been taken into 
account? High-risk elements usually cause optimistic cost estimates. 
 
* Have legal or regulatory changes affected cost for the same 
requirement? 

* When several historical values are available for the same concept, 
are they in close agreement or are they dispersed? 

If they are in close agreement and the definitions agree, they should 
provide valuable insight. If they differ widely, the underlying issues 
may not be settled or the approaches may still be at variance, and the 
historical data may be less useful for estimating the current 
program’s costs. 

Once these questions have been answered, the next step is to assess the 
validity of the data before they can be used to confidently predict 
future costs. 

Validating And Analyzing The Data: 

The cost analyst must consider the limitations of cost data before 
using them in an estimate. Historical cost data have two predominant 
limitations: 
 
* the data represent contractor marketplace circumstances that must be 
known if they are to have future value, and 

* current cost data eventually become dated. 

The first limitation is routinely handled by recording these 
circumstances as part of the data collection task. To accommodate the 
second limitation, an experienced cost estimator can either adjust the 
data (if applicable) or decide to collect new data. In addition, the 
contract type to be used in a future procurement—for example, firm 
fixed-price, fixed-price incentive, or cost plus award fee—may differ 
from that of the historical cost data. Although this does not preclude 
using the data, the analyst must be aware of such conditions, so that 
an informed data selection decision can be made. A cost analyst must 
attempt to address data limitations by: 

* ensuring that the most recent data are collected, 

* evaluating cost and performance data together to identify 
correlation, 

* ensuring a thorough knowledge of the data’s background, and 

* holding discussions with the data provider. 

Thus, it is best practice to continuously collect new data so they can 
be used for making comparisons and determining and quantifying trends. 
This cannot be done without background knowledge of the data. This 
knowledge allows the estimator to confidently use the data directly, 
modify them to be more useful, or simply reject them. 

Once the data have been collected, the next step is to create a scatter 
plot to see what they look like. Scatter plotting provides a 
wealth of visual information about the data, allowing the analyst to 
quickly determine outliers, relationships, and trends. In scatter 
charts, cost is typically treated as the dependent variable and is 
plotted on the y axis, while various independent variables are plotted 
on the x axis. These independent variables depend on the data collected 
but are typically technical—weight, lines of code, speed—or operational 
parameters—crew size, flying hours. The scatter plot also gives a 
visual sense of the amount of dispersion in the data set, which is 
important for determining risk. 

The cost estimator should first decide which independent variables are 
most likely to be cost drivers and then graph them separately. The 
extent to which the points are scattered will determine how likely it 
is that each independent variable is a cost driver. The less scattered 
the points are, the more likely it is that the variable is a cost 
driver. Eventually, the analyst will use statistical techniques to 
distinguish cost drivers, but using scatter charts is an excellent way 
to reduce their number. 

The cost estimator should also examine each scatter chart in unit space 
to determine if a linear relationship exists. Many relationships are 
not linear; in such cases, the estimator can often perform a 
transformation to make the data linear. If the data appear to be 
exponential when plotted in unit space, the analyst should try 
plotting the natural log of cost (the dependent variable) on the y 
axis. If the data appear to represent a power function, the analyst 
should try plotting the natural log of both the cost and the 
independent variable. In both cases, the goal is to transform the data 
so that a linear relationship is revealed, because most cost 
estimating relationships are based on linear regression. 
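
To illustrate, the following minimal sketch (in Python, with purely 
hypothetical weight and cost values) checks which transformation comes 
closest to producing a linear relationship: 

import numpy as np

# Illustrative data: cost (dependent) and weight (independent variable).
weight = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
cost = np.array([5.0, 9.0, 16.5, 30.0, 55.0])   # millions of dollars

# Unit space: correlation of a straight-line fit.
linear_r = np.corrcoef(weight, cost)[0, 1]

# Suspected exponential form (cost = a * e^(b * weight)):
# plot or fit ln(cost) against weight.
exp_r = np.corrcoef(weight, np.log(cost))[0, 1]

# Suspected power form (cost = a * weight^b):
# plot or fit ln(cost) against ln(weight).
power_r = np.corrcoef(np.log(weight), np.log(cost))[0, 1]

print(f"linear r = {linear_r:.3f}, exponential r = {exp_r:.3f}, "
      f"power r = {power_r:.3f}")
# The transformation whose correlation is closest to 1 indicates which
# form will best support a linear regression.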

After analyzing the data through a scatter plot, the estimator should 
calculate descriptive statistics to characterize and describe the data 
groups. Important statistics include sample size, mean, standard 
deviation, and coefficient of variation. Calculating the mean provides 
the estimator with the best estimate, because it is the average of the 
historical data. To determine the dispersion within the data set, the 
estimator must calculate the standard deviation. Finally, the estimator 
should calculate the coefficient of variation so that variances between 
data sets can be compared. 

The coefficient of variation is calculated by dividing the standard 
deviation by the mean.[Footnote 38] This provides a percentage that can 
be used to examine which data set has the least variation. Once the 
statistics have been derived, creating visual displays of them helps 
discern differences among groups. Bar charts, for example, are often 
useful for comparing averages. Histograms can be used to examine the 
distribution of different data sets in relation to their frequency. 
They can also be used for determining potential outliers. (Chapter 11 
has more information on statistical approaches.) 
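
A minimal sketch of these calculations, using a purely illustrative 
data set, might look like the following: 

import statistics

# Illustrative data set: unit costs, in thousands of dollars, from
# prior programs.
costs = [820, 910, 1005, 760, 880, 950, 1110, 840]

n = len(costs)                        # sample size
mean = statistics.mean(costs)         # best estimate (the average)
std_dev = statistics.stdev(costs)     # dispersion within the data set
cv = std_dev / mean                   # coefficient of variation

print(f"n = {n}, mean = {mean:.1f}, std dev = {std_dev:.1f}, CV = {cv:.1%}")
# A lower coefficient of variation indicates a tighter data set;
# comparing coefficients across data sets shows which has the least
# relative variation.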

Many times, estimates are not based on actual data but are derived by 
subjective engineering judgment. All engineering judgments should be 
validated before being used in a cost estimate. Validation involves 
cross-checking the results, in addition to analyzing the data and 
examining the documentation for the judgment. Graphs and scatter charts 
can often help validate an engineering judgment, because they can 
quickly point out any outliers. 

It is never a good idea to discard an outlier without first 
understanding why the data point falls outside the normal range. An 
outlier is typically defined as a data point that falls more than 
three standard deviations from the mean. Statistically speaking, such 
points are rare; for normally distributed data, they occur only about 
0.3 percent of the time. If a data point is truly an outlier, it 
should be removed from the data set, because it can skew the results. 
However, an outlier should not be removed simply because it appears 
too high or too low compared to the rest of the data set; doing so is 
naïve. Instead, a cost estimator 
should provide adequate documentation as to why an outlier was removed 
and this documentation should include comparisons to historical data 
that show the outlier is in fact an anomaly. If possible, the 
documentation should describe why the outlier exists; for example, 
there might have been a strike, a program restructuring, or a natural 
disaster that skewed the data. If the historical data show the outlier 
is just an extreme case, the cost estimator should retain the data 
point; otherwise, it will appear that the estimator was trying to 
manipulate the data. This should never be done, since all available 
historical data are necessary for capturing the natural variation 
within programs. 
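
As an illustration only, the sketch below screens newly collected 
points against a hypothetical historical benchmark using the three-
standard-deviation rule; a flagged point is a candidate for the 
documented investigation described above, not for automatic removal: 

import statistics

# Illustrative benchmark from prior, validated programs (labor hours
# per unit).
historical = [410, 395, 402, 388, 420, 407, 399, 415, 392, 405]

mean = statistics.mean(historical)
std_dev = statistics.stdev(historical)

# Screen newly collected points against the historical benchmark;
# anything more than three standard deviations from the mean is a
# candidate outlier to investigate and document, not to delete
# automatically.
new_points = [398, 412, 980]
flagged = [x for x in new_points if abs(x - mean) > 3 * std_dev]
print("Candidate outliers to investigate:", flagged)   # flags 980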

EVM Data Reliability: 

In chapter 3, we discussed top-level EVM data reliability tasks such 
as: 

* requesting a copy of the EVM system compliance letter showing the 
contractor’s ability to satisfy the 32 guidelines; 

* requesting a copy of the IBR documentation and final briefing to see 
what risks were identified and what weaknesses, if any, were found; 

* determining whether EVM surveillance is being done by qualified and 
independent staff; and; 

* determining the financial accounting status of the contractor’s EVM 
system to see whether any adverse opinions would call into question the 
reliability of the accounting data. 

In addition to these tasks, auditors should perform a sanity check to 
see whether the data even make sense. For example, the auditor should 
review all WBS elements in the CPR to determine whether there are any 
data anomalies such as the following (a simple automated screening of 
these checks is sketched after the list): 

* negative values for BCWS, BCWP, ACWP, estimate at completion (EAC), 
or budget at completion (BAC);

* large month-to-month performance swings (BCWP) not attributable to 
technical or schedule problems (may indicate cost collection issues); 

* BCWS and BCWP data with no corresponding ACWP; 

* BCWP with no BCWS or ACWP; 

* ACWP with no BCWS or BCWP; 

* large and continuing unexplained variances between ACWP and BCWP; 

* inconsistencies between EAC and BAC (for example, EAC with no BAC or 
BAC with no EAC); 

* ACWP greater than EAC; 

* BCWP or BCWS greater than the BAC. 
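
The sketch below illustrates how a few of these checks might be 
automated, assuming the CPR data have already been extracted into 
simple records per WBS element; the field names and values are 
hypothetical: 

# Illustrative cumulative CPR values per WBS element.
cpr = [
    {"wbs": "1.1", "bcws": 500.0, "bcwp": 480.0, "acwp": 510.0,
     "bac": 2000.0, "eac": 2100.0},
    {"wbs": "1.2", "bcws": 0.0, "bcwp": 150.0, "acwp": 0.0,
     "bac": 900.0, "eac": 900.0},
    {"wbs": "1.3", "bcws": 300.0, "bcwp": 310.0, "acwp": 290.0,
     "bac": 800.0, "eac": -50.0},
]

def anomalies(e):
    """Return descriptions of the anomalies found in one WBS element."""
    found = []
    if any(e[k] < 0 for k in ("bcws", "bcwp", "acwp", "bac", "eac")):
        found.append("negative value")
    if e["bcwp"] > 0 and (e["bcws"] == 0 or e["acwp"] == 0):
        found.append("BCWP with no BCWS or ACWP")
    if e["acwp"] > e["eac"]:
        found.append("ACWP greater than EAC")
    if e["bcwp"] > e["bac"] or e["bcws"] > e["bac"]:
        found.append("BCWP or BCWS greater than BAC")
    return found

for e in cpr:
    issues = anomalies(e)
    if issues:
        print(e["wbs"], "->", "; ".join(issues))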

These anomalies should be rare and, when they do occur, fully 
explained in the variance analysis portion of the report. 
Unfortunately, we have found programs that submit CPRs with these 
types of errors. Case study 32 highlights this issue. 

Case Study 32: Data Anomalies, from Cooperative Threat Reduction, GAO-
06-692: 

The EVM system the contractor was using to record, predict, and monitor 
progress contained flawed and unreliable data. GAO found serious 
discrepancies in the data, such as improper calculations and accounting 
errors. For example, from September 2005 through January 2006 the 
contractor’s EVM reports had not captured almost $29 million in actual 
costs for the chemical weapons destruction facility project. EVM 
current period data were not accurate because of historical data 
corruption, numerous mistakes in accounting accruals, and manual budget 
adjustments. The mistakes underestimated the true cost of the project 
by ignoring cost variances that had already occurred. 

For example, the Moscow project management task had been budgeted at a 
cost of $100,000. According to the January 2006 EVM report, the work 
was complete, but the actual cost was $2.6 million—an overrun of 
approximately $2.5 million that the EVM report failed to capture. Such 
data were misleading and skewed the project’s overall performance. 
Unreliable EVM data limited DOD’s efforts to accurately measure 
progress on the Shchuch’ye project and estimate its final completion 
date and cost. 

GAO recommended that the Secretary of Defense direct the Defense Threat 
Reduction Agency, in conjunction with the U.S. Army Corps of Engineers, 
to ensure that the contractor’s EVM system contain valid, reliable data 
and that the system reflect actual cost and schedule conditions; 
withhold a portion of the contractor’s award fee until the EVM system 
produced reliable data; and require the contractor to perform an IBR 
after awarding the contract for completing Building 101. 

Source: GAO, Cooperative Threat Reduction: DOD Needs More Reliable Data 
to Better Estimate the Cost and Schedule of the Shchuch’ye Facility, 
GAO-06-692, Washington, D.C.: May 31, 2006. 

[End of case study] 

Data Normalization: 

The purpose of data normalization (or cleansing) is to make a given 
data set consistent with and comparable to other data used in the 
estimate. Since data can be gathered from a variety of sources, they 
are often in many different forms and need to be adjusted before being 
used for comparison analysis or as a basis for projecting future costs. 
Cost data are adjusted in a process called normalization, stripping 
out the effect of certain external influences. The objective of data 
normalization is to improve data consistency, so that comparisons and 
projections are more valid and other data can be used to increase the 
number of data points. Data are normalized in several ways. 

Cost Units: 

Cost units primarily adjust for inflation. Because the cost of an item 
has a time value, it is important to know the year in which the funds 
were spent. For example, an item that cost $100 in 1990 was more 
expensive in real terms than an item that cost $100 in 2005, because 
15 years of inflation would make the 1990 item cost more than $100 
when converted to its 2005 equivalent. Costs may also need to be 
adjusted for currency conversions. 

In addition to inflation, the cost estimator needs to understand what 
the cost represents. For example, does it represent only direct labor 
or does it include overhead and the contractor’s profit? Finally, cost 
data have to be converted to equivalent units before being used in a 
data set. That is, costs expressed in thousands, millions, or billions 
of dollars must be converted to one format—for example, all costs 
expressed in millions of dollars. 

Sizing Units: 

Sizing units normalize data to common units—for example, cost per foot, 
cost per pound, dollars per software line of code. When normalizing 
data for unit size, it is very important to define exactly what the 
unit represents: What constitutes a software line of code? Does it 
include carriage returns or comments? The main point is to clearly 
define what the sizing metric is so that the data can be converted to a 
common standard before being used in the estimate. 

Key Groupings: 

Key groupings normalize data by similar missions, characteristics, or 
operating environments and by cost type or work content. Products with 
similar mission applications have similar characteristics and traits, 
as do products with similar operating environments. For example, space 
systems exhibit characteristics different from those of submarines, but 
the space shuttle has characteristics distinct from those of a 
satellite even though they may share common features. Costs should also 
be grouped by type. For example, costs should be broken out between 
recurring and nonrecurring or fixed and variable costs. 

Technology Maturity: 

Technology maturity normalizes data for where a program is in its life 
cycle; it also considers learning and rate effects. The first unit of 
something would be expected to cost more than the 1,000th unit, just 
as a system procured at one unit per year would be expected to cost 
more per unit than the same system procured at 1,000 units per year. 
Technology normalization is the process of adjusting cost data for 
productivity improvements resulting from technological advancements 
that occur over time. 

In effect, technology normalization is the recognition that technology 
continually improves, so a cost estimator must make a subjective 
attempt to measure the effect of this improvement on historical program 
costs. For instance, an item developed 10 years ago may have been 
considered state of the art and the costs would be higher than normal. 
Today, that item may be available off the shelf and therefore the costs 
would be considerably less. 

Therefore, technology normalization is the ability to forecast 
technology by predicting the timing and degree of change of 
technological parameters associated with the design, production, and 
use of devices. Being able to adjust the cost data to reflect where the 
item is in its life cycle, however, is very subjective, because it 
requires identifying the relative state of technology at different 
points in time. 

Homogeneous Groups: 

Using homogeneous groups normalizes for differences between historical 
and new program WBS elements in order to achieve content consistency. 
To do this type of normalization, a cost estimator needs to gather cost 
data that can be formatted to match the desired WBS element definition. 
This may require adding and deleting certain items to get an apples-to-
apples comparison. A properly defined WBS dictionary is necessary to 
avoid inconsistencies. 

Recurring And Nonrecurring Costs: 

Embedded within cost data are recurring and nonrecurring costs. These 
are usually estimated separately to keep one-time nonrecurring costs 
from skewing the costs for recurring production units. For this 
reason, it is important to segregate cost data into nonrecurring and 
recurring categories. 

Nonrecurring Costs: 

SCEA defines nonrecurring costs as the elements of development and 
investment costs that generally occur only once in a system’s life 
cycle. They include all the effort required to develop and qualify an 
item, such as defining its requirements and its allocation, design, 
analysis, development, qualification, and verification. Costs for the 
following are generally nonrecurring: 

* manufacturing and testing development units, both breadboard and 
engineering, for hardware, as well as qualification and life-test 
units; 

* retrofitting and refurbishing development hardware for 
requalification; 

* developing and testing virtually all software before beginning 
routine system operation; nonrecurring integration and test efforts 
usually end when qualification tests are complete;
 
* providing services and some hardware, such as engineering, before and 
during critical design review; 

* developing, acquiring, producing, and checking all tooling, ground 
handling, software, and support equipment and test equipment. 

Recurring Costs: 

As defined by SCEA, recurring costs are incurred for each item produced 
or each service performed. For example, the costs associated with 
producing hardware—that is, manufacturing and testing, providing 
engineering support for production, and supporting that hardware with 
spare units or parts—are recurring costs. Recurring integration and 
testing, including the integration and acceptance testing of production 
units at all WBS levels, also represent recurring costs. In addition, 
refurbishing hardware for operational or spare units is a recurring 
cost, as is maintaining test equipment and production support software. 
In contrast, maintaining system operational software, although 
recurring in nature, is often considered part of operating and support 
costs, which might also have nonrecurring components. 

Similar to nonrecurring and recurring costs are fixed and variable 
costs. Fixed costs are static, regardless of the number of quantities 
to be produced. An example of a fixed cost is the cost to rent a 
facility. A variable cost is directly affected by the number of units 
produced and includes such things as the cost of electricity or 
overtime pay. Knowing what the data represent is important for 
understanding anomalies that can occur as the result of production unit 
cuts. 

The most important reason for differentiating recurring from 
nonrecurring costs is in their application to learning curves. Simply 
put, learning curve theory applies only to recurring costs. Cost 
improvement or learning is generally associated with repetitive actions 
or processes, such as those directly tied to producing an item again 
and again. Categorizing costs that are affected by the quantity of 
units produced as recurring or variable adds more clarity to the data. 
An analyst who knows only the total cost of something does not know 
how much of that cost is affected by learning. 
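
As an illustration of why the distinction matters, the sketch below 
applies a standard unit learning curve (unit cost equals first-unit 
cost times the unit number raised to the power b, where b is the 
natural log of the learning curve slope divided by the natural log of 
2) to the recurring portion of cost only; the dollar values and the 90 
percent slope are hypothetical: 

import math

# Hypothetical split of first-unit cost into nonrecurring and recurring
# portions.
nonrecurring = 4.0e6           # one-time development cost; no learning
first_unit_recurring = 6.0e6   # recurring production cost of unit 1

slope = 0.90                         # assumed 90 percent learning curve
b = math.log(slope) / math.log(2)    # learning exponent

def recurring_cost(unit_number):
    # Unit learning curve applied to the recurring portion only.
    return first_unit_recurring * unit_number ** b

total_units = 100
total = nonrecurring + sum(recurring_cost(n)
                           for n in range(1, total_units + 1))
print(f"Unit 100 recurring cost: ${recurring_cost(100):,.0f}")
print(f"Total for {total_units} units: ${total:,.0f}")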

Inflation Adjustments: 

In the development of an estimate, cost data must be expressed in like 
terms. This is usually accomplished by inflating or deflating cost data 
to express them in a base year that will serve as a point of reference 
for a fixed price level. Applying inflation is an important step in 
cost estimating. If a mistake is made or the inflation amount is not 
correct, cost overruns can result, as case study 33 illustrates. 

Case Study 33: Inflation, from Defense Acquisitions, GAO-05-183: 

Inflation rates can significantly affect ship budgets. Office of the 
Secretary of Defense (OSD) and OMB inflation indexes are based on a 
forecast of the implicit price deflator for the gross domestic product. 
Until recently, the Navy had used OSD and OMB inflation rates; 
shipbuilding industry rates were historically higher. As a result, 
contracts were signed and executed using industry-specific inflation 
rates while budgets were based on the lower inflation rates, creating a 
risk of cost growth from the outset. For the ships reviewed, this 
difference in inflation rates explained 30 percent of the $2.1 billion 
cost growth. The Navy had changed its inflation policy in February 
2004, directing program offices to budget with what the Navy believed 
were more realistic inflation indexes, anticipating that this would 
help curtail requests for prior-year completion funds. 

Source: GAO, Defense Acquisitions: Improved Management Practices Could 
Minimize Cost Growth in Navy Shipbuilding Programs, GAO-05-183, 
Washington, D.C.: Feb. 28, 2005. 

[End of case study] 

Applying inflation correctly is necessary if the cost estimate is to be 
credible. In simple terms, inflation reflects the fact that the cost of 
an item usually continues to rise over time. Inflation rates are used 
to convert a cost from its current year into a constant base year so 
that the effects of inflation are removed. When cost estimates are 
stated in base-year dollars, the implicit assumption is that the 
purchasing power of the dollar has remained unchanged over the period 
of the program being estimated. Cost estimates are normally prepared in 
constant dollars to eliminate the distortion that would otherwise be 
caused by price level changes. This requires the transformation of 
historical or actual cost data into constant dollars. 

For budgeting purposes, however, the estimate must be expressed in then-
year dollars to reflect the program’s projected annual costs by 
appropriation. This requires applying inflation to convert from base-
year to then-year dollars. Cost estimators must make assumptions about 
what inflation indexes to use, since any future inflation index is 
uncertain. If actual inflation turns out to be lower than the rate 
applied, the cost estimate will be too high. Worse is the situation in 
which inflation is higher than projected, resulting in budgeted funds 
that are not sufficient to keep pace with inflation, as illustrated in 
case study 33. Thus, it is imperative that inflation 
assumptions be well documented and that the cost estimator always 
perform uncertainty and sensitivity analysis to study the effects of 
changes on the assumed rates. 
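
As a simplified illustration, the sketch below converts costs between 
then-year and constant base-year dollars with a table of hypothetical 
raw indexes; weighted indexes, discussed in the next section, would 
refine the then-year conversion used for budgeting: 

# Hypothetical inflation indexes by year (base year 2009 = 1.000).
index = {2009: 1.000, 2010: 1.021, 2011: 1.043, 2012: 1.066, 2013: 1.090}

def to_base_year(then_year_cost, year):
    # Remove inflation: express a then-year cost in constant 2009 dollars.
    return then_year_cost / index[year]

def to_then_year(base_year_cost, year):
    # Apply inflation: express a constant 2009 cost in then-year dollars.
    return base_year_cost * index[year]

# Example: $5.0 million spent in 2012, expressed in base-year dollars.
print(f"{to_base_year(5.0, 2012):.2f} million (constant 2009 dollars)")

# Example: a constant-dollar estimate phased across the budget years.
phased = {2010: 2.0, 2011: 3.5, 2012: 4.0}   # millions, base-year 2009
budget = {yr: round(to_then_year(cost, yr), 2) for yr, cost in phased.items()}
print(budget)   # then-year dollars by fiscal year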

Selecting The Proper Indexes: 

The cost estimator will not have to construct an index to apply 
inflation but will select one to apply to cost data. Often, the index 
is directed by higher authority, such as OMB. In this way, all programs 
can be compared and aggregated with the same escalation rate, since 
they are all being executed in the same economic circumstances. This 
does not mean that the forward escalation rates are correct—in fact, 
escalation rates are difficult to forecast—but that program comparisons 
will at least not be confused by different assumptions about 
escalation. When the index is not directed, a few general guidelines 
can help the cost estimator select the correct index. Because all 
inflation indexes measure the average rate of inflation for a 
particular market basket of goods, the objective in making a choice is 
to select the one whose market basket most closely matches the program 
to be estimated. The key is to use common sense and objective judgment. 
For example, the consumer price index would be a poor indicator of 
inflation for a new fighter aircraft, because the market baskets 
obviously do not match. Labor escalation would be affected by different 
factors than, say, fuel or steel costs. Although the selected index 
will never exactly match the market basket of costs, the closer the 
match, the better the estimate. 

Weighted indexes are used to convert constant, base-year, dollars to 
then-year dollars and vice versa. Raw indexes are used to change the 
economic base of constant dollars from one base year to another. 
Contract prices are stated in then-year dollars, and weighted indexes 
are appropriate for converting them to base-year dollars. Published 
historical cost data are frequently, but not always, normalized to a 
common base year, and raw indexes are appropriate for changing the base 
year to match that of the program being estimated. It is important that 
the cost estimator determine what year dollars cost data are expressed 
in, so that normalization for inflation can be done properly. 

Schedule risk can affect the magnitude of escalation in a cost 
estimate. The escalation dollars are often estimated by applying a 
monthly escalation rate (computed so that compounding monthly values 
equates to the forecasted annual rate) to dollars forecasted to be 
spent in each month. If the schedule is delayed, a dollar that would 
have been escalated by, say, 30 months might now be escalated for 36 
months. Even if the cost estimate in today’s dollars is an accurate 
estimate, a schedule slip would affect the amount of escalation. 
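
A minimal sketch of this calculation, assuming an illustrative 3 
percent annual escalation rate, follows: 

# Assumed annual escalation rate (illustrative only).
annual_rate = 0.03

# Monthly rate chosen so that 12 months of compounding equals the
# forecasted annual rate.
monthly_rate = (1 + annual_rate) ** (1 / 12) - 1

def escalated(cost_today, months):
    # Escalate a cost in today's dollars to the month it is spent.
    return cost_today * (1 + monthly_rate) ** months

cost_today = 1_000_000
print(f"Spent at month 30: ${escalated(cost_today, 30):,.0f}")
print(f"Spent at month 36 after a slip: ${escalated(cost_today, 36):,.0f}")
# The estimate in today's dollars is unchanged, but the six-month slip
# increases the escalation dollars required.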

In addition, the question of escalating the contingency reserve arises. 
Some cost estimating systems calculate the contingency on base-year 
dollars but do not escalate the contingency, perhaps because they do 
not have a way to determine when the dollars will be spent. In a cost 
risk analysis, in contrast, the contingency reserve is computed during 
the simulation using the risk in the line-item costs. If the simulated 
line-item costs are then subjected to escalation during the same 
simulation, the process effectively escalates the contingency. This is 
appropriate, since contingency money is just more money needed to be 
spent on the statement of work, and it should be affected by escalation 
as is any other money spent. 

Data Documentation: 

After the data have been collected, analyzed, and normalized, they must 
be documented and stored for future use. One way to keep a large amount 
of historical data viable is to continually supplement them with every 
new system’s actual return costs and with every written vendor quote or 
new contract. Although data have many sources, the predominant sources 
are the manufacturers who make the item or similar items. It can take 
years for a cost estimator to develop an understanding of these sources 
and to earn the trust of manufacturers regarding the use of their 
proprietary and business-sensitive data. Once trust has been 
established and maintained for some time, the cost estimator can 
normally expect a continual flow of useful data. 

All data collection activities must be documented as to source, work 
product content, time, units, and assessment of accuracy and 
reliability. Comprehensive documentation during data collection greatly 
improves quality and reduces subsequent effort in developing and 
documenting the estimate. The data collection format should serve two 
purposes. First, the format should provide for the full documentation 
and capture of information to support the analysis. Second, it should 
provide for standards that will aid in mapping other forms of cost 
data. 

Previously documented cost estimates may provide useful data for a 
current estimate. Relying on previous estimates can save the cost 
estimator valuable time by eliminating the need to research and conduct 
statistical analyses that have already been conducted. For example, a 
documented program estimate may provide the results of research on 
contractor data, identification of significant cost drivers, or actual 
costs, all of which are valuable to the cost estimator. Properly 
documented estimates describe the data used to estimate each WBS 
element, and this information can be used as a good starting point for 
the new estimate. Moreover, relying on other program estimates can be 
valuable in understanding various contractors and providing cross-
checks for reasonableness. 

Because many cost documents are secondary sources of information, the 
cost estimator should be cautious. When using information from 
documented cost estimates, the analyst should fully understand the 
data. For example, if a factor was constructed from CPRs, the cost 
estimator should ask the following questions to see if the data are 
valid for the new program: 

* What was the base used in the ratio? 

* Are the WBS elements consistent with those of the system being 
estimated—for example, is data management included in the data or the 
systems engineering and program management element? 

* Was the factor computed from the ACWP or the EAC? 

* What percentage complete is the contract? 

7. Best Practices Checklist: Data: 
 
* As the foundation of an estimate, data: 
- Have been gathered from historical actual cost, schedule and program, 
and technical sources; 
- Apply to the program being estimated; 
- Have been analyzed for cost drivers; 
- Have been collected from primary sources, if possible, and secondary 
sources as the next best option, especially for cross-checking results; 
- Have been adequately documented as to source, content, time, units, 
assessment of accuracy and reliability, and circumstances affecting the 
data; 
- Have been continually collected, protected, and stored for future 
use; 
- Were assembled as early as possible, so analysts can participate in 
site visits to understand the program and question data providers. 

* Before being used in a cost estimate, the data were: 
- Fully reviewed to understand their limitations and risks; 
- Segregated into nonrecurring and recurring costs; 
- Validated, using historical data as a benchmark for reasonableness; 
- Current and found applicable to the program being estimated; 
- Analyzed with a scatter plot to determine trends and outliers; 
- Analyzed with descriptive statistics; 
- Normalized to account for cost and sizing units, mission or 
application, technology maturity, and content so they are consistent 
for comparisons; 
- Normalized to constant base-year dollars to remove the effects of 
inflation, and the inflation index was documented and explained. 

[End of Chapter 10] 

Chapter 11: Developing A Point Estimate: 

In this chapter, we discuss step 7 in the high-quality estimating 
process. Step 7 pulls all the information together to develop the point 
estimate—the best guess at the cost estimate, given the underlying 
data. High-quality cost estimates usually fall within a range of 
possible costs, the point estimate being between the best and worst 
case extremes. (We explain in chapter 14 how to develop this range of 
costs using risk and uncertainty analysis.) The cost estimator must 
perform several activities to develop a point estimate: 

* develop the cost model by estimating each WBS element, using the best 
methodology, from the data collected; 

* include all estimating assumptions in the cost model; 

* express costs in constant-year dollars; 

* time-phase the results by spreading costs in the years they are 
expected to occur, based on the program schedule; and; 
 
* add the WBS elements to develop the overall point estimate. 

Having developed the overall point estimate, the cost estimator must 
then: 

* validate the estimate by looking for errors like double counting and 
omitted costs and ensuring that estimates are comprehensive, accurate, 
well-documented, and credible (more information on validation is in 
chapter 15); 

* compare the estimate against the independent cost estimate and 
examine where and why there are differences; 

* perform cross-checks on cost drivers to see if results are similar; 
and; 

* update the model as more data become available or as changes occur 
and compare the results against previous estimates.

We have already discussed how to develop a WBS and GR&As, collect and 
normalize the data into constant base-year dollars, and time-phase the 
results. Once all the data have been collected, analyzed, and 
validated, the cost estimator must select a method for developing the 
cost estimate. 

Cost Estimating Methods: 

The three commonly used methods for estimating costs are analogy, 
engineering build-up, and parametric. An analogy uses the cost of a 
similar program to estimate the new program and adjusts for 
differences. The engineering build-up method develops the cost estimate 
at the lowest level of the WBS, one piece at a time, and the sum of the 
pieces becomes the estimate. The parametric method relates cost to one 
or more technical, performance, cost, or program parameters, using a 
statistical relationship. 

Which method to select depends on where the program is in its life 
cycle. Early in the program, definition is limited and costs may not 
have accrued. Once a program is in production, cost and technical data 
from the development phase can be used to estimate the remainder of the 
program. Table 11 gives an overview of the strengths, weaknesses, and 
applications of the three methods. 

Table 11: Three Cost Estimating Methods Compared : 

Method: Analogy; 
Strength: 
* Requires few data; 
* Based on actual data;
* Reasonably quick;
* Good audit trail.
Weakness: 
* Subjective adjustments; 
* Accuracy depends on similarity of items; 
* Difficult to assess effect of design change; 
* Blind to cost drivers; 
Application: 
* When few data are available; 
* Rough-order-of-magnitude estimate; 
* Cross-check. 

Method: Engineering build-up; 
Strength: 
* Easily audited; 
* Sensitive to labor rates; 
* Tracks vendor quotes; 
* Time honored. 
Weakness: 
* Requires detailed design; 
* Slow and laborious; 
* Cumbersome. 
Application: 
* Production estimating; 
* Software development; 
* Negotiations. 

Method: Parametric; 
Strength: 
* Reasonably quick; 
* Encourages discipline;
* Good audit trail;
* Objective, little bias;
* Cost driver visibility;
* Incorporates real-world effects (funding, technical, 
risk); 
Weakness: 
* Lacks detail; 
* Model investment;
* Cultural barriers;
* Need to understand model’s behavior;
Application: 
* Budgetary estimates; 
* Design-to-cost trade studies; 
* Cross-check; 
* Baseline estimate; 
* Cost goal allocations. 

Source: © 2003, MCR, LLC, “Cost Estimating: The Starting Point of EVM.” 

[End of table] 

Other cost estimating methods include: 

* expert opinion, which relies on subject matter experts to give their 
opinion on what an element should cost;[Footnote 39] 
 
* extrapolating, which uses actual costs and data from prototypes to 
predict the cost of future elements; and; 

* learning curves, which is a common form of extrapolating from actual 
costs. 

In the sections below, we describe these methods and their advantages 
and disadvantages. Finally, we discuss how to pull all the methods 
together to develop the point estimate. 

Analogy Cost Estimating Method: 

An analogy takes into consideration that no new program, no matter how 
state of the art it may be technologically, represents a totally new 
system. Most new programs evolve from programs already fielded that 
have had new features added on or that simply represent a new 
combination of existing components. The analogy method uses this 
concept for estimating new components, subsystems, or total programs. 
That is, an analogy uses actual costs from a similar program with 
adjustments to account for differences between the requirements of the 
existing and new systems. A cost estimator typically uses this method 
early in a program’s life cycle, when insufficient actual cost data are 
available but the technical and program definition is good enough to 
make the necessary adjustments. 

Adjustments should be made as objectively as possible, by using factors 
(sometimes scaling parameters) that represent differences in size, 
performance, technology, or complexity. The cost estimator should 
identify the important cost drivers, determine how the old item relates 
to the new item, and decide how each cost driver affects the overall 
cost. All estimates based on the analogy method, however, must pass 
the “reasonable person” test—that is, the sources of the analogy and 
any adjustments must be logical, credible, and acceptable to a 
reasonable person. In addition, since analogies are one-to-one 
comparisons, the historical and new systems should have a strong 
parallel. 

Analogy relies a great deal on expert opinion to modify the existing 
system data to approximate the new system. If possible, the adjustments 
should be quantitative rather than qualitative, avoiding subjective 
judgments as much as possible. An analogy is often used as a cross-
check for other methods. Even when an analyst is using a more detailed 
cost estimating technique, an analogy can provide a useful sanity 
check. Table 12 shows how an analogy works. 

Table 12: An Example of the Analogy Cost Estimating Method: 

Parameter: Engine; 
Existing system: F-100; 
New system: F-200; 
Cost of new system (assuming a linear relationship): [Empty]. 

Parameter: Thrust; 
Existing system: 12,000 lbs; 
New system: 16,000 lbs; 
Cost of new system (assuming a linear relationship): [Empty]. 

Parameter: Cost; 
Existing system: $5.2 million; 
New system: [Empty]; 
Cost of new system (assuming a linear relationship): (16,000/12,000) x 
$5.2 million = $6.9 million. 

Source: © 2003, Society of Cost Estimating and Analysis (SCEA), 
“Costing Techniques.” 

[End of table] 

The equation in table 12 implicitly assumes a linear relationship 
between engine cost and amount of thrust. However, such an assumption 
should be supported by a compelling scientific or engineering reason 
that an engine’s cost is directly proportional to its thrust. Without 
more data (or an expert on 
engine costs), it is hard to know what parameters are the true drivers 
of cost. Therefore, when using the analogy method, it is important that 
the estimator research and discuss with program experts the 
reasonableness of technical program drivers to determine whether they 
are significant cost drivers. 
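
A minimal sketch of the adjustment in table 12, with the linearity 
assumption made explicit, follows: 

# Analogy adjustment from table 12: scale the existing engine's cost by
# the ratio of new thrust to existing thrust (assumes cost is linear in
# thrust).
existing_cost_millions = 5.2
existing_thrust_lbs = 12_000
new_thrust_lbs = 16_000

adjustment_factor = new_thrust_lbs / existing_thrust_lbs
new_cost_millions = existing_cost_millions * adjustment_factor
print(f"Estimated new engine cost: ${new_cost_millions:.1f} million")
# Prints about $6.9 million, matching table 12.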

The analogy method has several advantages: 
 
* It can be used before detailed program requirements are known. 

* If the analogy is strong, the estimate will be defensible. 

* An analogy can be developed quickly and at minimum cost. 

* The tie to historical data is simple enough to be readily understood. 

Analogies also have some disadvantages: 

* An analogy relies on a single data point. 

* It is often difficult to find the detailed cost, technical, and 
program data required for analogies. 

* There is a tendency to be too subjective about the technical 
parameter adjustment factors. 

The last disadvantage can be best explained with an example. If a cost 
estimator assumes that a new component will be 20 percent more complex 
but cannot explain why, this adjustment factor is unacceptable. The 
complexity must be related to the system’s parameters, such as that the 
new system will have 20 percent more data processing capacity or will 
weigh 20 percent more. Case study 34 highlights what can happen when 
technical parameter assumptions are too optimistic. 

Case Study 34: Cost Estimating Methods, from Space Acquisitions, GAO-
07-96: 

In 2004, Advanced Extremely High Frequency (AEHF) satellite program 
decision makers relied on the program office cost estimate rather than 
the independent estimate the CAIG developed to support the production 
decision. The program office estimated that the system would cost about 
$6 billion, on the assumption that AEHF would have 10 times more 
capacity than Milstar, the predecessor satellite, at half the cost and 
weight. However, the CAIG concluded that the program could not deliver 
more data capacity at half the weight, given the state of the 
technology. In fact, the CAIG believed that to get the desired increase 
in data rate, the weight would have to increase proportionally. As a 
result, the CAIG estimated that AEHF would cost $8.7 billion and 
predicted a $2.7 billion cost overrun. 

The CAIG relied on weight data from historical satellites to estimate 
the program’s cost, because it considered weight to be the best cost 
predictor for military satellite communications. The historical data 
from the AEHF contractor showed that the weight had more than doubled 
since the program began and that the majority of the weight growth was 
in the payload. The Air Force also used weight as a cost predictor but 
attributed the weight growth to structural components rather than the 
more costly payload portion of the satellite. The CAIG stated that 
major cost growth was inevitable from the program start because 
historical data showed that it was possible to achieve a weight 
reduction or an increase in data capacity but not both at the same 
time. 

Source: GAO, Space Acquisitions: DOD Needs to Take More Action to 
Address Unrealistic Initial Cost Estimates of Space Systems, GAO-07-96, 
Washington, D.C.: Nov. 17, 2006. 

[End of case study] 

Engineering Build-Up Cost Estimating Method: 

The engineering build-up cost estimating method builds the overall cost 
estimate by summing or “rolling-up” detailed estimates done at lower 
levels of the WBS. Because the lower-level estimating associated with 
the build-up method uses industrial engineering principles, it is often 
referred to as engineering build-up; it is also sometimes called a 
grass-roots or bottom-up estimate. 

An engineering build-up estimate is done at the lowest level of detail 
and consists of labor and materials costs that have overhead and fee 
added to them. In addition to labor hours, a detailed parts list is 
required. Once in hand, the material parts are allocated to the lowest 
WBS level, based on how the work will be accomplished. In addition, 
quantity and schedule have to be considered in order to capture the 
effects of learning. Typically, cost estimators work with engineers to 
develop the detailed estimates. The cost estimator’s focus is to get 
detailed information from the engineer in a way that is reasonable, 
complete, and consistent with the program’s ground rules and 
assumptions. The cost estimator must find additional data to validate 
the engineer’s estimates. 

An engineering build-up method is normally used during the program’s 
production, because the program’s configuration has to be stabilized, 
and actual cost data are required to complete the estimate. The 
underlying assumption of this method is that historical costs are good 
predictors of future costs; the premise is that data from the 
development phase can be used to estimate the cost for production. 
As illustrated in table 13, the build-up method is used when an analyst 
has enough detailed information about building an item—such as number 
of hours and number of parts—and the manufacturing process to be used.

Table 13: An Example of the Engineering Build-Up Cost Estimating 
Method: 

 
Problem: Estimate sheet metal cost of the inlet nacelle for a new 
aircraft; 
Similar aircraft: F/A-18 inlet nacelle; 
Solution: Apply historical F/A-18 variance for touch labor effort and 
apply support labor factor to adjust estimated touch labor hours; 
Result: 2,000 hours x 1.2 = 2,400 touch labor hours and 2,400 labor 
hours x 1.48 = 3,552 labor hours (touch labor plus support labor) 
estimate for new aircraft. 

Problem: Standard hours to produce a new nacelle are estimated at 2,000 
for touch labor; adjust to reflect experience of similar aircraft and 
support labor effort; 
Similar aircraft: F/A-18 inlet nacelle experienced a 20% variance in 
touch labor effort above the industrial engineering standard. In 
addition, F/A-18 support labor was equal to 48% of the touch labor 
hours; 
Solution: [Empty]; 
Result: Average labor rates would then be used to convert these total 
labor hours into costs. 

Source: © 2003, Society of Cost Estimating and Analysis (SCEA), 
“Costing Techniques.” 

[End of table] 


Because of the high level of detail, each step of the work flow should 
be identified, measured, and tracked, and the results for each outcome 
should be summed to make the point estimate. 
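
A minimal sketch of the build-up arithmetic in table 13 follows; the 
labor rate is an illustrative assumption, since table 13 leaves the 
rates unspecified: 

# Build-up of touch and support labor hours from table 13.
standard_touch_hours = 2_000   # industrial engineering standard
variance_factor = 1.20         # F/A-18 experience: 20% above standard
support_factor = 1.48          # support labor = 48% of touch hours

touch_hours = standard_touch_hours * variance_factor   # 2,400 hours
total_hours = touch_hours * support_factor             # 3,552 hours

assumed_labor_rate = 95.0      # dollars per hour (illustrative only)
labor_cost = total_hours * assumed_labor_rate
print(f"Total labor hours: {total_hours:,.0f}; "
      f"estimated labor cost: ${labor_cost:,.0f}")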

The several advantages of the build-up technique include: 

* the estimator’s ability to determine exactly what the estimate 
includes and whether anything was overlooked, 

* its unique application to the specific program and manufacturer, 

* that it gives good insight into major cost contributors, and 

* easy transfer of results to other programs. 

Some disadvantages of the engineering build-up method are that 

* it can be expensive to implement and it is time consuming, 

* it is not flexible enough to answer what-if questions, 

* new estimates must be built for each alternative, 

* the product specification must be well known and stable, 

* all product and process changes must be reflected in the estimate, 

* small errors can grow into larger errors during the summation, and 

* some elements can be omitted by accident. 

Parametric Cost Estimating Method: 

In the parametric method, a statistical relationship is developed 
between historical costs and program, physical, and performance 
characteristics. The method is sometimes referred to as a top-down 
approach. Types of physical characteristics used for parametric 
estimating are weight, power, and lines of code. Other program and 
performance characteristics include site deployment plans for 
information technology installations, maintenance plans, test and 
evaluation schedules, technical performance measures, and crew size. 
These are just some examples of what could be a cost driver for a 
particular program. Sources for these cost drivers are often found in 
the technical baseline, cost analysis requirements document or cost 
analysis data requirement. The important thing is that the attributes 
used in a parametric estimate should be cost drivers of the program. 
The assumption driving the parametric approach is that the same factors 
that affected cost in the past will continue to affect future costs. 
This method is often used when little is known about a program except 
for a few key characteristics like weight or volume. 

Using a parametric method requires access to historical data, which may 
be difficult to obtain. If the data are available, they can be used to 
determine the cost drivers and to provide statistical results and can 
be adjusted to meet the requirements of the new program. Unlike an 
analogy, parametric estimating relies on data from many programs and 
covers a broader range. Confidence in a parametric estimate’s results 
depends on how valid the relationships are between cost and the 
physical attributes or performance characteristics. Using this method, 
the cost estimator must always present the related statistics, 
assumptions, and sources for the data. 

The goal of parametric estimating is to create a statistically valid 
cost estimating relationship using historical data. The parametric CER 
can then be used to estimate the cost of the new program by entering 
its specific characteristics into the parametric model. CERs 
established early in a program’s life cycle should be continually 
revisited to make sure they are current and the input range still 
applies to the new program. In addition, parametric CERs should be well 
documented, because serious estimating errors could occur if the CER is 
improperly used. 

Parametric techniques can be used in a wide variety of situations, 
ranging from early planning estimates to detailed contract 
negotiations. It is always essential to have an adequate number of 
relevant data points, and care must be taken to normalize the dataset 
so that it is consistent and complete. In software, the development 
environment—that is, the extent to which the requirements are 
understood and the strength of the programmers’ skill and experience—is 
usually the major cost driver. Because parametric relationships are 
often used early in a program, when the design is not well defined, 
design changes can easily be reflected in the estimate simply by 
adjusting the values of the input parameters. 

It is important to make sure that the program attributes being 
estimated fall within (or, at least, not far outside) the CER dataset. 
For example, if a new software program was expected to contain 1 
million software lines of code and the data points for a software CER 
were based on programs with lines of code ranging from 10,000 to 
250,000, it would be inappropriate to use the CER to estimate the new 
program. 

To develop a parametric CER, cost estimators must determine the cost 
drivers that most influence cost. After studying the technical baseline 
and analyzing the data through scatter charts and other methods, the 
cost estimator should verify the selected cost drivers by discussing 
them with engineers. The CER can then be developed with a mathematical 
expression, which can range from a simple rule of thumb (for example, 
dollars per pound) to a complex regression equation. 

The more simplified CERs include rates, factors, and ratios. A rate 
uses a parameter to predict cost, using a multiplicative relationship. 
Since rate is defined to be cost as a function of a parameter, the 
units for rate are always dollars per something. The rate most commonly 
used in cost estimating is the labor rate, expressed in dollars per 
hour. 

A factor uses the cost of another element to estimate a new cost using 
a multiplier. Since a factor is defined to be cost as a function of 
another cost, it is often expressed as a percentage. For example, 
travel costs may be estimated as 5 percent of program management costs. 

A ratio is a function of another parameter and is often used to 
estimate effort. For example, the cost to build a component could be 
based on the industry standard of 20 hours per subcomponent. 

Rates, factors, and ratios are often the result of simple calculations 
(like averages) and many times do not include statistics. Table 14 
contains a parametric cost estimating example. 

Table 14: An Example of the Parametric Cost Estimating Method: 
 
Program attribute: A cost estimating relationship (CER) for site 
activation (SA) is a function of the number of workstations (NW); 
Calculation: SA = $82,800 + ($26,500 x NW). 

Program attribute: Data range for the CER; 
Calculation: 7 – 47 workstations based on 11 data points. 

Program attribute: Cost to site activate a program with 40 
workstations; 
Calculation: $82,800 + ($26,500 x 40) = $1,142,800. 

Source: © 2003, Society of Cost Estimating and Analysis (SCEA), 
“Costing Techniques.” 

[End of table] 

In table 14, the number of workstations is the cost driver. The 
equation is linear but has both a fixed component (that is, $82,800) 
and a variable component (that is, $26,500 x NW). 

In addition, the range of the data is from 7 to 47 workstations, so it 
would be inappropriate to use this CER for estimating the activation 
cost of a site with as few as 2 or as many as 200 workstations. 

In fact, at one extreme, the CER estimates a cost of $82,800 for no 
workstation installations, which is not logical. Although we do not 
show any CER statistics for this example, the CERs should always be 
presented with their statistics. The reason for this is to enable the 
cost estimator to understand the level of variation within the data and 
model its effect with uncertainty analysis. 
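
A minimal sketch of the CER in table 14, with the data-range caution 
enforced, follows: 

# Site activation CER from table 14: SA = $82,800 + ($26,500 x NW),
# valid for 7 to 47 workstations (based on 11 data points).
CER_MIN_NW, CER_MAX_NW = 7, 47

def site_activation_cost(num_workstations):
    if not CER_MIN_NW <= num_workstations <= CER_MAX_NW:
        raise ValueError(
            f"{num_workstations} workstations is outside the CER data "
            f"range ({CER_MIN_NW}-{CER_MAX_NW} workstations)."
        )
    return 82_800 + 26_500 * num_workstations

print(f"${site_activation_cost(40):,}")   # $1,142,800, as in table 14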

CERs should be developed using regression techniques, so that 
statistical inferences may be drawn. To perform a regression analysis, 
the first step is to determine what relationship exists between cost 
(dependent variable) and its various drivers (independent variables). 
This relationship is determined by developing a scatter chart of the 
data. If the data are linear, they can be fit by a linear regression. 
If they are not linear and transformation of the data does not produce 
a linear fit, nonlinear regression can be used. The independent 
variables should have a high correlation with cost and should be 
logical. 

For example, software complexity can be considered a valid driver of 
the cost of developing software. The ultimate goal is to create a fit 
with the least variation between the data and the regression line. This 
process helps minimize the statistical error or uncertainty brought on 
by the regression equation. 

The purpose of the regression is to predict with known accuracy the 
next real-world occurrence of the dependent variable (or the cost), 
based on knowledge of the independent variable (or some physical, 
operational, or program variable). Once the regression is developed, 
the statistics associated with the relationship must be examined to see 
if the CER is a strong enough predictor to be used in the estimate. 
Most statistics can be easily generated with the regression analysis 
function of spreadsheet software. Among important regression statistics 
are: 

* R-squared, 

* statistical significance, 

* the F statistic, and 

* the t statistic. 

R-squared: 

The R-squared (R2) value measures the strength of the association 
between the independent and dependent (or cost) variables. The R2 value 
ranges between 0 and 1, where 0 indicates that there is no relationship 
between cost and its independent variable, and 1 means that there is a 
perfect relationship between them. Thus, the higher the R2, the 
better. In the example in table 14, an R2 of 91 percent would mean 
that the number of workstations (NW) would explain 91 percent of the 
variation in site activation costs, indicating that it is a very good 
cost driver. 

Statistical Significance: 

Statistical significance is the most important factor for deciding 
whether a statistical relationship is valid. An independent variable 
can be considered statistically significant if there is small 
probability that its corresponding coefficient is equal to zero, 
because a coefficient of zero would indicate that the independent 
variable has no relationship to cost. Thus, it is desirable that the 
probability that the coefficient is equal to zero be as small as 
possible. How small is denoted by a predetermined value called the 
significance level. For example, a significance level of .05 means 
accepting a 5 percent risk of concluding that a variable is 
significant when its true coefficient is actually zero. Statistical 
significance is determined by both the 
regression as a whole and each regression variable. 

F Statistic: 

The F statistic is used to judge whether the CER as a whole is 
statistically significant by testing the hypothesis that all the 
variables’ coefficients are equal to zero. The F statistic is defined 
as the ratio of the equation’s regression mean square to its mean 
squared error (the residual mean square). The higher the F 
statistic is, the better the regression, but it is the level of 
significance that is important. 

t Statistic: 

The t statistic is used to judge whether individual coefficients in the 
equation are statistically significant. It is defined as the ratio of 
the coefficient’s estimated value to its standard deviation. As with 
the F statistic, the higher the t statistic is, the better, but it is 
the level of significance that is important. 
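
For illustration only, the following minimal sketch, written in Python, 
shows how these statistics could be computed for a simple one-variable 
CER. The workstation counts and site activation costs are hypothetical 
placeholders, not data from this guide; for a single independent 
variable, the F statistic equals the square of the t statistic. 

import numpy as np
from scipy import stats

# Hypothetical data: number of workstations vs. site activation cost ($K).
workstations = np.array([10, 25, 40, 60, 80])
site_cost = np.array([120, 260, 410, 600, 790])

result = stats.linregress(workstations, site_cost)

r_squared = result.rvalue ** 2             # strength of association
t_stat = result.slope / result.stderr      # coefficient / its standard error
f_stat = t_stat ** 2                       # with one variable, F = t squared
p_value = result.pvalue                    # significance level of the slope

print(f"CER: cost = {result.intercept:.1f} + {result.slope:.2f} * workstations")
print(f"R-squared = {r_squared:.3f}, t = {t_stat:.2f}, "
      f"F = {f_stat:.2f}, p = {p_value:.4f}")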

The Parametric Method: Further Considerations: 

The four statistics described above are just some of the statistical 
analyses that can be used to validate a CER. (For more information on 
statistics or hardware cost estimating, a good reference is the 
Parametric Estimating Handbook.[Footnote 40]) Once the statistics have 
been evaluated, the cost estimator picks the best CER—that is, the one 
with the least variation and the highest correlation to cost. 

The final step in developing the CER is to validate the results, using 
a data set different from the one used to generate the equation, to see 
if the results are similar. Again, it is important to apply a CER 
only to programs whose variable values fall within the range of the 
data used to develop it. Deviating from the CER variable input 
range could invalidate the relationship and skew the results. We 
note several other pitfalls associated with CERs. 

Always question the source of the data underlying the CER. Some CERs 
may be based on data that are biased by unusual events like a strike, 
hurricane, or major technical problems that required a lot of rework. 
To mitigate this risk, it is essential to understand the data the CER 
is based on and, if possible, to use other historical data to check the 
validity of the results. 

All equations should be checked for common sense to see if the 
relationship described by the CER is reasonable. This helps avoid the 
mistake of relying on a relationship that adequately describes one 
system but does not apply to the system being estimated. 

Normalizing the data to make them consistent is imperative to good 
results. All cost data should be converted to constant base years. In 
addition, labor and material costs should be broken out separately, 
since they may require different inflation factors to convert them to 
constant dollars. Moreover, independent variables should be converted 
into like units for various physical characteristics such as weight, 
speed, and length. 

Historical cost data may have to be adjusted to reflect similar 
accounting categories, which might be expressed differently from one 
company to another. 

It is important to fully understand all CER modeling assumptions and to 
examine the reliability of the dataset, including its sources, to see 
if they are reasonable. 

Among the several advantages of parametric cost estimating are its:
 
* Versatility: If the data are available, parametric relationships can 
be derived at any level, whether system or subsystem component. And as 
the design changes, CERs can be quickly modified and used to answer 
what-if questions about design alternatives. 

* Sensitivity: Simply varying input parameters and recording the 
resulting changes in cost can produce a sensitivity analysis. 

* Statistical output: Parametric relationships derived from statistical 
analysis generally have both objective measures of validity 
(statistical significance of each estimated coefficient and of the 
model as a whole) and a calculated standard error that can be used in 
risk analysis. This information can be used to provide a confidence 
level for the estimate, based on the CER’s predictive capability. 

* Objectivity: CERs rely on historical data that provide objective 
results. This increases the estimate’s defensibility. 

Disadvantages to parametric estimating include: 

* Database requirements: The underlying database must be consistent and 
reliable. It may be time-consuming to normalize the data or to ensure 
that the data were normalized correctly, especially if someone outside 
the estimator’s team developed the CER. Without understanding how the 
data were normalized, the analyst has to accept the database on faith— 
sometimes called the black-box syndrome, in which the analyst simply 
plugs in numbers and unquestioningly accepts the results. Using a CER 
in this manner can increase the estimate’s risk. 

* Currency: CERs must represent the state of the art; that is, they 
must be updated to capture the most current cost, technical, and 
program data. 
 
* Relevance: Using data outside the CER range may cause errors, because 
the CER loses its predictive ability for data outside the development 
range.

* Complexity: Complicated CERs (such as nonlinear CERs) may make it 
difficult for others to readily understand the relationship between 
cost and its independent variables. 

Parametric Cost Models: 

Many cost estimating models are based on parametric methods. They may 
estimate hardware or software costs. Depending on the model, the 
database may contain cost, technical, and programmatic data at the 
system, component, and subcomponent level. Parametric models typically 
consist of several interrelated CERs and are often computerized. They 
may involve extensive use of cost-to-noncost CERs, multiple independent 
variables related to a single cost effect, or independent variables 
defined in terms of weapon system performance or design characteristics 
rather than more discrete material requirements or production 
processes. Information technology databases and computer modeling may 
be used in these types of parametric cost estimating systems. 

When parametric models are used, the underlying data are often 
proprietary, so access to the raw data may not be available. When the 
inputs to the parametric models are qualitative, as often happens, they 
should be objectively assessed. In addition, many parameters must be 
selected to tailor the model to the specific hardware or software 
product that is being estimated. Therefore, it is also important to 
calibrate the parametric model to best reflect the particular situation 
or environment in which the product will be developed. Finally, the 
model should be validated using historical data to determine how well 
it predicts costs. 

Parametric models are always useful for cross-checking the 
reasonableness of a cost estimate that is derived by other means. As a 
primary estimating method, parametric models are most appropriate 
during the engineering concept phase when requirements are still 
somewhat unclear and no bill of materials exists. When this is the 
situation, it is imperative that the parametric model is based on 
historical cost data and that the model is calibrated to those data. To 
ensure that the model is a good predictor of costs, it should 
demonstrate that it actually reflects or replicates known data to a 
reasonable degree of accuracy. In addition, the model should 
demonstrate that the cost-to-noncost estimating relationships are 
logical and that the data used for the parametric model can be verified 
and traced back to source documentation. 

Using parametric cost models has several advantages: 

* They can be adjusted to best fit the hardware or software being 
estimated. 

* Cost estimates are based on a database of historical data. 

* They can be calibrated to match a specific development environment. 

Their disadvantages are that: 

* their results depend on the quality of the underlying database, 
 
* they require many inputs that may be subjective, and 

* accurate calibration is required for valid results. 

Expert Opinion: 

Expert opinion is generally considered too subjective but can be useful 
in the absence of data. It is possible to alleviate this concern by 
probing further into the experts’ opinions to determine if real data 
back them up. If so, the analyst should attempt to obtain the data and 
document the source. 

The cost estimator’s interviewing skills are also important for 
capturing the experts’ knowledge so that the information can be used 
properly. However, cost estimators should never ask experts to estimate 
the costs for anything outside the bounds of their expertise, and they 
should always validate experts’ credentials before relying on their 
opinions. 

The advantages of using an expert’s opinion are that: 

* it can be used when no historical data are available; 

* it takes minimal time and is easy to implement, once experts are 
assembled; 

* an expert may give a different perspective or identify facets not 
previously considered, leading to a better understanding of the 
program; 
 
* it can help in cross-checking for CERs that require data 
significantly beyond the data range; 

* it can be blended with other estimation techniques within the same 
WBS element; and; 

* it can be applied in all acquisition phases. 

Disadvantages associated with using an expert’s opinion include: 

* its lack of objectivity, 

* the risk that one expert will try to dominate a discussion to sway 
the group or that the group will succumb to the urge to agree, and; 

* its limited accuracy and validity as a primary estimating method. 

The bottom line is that because of its subjectivity and lack of 
supporting documentation, expert opinion should be used sparingly and 
only as a sanity check. Case study 35 shows how relying on expert 
opinion as a main source for a cost estimate is unwise. 

Case Study 35: Expert Opinion, from Customs Service Modernization, 
GAO/AIMD-99-41: 

The U.S. Customs Service Automated Commercial Environment (ACE), a 
major information technology systems modernization effort, was 
estimated in November 1997 to cost $1.05 billion to develop, operate, 
and maintain between 1994 and 2008. GAO’s 1999 review found that the 
agency lacked a reliable estimate of what ACE would cost to build, 
deploy, and maintain. Instead of using a cost model, Customs had used 
an unsophisticated spreadsheet to extrapolate the cost of each ACE 
software increment. 

Further, Customs’ approach to determining software size and reuse was 
not well supported or convincing and had not been documented. For 
example, Customs had estimated the size of each ACE software 
increment—most increments had still been undefined—by extrapolating 
from the estimated size of the first increment, based on individuals’ 
undocumented best judgments about functionality and complexity. 

Last, Customs did not have any historical project cost data when it 
developed the $1.05 billion estimate, and it had not accounted for 
relevant, measured, and normalized differences in the increments. For 
instance, it had not accounted for the change in ACE’s architecture 
from a mainframe system that had been written in COBOL and C++ to a 
combined mainframe and Internet-based system that was to be written in 
C++ and Java. Such a fundamental change would clearly have a dramatic 
effect on system costs and should have been explicitly addressed in 
Customs’ cost estimates. 

Source: GAO, Customs Service Modernization: Serious Management and 
Technical Weaknesses Must Be Corrected, GAO/AIMD-99-41, Washington, 
D.C.: Feb. 26, 1999. 

[End of case study] 

Other Estimating Methods: Extrapolation from Actual Costs: 
 
Extrapolation uses the actual past or current costs of an item to 
estimate its future costs. The several variants of extrapolation 
include: 

* averages, the most basic variant, a method that uses simple or moving 
averages to determine the average actual costs of units that have been 
produced to predict the cost of future units; 

* learning curves, which account for cost improvement and are the most 
common variant; and; 

* estimates at completion, which use actual cost and schedule data to 
develop estimates of costs at completion with EVM techniques; EACs can 
be calculated with various EVM forecast techniques to take into account 
factors such as current performance. 
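
To illustrate the estimate-at-completion variant, the following is a 
minimal sketch, written in Python, of one commonly used EVM forecasting 
formula, which assumes that remaining work will be performed at the 
current cost efficiency (the cost performance index, or CPI). All budget 
and performance values below are hypothetical. 

budget_at_completion = 10_000_000   # BAC: total budgeted cost
earned_value = 4_000_000            # BCWP: budgeted cost of work performed
actual_cost = 5_000_000             # ACWP: actual cost of work performed

cpi = earned_value / actual_cost    # cost performance index
eac = actual_cost + (budget_at_completion - earned_value) / cpi

print(f"CPI = {cpi:.2f}, EAC = ${eac:,.0f}")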

Extrapolation is best suited for estimating follow-on units of the same 
item when there are actual data from current or past production lots. 
This method is valid when the product design or manufacturing process 
has changed little. If major changes have occurred, careful adjustments 
will have to be made or another method will have to be used. When using 
extrapolation techniques, it is essential to have accurate data at the 
appropriate level of detail, and the cost estimator must ensure that 
the data have been validated and properly normalized. When such data 
exist, they form the best basis for cost estimates. Advantages 
associated with extrapolating from actual costs include their 

* reliance on historical costs to predict future costs, 

* great credibility and reliability for estimating costs, and 

* ability to be applied at whatever level of data—labor hours, material 
dollars, total costs. 

The disadvantages associated with extrapolating from actual costs are 
that: 

* changes in the accounting of actual costs can affect the results, 

* obtaining access to actual costs can be difficult, 

* results will be invalid if the production process or configuration is 
not stable, and; 
 
* it should not be used for items outside the actual cost data range. 

Other Estimating Methods: Learning Curves: 

Using the cost estimating methods discussed in this chapter can 
generate the cost of a single item. However, a cost estimator needs to 
determine whether that cost is for the first unit, the average unit, or 
every unit. And given the cost for one unit, how should a cost 
estimator determine the appropriate costs for other units? The answer 
is in the use of learning curves. Sometimes called progress or 
improvement curves, learning curve theory is based on the premise that 
people and organizations learn to do things better and more efficiently 
when they perform repetitive tasks. A continuous reduction in labor 
hours from repetitive performance in producing an item often results 
from more efficient use of resources, employee learning, new equipment 
and facilities, or improved flow of materials. This improvement can be 
modeled with a mathematical CER that assumes that as the quantity of 
units to be produced doubles, the amount of effort declines by a 
constant percentage. 

Workers gain efficiencies in a number of areas as items are repeatedly 
produced. The most commonly recognized area of improvement is worker 
learning. Improvement occurs because as a process is repeated, workers 
tend to become physically and mentally more adept at it. Supervisors, 
in addition to realizing these gains, become more efficient in using 
their people, as they learn their strengths and weaknesses. 
Improvements in the work environment also translate into worker and 
supervisory improvement: Studies show that changes in climate, 
lighting, and general working conditions motivate people to improve. 

Cost improvement also results from changes to the production process 
that optimize placement of tools and material and simplify tasks. In 
the same vein, organizational changes can lead to lower recurring 
costs, such as instituting a just-in-time inventory or centralizing 
tasks (heat and chemical treatment processes, tool bins, and the like). 
Another example of organizational change is a manufacturer’s agreeing 
to give a vendor preferred status if it is able to limit defective 
parts to some percentage. The reduction in defective parts can 
translate into savings in scrap rates, quality control hours, and 
recurring manufacturing labor, all of which can result in valuable time 
savings. In general, it appears that more complex manufacturing tasks 
tend to improve faster than simpler tasks. The more steps in a process, 
the more opportunity there is to learn how to do them better and 
faster. 

Another reason for contractor improvement is that in competitive 
business environments, market forces require suppliers to improve 
efficiency to survive. As a result, some suppliers may competitively 
price their initial product release at a loss, with the expectation 
that future cost improvements will make up the difference. This 
strategy can also discourage competitors from entering new markets. For 
the strategy to work, however, the assumed improvements must 
materialize or the supplier may cease to exist because of high losses. 

In observing production data (for example, manufacturing labor hours), 
early analysts noted that labor hours per unit decreased over time. 
This observation led to the formulation of the learning curve equation 
Y = AX^b, where Y is the cost (or hours) of unit X and A is the cost of 
the first unit, and the concept of a constant learning curve slope, 
reflected in the exponent b, that captures the change in Y given a 
change in X.[Footnote 41] The unit 
formulation states that “as the number of units doubles, the cost 
decreases by a 
constant percent.” In other words, every time the total quantity 
doubles, the cost decreases by some fixed percentage. Figure 13 
illustrates how a learning curve works. 

Figure 13: A Learning Curve: 

[Refer to PDF for image: line graph]

Cumulative average hours per unit (as a percent of first unit) plotted 
against Cumulative number of units. 

Two lines: 
90% curve ratio; 
80% curve ratio. 

Source: © 1994, R. Max Wideman, FCSCE, “A Pragmatic Approach to Using 
Resource Loading, Production and Learning Curves on Construction 
Projects.” 

[End of figure] 

Figure 13 shows how an item’s unit cost decreases as the quantity 
produced increases. For example, if the learning curve slope is 90 
percent and it 
takes 1,000 hours to produce the first unit, then it will take 900 
hours to produce the second unit. Every time the quantity doubles—for 
example, from 2 to 4, 4 to 8, 8 to 16—the resource requirements will 
reduce according to the learning curve slope. 
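
As an illustration only, the following minimal Python sketch applies 
the unit learning curve formulation, Y = AX^b, to the 90 percent slope 
example above, starting from 1,000 hours for the first unit; the 
exponent b is derived from the slope. 

import math

A = 1000.0                          # hours for the first unit
slope = 0.90                        # 90 percent learning curve
b = math.log(slope) / math.log(2)   # exponent implied by the slope

for x in (1, 2, 4, 8, 16):
    hours = A * x ** b              # hours for unit x
    print(f"unit {x:>2}: {hours:,.1f} hours")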

Determining the learning curve slope is an important effort and 
requires analyzing historical data. If several production lots of an 
item have been produced, the slope can be derived from the trend in the 
data. Another way to determine the slope would be to look at company 
history for similar efforts and calculate it from those efforts. Or the 
slope could be derived from an analogous program. The analyst could 
look at slopes for a particular industry—aircraft, electronics, 
shipbuilding—sometimes reported in organizational studies, research 
reports, or estimating handbooks. Slopes can be specific to functional 
areas such as manufacturing, tooling, and engineering, or they may be 
composite slopes calculated at the system level, such as aircraft, 
radar, tank, or missiles. 

The first unit cost might be arrived at by analogy, engineering build-
up, a cost estimating relationship, fitting the actual data, or another 
method. In some cases, the first unit cost is not available. Sometimes 
work measurement standards might provide the hours for the 5th unit, or 
a cost estimating relationship might predict the 100th unit cost. This 
is not a problem as long as the cost estimator understands the point on 
the learning curve that the unit cost is from and what learning curve 
slope applies. With this information, the cost estimator can easily 
solve for the 1st unit cost using the standard learning curve formula Y 
= AX^b. 
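
When only a later unit’s cost is known, the same formula can be 
rearranged to solve for the first unit cost. The following minimal 
sketch assumes, purely for illustration, that a CER predicts 350 hours 
for the 100th unit on an 85 percent slope; neither value comes from 
this guide. 

import math

known_unit = 100
known_hours = 350.0                 # hours predicted for the 100th unit (assumed)
slope = 0.85                        # assumed learning curve slope
b = math.log(slope) / math.log(2)

first_unit_hours = known_hours / known_unit ** b   # rearranged Y = A * X^b
print(f"Implied first unit: {first_unit_hours:,.0f} hours")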

Because learning can reduce the cost of an item over time, cost 
estimators should be aware that if multiple units are to be bought from 
one contractor as part of the program’s acquisition strategy, reduced 
costs can be anticipated. Thus, knowledge of the acquisition plan is 
paramount in deciding if learning curve theory can be applied. If so, 
careful consideration must be given to determining the appropriate 
learning curve slope for both labor hours and material costs. In 
addition, learning curves are based on recurring costs, so cost 
estimators need to separate recurring from nonrecurring costs if the 
results are not to be skewed. Finally, these circumstances should be 
satisfied before deciding to use learning curves:[Footnote 42]  
 
* much manual labor is required to produce the item; 

* the production of items is continuous and, if not, then adjustments 
are made; 

* the items to be produced require complex processes; 

* technological change is minimal between production lots; 

* the contractor’s business process is being continually improved; and; 

* the government program office culture (or environment) is 
sufficiently known. 

Particular care should be taken for early contracts, in which the cost 
estimator may not yet be familiar enough with program office habits to 
address the risk accurately (for example, high staff turnover, 
propensity for scope creep, or excessive schedule delays). 

Production Rate Effects On Learning: 
 
It is reasonable to expect that unit costs decrease not only as more 
units are produced but also as the production rate increases. This 
theory accounts for cost reductions that are achieved through economies 
of scale. Some examples are quantity discounts and reduced ordering, 
processing, shipping, receiving, and inspection costs. Conversely, if 
the production rate decreases, then unit costs can 
be expected to increase, because certain fixed costs have to be spread 
over fewer items. At times, an increase in production rate does not 
result in reduced costs, as when a manufacturer’s nominal capacity is 
exceeded. In such cases, unit costs increase because of such factors as 
overtime, capital purchases, hiring actions, and training costs. 

Another aspect of improvement is the continuity of the production line. 
Production breaks may occur because of program delays (budgetary or 
technical), time lapses between initial and follow-on orders, or labor 
disputes. They may occur as a result of design changes that may require 
a production line to shut down so it can be modified with new tools and 
equipment or a new configuration. Production lines can also shut down 
for unexpected recalls that require repairs for previously produced 
items. How much learning is lost depends on how long the production 
line is shut down. 

To determine the effect of a production break on the unit cost, two 
questions need answering: 

1. How much learning has been lost (or forgotten) because of the break 
in production? 

2. How will this loss of learning affect the costs of future production 
items? 

The cost estimator should always consider the effect of a production 
break on the cost estimate. (See case study 36.) 

Case Study 36: Production Rate, from Defense Acquisitions, GAO-05-183: 

Costs on the CVN 76 and CVN 77 Nimitz aircraft carriers grew because of 
additional labor hours required to construct the ships. At delivery, 
CVN 76 had required 8 million additional labor hours to construct; CVN 
77, 4 million. As the number of hours increased, total labor costs grew 
because the shipbuilder was paying for additional wages and overhead 
costs. Increases in labor hours stemmed in part from underestimating 
the labor hours. The shipbuilder had negotiated CVN 76 for 
approximately 39 million labor hours—only 2.7 million more labor hours 
than the previous ship—CVN 75. However, CVN 75 had been constructed 
more efficiently, because it was the fourth ship of two concurrent ship 
procurements. CVN 76 and CVN 77, in contrast, were procured as single 
ships. 

Single ship procurements have historically been less efficient than two-
ship procurements. The last time the Navy procured a carrier as a 
single-ship procurement, 7.9 million more hours were required—almost 3 
times the number estimated for CVN 76 (2.7 million more hours). In 
addition, a 4-month strike in 1999, during the construction of CVN 76, 
had led to employee shortages in key trades and learning losses, 
because many employees were not returning to the shipyard. According to 
Navy officials, the shipbuilder was given $51 million to offset the 
strike’s effect. 
 
Source: GAO, Defense Acquisitions: Improved Management Practices Could 
Minimize Cost Growth in Navy Shipbuilding Programs, GAO-05-183, 
Washington, D.C.: Feb. 28, 2005. 

[End of case study] 

Pulling The Point Estimate Together: 

After each WBS element has been estimated with one of the methods 
discussed in this chapter, the elements should be added together to 
arrive at the total point estimate. The cost estimator should validate 
the estimate by looking for errors like double-counting and omitted 
costs. The cost estimator should also perform, as a best practice, 
cross-checks on various cost drivers to see if similar results can be 
produced. This helps validate the estimate. The cost estimator should 
also compare the estimate to an independent cost estimate. The estimate 
and the independent cost estimate should also be reconciled at this 
time. (Chapter 15 discusses validating the estimate.) 

DOD’s major defense acquisition programs are required to develop 
independent cost estimates for major program milestones; other agencies 
may not require this practice. An independent cost estimate gives an 
objective measure of whether the point estimate is reasonable. 
Differences between them should be examined and discussed to achieve 
understanding of overall program risk and to adjust risk around the 
point estimate. 

Finally, as the program matures through its life cycle, as more data 
become available, or as changes occur, the cost estimator should update 
the point estimate. The updated point estimate should be compared 
against previous estimates, and lessons learned should be documented. 
(More detail is in chapter 20.) 

8. Best Practices Checklist: Developing a Point Estimate: 

* The cost estimator considered various cost estimating methods: 
- Analogy, early in the life cycle, when little was known about the 
system being developed: 
-- Adjustments were based on program information, physical and 
performance characteristics, contract type. 
- Expert opinion, very early in the life cycle, if an estimate could be 
derived no other way. 
- The build-up method later, in acquisition, when the scope of work was 
well defined and a complete WBS could be determined. 
- Parametrics, if a database of sufficient size, quality, and 
homogeneity was available for developing valid CERs and the data were 
normalized correctly. 
-- Parametric models were calibrated and validated using historical 
data. 
- Extrapolating from actual cost data, at the start of production. 

* Cost estimating relationships were considered: 
- Statistical techniques were used to develop CERs: 
-- Higher R-squared; 
-- Statistical significance, for determining the validity of statistical 
relationships; 
-- Significance levels of F and t statistics. 
- Before using a CER, the cost estimator 
-- Examined the underlying data set to understand anomalies; 
-- Checked equations to ensure logical relationships; 
-- Normalized the data; 
-- Ensured that CER inputs were within the valid dataset range; 
-- Checked modeling assumptions to ensure they applied to the 
program. 
- Learning curve theory was applied if: 
-- Much manual labor was required for production; 
-- Production was continuous or adjustments had to be made; 
-- Items to be produced required complex processes; 
-- Technological change was minimal between production lots; 
-- The contractor’s business process was being continually improved. 

* Production rate and breaks in production were considered. 

* The point estimate was developed by aggregating the WBS element cost 
estimates by one of the cost estimating methods. 
- Results were checked for accuracy, double-counting, and omissions and 
were validated with cross-checks and independent cost estimates. 

[End of Chapter 11] 

Chapter 12: Estimating Software Costs: 

Software is a key component in almost all major systems the federal 
government acquires. Estimating software development, however, can be 
difficult and complex. To illustrate, consider some statistics: a 
Standish Group International 2000 report showed that 31 percent of 
software programs were canceled, more than 50 percent overran original 
cost estimates by almost 90 percent, and schedule delays averaged 
almost 240 percent.[Footnote 43] Moreover, the Standish Group reported 
that the number of software development projects that are completed 
successfully on time and on budget, with all features and functions as 
originally specified, rose only from 16 percent in 1994 to 28 percent 
in 2000.[Footnote 44] 

Most often, creating an estimate based on an unachievable schedule 
causes software cost estimates to be far off target. Playing into this 
problem is an overwhelming optimism about how quickly software can 
be developed. This optimism stems from a lack of understanding of how 
staffing, schedule, software complexity, and technology all 
interrelate. Furthermore, optimism about how much savings new 
technology can offer and how much reuse can be leveraged from 
existing programs also causes software costs to be underestimated. 
Case study 37 gives an example. 

Case Study 37: Underestimating Software, from Space Acquisitions, 
GAO-07-96: 

The original estimate for the Space Based Infrared System for 
nonrecurring engineering, based on actual experience in legacy sensor 
development and assumed software reuse, was significantly 
underestimated. Nonrecurring costs should have been two to three times 
higher, according to historical data and independent cost estimators. 
Program officials also planned on savings from simply rehosting 
existing legacy software, but those savings were not realized because 
all the software was eventually rewritten. It took 2 years longer than 
planned to complete the first increment of software. 

Source: GAO, Space Acquisitions: DOD Needs to Take More Action to 
Address Unrealistic Initial Cost Estimates of Space Systems, GAO-07-96 
(Washington, D.C.: Nov. 17, 2006). 

[End of case study] 

Our work has also shown that the ability of government program offices 
to estimate software costs and develop critical software is often 
immature. Therefore, we highlight software estimation as a special case 
of cost estimation because of its significance and complexity in 
acquiring major systems. This chapter supplements the steps in cost 
estimating with what is unique in the software development environment, 
so that auditors can better understand the factors that can lead to 
software cost overruns and failure to deliver required functionality on 
time. Auditors should remember that all the steps of cost estimating 
have to be performed for software just as they have to be performed for 
hardware.

The 12 steps of cost estimating described in chapter 1 and summarized 
in table 15 also apply to software. That is, the purpose of the 
estimate and the estimating plan should be defined in steps 1 and 2, 
software requirements should be defined in step 3, the effort to 
develop the software should be defined in step 4, GR&As should be 
established in step 5, relevant technical and cost data should be 
collected in step 6, and a method for estimating the cost for software 
development and maintenance should be part of the point estimate in 
step 7. Moreover, sensitivity analysis in step 8, risk and uncertainty 
analysis in step 9, documenting the estimate in step 10, presenting 
results to management in step 11, and updating estimates with actual 
costs in step 12 are all relevant for software cost estimates. 

Table 15: The Twelve Steps of High-Quality Cost Estimating Summarized: 
 
Step: 1; 
Summary: Define the estimate’s purpose. 
Step: 2; 
Summary: Develop the estimating plan. 

Step: 3; 
Summary: Define the program characteristics, the technical baseline. 

Step: 4; 
Summary: Determine the estimating structure, the WBS. 

Step: 5; 
Summary: Identify ground rules and assumptions. 

Step: 6; 
Summary: Obtain the data. 

Step: 7; 
Summary: Develop the point estimate and compare it to an independent 
cost estimate. 

Step: 8; 
Summary: Conduct sensitivity analysis. 

Step: 9; 
Summary: Conduct a risk and uncertainty analysis. 

Step: 10; 
Summary: Document the estimate. 

Step: 11; 
Summary: Present the estimate to management for approval. 

Step: 12; 
Summary: Update the estimate to reflect actual costs and changes. 

Source: GAO. 

[End of table] 

In this chapter, we discuss some of the best practices for developing 
reliable and credible software cost estimates and fully understanding 
typical cost drivers and risk elements associated with software 
development. 

Unique Components Of Software Estimation: 
 
Since software is not tangible like hardware, it can be more ambiguous 
and difficult to comprehend. In addition, software is built only once, 
whereas hardware is often mass produced, once design and testing 
are complete. Unlike hardware, for which the industry changes more 
slowly, software changes constantly, making it difficult to collect 
good data for cost estimating. Despite these differences, software 
estimating is otherwise similar to hardware estimating in that it 
follows the same basic development process.[Footnote 45] For 
instance, both use the same types of estimating methods—analogy, 
engineering build-up, parametric. 

Size and complexity are cost drivers for both. Finally, how quickly 
hardware and software can be produced depends on the developer’s 
capability, available resources, and familiarity with the environment.

Software is mainly labor intensive, and all the tasks associated with 
developing it are nonrecurring—there is no production phase. That is, 
once the software is developed, it is simple to produce a copy of it. 
How much effort is required to develop software depends on its size and 
complexity. Thus, estimating software costs has two basic elements—the 
software to be developed and the development effort to accomplish it. 

Estimating Software Size: 

Cost estimators begin a software estimate by predicting the sizes of 
the deliverables that must be constructed. Software sizing is the 
process of determining how big the application being developed will be. 
The size depends on many factors. For example, software programs that 
are more complex, perform many functions, have safety-of-life 
requirements, and require high reliability are typically bigger than 
simpler programs. 

Estimating software size is not easy and depends on having a detailed 
knowledge about a program’s functions in terms of scope, complexity, 
and interactions. Not only is it hard to generate a size estimate for 
an application that has not yet been developed, but the software 
process also often experiences requirements growth and scope creep that 
can significantly affect size and the resulting cost and schedule 
estimates. 

Programs that do not track and control these trends typically overrun 
their costs and experience schedule delays. Methods for measuring size 
data include COSMIC (Common Software Measurement International 
Consortium) Functional Sizing Method, function point analysis, object 
point analysis, source lines of code, and use case (described in table 
16). 

Table 16: Sizing Metrics and Commonly Associated Issues: 
 
Metric: COSMIC functional sizing: Measures the size of software based 
on functional user 
requirements; sizes software independently of the technology to be used 
to implement it, focusing on practices and procedures the software must 
follow to meet user needs. COSMIC points are based on four different 
data movements: entry, exit, read, and write. Each one constitutes a 
COSMIC function point. 
The method can be used to determine the software size of various 
applications including business, real-time (telecommunications, process 
control), embedded software (cellular phones, electronics), and 
infrastructure software (operating system software). 
Advantages: Sizing is easily understood and simplified because all data 
movements have the same value; sizing does not depend on data 
attributes; it applies to real-time and embedded systems and allows for 
end-user and developer viewpoints; standards exist for counting.
Disadvantages: Recently developed, so benchmarking data are limited; 
not accurate for counting highly algorithmic software; detailed 
information about data movements takes time to collect; automated 
counting does not exist. 

Metric: Function point analysis; Considers how many functions a program 
performs rather than how many instructions it contains; functions typically 
include user inputs (add, change, delete), outputs (reports), data 
files to be updated by the application, interfaces with other 
applications, and inquiries (searches or retrievals). Each function is 
weighted for complexity and total count is adjusted for the effect of 
14 characteristics such as data communications, transaction rate, 
installation ease, and whether there are multiple sites. Accurate 
counting requires in-depth knowledge of standards, experience, and, 
preferably, function point certification. Function point analysis is
linked directly to system requirements and functionality, so size 
analysis is measured in terms users can understand. The size estimates 
(and resulting cost and schedule estimates) can be based on quantifiable
analysis through the project life cycle as requirements change. 
Function points are particularly useful in many development 
environments that might use unified modeling language, commercial off-
the-shelf components, or object-oriented approaches to software 
development and implementation. 
Advantages: Many types of data sources can be used throughout 
development: user or estimator interviews, requirements and design 
documents, data dictionaries and models, end user guides, screen 
captures; not dependent on language or technology; count is unaffected 
by language or tools used to develop the software; counts are available
early in development from requirements and design specifications; 
nontechnical users can understand what function points are measuring;
function points can be used to determine requirements (or scope creep); 
counts are fully documented and auditable; standards are established
and reviewed often by the International Function Point Users Group; 
counting can be quick and efficient. 
Disadvantages: Counting involves subjectivity; difficult to derive 
requirements from top-level specifications; does not capture technical 
and design constraints; untrained or inexperienced people can develop
inconsistent function point counts; definitions can be confusing; 
automated function point analysis counting does not exist; database is 
not as big as for source line of code counts; counts tend to 
underestimate algorithmic intensive systems. 

Metric: Object point analysis: Uses integrated computer-aided software 
engineering tools (CASE) to count number of screens, reports, and third-
generation modules for basic sizing; CASE tools take over the job of 
manually writing software code by using graphical user interface 
generators, libraries of reusable components, and other design tools. 
Object points focus on actors involved in the solution and any actions 
they must take. One benefit of using objects (i.e., actors) is that 
similar behaviors can be grouped into classes, allowing for behaviors 
from upper classes (parent) to be inherited by lower classes 
(children). Inheritance results in reduced coding effort; each count is 
weighted for complexity, summed to a total count, and adjusted for 
reuse. 
Advantages: Relies on a graphical user interface; automates manual 
activities; objective measures; easier calculations; accounts for reuse 
through inheritance. 
Disadvantages: Counts occur at the end of design; no standards for 
counting; and not widely used and therefore validated productivity 
metrics are not available. 

Metric: Reports, interfaces, conversions, extensions, and 
forms/workflows (RICEF/W); Commonly used to size the effort associated 
with implementing Enterprise Resource Planning (ERP) systems; 
identifies changes that need to be made to configure the ERP system so 
that it satisfies user needs and fits within the target operating 
environment. Can be used to add functionality through custom 
development. RICEF/W needs to be adjusted for complexity. 
Advantages: Represents ERP modifications and enhancements that do not 
require custom development; 
Disadvantages: Specific to ERP systems; no standards for counting; does 
not capture costs for integrating bolt-on functionality. 

Metric: Source lines of code (SLOC): Considers the volume of code 
required to develop the software; includes executable instructions and 
data declarations and normally excludes comments and blanks. Estimation 
is by analogy, engineering expertise, or automated code counters. SLOC 
sizing is particularly appropriate for projects preceded by similar ones
(e.g., same language, developers, type of application); helps ensure 
that experience is aligned to future development. When developing lines 
of code counts, it is critical to define what is and is not included.
When developing databases or relying on software cost models, 
consistency in defining what the lines of code include is key. 
Advantages: Widely used for many years; can be used to estimate 
real-time systems; easily counted, manually or by automated code 
counter; 
objective; large databases of historical program sizes are available;
can obtain precise counts of existing software using the USC Code 
Counter. 
Disadvantages: No standard definition of what should be counted as lines
of code (e.g., physical line vs. logical statement); different lines of 
code count for the same function, depending on language and programmer’s
style; hard to capture lines of code for commercial off-the-shelf 
systems; hard to translate lines of code counts between other 
programming languages such as object oriented code; variations in 
definition make it hard to compare studies using SLOC; hard to estimate 
program SLOC early; emphasizes coding effort, which is small compared to
overall software development effort. 

Metric: Use cases and use case points: Defines interactions between 
external users and the system to achieve a goal (e.g., capture 
fingerprint or facial biometric to enroll applicants). A use case model 
describes a system’s functional requirements, consists of all users and 
use cases (tasks performed by the end user of a system that has a 
useful outcome), and identifies reuse by use case inclusions and 
extensions. Sizing count is arrived at by categorizing use cases as 
small, medium, or large and applying an average “use case points per 
category.” Adding a complexity factor to the sizing count based on 
number and types of users and transactions improves the count accuracy. 
Advantages: Applies to interactive end-user applications and devices 
users interact with; intuitive to stakeholders and development team;
identifies opportunities for software reuse; traceable to development 
team’s plans and output; increasingly applied to real-time systems;
can be mapped to test cases and business scenarios, which helps in 
staggered deployment. 
Disadvantages: Often yields an inaccurate final estimate if the system
engineering process is immature and historical data are lacking; no 
standards for counting; developer must be using object oriented design 
techniques so required documentation is available; estimate cannot be 
done until design document with the defined use case is available; 
requires a design team with a great deal of experience with object 
oriented design. 

Source: DOD, NASA, SCEA, and industry. 

[End of table] 

While software sizing can be approached in many ways, none of the 
approaches is exact, because the “size” of software is an abstract 
concept. Moreover, with 
the exception of COSMIC and function points, none of the methods table 
16 describes has a controlling body for internationally standardizing 
the counting rules. In the absence of a universal counting convention, 
different places may take one of the source definitions for the basic 
approach and then “standardize” the rules internally. This can result 
in different counts. Therefore, it is critical that the sizing method 
used is consistent. The test of a good sizing method is that two 
separate individuals can apply the same rules to the same problem and 
yield almost the same result. Before choosing a sizing approach, one 
must consider the following questions of maturity and applicability: 
 
* Are the rules for the sizing technique rigorously defined in a widely 
accepted format? 

* Are they under the control of a recognized, independent controlling 
body? 

* Are they updated from time to time by the recognized, independent 
controlling body? 

* Does the controlling body certify the competency (and, hence, 
consistency) of counters who use their rules? 

* Are statistical data available to support claims for the consistency 
of counting by certified counters? 

* How long have the rules been stable? 

Auditors should know a few things about software sizing. The first is 
that reused and autogenerated software source lines of code should be 
differentiated from the total count. Reused software (code used 
verbatim with no modifications), adapted software (code that needs to 
be redesigned, may need to be converted, and may need some code added), 
and autogenerated software provide the developer with code that can be 
used in a new program, but none of these comes for free, and additional 
effort is usually associated with incorporating them into a new 
program. For instance, the effort associated with reused code depends 
on whether significant integration, reverse engineering, and additional 
design, validation, and testing are required. But if the effort to 
incorporate reused software is too great, it may be cheaper to write 
the code from scratch. As a result, the size of the software should 
reflect the amount of effort expected with incorporating code from 
another source. This can be accomplished by calculating the equivalent 
source lines of code, which adjusts the software size count to reflect 
the fact that some effort is required. 
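
As an illustration of an equivalent source lines of code calculation, 
the following minimal Python sketch uses the 40/30/30 design, code, and 
integration-test weighting that is one commonly cited convention in 
COCOMO-style models; the code counts and modification percentages are 
hypothetical. 

new_sloc = 20_000                   # code to be written from scratch
adapted_sloc = 50_000               # code taken from another program
pct_design_modified = 0.20          # share of the adapted design reworked
pct_code_modified = 0.30            # share of the adapted code rewritten
pct_retested = 0.60                 # share requiring integration and retest

adaptation_factor = (0.4 * pct_design_modified +
                     0.3 * pct_code_modified +
                     0.3 * pct_retested)
equivalent_sloc = new_sloc + adapted_sloc * adaptation_factor
print(f"Equivalent SLOC: {equivalent_sloc:,.0f}")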

Software porting is a special case of software reuse that is getting 
increasing visibility in cost estimation with respect to specific 
technologies, such as communications systems (waveforms). Porting 
represents hidden pitfalls, depending on the amount of capability to be 
transferred from special purpose processors (such as field-programmable 
gate arrays). Also, the quality of software commenting and 
documentation and the modularity of the initial code’s design and 
implementation greatly affect the porting of standard code in general 
purpose processors. Therefore, assumptions regarding savings (for 
example, that less effort is required and no testing is necessary) 
from reused, adapted, and autogenerated software code should be looked 
at skeptically because of the additional work to research the code and 
provide necessary quality checks. At a minimum, regression testing of 
this type of code will be required before the software is integrated 
with the hardware. 

Second, while function points generate counts for real-time software, 
like missile systems, they are not optimal in capturing the complexity 
associated with high levels of algorithmic software. Therefore, for 
programs that require high levels of complex processing like operating 
systems, telephone switching systems, navigation systems, and process 
control systems, estimators should base the count on COSMIC points or 
SLOC rather than function points to adequately capture the additional 
effort associated with developing algorithmic software.

Finally, choosing a sizing metric depends on the software application 
(purpose of the software and level of reliability needed) and the 
information that is available. Since no one way is best, cost 
estimators should work with software engineers to determine which 
metric is most appropriate. Since SLOCs have been used widely for years 
as a software sizing metric, many organizations have databases of 
historical SLOC counts for various completed programs. Thus, source 
lines of code tend to be the most predominant method for sizing 
software. If the decision is made to use historical source lines of 
code for estimating software size, however, the cost estimator needs to 
make sure that the program being estimated is similar in size, 
language, and application to the historical data. For programs for 
which no analogous data are available but detailed requirements and 
specifications have been developed, function point counting is 
appropriate, as long as the software does not contain many algorithms; 
if it does, then COSMIC points or SLOC should be used. And, if computer-
assisted software engineering tools are being used to develop the 
software, then object point analysis is appropriate. No matter which 
metric is chosen, however, the actual results can vary widely from the 
estimate, so that any point estimate should be accompanied by an 
estimated range of probability. (We discuss software and other cost 
estimating risk and uncertainty analyses in chapter 14.) 

When completing a software size estimate, it is preferable to use two 
different methodologies, if available, rather than relying on a single 
approach. Comparing size estimates developed with several different 
approaches and merging them toward a consensus is a best practice. In 
addition, it is extremely important to include the expected growth in 
software size from requirements growth or underestimation (that is, 
optimism). Adjusting the software size to reflect expected growth, 
whether from requirements being refined, changed, or added, from 
initial size estimates being too optimistic, or from less reuse than 
expected, is a best practice. 
This growth adjustment should be made before performing an uncertainty 
analysis (discussed in chapter 14). Understanding that software will 
usually grow, and accounting for it by using historical data, will 
result in more accurate software sizing estimates. Moreover, no matter 
what sizing convention is used, it is a best practice to continually 
update the size estimate as data become available so that growth can be 
monitored and accounted for. 
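
As a minimal illustration of the growth adjustment described above, the 
sketch below applies a historically derived growth factor to an initial 
size estimate before uncertainty analysis; the 30 percent factor and the 
size are hypothetical. 

estimated_sloc = 37_500             # initial (equivalent) size estimate
historical_growth = 0.30            # size growth seen on similar past programs

adjusted_sloc = estimated_sloc * (1 + historical_growth)
print(f"Growth-adjusted size: {adjusted_sloc:,.0f} SLOC")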

Estimating Software Development Effort: 

Once the initial software sizing is complete, it can be converted into 
software development effort—that is, an estimate of the human resources 
needed for the software’s development. It is important to note whether 
the effort accounts only for the WBS elements associated with the 
actual development of the software or also includes all the other 
nondevelopment activities. 

Table 53 in appendix IX, for example, shows a typical WBS for ground 
software development. The table shows that many other activities 
outside the actual coding of software are part of a typical software 
acquisition. These activities should also be estimated as part of the 
development effort. In particular, software management and control, 
software systems engineering, test-bed development, system integration 
and testing, quality assurance, and training are all activities that 
should be performed in a customized software solution acquisition. 

The level of effort required for each activity depends on the type of 
system being developed. For example, military and systems software 
programs require more effort than Web programs of the same size. Since 
variations in activities can affect overall costs, schedules, and 
productivity rates by significant amounts, it is critical to 
appropriately match activities to the type of software project being 
estimated. For example, safety critical software applications composed 
of complex mathematical algorithms require higher levels of effort 
because stringent quality and certification testing must be satisfied. 
Moreover, operating systems that must reflect real time updates and 
great reliability will need more careful design, development, and 
testing than software systems that rely on simple calculations. 

To convert software size into software development effort, the size is 
usually divided by a productivity factor like number of source lines of 
code, or function points, developed per labor work month. The 
productivity factor depends on several aspects, like the language used; 
whether the code is new, reused, or autogenerated; the developer’s 
capability; and the development tools used. It is best to use 
historical data from a similar program to develop the productivity 
factor, so that it best represents the development environment. If 
historical productivity factors are not available, an estimator can use 
a factor based on industry averages, but this will add more uncertainty 
to the estimate. It is important to note, however, that a productivity 
factor—based on the coding phase only—cannot be used to estimate the 
entire software development effort. When a productivity factor is used, 
all parameters associated with its computation need to be considered. 
Once the productivity factor has been selected, the corresponding labor 
hours can be generated. 
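
For illustration, the following minimal Python sketch converts a 
software size estimate into development effort using a productivity 
factor; the size, the productivity factor, and the 152 effective labor 
hours per staff month are all assumed values that would normally come 
from a similar historical program. 

adjusted_sloc = 48_750              # growth-adjusted size estimate (assumed)
sloc_per_staff_month = 150          # productivity factor for this environment

staff_months = adjusted_sloc / sloc_per_staff_month
labor_hours = staff_months * 152    # assumed effective labor hours per month
print(f"Effort: {staff_months:,.0f} staff months "
      f"({labor_hours:,.0f} labor hours)")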

Some considerations in converting labor hours to cost are, first, that 
a cost estimator needs to determine how many productive hours are being 
assumed in a typical developer’s work day. This is important because 
assuming 8 hours of productive coding is unrealistic: staff meetings 
and training classes cut into valuable programming time, so that the 
number of effective work hours per day is typically 6 hours rather than 
8. Further, the number of work days per year is not the same from 
company to company because of differences in vacation and sick leave 
offered and the country the developers live in. In the United States, 
fewer vacation days tend to be provided than in countries in Europe, 
but in other countries like Japan less time is provided. All these 
issues need to be considered and calibrated to the program being 
estimated. In fact, multiple studies on the impact of overtime have 
shown that, except for a short increase in effort over the first 1 or 2 
months, overtime does not have a significant impact over the life of the 
program. 
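
The following minimal sketch, with entirely hypothetical rates and 
staffing, illustrates how effective work hours per day and work days 
per year feed the conversion of labor hours into cost and calendar 
time. 

labor_hours = 50_000                # total development labor hours (assumed)
productive_hours_per_day = 6        # of an 8-hour day, per the discussion above
work_days_per_year = 230            # assumed; varies by company and country
team_size = 20                      # assumed average staffing level
labor_rate_per_hour = 120.0         # assumed fully burdened labor rate

labor_cost = labor_hours * labor_rate_per_hour
calendar_years = labor_hours / (team_size * productive_hours_per_day *
                                work_days_per_year)
print(f"Labor cost: ${labor_cost:,.0f}; "
      f"about {calendar_years:.1f} calendar years")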

The sizing value usually represents only the actual software 
development effort, so the cost estimator needs to use other methods to 
estimate all the other activities related to developing the software. 
Sometimes factors (such as percentage of development effort) are 
available for estimating these additional costs. Software cost 
estimating models often provide estimates for these activities. If a 
model is not used or not available, then the cost estimator must 
account for the cost of the other labor as well as nonlabor costs, such 
as hardware and licenses. Accurately estimating all these tasks is 
challenging, because they are affected by a number of risks. (Some are 
identified in table 17; appendix XV contains a more comprehensive 
list of risks.) 

Table 17: Common Software Risks That Affect Cost and Schedule: 

Risk: Sizing and technology; 
Typical cost and schedule element: 
* Overly optimistic software engineers tending to underestimate the 
amount of code needed; 
* Poor assumptions on the use of reused code (requiring no 
modification) or adapted code (requiring some redesign, recoding, and 
retesting); 

* Vague or incomplete requirements, leading to uncertain size counts; 

* Not planning for additional effort associated with commercial off-the-
shelf software (e.g., systems engineering, performance testing, 
developing glue code). 

Risk: Complexity; 
Typical cost and schedule element: 
* Programming language: the amount of design, coding, and testing 
(e.g., object-oriented languages require more up-front design but 
result in less coding and testing); 
* Applications: software purpose and reliability (e.g., criticality of 
failure, loss of life); 
* Hardware limitations with respect to the need for more efficient 
code; 
* Number of modules affecting integration effort; 
* Amount of new code to be developed; 
* Higher quality requiring more development and testing but resulting 
in less and easier-to-perform maintenance; 
* Safety critical software requires more design, coding, and testing. 
 
Risk: Capability; 
Typical cost and schedule element: 
* Developers with better skill can deliver more effective software with 
fewer defects, allowing for faster software delivery; 
* Optimistic assumption that a new development tool will increase 
productivity; 
* Optimistic assumption about developer’s productivity, leading to cost 
growth, even if sizing is accurate; 
* Geographically dispersed development locations, making communication 
and coordination more difficult. 
 
Risk: Management and executive oversight; 
Typical cost and schedule element: 
* Management’s dictating an unrealistic schedule; 
* A decision to concurrently develop hardware and software, increasing 
risk; 
* Incorporating a new method, language, tool, or process for the first 
time; 
* Incomplete or inaccurate definition of system requirements; 
* Not handling creeping requirements proactively; 
* Inadequate quality control, causing delays in fixing unexpected 
defects; 
* Unanticipated risks associated with commercial off-the-shelf software 
upgrades and lack of support. 

Source: SCEA and industry. 

[End of table] 

Scheduling Software Development: 

The schedule for accomplishing the work should also be estimated. Too 
often, software development programs run late because of requirements 
creep or poor quality control. Other times, the schedule is driven by 
an arbitrary date dictated by management or the customer. Schedule 
optimism may stem from management’s belief that adding more people to 
the development team will get the product developed faster. 
Unfortunately, the opposite usually happens: the larger the 
development team, the less able its members are to communicate with 
one another and work effectively. In addition, the more complex the 
software development effort, the harder it will be to find the right 
staff for the job. Scheduling is complicated and is affected by many 
factors. 
A cost estimator should understand the intricate interdependencies that 
affect the schedule: 
 
* staff availability; 

* an activity’s dependence on prior tasks; 

* the concurrence of scheduled activities; 
 
* the activities that make up the critical path; 

* the number of shifts working and effective work hours per shift; 

* available budget; 

* whether overtime can be authorized; 

* downtime from meetings, travel, sickness; 

* geographic location of workers, including time zones. 

Very large software development efforts frequently experience cost 
and schedule growth because of the complexities inherent in managing 
configuration, communications, and design assumptions, which 
typically hinder software development productivity. In addition, a 
lengthened software schedule has a ripple effect on collateral 
support efforts such as program management and systems engineering. 
Hardware programs experience similar problems. 

Management pressure on software developers to keep to an unrealistic 
schedule presents other problems. For example, to meet schedule 
constraints, the developer may minimize the time for requirements 
analysis, which can affect the quality of the software developed. In 
addition, developers may skip documentation, which could result in 
higher software maintenance costs. Moreover, developers may decide to 
build more components in parallel, defer functionality, postpone 
rework, or minimize functional testing, all to reduce schedule time. 
While these actions may save some time up front, they result in 
additional time, effort, and risk for the program. 

Rework should be included in every software development schedule, 
because it is unwise to assume that software can be delivered without 
any defects. If rework is not accounted for in the schedule, it will 
have to be absorbed when it occurs, disrupting the sequencing of the 
remaining tasks. A schedule that includes no effort for rework is 
therefore unexecutable, and the maturity of a developing organization 
that assumes all requirements will pass testing the very first time 
is questionable. Rework effort should include the time and resources 
associated with diagnosing the problem, designing and coding the fix, 
and retesting until the problem is resolved. To adequately account 
for rework, the schedule should anticipate a certain number of 
defects, based on historical experience, and time and effort should 
be allocated for fixing them. We discuss scheduling more thoroughly 
in chapter 18, including how to account for these risks so that the 
schedule is realistic. 
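
A minimal Python sketch of how such a rework allowance might be 
computed follows; the defect density, the share of defects expected 
to surface in testing, and the hours per defect are hypothetical 
values that should come from the developing organization’s historical 
experience. 

# Illustrative rework allowance based on historical defect data.
# All rates below are assumptions for demonstration only.

def rework_effort_hours(new_sloc,
                        defects_per_ksloc=5.0,       # assumed defect density
                        fraction_found_in_test=0.8,  # share surfacing in testing
                        hours_per_defect=12.0):      # diagnose, fix, and retest
    expected_defects = (new_sloc / 1000.0) * defects_per_ksloc * fraction_found_in_test
    return expected_defects * hours_per_defect

print(f"Rework allowance: {rework_effort_hours(new_sloc=200_000):,.0f} labor hours")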

Software Maintenance: 

Once the software has been developed, tested, and installed in its 
intended location, it must be maintained, just like hardware. Often 
called the operational phase for software, its costs must be accounted 
for in the LCCE. During this phase, software is maintained by fixing 
any defects not discovered in testing (known as corrective 
maintenance), modifying the software to work with any changes to its 
physical environment (adaptive maintenance), and adding new 
functionality (perfective maintenance). When adding capability, the 
effort is similar to a minidevelopment effort and the cost drivers are 
the same as in development. Software maintenance may also be driven by 
technology upgrades (adaptive maintenance) and users requesting 
enhancements (perfective maintenance). In addition to providing help 
desk support to users of the software, perfective maintenance often 
makes up the bulk of the software maintenance effort. 

The level of maintenance required depends on several factors. The 
software’s complexity determines how much maintenance is needed. In 
addition, if requirements from development were deferred until the 
software was in maintenance mode, or if the requirements were too 
vague and not well understood, then additional perfective maintenance 
will be necessary. The quality of the developed software also affects 
maintenance: if the software was rigorously tested, less corrective 
maintenance will be needed. In addition, software that is well 
documented will be easier to debug and will give maintainers a better 
understanding of how the software was designed, making modifications 
more streamlined. 

In addition to the cost of maintaining the software code, costs 
associated with help desk support need to be included in the 
software’s operation and support phase. The effort spent responding 
to trouble calls and generating defect tickets for software 
maintenance should be included as part of the software cost estimate. 
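
The Python sketch below illustrates a first-cut annual maintenance 
and help desk estimate; the maintenance factor, the split among 
corrective, adaptive, and perfective work, the support ratio, and the 
labor cost are illustrative assumptions, not values prescribed by 
this guide. 

# First-cut annual software maintenance estimate. Every factor below
# is an illustrative assumption to be replaced with historical data.

def annual_maintenance_cost(development_cost,
                            maintenance_factor=0.15,      # assumed share of development cost per year
                            split=(0.2, 0.2, 0.6),        # corrective, adaptive, perfective
                            users=5000,
                            users_per_help_desk_fte=750,  # assumed support ratio
                            help_desk_fte_cost=90_000.0): # hypothetical annual cost per FTE
    corrective, adaptive, perfective = (development_cost * maintenance_factor * share
                                        for share in split)
    help_desk = (users / users_per_help_desk_fte) * help_desk_fte_cost
    return {"corrective": corrective, "adaptive": adaptive,
            "perfective": perfective, "help desk": help_desk}

for element, cost in annual_maintenance_cost(development_cost=20_000_000).items():
    print(f"{element:>10}: ${cost:,.0f}")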

Parametric Software Estimation: 

Software development cost estimating tools, or parametric tools, can 
be used to estimate the cost to develop and maintain software. 
Parametric tools, built on historical data collected from hundreds of 
actual projects, can generate cost, schedule, effort, and risk 
estimates based on inputs provided by the tool user. Among other 
things, these inputs generally include the size of the software, 
personnel capabilities and experience, the development environment, 
the amount of code reuse, the programming language, and labor rates. 
Once the data have been input, the tool relies on cost estimating 
relationships and analogies to past projects to calculate the 
software cost and schedule estimates. When these data are not 
available to the cost estimator, most tools have default values that 
can be used instead. 

Parametric tools should be used throughout the development life cycle 
of the software. They are especially beneficial in the early stages of 
the software life cycle, when requirement specifications and design are 
still vague. For example, these tools provide flexibility by accepting 
multiple sizing metrics, so that estimators can apply different sizing 
methods and examine the results. Additionally, parametric-based 
estimates can be used to understand tradeoffs by analyzing the relative 
effects of different development scenarios, determine risk areas that 
can be managed, and provide the information necessary for monitoring 
and control of the program. 

The tools allow estimators to manipulate various inputs to gauge the 
overall sensitivity to parameter assumptions and then assess the 
overall risk, based on the certainty of those inputs. Developers who 
use tools in development can discover potential problems early enough 
to mitigate their impact. 

As the project matures and actual data become available, the 
precision of the cost estimates produced by a parametric tool is 
likely to improve. For this to happen, the tool must be calibrated 
with actual data from completed programs so it can be adjusted to 
reflect the actual development environment. Since most models are 
built on industry averages, simply using the tool’s default values 
may lead to skewed results. Calibration avoids this by using known 
inputs and outcomes to adjust the relationships in the model and is 
therefore necessary for ensuring more accurate estimates. 
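
To illustrate the general idea, the Python sketch below applies a 
simple power-law cost estimating relationship and calibrates its 
productivity coefficient to hypothetical completed programs; the 
functional form, exponent, and data are illustrative and do not 
represent the proprietary relationships inside any commercial tool. 

# Illustrative parametric relationship: effort = a * (KSLOC ** b).
# The form, the exponent, and the historical data are hypothetical.

def effort_person_months(ksloc, a, b=1.1):
    return a * (ksloc ** b)

def calibrate_a(history, b=1.1):
    """Fit the productivity coefficient to completed programs, given
    as (KSLOC, actual person-months) pairs, holding the exponent fixed."""
    return sum(actual / (size ** b) for size, actual in history) / len(history)

history = [(50, 260), (120, 700), (200, 1300)]  # hypothetical completed programs
a_calibrated = calibrate_a(history)
print(f"Calibrated coefficient: {a_calibrated:.2f}")
print(f"Estimated effort for 150 KSLOC: "
      f"{effort_person_months(150, a_calibrated):,.0f} person-months")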

When a parametric tool is used, it is essential to ensure that the 
estimators are trained and experienced in applying it and interpreting 
the results. Simply using a tool does not enhance the estimate’s 
validity. Using a tool correctly by calibrating it to the specific 
program is necessary for developing a reliable estimate. In addition, 
the following issues should be well understood before unquestioningly 
accepting the results of a parametric tool: 

* Ensure that autogenerated code is properly captured by the model, in 
terms of increased productivity and the effort required to design, 
develop, document, and produce the code.

* Output from the tool may include different cost and effort estimates 
or activities and phases that would have to be mapped or deleted to 
conform to the specific program. Not understanding what is in the 
output could lead to overestimating or underestimating the program. 

* Some models limit the size of the development program for which they 
can forecast the effort. Sizes outside of the tool range may not fit 
the program being estimated. 

* Data are often proprietary so the models are only as accurate as 
their underlying data allow them to be. Therefore, results from the 
model should be cross-checked. 

* Each model has different sensitivities to certain parameters and 
“opinions” on desirable staff levels. Therefore, various models offer 
different schedule duration results. For particularly small or large 
software programs, a schedule predicted by a commercial parametric 
model needs to be cross-checked. 

* Where a detailed build structure or spiral development is to be 
modeled, the commercial model implementation and results should be 
closely monitored. The same is true for significant integration of 
commercial off-the-shelf software (COTS) or government off-the-shelf 
software (GOTS) with development software (or hardware). 

In addition to these issues, it is important to note that many models 
do not address the costs associated with database development. If 
databases will be required as part of the software solution, and the 
model used to estimate the software does not account for the cost of 
database development, then this cost must be estimated separately. The 
cost for database development will depend on the size and complexity of 
the source data. Cost drivers for database development include the 
number of feeder systems, data elements, and users as well as the 
software to be used to develop the new database. 

Commercial Off-the-Shelf Software: 

Using commercial off-the-shelf software has advantages and 
disadvantages, and auditors need to understand the risks that come with 
relying on it. One advantage is that development time can be faster. 
The software can provide more user functionality than custom software 
and may be flexible enough to accommodate multiple hardware and 
operating environments. Also, help desk support can be purchased with 
the commercial license, which can help reduce software maintenance 
costs. 

Among the drawbacks to off-the-shelf software is the learning curve 
associated with its use, as well as integrating it into the new 
program’s environment. In addition, most commercial software is 
developed for a broad spectrum of users, so it tends to address only 
general functions. More specific functions must be customized and 
added, and glue-code may be required to enable the software to interact 
with other applications. And, because the source code is usually not 
provided to customers of commercial off-the-shelf software, it can be 
hard to support the software in-house. When upgrades occur, the 
software may have to be reintegrated with existing custom code. Thus, 
it can be wrong to think that commercial software will necessarily be 
an inexpensive solution. 

Estimators tend to underestimate the effort that comes before and after 
implementing off-the-shelf software. For example, requirements 
definition, design, and testing of the overall system must still be 
conducted. Poorly defined requirements can result in less than optimal 
software selection, necessitating the development of new code to 
satisfy all requirements. This unexpected effort will raise costs and 
cause program delays. In addition, adequate training and access to 
detailed documentation are important for effectively using the 
software. 

Furthermore, since commercial software is subject to intense market 
forces, upgrades can be released with minimal testing, causing 
unpredictable problems, such as defects and system incompatibilities. 
When this happens, additional time is needed to analyze the cause of 
failures and fix them. Finally, interfaces between the software and 
other applications may need to be rewritten every time the software 
is upgraded. While software developers can address all these issues, 
doing so takes time. Therefore, the cost estimator should identify 
and estimate these activities to ensure that enough time and 
resources are available to perform them. 

Enterprise Resource Planning Software: 

Enterprise resource planning (ERP) refers to the implementation of 
an administrative software system, based on commercial off-the-shelf 
software, throughout an organization. ERP’s objective is to integrate 
information and business processes, including human resources, 
finance, manufacturing, and sales, so that information entered once 
into the system can be shared throughout an organization. ERP systems 
force business process reengineering, allowing for improved 
operations that can lead to savings down the road. Achieving those 
savings requires extensive knowledge of business processes, along 
with programming skills and change management, so that users can 
optimize automation in the new work processes. Although an ERP system 
is configured commercial software and should be treated as such, we 
highlight this type of effort because of the unique difficulty of 
estimating its implementation costs and duration. 

Organizations implementing ERP systems risk cost overruns and missed 
deadlines. According to a Gartner report, “For 40 percent of 
enterprises deploying ERP systems through 2009, the actual time and 
money spent on these implementations will exceed original estimates by 
at least 50 percent (0.7 probability).”[Footnote 46] 

At the heart of an ERP system are thousands of packages—built from 
database tables—that need to be configured to match end business 
processes. Each table has a decision switch that opens a specific 
decision path. By confining themselves to only one way to do a task, 
stove-piped units become integrated under one system. Deciding which 
switches in the tables to choose requires a deep understanding of the 
existing business operating processes. Thus, as table switches are 
picked, these business processes become reengineered to conform to the 
ERP’s way of doing business. As a result, change management and buy-in 
from the end users are crucial to the ERP system’s ultimate success. 

Cost estimators and auditors need to be aware of the additional risks 
associated with ERP implementation. 

Table 18 describes some of these risks and best practices for avoiding 
them. 

Table 18: Best Practices Associated with Risks in Implementing ERP: 

Risk: Training; 
Best practice: Staff are trained in the new ERP system’s software and 
the new processes; agencies teach workers how the ERP system will 
affect their business processes, developing their own training programs 
if necessary; providing mentoring and support for the first year of 
implementation eases the transition to the new system; obtaining user 
buy-in can be accomplished by communicating and marketing the benefits 
and new capabilities the ERP system will offer. 

Risk: Integrating and testing; 
Best practice: Agencies build and test links from their established 
software to the new ERP system or buy add-ons that are already 
integrated with the new system; they estimate and budget costs 
carefully, planning either way to test ERP integration from a process-
oriented perspective. 
 
Risk: Interfacing with legacy systems; 
Best practice: Since interfacing the ERP’s system software with legacy 
systems can be very expensive, carefully determining early on how both 
systems will pass data is paramount; preparing a business case to 
evaluate whether to maintain the legacy system is worth the added 
costs. 
 
Risk: Customizing; 
Best practice: Customizing core ERP software can be costly, 
especially since the ERP system’s elements are linked; consider using 
commercial add-ons if the ERP software cannot handle a particular 
business process. 
 
Risk: Converting and analyzing data; 
Best practice: Cost estimators look at the agency’s data conversion and 
analysis needs to see whether, for example, the cost of converting data 
to a new client server setup is accounted for, data from the ERP system 
and external systems have to be combined for analysis, the ERP budget 
should include data warehouse costs, or programming has to be 
customized. 
 
Risk: Following up installation; 
Best practice: Agencies plan for follow-up activities after 
installation, building them into their budget, keeping the team who 
implemented the ERP system onboard to keep the agency informed of its 
progress, and providing management with knowledge of the ERP project’s 
benefits. 

Source: GAO, DOD, and Derek Slater, “The Hidden Costs of Enterprise 
Software,” CIO Enterprise Magazine, Jan. 15, 1998. 

[End of table] 

Other costs associated with ERP system implementations include costs 
for adding “bolt-ons,” which are separate supplemental software 
packages that deliver capability not offered by the ERP system. Bolt-
ons connect to the ERP system using standard application programming 
interfaces or extensible markup language schema, which allow for data 
to pass between both systems. Costs for interfacing the bolt-on with 
the ERP system need to be identified and estimated. In addition, the 
number of bolt-ons that need to be integrated, as well as the type and 
size of the bolt-on functionality, will drive the cost of the 
interface. 

Experts agree that the ERP postimplementation stabilization period 
tends to be underestimated, because people tend to be too optimistic 
about how long training and the transition period will last. As a 
result, there is a risk of cost growth if management does not do a 
good job of selling the benefits of ERP. Successfully implementing an 
ERP system requires management to commit to freeing up resources to 
get the job done; to be fully effective, seasoned staff will need to 
be pulled away from their day jobs to focus on the effort. In 
addition, training tends to be underestimated in terms of both length 
and timing. To better plan for this effort, management needs to 
create a sense of urgency for change and provide early communication 
and adequate training to ensure successful implementation. 

Software Costs Must Also Account For Information Technology
Infrastructure And Services: 

Studies have shown that information technology (IT) services outside 
software development and maintenance (for example, hardware cost, help 
desk, upgrade installation, training) can make up a majority of total 
ownership costs. In fact, OMB reports that 77 percent of the overall IT 
budget for fiscal year 2009 will support steady state IT operations 
while only 23 percent will be used for development, modernization, and 
enhancement. 

Even systems such as ships, aircraft, and mission control centers 
have major IT infrastructure and services components. In fact, some 
IT systems incur over 90 percent of their costs in the infrastructure 
and services required to support and run them. Yet when costs, 
successes, failures, and challenges in IT systems are reported, the 
discussion typically refers only to the software portions, ignoring 
the IT services and infrastructure components. Making matters more 
difficult for those estimating IT systems are the numerous 
definitions of IT infrastructure. One useful definition is that it 
consists of the equipment, systems, software, and services used in 
common across an organization, regardless of mission, program, or 
project. IT infrastructure also serves as the foundation on which 
mission-, program-, or project-specific systems and capabilities are 
built. 

Having already discussed software development and maintenance, in 
this section we discuss estimating the information technology 
services, hardware systems, and facilities required to support 
software and systems. 

Unique Components Of IT Estimation: 

In some ways, IT estimation is simpler than software development 
estimation, since IT infrastructure and services are more tangible. 
However, it is still fraught with questions such as:
 
* What is the cost of the system engineering to define the IT system? 

* How much computing power is needed to support a system? 

* How many help desk personnel are needed to support X users? 

* How can costs be contained while still achieving innovation? 

* How can the value of the IT investment be quantified against its 
costs? 

* How do buy and lease decisions affect expenses and profitability? 

* How can we make tradeoffs between technology and costs? 

* What kind of application initiatives are needed to support the 
business? 

* How many vendors and how much vendor interface is required to run the 
IT operation? 

* How many sites does the IT infrastructure support? 

* How many and how clearly defined or stable are the requirements for 
the IT to align itself with the business goals? 

Simply getting a quote from a vendor for an IT system is rarely 
sufficient for IT cost estimation, because quotes often omit 
important cost elements that the cost estimator must still consider; 
a simple sketch of totaling these elements appears after the list. 
They include: 

* help desk support services supplied internally for applications and 
equipment; 

* facilities costs; 

* costs of on-going installation, maintenance, repair, and trouble 
shooting; 

* employee training, both formal training and self-training. 
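
Because these omitted elements can dominate total ownership cost, the 
Python sketch below simply adds them to a vendor quote over the 
system’s operating life; every dollar figure is a placeholder to be 
replaced with program-specific estimates. 

# Illustrative total-ownership view of an IT system: the vendor quote
# plus the recurring cost elements a quote typically omits.
# All values are placeholders, not recommended figures.

def it_total_ownership_cost(vendor_quote, annual_help_desk,
                            annual_facilities, annual_maintenance,
                            annual_training, years_of_operation):
    annual_services = (annual_help_desk + annual_facilities +
                       annual_maintenance + annual_training)
    return vendor_quote + annual_services * years_of_operation

total = it_total_ownership_cost(vendor_quote=2_500_000,
                                annual_help_desk=300_000,
                                annual_facilities=150_000,
                                annual_maintenance=200_000,
                                annual_training=75_000,
                                years_of_operation=8)
print(f"Total ownership cost: ${total:,.0f}")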

To further complicate the effort, many vendors offer IT infrastructure 
either as a “software as a service” platform or as just “cloud 
computing.”[Footnote 47] Vendor-operated IT infrastructure hardware can 
be viable if issues such as loss of control, security, and potential 
resource sharing are acceptable. However, such vendor-operated 
infrastructure does not usually eliminate the costs of ongoing IT 
services to provide users help desk support, local computing, setup 
training, and other infrastructure services. The cost estimator must be 
aware that these costs should be considered, whether the infrastructure 
is to be owned by the government, leased, or owned and operated by 
vendors under contract with the government. 

Major Cost Drivers Associated with IT Estimation: 
 
Many factors that affect IT costs need to be considered when developing 
an IT cost estimate. Various examples of cost drivers, organized by 
physical attributes of the IT infrastructure, are listed next, along 
with performance and complexity requirements and economic 
considerations. 

1. Physical attributes that drive IT costs: 

* Application software, system software, and database storage size; 

* End user hardware list (e.g., laptops, CPU, printers); 

* Facility requirements (power, cooling); 

* Infrastructure hardware list (UNIX Servers, Windows servers, WAN/LAN 
equipment); 

* Number of application software, system software, and database items; 

* Number of application software, system software, and database users 
(concurrent, casual); 

* Number of inbound and outbound application software and database 
interfaces; 

* Number of unique platforms supported; 

* Operating locations; 

* Physical and organizational entities. 

2. Performance and complexity attributes: 

* Business requirements; 

* Complexity of infrastructure environment (e.g., disparate platforms, 
loose vs. tight coupling); 

* User type (professional, concurrent, casual); 

* Criticality and reliability of systems; 

* Expected service level (system administration, database 
administration, help desk Tier I, Tier II, Tier III); 

* Experience with systems; 

* Infrastructure hardware complexity (small, medium, large); 

* IT project type (ERP, SOA, Web application, data mart); 

* Number of transactions per second; 

* Number of vendors; 

* Process experience and rigor; 

* Security requirements; 

* System complexity (hardware or software); 

* Usage patterns (transaction rates). 

3. Economic factors and considerations: 

* Acquisition strategy; 

* Hardware leasing and purchasing agreements; 

* Labor rates; 

* Sourcing strategy; 

* Replacement and upgrade policies; 

* Software leasing and purchasing agreements (enterprise, user based); 

* Test plan; 

* Training strategy; 

* Years of operating. 

Common Risks for IT Infrastructure: 

Many of the risks that affect software cost estimating apply to IT 
infrastructure. For example, in estimating the costs of any effort, a 
consideration should be made whether the risks of the investment 
justify the inclusion of an independent verification and validation 
contractor. In situations where the risks are very high, such as 
potential loss of life, the overall schedule may need to be extended to 
accommodate the additional reviews and testing required. For IT 
infrastructure, the set of risks in table 19 should be considered. 

Table 19: Common IT Infrastructure Risks: 

Risk: Financial; 
Technical, management, and logistic requirements that increase costs: 
* Cost overruns; 
* Funding cuts and delays. 

Risk: Logistics and equipment; 
Technical, management, and logistic requirements that increase costs: 
* Contingency equipment availability; 
* Physical storage of equipment on arrival and security; 
* Supply availability. 
 
Risk: Schedule; 
Technical, management, and logistic requirements that increase costs: 
* Unscheduled changes and delays; 
* Nonconformance, not starting, and failures; 
* Reliance on external subcontractors and organizations. 

Risk: Personnel; 
Technical, management, and logistic requirements that increase costs: 
* Changes of personnel among customer or vendor; 
* Lack of skills or knowledge; 
* Not aware of policy or procedures or inadequate personnel to support 
help desk and deployment; 
* Time lost for end user training, trouble shooting, and down time. 
 
Risk: Project management; 
Technical, management, and logistic requirements that increase costs: 
* No quality control or management process built into plan; 
* Absence of issue, change request, or configuration management logs; 
* Inconsistent project documentation or lack of IT process model; 
* Information security; 
* Lack of detailed site information; 
* Lack of issue identification or trends; 
* Lack of reporting; 
* Poor planning; 
* Requirements not well defined; 
* Role confusion; 
* Unaware of customer site requirements. 
 
Risk: Technical; 
Technical, management, and logistic requirements that increase costs: 
* Adequate capacity; 
* Additional hardware or software requirements to fully support the system; 
* Compatibility or whether data in the relevant process flow from end 
to end; 
* Disasters; 
* Hardware or software failure; 
* Incorrect images or version loaded; 
* Integration with existing systems; 
* New design not working; 
* Unplanned or unapproved changes; 
* Version control problems. 

Risk: User; 
Technical, management, and logistic requirements that increase costs: 
* Confusion about customer and vendor responsibilities; 
* Inability to perform core or noncore business activities; 
* Loss of data; 
* Not aware of vendor schedule or activities; 
* User expectations. 

Source: GAO. 

[End of table] 

Estimating Labor and Material Costs Associated with IT Infrastructure: 

Labor and material nonrecurring and recurring efforts are associated 
with IT infrastructure. For estimating the nonrecurring effort, staff 
loading of the IT infrastructure is similar to software development 
during early architecture and design. Once the design is complete, the 
recurring effort associated with actual implementation and deployment 
can be accomplished, based on a distribution of organizational demand 
for IT. 

IT recurring operations costs are similar to the costs of 
maintaining general fixed facilities. For example, facilities costs 
such as power, security, and general facilities support apply to IT 
infrastructure recurring operations. Furthermore, costs for purchased 
software licenses, training, technical refreshment, and various 
service level agreements also need to be considered. Finally, because 
the cost of hardware changes daily, as does the requirement for 
computing power in items like servers, designing with a 50 percent 
reserve in capacity is prudent, since systems tend to grow. Many 
labor service categories need to be considered when developing an IT 
infrastructure labor cost estimate. Table 20 describes typical labor 
categories.[Footnote 48] 

Table 20: Common Labor Categories Described: 
 
Category: Project stakeholder; 
Description: A person invested in the project’s success while not 
participating in its execution or implementation; includes end users, 
managers, and external clients whose success is somehow tied to the 
project’s success. Stakeholders work with the product management team 
to ensure that the solution developed meets the project’s original 
needs. Stakeholder participation and availability are vital to the 
success of any project; 
Common titles: [Empty]. 
 
Category: Management; 
Description: Performs project planning, staffing, and tracking; is 
involved with daily operational activities, ensuring that resources are 
used effectively and services are delivered; 
Common titles: Configuration manager, database manager, IT manager, 
project manager. 
 
Category: Analyst; 
Description: Generally involved in planning and defining needs and
requirements for IT projects and related support systems and in ongoing 
systems support, often bridging the user or customer and the technical 
team. Generally has domain or specialty knowledge of a certain type of 
system, technology, or discipline used to apply technology to address 
business and user requirements; 
Common titles: Business process, requirements, or system analyst; 
network or telecommunications analyst; support analyst; operations
analyst; database analyst; UI analyst; security analyst. 

Category: Architect; 
Description: Develops high-level system design plans to meet the
organization’s needs and comply with its policies; can help formulate 
policies and plans that support the organization, particularly as they 
pertain to technologies used to carry out policies and procedures; 
Common titles: Systems architect or engineer; IT or data architect; 
network architect; storage architect. 

Category: Technician; 
Description: Involved primarily in the physical setup, support, and 
maintenance of systems according to well defined plans and procedures, 
including system setup, installation, upgrades, and troubleshooting; 
Common titles: Desktop or PC technician; network engineer or 
technician; hardware technician; telecommunications technician. 
 
Category: Test/QA; 
Description: Primarily verifies the integrity and performance of 
systems being deployed and operated; develops test plans and 
procedures, collecting and tracking defect data and problem reports and 
serves an auditing function to ensure compliance with policies and 
procedures; 
Common titles: IT auditor, QA analyst, application tester, call center 
agent. 
 
Category: Documentation; 
Description: Prepares or maintains documentation pertaining to 
programming, systems operation, and user documentation, including user 
manuals and online help screens; 
Common titles: Technical or report writer; online help publisher; 
content developer; documentation specialist. 
 
Category: Training; 
Description: Prepares and updates courseware and training materials and 
conducts training classes or events; 
Common titles: Instructor, training developer, instructional designer, 
end user. 

Category: Administrator; 
Description: Generally involved with the ongoing administration, 
maintenance, and support of specific systems to ensure they operate 
properly and effectively; associated with a specific system or type of 
system such as a platform, database, network, or enterprise 
application; 
Common titles: Network, system, or enterprise application 
administrator; system administrator; Web or telecommunications 
administrator; database administrator; security administrator; storage 
administrator; help desk specialist (tier I, tier II, tier III). 

Category: Computer operator; 
Description: Computer operators not included in support of IT 
infrastructure and IT services; 
Common titles: [Empty]. 

Category: Indirect support; 
Description: Secretarial, reception, and other labor in support of IT 
services and infrastructure personnel and systems; 
Common titles: [Empty]. 

Category: Contract labor; 
Description: Vendors that provide services under contract to support IT 
infrastructure; 
Common titles: [Empty]. 
 
Source: GAO. 
[End of table] 

9. Best Practices Checklist: Estimating Software Costs: 
 
* The software cost estimate followed the 12-step estimating process: 
- Software was sized with detailed knowledge of program scope, 
complexity, and interactions, and the cost estimators worked with 
software engineers to determine the appropriate sizing metric. 
- It was sized with source lines of code, function, object, feature 
point, or other counts. 

* The software sizing method was appropriate: 
- Source lines of code were used if requirements were well defined and 
if 
there was a historical database of code counts for similar programs and 
a standard definition for a line of code. 
- Function points were used if detailed requirements and specifications 
were available, software did not contain many algorithmic functions, 
and an experienced and certified function point counter was available. 
- COSMIC points were used if functional user requirements were known 
and the application was for business, real-time, embedded, or 
infrastructure software. 
- Object points were used if computer-aided software engineering tools 
were used to develop the software. 
- Reports, interfaces, conversions, extensions, and forms/workflow 
were used for ERP programs. 
- Use cases and use case points were used if system and user 
interactions were defined. 
- Autogenerated and reused source lines of code were identified 
separately from new and modified code to account for pre- and 
postimplementation efforts.
- Several methods were used to size the software to increase the 
accuracy of the sizing estimate. 
- The final software size was adjusted for growth based on historical 
data, and growth is continually monitored over time. 

* Software cost estimates included: 
- Development labor costs for coding and testing, other labor 
supporting 
software development, and nonlabor costs like purchasing hardware 
and licenses. 
- Productivity factors for converting software size into labor effort, 
based on historical data and calibrated to match program size and 
development environment. 
- Industry average productivity factors and risk ranges (no historical 
data were available). 
- Assumptions about productive labor hours in a day and work days in a 
year. 
- Development schedules accounting for staff availability, prior task 
dependencies, concurrent and critical path activities, number and 
length of shifts, overtime allowance, down time, and worker locations. 
- Costs for help desk support, database development, and corrective, 
adaptive, and perfective maintenance as part of the software’s life 
cycle cost. 
- Time and effort associated with rework to fix defects. 
- Training cost estimators to calibrate parametric tools to match 
the program and cross-checking model results for accuracy. 
- Estimators' accounting for integrating commercial off-the-shelf 
software into the system, including developing custom software and glue-
code. 
- Impact of risks facing ERP system implementations as outlined in 
table 18. 
- Costs associated with interfacing bolt-on applications for ERP 
systems. 

* IT infrastructure and services components of the software cost 
estimate included: 
- Costs associated with the physical attributes of the IT 
infrastructure, the performance and complexity requirements, and 
economic considerations. 
- Impact of risks affecting IT infrastructure, as outlined in table 19. 
- Costs associated with labor and material nonrecurring and recurring 
efforts. 

[End of Chapter 12] 

Chapter 13: Sensitivity Analysis: 

As a best practice, sensitivity analysis should be included in all cost 
estimates because it examines the effects of changing assumptions and 
ground rules. Since uncertainty cannot be avoided, it is necessary to 
identify cost elements that represent the most risk and, if possible, 
cost estimators should quantify the risk. This can be done through both 
a sensitivity analysis and an uncertainty analysis (discussed in the 
next chapter). 

Sensitivity analysis helps decision makers choose among alternatives. 
For example, it could allow a program manager to determine how 
sensitive a program is to changes in gasoline prices and at what 
gasoline price a program alternative is no longer attractive. Using 
information from a sensitivity analysis, a program manager can take 
risk mitigation steps, such as assigning someone to monitor gasoline 
price changes, deploying more vehicles with smaller payloads, or 
decreasing the number of patrols. 

For a sensitivity analysis to be useful in making informed decisions, 
however, the underlying risks and supporting data must be carefully 
assessed. In addition, the sources of the variation should be well 
documented and traceable. Simply varying the cost drivers by a 
subjective plus or minus percentage does not constitute a valid 
sensitivity analysis when that percentage has no valid basis and is 
not grounded in historical data. 

In order for sensitivity analysis to reveal how the cost estimate is 
affected by a change in a single assumption, the cost estimator must 
examine the effect of changing one assumption or cost driver at a time 
while holding all other variables constant. By doing so, it is easier 
to understand which variable most affects the cost estimate. In some 
cases, a sensitivity analysis can be conducted to examine the effect of 
multiple assumptions changing in relation to a specific scenario. 

Regardless of whether the analysis is performed on only one cost driver 
or several within a single scenario, the difference between sensitivity 
analysis and risk or uncertainty analysis is that sensitivity analysis 
tries to isolate the effects of changing one variable at a time, while 
risk or uncertainty analysis examines the effects of many variables 
changing all at once. 

Typically performed on high-cost elements, sensitivity analysis 
examines how the cost estimate is affected by a change in a cost 
driver’s value. For example, it might evaluate how the number of 
maintenance staff varies with different assumptions about system 
reliability values or how system manufacturing labor and material costs 
vary in response to additional system weight growth. 

Sensitivity analysis involves recalculating the cost estimate with 
different quantitative values for selected input values, or parameters, 
in order to compare the results with the original estimate. If a small 
change in the value of a cost element’s parameter or assumption yields 
a large change in the overall cost estimate, the results are considered 
sensitive to that parameter or assumption. Therefore, a sensitivity 
analysis can provide helpful information for the system designer 
because it highlights elements that are cost sensitive. In this way, 
sensitivity analysis can be useful for identifying areas where more 
design research could result in less production cost or where increased 
performance could be implemented without substantially increasing cost. 
This type of analysis is typically called a what-if analysis and is 
often used for optimizing cost estimate parameters. 

Sensitivity Factors: 

Uncertainty about the values of some, if not most, of the technical 
parameters is common early in a program’s design and development. Many 
assumptions made at the start of a program turn out to be inaccurate. 
Therefore, once the point estimate has been developed, it is important 
to determine how sensitive the total cost estimate is to changes in the 
cost drivers. Some factors that are often varied in a sensitivity 
analysis are: 

* a shorter or longer economic life; 

* the volume, mix, or pattern of workload; 

* potential requirements changes; 

* configuration changes in hardware, software, or facilities; 

* alternative assumptions about program operations, fielding strategy, 
inflation rate, technology heritage savings, and development time; 

* higher or lower learning curves; 

* changes in performance characteristics; 

* testing requirements; 

* acquisition strategy, whether multiyear procurement, dual sourcing, 
or the like; 

* labor rates; 

* growth in software size or amount of software reuse; and 

* down-scoping the program. 

These are just some examples of potential cost drivers. Many factors 
that should be tested are determined by the assumptions and performance 
characteristics outlined in the technical baseline description and 
GR&As. Therefore, auditors should look for a link between the technical 
baseline parameters and the GR&As to see if the cost estimator examined 
those that had the greatest effect on the overall sensitivity of the 
cost estimate. 

In addition, the cost estimator should always include in a sensitivity 
analysis the assumptions that are most likely to change, such as an 
assumption that was made for lack of knowledge or one that is outside 
the control of the program office. Case study 38 shows some assumptions 
that can affect the cost of building a ship. 

Case Study 38: Sensitivity Analysis, from Defense Acquisitions, GAO-05-
183: 
 
Given the uncertainties inherent in ship acquisitions, such as 
introducing new technologies and volatile overhead rates over time, 
cost analysts face a significant challenge in developing credible 
initial cost estimates. The Navy must develop cost estimates as long as 
10 years before ship construction begins, before many program details 
are known. Cost analysts therefore have to make a number of assumptions 
about ship parameters like weight, performance, and software and about 
market conditions, such as inflation rates, workforce attrition, and 
supplier base. 

In the eight case study ships we examined, other unknowns led to 
uncertain estimates. Labor hour and material costs were based on data 
from previous ships and on unproven efficiencies in ship construction. 
GAO found that analysts often factored in savings based on expected 
efficiencies that never materialized. For example, they anticipated 
savings from implementing computer-assisted design and computer-
assisted manufacturing for the San Antonio class transport LPD 17, but 
the contractor had not made the requisite research investments to 
achieve the proposed savings. Similar unproven or unsupported 
efficiencies were estimated for the Arleigh Burke class destroyer DDG 
92 and Nimitz class aircraft carrier CVN 76. Changes in the 
shipbuilders’ supplier base also created uncertainties in their 
overhead costs. 

Despite these uncertainties, the Navy did not test the validity of the 
cost analysts’ assumptions in estimating construction costs for the 
eight case study ships and did not identify a confidence level for 
estimates. 

Source: GAO, Defense Acquisitions: Improved Management Practices Could 
Minimize Cost Growth in Navy Shipbuilding Programs, GAO-05-183, 
Washington, D.C.: Feb. 28, 2005. 

[End of case study] 

Steps In Performing A Sensitivity Analysis: 

A sensitivity analysis addresses some of the estimating uncertainty by 
testing discrete cases of assumptions and other factors that could 
change. By examining each assumption or factor independently, while 
holding all others constant, the cost estimator can evaluate the 
results to discover which assumptions or factors most influence the 
estimate. A sensitivity analysis also requires estimating the high and 
low uncertainty ranges for significant cost driver input factors. To 
determine what the key cost drivers are, a cost estimator needs to 
determine the percentage of total cost that each cost element 
represents. The major contributing variables within the highest 
percentage cost elements are the key cost drivers that should be 
varied in a sensitivity analysis. A credible sensitivity analysis 
typically has five steps: 

1. identify key cost drivers, ground rules, and assumptions for 
sensitivity testing; 

2. reestimate the total cost by choosing one of these cost drivers to 
vary between two set amounts—for example, maximum and minimum or 
performance thresholds;[Footnote 49] 

3. document the results; 

4. repeat 2 and 3 until all factors identified in step 1 have been 
tested independently; 

5. evaluate the results to determine which drivers affect the cost 
estimate most. 
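
The Python sketch below illustrates these five steps on a toy cost 
model, varying one driver at a time between low and high bounds and 
ranking the drivers by the swing each causes; the model, baseline 
values, and ranges are purely illustrative. 

# One-at-a-time sensitivity analysis over an illustrative cost model.
# The model, the baseline values, and the low/high ranges are hypothetical.

def total_cost(drivers):
    # toy model: labor plus fuel plus maintenance
    return (drivers["labor_hours"] * drivers["labor_rate"] +
            drivers["gallons"] * drivers["fuel_price"] +
            drivers["maintenance"])

baseline = {"labor_hours": 50_000, "labor_rate": 100.0,
            "gallons": 200_000, "fuel_price": 3.50, "maintenance": 2_000_000}
ranges = {"labor_rate": (90.0, 130.0),          # step 1: key drivers and bounds
          "fuel_price": (2.75, 5.00),
          "maintenance": (1_500_000, 3_000_000)}

base = total_cost(baseline)
results = []
for driver, (low, high) in ranges.items():      # steps 2-4: vary one at a time
    trials = [total_cost(dict(baseline, **{driver: value}))
              for value in (low, high)]
    results.append((driver, min(trials) - base, max(trials) - base))

results.sort(key=lambda r: r[2] - r[1], reverse=True)  # step 5: rank by swing
print(f"Baseline estimate: ${base:,.0f}")
for driver, low_delta, high_delta in results:
    print(f"{driver:>12}: {low_delta:+,.0f} to {high_delta:+,.0f}")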

Sensitivity analysis also provides important information for economic 
analyses that can end in the choice of a different alternative from the 
original recommendation. This can happen because, like a cost estimate, 
an economic analysis is based on assumptions and constraints that may 
change. Thus, before choosing an alternative, it is essential to test 
how sensitive the ranking of alternatives is to changes in assumptions. 
In an economic analysis, sensitivity is determined by how much an 
assumption must change to result in an alternative that differs from 
the one recommended. For example, an assumption is considered 
sensitive if a 10–50 percent change yields a different alternative, 
and very sensitive if a change of less than 10 percent does so. 

Assumptions and cost drivers that have the most effect on the cost 
estimate warrant further study to ensure that the best possible value 
is used for that parameter. If the cost estimate is found to be 
sensitive to several parameters, all the GR&As should be reviewed, to 
assure decision makers that sensitive parameters have been carefully 
investigated and the best possible values have been used in the final 
point estimate. 

Sensitivity Analysis Benefits and Limitations: 

A sensitivity analysis provides a range of costs that span a best and 
worst case spread. In general, it is better for decision makers to know 
the range of potential costs that surround a point estimate and the 
reasons behind what drives that range than to just have a point 
estimate to make a decision from. Sensitivity analysis can provide a 
clear picture of both the high and low costs that can be expected, with 
discrete reasons for what drives them. Figure 14 shows how sensitivity 
analysis can give decision makers insight. 

Figure 14: A Sensitivity Analysis That Creates a Range around a Point 
Estimate: 

[Refer to PDF for image: Illustration] 

Point Estimate: $10 billion. 

Increase in life-cycle estimate: 

Description: Increase the number of cost penalties in airframe 
development CER: +$40.0 million (0.4%): $10.040 billion. 

Description: Double the development testing: +$50.5 million (0.5%): 
$10.090 billion. 

Description: Increase airframe weight: +$1,009 million (10%): 
$11.099 billion. 

Description: Eliminate concurrent production quantities: +$22.0 
million (0.2%): $11.121 billion. 

Description: Increase quantity of materials in aircraft: +$1,668 
million (15%): $12.789 billion. 

Decrease in life-cycle estimate: 

Description: Use 88% learning curve instead of 91%: -$60.0 million 
(0.6%): $9.940 billion. 

Description: Eliminate integration and assembly cost add-on: -$50.0 
million (0.5%): $9.890 billion. 

Description: Reduce airframe weight: -$100.0 million (1.0%): $9.790 
billion. 

Description: Improve aircraft maintainability: -$40.0 million (0.4%): 
$9.750 billion. 

Description: Reduce peacetime flying hours: -$390.0 million (4.0%): 
$9.360 billion. 

Source: GAO. 

[End of figure] 

In figure 14, it is very apparent how certain assumptions affect the 
estimate. For example, increasing the quantity of materials in the 
aircraft has the biggest effect on the high end of the estimate, 
adding $1,668 million to the point estimate, while reducing peacetime 
flying hours is the biggest driver at the low end, saving $390 
million. Visuals like this quickly convey what-if analyses that can 
help management make informed decisions. 

A sensitivity analysis also reveals critical assumptions and program 
cost drivers that most affect the results and can sometimes yield 
surprises. Therefore, the value of sensitivity analysis to decision 
makers lies in the additional information and understanding it brings 
to the final decision. Sensitivity analysis can also make for a more 
traceable estimate by providing ranges around the point estimate, 
accompanied by specific reasons for why the estimate could vary. This 
insight allows the cost estimator and program manager to further 
examine potential sources of risk and develop ways to mitigate them 
early. Sensitivity analysis permits decisions that influence the 
design, production, and operation of a system to focus on the elements 
that have the greatest effect on cost. 

Sensitivity analysis is limited in that it examines only the effect of 
changing one assumption or factor at a time. But the risk of several 
assumptions or factors varying simultaneously, and its effect on the 
overall point estimate, should be understood.[Footnote 50] In the next 
chapter, we discuss risks and uncertainty analyses. 

10. Best Practices Checklist: Sensitivity Analysis: 

* The cost estimate was accompanied by a sensitivity analysis that 
identified the effects of changing key cost driver assumptions and 
factors. 
- Well-documented sources supported the assumption or factor ranges. 
- The sensitivity analysis was part of a quantitative risk assessment 
and not based on arbitrary plus or minus percentages. 
- Cost-sensitive assumptions and factors were further examined to see 
whether design changes should be implemented to mitigate risk. 
- Sensitivity analysis was used to create a range of best and worst 
case costs. 
- Assumptions and performance characteristics listed in the technical 
baseline description and GR&As were tested for sensitivity, especially 
those least understood or at risk of changing. 
- Results were well documented and presented to management for 
decisions. 

* The following steps were taken during the sensitivity analysis: 
- Key cost drivers were identified. 
- Cost elements representing the highest percentage of cost were 
determined and their parameters and assumptions were examined. 
- The total cost was reestimated by varying each parameter between its 
minimum and maximum range. 
- Results were documented and the reestimate was repeated for each 
parameter that was a key cost driver. 
- Outcomes were evaluated for parameters most sensitive to change. 
* The sensitivity analysis provided a range of possible costs, a point 
estimate, and a method for performing what-if analysis. 

[End of Chapter 13] 

Chapter 14: Cost Risk And Uncertainty: 

In chapter 13, we discussed sensitivity analysis and how it is useful 
for performing what-if analysis, determining how sensitive the point 
estimate is to changes in the cost drivers, and developing ranges of 
potential costs. A drawback of sensitivity analysis is that it looks 
only at the effects of changing one parameter at a time. In reality, 
many parameters can change at the same time. Therefore, in addition to 
a sensitivity analysis, an uncertainty analysis should be performed to 
capture the cumulative effect of additional risks. 

Because cost estimates predict future program costs, uncertainty is 
always associated with them. For example, data from the past may not 
always be relevant in the future, because new manufacturing processes 
may change a learning curve slope or new composite materials may change 
the relationship between weight and cost. Moreover, a cost estimate is 
usually composed of many lower-level WBS elements, each of which comes 
with its own source of error. Once these elements are added together, 
the resulting cost estimate can contain a great deal of uncertainty. 

The Difference Between Risk And Uncertainty: 

Risk and uncertainty refer to the fact that because a cost estimate is 
a forecast, there is always a chance that the actual cost will differ 
from the estimate. Moreover, lack of knowledge about the future is only 
one possible reason for the difference. Another equally important 
reason is the error resulting from historical data inconsistencies, 
assumptions, cost estimating equations, and factors typically used to 
develop an estimate. 

In addition, biases are often found in estimating program costs and 
developing program schedules. The biases may be cognitive—often based 
on estimators’ inexperience—or motivational, where management 
intentionally reduces the estimate or shortens the schedule to make the 
project look good to stakeholders. Recognizing the potential for error 
and deciding how best to quantify it is the purpose of risk and 
uncertainty analysis.[Footnote 51] 

It is inaccurate to add up the most likely WBS element estimates to 
derive a program cost estimate, since their sum is not usually the 
most likely estimate for the total program, even if the elements are 
estimated without bias.[Footnote 52] Yet summing costs estimated at 
the detailed level to derive a point estimate is the most common 
approach to estimating a total program. Simulating program risks is a 
better way to estimate total program cost, as we discuss below and 
sketch in the example that follows.
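
A minimal Python sketch of this point follows, using hypothetical 
triangular (low, most likely, high) ranges for a few WBS elements; it 
shows that the sum of the most likely values is routinely exceeded 
when the elements are simulated together. 

# Illustrative Monte Carlo simulation of total program cost from
# WBS-element uncertainty. The element ranges, in $ millions, are
# hypothetical triangular (low, most likely, high) values.

import random

random.seed(1)
wbs_elements = {
    "air vehicle":         (40.0, 50.0, 75.0),
    "systems engineering":  (8.0, 10.0, 18.0),
    "software":            (15.0, 20.0, 45.0),
    "support equipment":    (5.0,  6.0, 10.0),
}

sum_of_most_likely = sum(ml for _, ml, _ in wbs_elements.values())

trials = 10_000
totals = [sum(random.triangular(low, high, ml)
              for low, ml, high in wbs_elements.values())
          for _ in range(trials)]

mean_total = sum(totals) / trials
prob_exceed = sum(t > sum_of_most_likely for t in totals) / trials
print(f"Sum of most likely values: ${sum_of_most_likely:.1f}M")
print(f"Simulated mean total:      ${mean_total:.1f}M")
print(f"Probability of exceeding the sum of most likely values: {prob_exceed:.0%}")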

Quantifying risk and uncertainty is a cost estimating best practice 
addressed in many guides and references. DOD specifically directs that 
uncertainty be identified and quantified. The Clinger-Cohen Act 
requires agencies to assess and manage the risks of major information 
systems, including the application of the risk-adjusted return on 
investment criterion in deciding whether to undertake particular 
investments.[Footnote 53] 

While risk and uncertainty are often used interchangeably, in 
statistics their definitions are distinct: 

Risk is the chance of loss or injury. In a situation that includes 
favorable and unfavorable events, risk is the probability that an 
unfavorable event will occur. 

Uncertainty is the indefiniteness about the outcome of a situation. It 
is assessed in cost estimate models to estimate the risk (or 
probability) that a specific funding level will be exceeded.[Footnote 
54] 

Therefore, while both risk and uncertainty can affect a program’s 
cost estimate, in most situations enough data will never be available 
to develop a known frequency distribution. Cost estimates are more 
often analyzed for uncertainty than for risk, although many textbooks 
use both terms to describe the effort. 

Point Estimates Alone Are Insufficient For Good Decisions: 

Since cost estimates are uncertain, making good predictions about how 
much funding a program needs to be successful is difficult. In a 
program’s early phases, knowledge about how well technology will 
perform, whether the estimates are unbiased, and how external events 
may affect the program is imperfect. For management to make good 
decisions, the program estimate must reflect the degree of uncertainty, 
so that a level of confidence can be given about the estimate. 

Quantitative risk and uncertainty analysis provides a way to assess the 
variability in the point estimate. Using this type of analysis, a cost 
estimator can model such effects as schedules slipping, missions 
changing, and proposed solutions not meeting user needs, allowing for a 
known range of potential costs. Having a range of costs around a point 
estimate is more useful to decision makers, because it conveys the 
level of confidence in achieving the most likely cost and also informs 
them on cost, schedule, and technical risks. 

Point estimates are more uncertain at the beginning of a program, 
because less is known about its detailed requirements and opportunity 
for change is greater. In addition, early in a program’s life cycle, 
only general statements can be made. As a program matures, general 
statements translate into clearer and more refined requirements that 
reduce the unknowns. However, more refined requirements often translate 
into additional costs, causing the distribution of potential costs to 
move further to the right, as illustrated in figure 15. 

Figure 15: Changes in Cost Uncertainty across the Acquisition Life 
Cycle: 

[Refer to PDF for image: Illustration] 

This figure plots cost against time and illustrates the following data 
points: 

Concept formulation: 
Cost estimate: $125 million. 

Development: 
Cost estimate: $175 million. 

Implementation: 
Cost estimate: $230 million. 

Source: GAO. 

While the point estimate increases in figure 15, the uncertainty range 
around it decreases. More is learned as the project matures. First, a 
better understanding of the risks is achieved, and either some risk is  
retired or some form of risk handling lessens the potential cost or 
effect on schedule. Second, the program is better understood and, most 
probably, more requirements or previously overlooked elements are 
added, which tends to increase costs while reducing the variance. Thus, 
a point estimate, by itself, provides no information 
about the underlying uncertainty other than that it is the value chosen 
as most likely. 

A confidence interval, in contrast, provides a range of possible costs, 
based on a specified probability level. For example, a program with a 
point estimate of $10 million could range in cost from $5 million to 
$15 million at the 95 percent confidence level. In addition, the 
probability distribution, usually in the form of a cumulative 
distribution or S curve (described below) can provide the decision 
maker with an estimate of the probability that the program’s cost will 
actually be at some value or lower. Conversely, 1.0 minus this 
probability is the probability that the project will overrun that 
value. 

Using an uncertainty analysis, a cost estimator can easily inform 
decision makers about a program’s potential range of costs. Management, 
in turn, can use these data to decide whether the program fits within 
the overall risk range of the agency’s portfolio. 

Budgeting To A Realistic Point Estimate: 
 
Over the years, GAO has reported that many programs overrun their 
budgets because original point estimates are unrealistic. Case studies 
39 and 40 are examples. 

Case Study 39: Point Estimates, from Space Acquisitions, GAO-07-96: 
 
Estimated costs for DOD’s major space acquisitions increased about 
$12.2 billion, or nearly 44 percent, above initial estimates for fiscal 
year 2006 through fiscal year 2011. GAO identified a variety of reasons 
for this. The most notable are that weapons programs have incentives to 
produce and use optimistic cost and schedule estimates to compete 
successfully for funding and that DOD starts its space programs before 
it has assurance that the capabilities it is pursuing can be achieved 
within its resource and time constraints. 

At the same time, the cost growth resulted partly from DOD’s using low 
cost estimates to establish program budgets, finding it necessary later 
to make funding shifts with costly, reverberating effects. In 2003, a 
DOD study found that the space acquisitions system was strongly biased 
to produce unrealistically low cost estimates throughout the process. 
The study found that most programs at contract initiation had a 
predictable cost growth of 50 percent to 100 percent. It found that the 
unrealistically low projections of program cost and the lack of 
provisions for management reserve seriously distorted management 
decisions and program content, increased risks to mission success, and 
virtually guaranteed program delays. GAO found most of these conditions 
in many DOD programs. 

Source: GAO, Space Acquisitions: DOD Needs to Take More Action to 
Address Unrealistic Initial Cost Estimates of Space Systems, GAO-07-96, 
Washington, D.C.: Nov. 17, 2006. 

[End of case study] 

Case Study 40: Point Estimates, from Defense Acquisitions, GAO-05-183: 

For several case study ships, the costs of materials increased 
dramatically above the shipbuilder’s initial plan. Materials cost was 
the most significant component of cost growth for the CVN 76 in the 
Nimitz class of aircraft carriers, the LPD 17 in the San Antonio class 
of transports, and the SSN 775 in the Virginia class of submarines. The 
growth in materials costs resulted, in part, from the Navy and 
shipbuilders underbudgeting these costs. 

For example, the materials budget for the first four Virginia class 
submarines was $132 million less than quotes received from vendors and 
subcontractors. The shipbuilder agreed to take on the challenge of 
achieving lower costs in exchange for providing in the contract that 
the shipbuilder would be reimbursed for cost growth in high-value, 
specialized materials. 

In addition, the materials budget for the CVN 76 and CVN 77 was based 
on an incomplete list of materials needed to construct the ships, 
leading to especially sharp increases in estimated materials costs. In 
this case, the Defense Contract Audit Agency criticized the 
shipbuilder’s estimating system, particularly the system for materials 
and subcontract costs, stating that the resulting estimates “do not 
provide an acceptable basis for negotiation of a fair and reasonable 
price.” Underbudgeting of materials contributed to cost growth, 
recognized in the fiscal year 2006 budget. 
 
Source: GAO, Defense Acquisitions: Improved Management Practices Could 
Minimize Cost Growth in Navy Shipbuilding Programs, GAO-05-183, 
Washington, D.C.: Feb. 28, 2005. 

[End of case study] 

We have found that budgeting programs to a risk-adjusted estimate that 
reflects a program’s risks is critical to its successfully achieving 
its objectives. However, programs have developed optimistic estimates 
for many reasons. Cost estimators may have ignored program risk, 
underestimated data outliers, relied on historical data that may be 
misleading for a new technology, or assumed better productivity than 
the historical data supported, causing narrow uncertainty ranges. 
Decision makers may add their own bias for political or budgetary 
reasons. For example, they may make optimistic assumptions by assuming 
that a new program will perform much better than its predecessor in 
order to justify a preconceived notion, to fit the program within 
unrealistic budgetary parameters, or just to sell the program. 

One way to determine whether a program is realistically budgeted is to 
perform an uncertainty analysis, so that the probability associated 
with achieving its point estimate can be determined. A cumulative 
probability distribution, more commonly known as an S curve—usually 
derived from a simulation such as Monte Carlo—can be particularly 
useful in portraying the uncertainty implications of various cost 
estimates. Figure 16 shows an example of a cumulative probability 
distribution with various cost estimates mapped to a certain 
probability level. 

Figure 16: A Cumulative Probability Distribution, or S Curve: 

[Refer to PDF for image: S-curve graph] 

Probability of occurrence plotted against dollars in thousands: 
 
Probability of occurrence: The risk-adjusted primary estimate = $825, 
or 40% probable; 

Probability of occurrence: 50% probability = $907.9. 

Probability of occurrence: 70% probability = $1,096. 

Source: GAO and NASA. 

[End of figure] 

In figure 16, one can readily see that, given what is known about 
program risks and uncertainties, this hypothetical program could cost 
as little as about $500,000 (at about the 5 percent probability level) 
or as much as about $1,700,000 (at about the 95 percent probability 
level). Using an S curve, decision makers can easily understand what 
different funding alternatives imply for the likelihood of staying 
within budget.[Footnote 55] 

For example, according to the S curve in figure 16, the point estimate 
has up to a 40 percent chance of being met, meaning there is a 60 
percent chance that costs will be greater than $825,000. On the basis 
of this information, management could decide to add $82,900 to the 
point estimate to increase the probability to 50 percent or $271,000 to 
increase the confidence level to 70 percent. The important thing to 
note, however, is the large cost increase between the 70 percent and 95 
percent confidence levels—about $600,000—indicating that a substantial 
investment would be necessary to reach a higher level of certainty. 
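
To illustrate the arithmetic behind these funding choices, the 
following sketch (in Python) approximates the S curve in figure 16 with 
a lognormal distribution. The median and spread are hypothetical values 
chosen only so that the curve roughly passes through the figure’s 40, 
50, 70, and 95 percent points; an actual S curve would come from a 
program’s own simulation results. 

from scipy.stats import lognorm

median = 907_900               # cost at the 50 percent confidence level
sigma = 0.38                   # spread assumed to mimic the figure's shape
s_curve = lognorm(s=sigma, scale=median)

point_estimate = 825_000
print(f"Confidence in point estimate: {s_curve.cdf(point_estimate):.0%}")

for level in (0.50, 0.70, 0.95):
    cost = s_curve.ppf(level)
    print(f"{level:.0%} confidence: ${cost:,.0f} "
          f"(addition above point estimate: ${cost - point_estimate:,.0f})")

[End of example] 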

Management can use the data in an S curve to choose a defensible level 
of contingency reserves. While no specific confidence level is 
considered a best practice, experts agree that program cost estimates 
should be budgeted to at least the 50 percent confidence level, but 
budgeting to a higher level (for example, 70 percent to 80 percent, or 
the mean) is now common practice. Moreover, they stress that 
contingency reserves are necessary to cover increased costs resulting 
from unexpected design complexity, incomplete requirements, technology 
uncertainty, and industrial base concerns, to name a few uncertainties 
that can affect programs. 

How much contingency reserve should be allocated to a program beyond 
the 50 percent confidence level depends on the program cost growth an 
agency is willing to risk. Some organizations adopt other levels 
like the 70th or 80th percentile (refer to the S curve above) to: 
 
1. reduce their anxiety about success within budget, 

2. make some provision for risks unknown at the time but likely to 
appear as the project progresses, and, 

3. reduce the probability that they will have to explain overruns or 
rebaseline because they ran out of reserve budget. 

The amount of contingency reserve should be based on the level of 
confidence with which management chooses to fund a program, using the 
probabilities reported in the S curve. In figure 16, management 
might choose to award a contract for $907,900 but fund the program at 
$1,096,000. This alternative would provide an additional $188,100 in 
contingency reserve at the 70 percent confidence level. The result 
would be only a 30 percent chance that the program would need 
additional funding, given the identification and quantification of the 
risks at the time of the analysis. 

Another benefit of using an S curve is that management can proactively 
monitor a program’s costs, because it knows the probability for 
incurring overruns. By understanding which input variables have a 
significant effect on a program’s final costs, management can devote 
resources to acquire better knowledge about them so that risks can be 
minimized. Finally, knowing early what the potential risks are enables 
management to prepare contingencies to monitor and mitigate them using 
an EVM system once the program is under contract. 

The bottom line is that management needs a risk-adjusted point estimate 
based on an estimate of the range of confidence to make wise decisions. 
Using information from an S curve with a realistic probability 
distribution, management can quantify the level of confidence in 
achieving a program within a certain funding level. It can also 
determine a defensible amount of contingency reserve to quickly 
mitigate risk. 

Developing a Credible S Curve Of Potential Program Costs: 

Since an S curve is vital to knowing how much confidence management can 
have in a given point estimate, it is important to know the activities 
in developing one. Seven steps are associated with developing a 
justifiable S curve: 

1. determine the program cost drivers and associated risks; 

2. develop probability distributions to model various types of 
uncertainty (for example, program, technical, external, organizational, 
program management including cost estimating and scheduling); 

3. account for correlation between cost elements to properly capture 
risk; 

4. perform the uncertainty analysis using a Monte Carlo simulation 
model; 

5. identify the probability level associated with the point estimate; 

6. recommend sufficient contingency reserves to achieve levels of 
confidence acceptable to the organization; and 

7. allocate, phase, and convert a risk-adjusted cost estimate to then-
year dollars and identify high-risk elements to help in risk mitigation 
efforts. 

To take these steps, the cost estimator must work with the program 
office and technical experts to collect the proper information. Short-
changing or merely guessing at the first two steps does not lead to a 
credible S curve and can give management a false sense of confidence in 
the information. 

Step 1: Determine Program Cost Drivers and Associated Risks: 
 
In chapter 13, we noted that one of the benefits of a sensitivity 
analysis is a list of the program cost drivers. Since numerous risks 
can influence the estimate, they should be examined for their sources 
of uncertainty and potential effect, and they should be modeled to 
determine how they can affect the uncertainty of the cost estimate. For 
example, undefined or unknown technical information, uncertain economic 
conditions, unexpected schedule problems, requirements growth, security 
level changes, and political issues are often encountered during a 
program’s acquisition. Each of these risks can negatively or positively 
affect a program’s cost. This means that uncertainty can cause the 
actual cost or schedule to differ from any current plan either in a 
positive or beneficial direction or in a negative or harmful direction. 
In addition, new technologies may be proposed that can fail outright, 
causing rework and unexpected cost growth. 

Risks are also associated with the estimating process itself. For 
instance, historical data from which to make a credible estimate can be 
lacking. When this happens, a cost estimator has no choice but to 
extrapolate with existing methods or develop a new estimating approach. 
No matter the method, some error will be introduced into the estimate. 

Accounting for all possible risks is necessary to adequately capture 
the uncertainty associated with a program’s point estimate. Far from 
exhaustive, table 21 describes some of the many sources of risk. It is 
only a starting point, since each program is unique. 

Table 21: Potential Sources of Program Cost Estimate Uncertainty: 

Uncertainty: Business or economic; 
Definition: Variations from change in business or economic assumptions; 
Example: Changes in labor rate assumptions—e.g., wages, overhead, 
general and administrative cost—supplier viability, inflation indexes, 
multiyear savings assumptions, market conditions, and competitive 
environment for future procurements. 

Uncertainty: Cost estimating; 
Definition: Variations in the cost estimate despite a fixed 
configuration baseline; 
Example: Errors in historical data and cost estimating relationships, 
variation associated with input parameters, errors with analogies and 
data limitations, data extrapolation errors, optimistic learning and 
rate curve assumptions, using the wrong estimating technique, omission 
or lack of data, misinterpretation of data, incorrect escalation 
factors, overoptimism in contractor capabilities, optimistic savings 
associated with new ways of doing business, inadequate time to develop 
a cost estimate. 
 
Uncertainty: Program; 
Definition: Risks outside the program office’s control; 
Example: Program decisions made at higher levels of authority, indirect 
events that adversely affect a program, directed funding cuts, multiple 
contractor teams, conflicting schedules and workload, lack of 
resources, organizational interface issues, lack of user input when 
developing requirements, personnel management issues, organization’s 
ability to accept change, other program dependencies. 

Uncertainty: Requirements; 
Definition: Variations in the cost estimate caused by change in the 
configuration baseline from unforeseen design shifts; 
Example: Changes in system architecture (especially for system of 
systems programs), specifications, hardware and software requirements, 
deployment strategy, critical assumptions, program threat levels, 
procurement quantities, network security, data confidentiality. 

Uncertainty: Schedule; 
Definition: Any event that changes the schedule: stretching it out may 
increase funding requirements, delay delivery, and reduce mission 
benefits; 
Example: Amount of concurrent development, changes in configuration, 
delayed milestone approval, testing failures requiring rework, 
infeasible schedule with no margin, overly optimistic task durations, 
unnecessary activities, omission of critical reviews. 
 
Uncertainty: Software; 
Definition: Cost growth from overly optimistic assumptions about 
software development; 
Example: Underestimated software sizing, overly optimistic software 
productivity, optimistic savings associated with using commercial off-
the-shelf software, underestimated integration effort, lack of 
commercial software documentation, underestimating the amount of glue 
code needed, configuration changes required to support commercial 
software upgrades, changes in licensing fees, lack of support for older 
software versions, lack of interface specification, lack of software 
metrics, low staff capability with development language and platform, 
underestimating software defects. 

Uncertainty: Technology; 
Definition: Variations from problems associated with technology 
maturity or availability; 
Example: Uncertainty associated with unproven technology, obsolete 
parts, optimistic hardware or software heritage assumptions, 
feasibility of producing large technology leaps, relying on lower 
reliability components, design errors or omissions. 
 
Source: DHS, DOD, DOE, NASA, OMB, SCEA, and industry. 

[End of table] 

Collecting high-quality risk data is key to a successful analysis of 
risk. Often there are no historical data from which to derive the 
information needed as inputs to a risk analysis of cost or schedule. 
Most risk data are instead derived from in-depth interviews or risk 
workshops. In other words, the data used in program risk analyses are 
often based on individuals’ expert judgment, which depends on the 
experience of the interviewees and may be biased. The success of data 
collection depends also on the risk maturity of the organization’s 
culture. It is difficult to collect useful risk analysis data when the 
organization is indifferent or even hostile to expressing risk in the 
program. Obtaining risk information from staff outside the acquisition 
program office can help balance potential optimism. 

After identifying all possible risks, a cost estimator needs to define 
each one in a way that facilitates determining the probability of each 
risk occurring, along with the cost effect. To do this, the estimator 
needs to identify a range of values and their respective probabilities— 
either based on specific statistics or expressed as best case, worst 
case, and most likely—and the rationale for choosing the variability 
discussed. While the best practice is to rely on historical data, if 
these data are not available, how qualitative judgment was applied 
should be explained (e.g., not planning for first time success in 
testing). Because the quality and availability of the data affect the 
cost estimate’s uncertainty, these should be well documented and 
understood. For example, a cost estimate based on detailed actual data 
in significant quantities will yield a more confident estimate than one 
based on an analogy using only a few data points. 

Since collecting all this information can be formidable, it should be 
done when the data are collected to develop the estimate. Interviews 
with experts familiar with the program are good sources of how varied 
the risks are for a particular cost element. However, experts do not 
always think in extremes. They tend instead to estimate probability 
ranges that represent only 60 percent to 85 percent of the possible 
outcomes, so adjustments may have to be made to consider a wider 
universe of risks. In addition, the technical baseline description 
should address the minimum and maximum range, as well as the most 
likely value for critical program parameters. 

Several approaches, ranging from subjective judgment to complex 
statistical techniques, are available for dealing with uncertainty. 
Here we describe different ways of determining the uncertainty of a 
cost estimate. 

Cost Growth Factor: 

Using the cost growth factor, the cost estimator reflects on 
assumptions and judgments from the development of the cost estimate and 
then makes a final adjustment to the estimate. This is usually a 
percentage increase, based on historical data from similar programs, or 
an adjustment solicited from expert opinion and based on experience. 
This yields a revised cost estimate that explicitly recognizes the 
existence of uncertainty. It can be applied at the total program level 
or for one or more WBS elements. The advantages of this approach are 
that it is easy to implement, takes little time to perform, and 
requires minimal detail. Its problems are that it requires access to a 
credible historical database, that the selection of comparative 
projects and adjustment factors can be subjective, and that new 
technologies or lessons learned may make historical data less relevant. 
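
As a simple illustration of this approach, the sketch below uses wholly 
hypothetical numbers to compute an average historical growth factor 
from a few analogous programs and apply it to a point estimate; a real 
application would draw the factor from a credible historical database. 

initial = [100.0, 250.0, 80.0]      # original estimates for analogous programs, $M
actual = [135.0, 340.0, 96.0]       # final costs for the same programs, $M

growth = [(a - i) / i for i, a in zip(initial, actual)]
factor = sum(growth) / len(growth)  # average historical cost growth

point_estimate = 120.0              # $M
adjusted = point_estimate * (1 + factor)
print(f"Average historical growth: {factor:.0%}")
print(f"Adjusted estimate: ${adjusted:.1f} million")

[End of example] 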

Expert Opinion: 

An independent panel of experts can be gathered to review, understand, 
and discuss the system and its costs, in order to quantify the 
estimate’s uncertainty and adjust the point estimate. This approach is 
often used in conjunction with the Delphi technique, in which several 
experts provide opinions independently and anonymously. The results are 
summarized and returned to the experts, who are then given the 
opportunity to change or modify their opinions, based on the opinions 
of the group as a whole. If successful, after several such iterations, 
the expert opinions converge. 

The strengths of this approach are directly related to the diversity 
and experience of the panel members. The major weaknesses are that it 
can be time consuming and experts can present biased opinions. For 
example, some of the largest risks facing a program may stem from a new 
technology for which there is little previous experience. If the risk 
distributions rest on the beliefs of the same experts who may be 
stakeholders, it could be difficult to truly capture the program risks. 
A typical rule of thumb is that lower and upper bounds estimated by 
experts should be interpreted as representing the 15 percent and 85 
percent levels, respectively, of all possible outcomes. Therefore, the 
cost estimator will need to adjust the distribution bounds to account 
for skew (see step 2 for more on this issue). Cost estimators can also 
mitigate bias by avoiding leading questions and by questioning all 
assumptions to see if they are backed by historical data. 
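
The following sketch shows one way to apply the rule of thumb above, 
assuming for illustration that an expert’s low and high values cover 
only the 15th and 85th percentiles of a roughly normal spread; the 
expert inputs are hypothetical, and a skewed distribution would require 
a different adjustment. 

from scipy.stats import norm

expert_low, expert_high = 90.0, 130.0      # hypothetical expert bounds, $K
z85 = norm.ppf(0.85)                        # about 1.04

mean = (expert_low + expert_high) / 2
sigma = (expert_high - expert_low) / (2 * z85)

# Re-express the spread at the 5th and 95th percentiles for use in simulation.
low_5, high_95 = norm.ppf([0.05, 0.95], loc=mean, scale=sigma)
print(f"Widened bounds: {low_5:.1f} to {high_95:.1f}")

[End of example] 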

The analytic hierarchy process, like the Delphi technique, is another 
approach to making the best of expert opinion. It can be applied to the 
opinion of either an individual or a panel of experts and mitigates the 
problems of bias that result from group think or dominating 
personalities. The analytic hierarchy process provides a structured way 
to address complicated decisions: it relies on a framework for 
quantifying decision elements and evaluating various alternatives. This 
process allows for effective decision making because it captures both 
subjective and objective evaluation parameters, which can lead to less 
bias and help determine the best overall decision. The approach relies 
on mathematics to organize pair-wise comparisons of decision components 
and prioritizes the results to arrive at a stable outcome. 
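
A minimal sketch of the weighting step in the analytic hierarchy 
process follows; the pairwise comparison matrix is hypothetical (entry 
[i, j] records how strongly criterion i is preferred to criterion j on 
the usual 1 to 9 scale), and the priority weights are taken from the 
principal eigenvector. 

import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])                                          # reciprocal pairwise comparisons

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # normalized priority weights

n = A.shape[0]
consistency_index = (eigvals.real[k] - n) / (n - 1)   # 0 means fully consistent
print("Priority weights:", np.round(weights, 3))
print("Consistency index:", round(consistency_index, 3))

[End of example] 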

Mathematical Approaches: 

Mathematical approaches rely on statistics to describe the variance 
associated with an analogy or a cost estimating relationship. The most 
common approach is to collect data on the optimistic, most likely, and 
pessimistic range (the “3-point estimate”) for each risk, cost element, 
or schedule activity duration. Statistics like the standard error 
of the estimate and confidence intervals are more difficult to collect 
from program participants and are not commonly used. Some distributions 
use more exotic inputs such as “shape parameters” that are often 
difficult to collect, even in the most in-depth interviews. Therefore, 
the 3-point estimate and an idea about the distribution shape can be 
used to define the probability distribution to be used in a simulation. 
Probability distributions are used either to characterize risks that 
are assigned to cost elements or activity durations or as estimates of 
uncertainty in costs or durations that may be affected by several 
risks. With either of these approaches, in the simulation the lower-
level WBS element cost probabilities are combined to develop an overall 
program cost estimate probability distribution. 

A benefit of this approach is that it complements the decomposition 
approach to cost estimating. In addition, the emergence of commercial 
software models means that Monte Carlo simulation can be implemented 
quickly and easily, once all the data have been collected. The approach 
also has drawbacks: the input distributions can vary widely, 
correlation between cost elements must be included, and decision makers 
may not always accept the output. In addition, high-quality risk data 
can be difficult and expensive to collect. 

Technology Readiness Levels: 

NASA and the Air Force Space Command, among other organizations, 
address uncertainty by applying readiness levels, which capture the 
risk associated with developing state-of-the-art technology. They have 
historically developed technology readiness levels to indicate how 
close a given technology is to being available. Technology readiness 
levels are rated on a scale from 1 to 9, with 1 representing paper 
studies of a technology’s feasibility and 9 representing technology 
completely integrated into a finished product. In appendix XII, we list 
and describe nine technology readiness levels. 

Knowing a technology’s readiness level allows a cost estimator to judge 
the risk inherent in assuming it will be available for a given program. 
For example, GAO has determined that level 7—demonstration of a 
prototype in an operational environment—is the level of technological 
maturity that constitutes low risk for starting a product development 
program. One needs to be cautious, since programs can inflate the 
level. Before accepting a claim as true, there should be specific 
evidence that a program has achieved the claimed technology readiness 
level, such as clearly defined physical and functional interfaces, 
available raw materials, and manufacturing procedures that are set up 
and undergoing proof-of-concept testing. 

Software Engineering Institute Maturity Models: 

SEI has developed a variety of models that provide a logical framework 
for assessing whether an organization has the necessary process 
discipline to repeat earlier successes on similar projects. 
Organizations that do not satisfy the requirements for the “repeatable” 
level are by default judged to be at the initial level of 
maturity—meaning that their processes are ad hoc, sometimes even 
chaotic, with few of the processes defined and success dependent mainly 
on the heroic efforts of individuals. The lower the maturity, the 
higher the risk that a program will incur cost overruns. 

In addition to evaluating software risks, SEI’s risk evaluation method 
can be tailored to address hardware and organizational risks within a 
program. This method includes identifying and quantifying risk using a 
repeatable process for eliciting risks from experts. Furthermore, using 
SEI’s taxonomy, the risk evaluation method provides a consistent 
framework for employing risk management methods and mitigation 
techniques. 

Schedule Risk Analysis: 

Schedule risk analysis captures the risk that schedule durations may 
increase because of technical challenges, a lack of qualified 
personnel, or too few staff to do the work. It examines the effect of 
slipping activities and events on a program’s critical path, the 
longest path through the network schedule. A program schedule delay 
will have cost effects on all aspects of a program, including systems 
engineering and program management. The analysis also considers how 
various activities affect one another because of precedence 
relationships—activity C cannot begin until activities A and B are 
finished—and how a slip in one activity affects the duration of other 
activities when concurrence is high among tasks. By applying 
probabilistic distributions to capture the uncertainty with traditional 
early start–late start and early finish–late finish schedule durations, 
using optimistic, pessimistic, and most likely values, a cost estimator 
can draw a better picture of the true critical path and any cost 
effects to the program. In addition, this analysis addresses the 
feasibility of the program plan as well as the effect of not meeting 
the anticipated finish date. 
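
The sketch below illustrates the idea on a deliberately small, 
hypothetical network: activities A and B run in parallel, C cannot 
start until both finish, and each duration is drawn from a triangular 
distribution built from optimistic, most likely, and pessimistic 
values. 

import numpy as np

rng = np.random.default_rng(42)
trials = 20_000

dur_a = rng.triangular(20, 25, 45, trials)      # durations in days
dur_b = rng.triangular(22, 30, 40, trials)
dur_c = rng.triangular(15, 18, 30, trials)

finish = np.maximum(dur_a, dur_b) + dur_c       # C starts when A and B are done

plan = max(25, 30) + 18                         # schedule built on most likely values
print(f"Most likely plan: {plan} days")
print(f"Chance of meeting the plan: {np.mean(finish <= plan):.0%}")
print(f"70 percent confidence finish: {np.percentile(finish, 70):.0f} days")
print(f"A drives the critical path in {np.mean(dur_a > dur_b):.0%} of trials")

[End of example] 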

Risk Cube (Probability Impact Matrix) Method: 

The risk cube method prioritizes uncertainties that could jeopardize 
program cost, schedule, performance, and quality objectives in terms of 
probability of occurrence and cost effect. Subject matter experts, 
typically engineers and others familiar with the program, define the 
risk factors, probabilities, and cost effect for each identified risk. 
Using these data, the cost estimator develops the expected cost overrun 
by multiplying the cost impact by each risk factor’s probability of 
occurrence. A common technique for engaging those knowledgeable about 
the program is creating a two-dimensional matrix like the one in figure 
17. 

Figure 17: A Risk Cube Two-Dimensional Matrix: 

[Refer to PDF for image: illustration] 

The illustration shows a steadily increasing amount of risk, from low 
to medium to high, when plotted as follows: 

Probability: The likelihood that an objective will not be met if the 
current plan is used; 

plotted against: 

Consequence: The program penalty incurred if the objective is not 
obtained. 

Source: GAO. 

[End of figure] 

In the risk cube (P-I matrix) method, risks are mapped onto the matrix, 
based on the severity of the consequence—ranging from low risk = 1 to 
high risk = 5—and the likelihood of their occurring—ranging from low 
likelihood = 1 to high = 5. Risks that fall in the upper right quadrant 
are the most likely to occur and have the greatest consequences for the 
program, compared to risks that fall into the lower left quadrant. 

When risks are plotted together, management can quickly determine which 
ones have top priority. For a risk cube (P-I matrix) analysis to be 
accurate, complete lists of all risks are needed, as well as accurate 
probabilities of occurrence and cost impacts. Determining the cost 
impact will vary by program and WBS element, but a cost impact could, 
for example, be categorized as “60 percent more funding is required to 
resolve a technical shortfall that has no acceptable workarounds.” Once 
the cost impacts are identified, they are mapped to the appropriate WBS 
elements to help identify risk mitigation steps that would be most 
beneficial. 
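
The expected cost overrun calculation described above can be sketched 
as follows; the risks, probabilities, and cost impacts are 
hypothetical. 

risks = {                                   # probability of occurrence, impact ($K)
    "Technical shortfall, no workaround": (0.40, 600),
    "Late supplier delivery":             (0.25, 150),
    "Test failure requiring rework":      (0.10, 300),
}

expected_overrun = sum(p * impact for p, impact in risks.values())
print(f"Expected cost overrun: ${expected_overrun:,.0f} thousand")

# Ranking risks by expected impact helps set mitigation priorities.
for name, (p, impact) in sorted(risks.items(), key=lambda r: -r[1][0] * r[1][1]):
    print(f"{name}: ${p * impact:,.0f} thousand expected")

[End of example] 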

The advantages of using this approach are that those knowledgeable 
about the program can readily understand and relate to risks presented 
in this manner and that decision makers can understand the link between 
specific risks and consequences. A disadvantage is that engineers may 
not always know the cost impacts and may not account for the full 
spectrum of possible outcomes. Moreover, this method can underestimate 
total risk by omitting the correlation between technical risk and level 
of effort in activities like program management. 

Risk Scoring: 
 
Risk scoring quantifies risks and translates them into cost impacts. It 
is used to determine the amount of risk, preferably with an objective 
method in which the intervals between scores have meaning—a score of 0 
= low risk, a score of 5 = medium risk, and a score of 10 = high risk. 
This method is used most often to determine technical risk 
associated with hardware and software. The following categories are 
used for hardware: technology advancement (level of maturity), 
engineering development (current stage of development), reliability 
(operating time without failure), producibility (ease to manufacture), 
alternative item (availability of back-up item), and schedule (amount 
of aggressiveness). Table 22 is an example of the hardware risk scoring 
matrix.[Footnote 56] 
 
Table 22: A Hardware Risk Scoring Matrix: 

Risk score: 0 = low, 5 = medium, 10 = high: 

Risk category: 1. Technology advancement; 
0: Completed, state of the art; 
1–2: Minimum advancement required; 
3–5: Modest advancement required; 
6–8: Significant advancement required; 
9–10: New technology. 

Risk category: 2. Engineering development; 
0: Completed, fully tested; 
1–2: Prototype; 
3–5: Hardware and software development; 
6–8: Detailed design; 
9–10: Concept defined. 
 
Risk category: 3. Reliability; 
0: Historically high for same system;
1–2: Historically high on similar systems;
3–5: Modest problems known;
6–8: Serious problems known;
9–10: Unknown. 
 
Risk category: 4. Producibility; 
0: Production and yield shown on same system;
1–2: Production and yield shown on similar system;
3–5: Production and yield feasible;
6–8: Production feasible and yield problems; 
9–10: No known production experience. 
 
Risk category: 5. Alternative item; 
0: Exists or availability on other items not important;
1–2: Exists or availability on other items somewhat important;
3–5: Potential alternative in development;
6–8: Potential alternative in design;
9–10: Alternative does not exist and is required. 
 
Risk category: 6. Schedule; 
0: Easily achieved; 
1–2: Achievable; 
3–5: Somewhat challenging;
6–8: Challenging; 
9–10: Very challenging.

Source: © 2003, Society of Cost Estimating and Analysis (SCEA), “Cost 
Risk Analysis.” 

[End of table] 

In addition to hardware, categories for software include technology 
approach (level of innovation), design engineering (current stage of 
development), coding (code maturity), integrated software (based on the 
source lines of code count), testing (amount completed), alternatives 
(availability of back-up code), and schedule (amount of 
aggressiveness). A software risk scoring matrix is shown in table 23. 

Table 23: A Software Risk Scoring Matrix: 

Risk score: 0 = low, 5 = medium, 10 = high: 

Risk category: 1. Technology advancement; 
0: Proven conventional analytic approach, standard methods; 
1–2: Undemonstrated conventional approach, standard methods; 
3–5: Emerging approaches, new applications; 
6–8: Unconventional approach, concept in development; 
9–10: Unconventional approach, concept unproven. 

Risk category: 2. Design engineering; 
0: Design complete and validated; 
1–2: Specifications defined and validated; 
3–5: Specifications defined; 
6–8: Requirements defined; 
9–10: Requirements partly defined. 

Risk category: 3. Coding; 
0: Fully integrated code available and validated; 
1–2: Fully integrated code available; 
3–5: Modules integrated; 
6–8: Modules exist but not integrated; 
9–10: Wholly new design, no modules exist. 

Risk category: 4. Integrated software; 
0: Thousands of instructions;
1–2: Tens of thousands of instructions;
3–5: Hundreds of thousands of instructions;
6–8: Millions of instructions; 
9–10: Tens of millions of instructions. 

Risk category: 5. Testing; 
0: Tested with system; 
1–2: Tested by simulation; 
3–5: Structured walk-throughs conducted; 
6–8: Modules tested but not as a system; 
9–10: Untested modules. 

Risk category: 6. Alternatives; 
0: Alternatives exist, alternative design not important; 
1–2: Alternatives exist, design somewhat important; 
3–5: Potential for alternatives in development; 
6–8: Potential alternatives being considered; 
9–10: Alternative does not exist but is required. 

Risk category: 7. Schedule and management; 
0: Relaxed schedule, serial activities, high review cycle frequency, 
early first review; 
1–2: Modest schedule, few concurrent activities, review cycle 
reasonable; 
3–5: Modest schedule, many concurrent activities, occasional reviews, 
late first review; 
6–8: Fast track on schedule, many concurrent activities; 
9–10: Fast track, missed milestones, review at demonstrations only, no 
periodic reviews. 

Source: U.S. Air Force. 

[End of table] 

Technical engineers score program elements between 0 and 10 for each 
category and then rank the categories according to their effect on the 
program. Next, each element’s risk score is translated into a cost 
impact by (1) multiplying a factor by an element’s estimated cost (for 
example, a score of 2 increases the cost of an element by 10 percent) 
or (2) multiplying a factor by predetermined costs (a score of 2 has a 
cost impact of $50,000) or (3) developing a weighted average risk 
assessment score that is mapped to a historical cost growth 
distribution. 
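
A sketch of the first translation method follows; the mapping from 
score to percentage increase is hypothetical and would normally be 
calibrated to historical cost growth data. 

def score_factor(score: int) -> float:
    """Return a hypothetical cost increase factor for a 0-10 risk score."""
    if score == 0:
        return 0.0
    if score <= 2:
        return 0.10
    if score <= 5:
        return 0.25
    if score <= 8:
        return 0.50
    return 0.75

elements = {                    # WBS element: (estimated cost $K, risk score)
    "Antenna assembly": (1_200, 2),
    "Flight software": (3_500, 7),
    "Integration": (900, 4),
}

for name, (cost, score) in elements.items():
    adjusted = cost * (1 + score_factor(score))
    print(f"{name}: score {score}, ${cost:,} -> ${adjusted:,.0f}")

[End of example] 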

After using one or several of these methods to determine the cost risk, 
the estimator’s next step is to choose probability distributions to 
model the risk for each WBS cost element that has uncertainty. 

Step 2: Develop Probability Distributions to Model Uncertainty: 
 
Uncertainty is best modeled with a probability distribution that 
accounts for all possible outcomes according to the probability that 
they will occur. Figure 18 gives an example of a known distribution 
that models all outcomes associated with rolling a pair of dice. 

Figure 18: The Distribution of Sums from Rolling Two Dice: 

[Refer to PDF for image: illustration] 

Probability plotted versus Value; 

0 probability: that outcome is less than 2; 
50% probability: that the outcome is above or below 7 (this is the 
median); 
100% probability: that outcome will not exceed 12. 

Source: GAO. 

[End of figure] 

In figure 18, the horizontal axis shows the potential value of dice 
rolls, while the vertical axis shows the probability associated with 
each roll. The value at the midpoint of all rolls is the median. In the 
example, the median, a roll of 7, is also the most likely value and the 
average, because the outcomes associated with rolling a pair of dice 
are symmetric. 

Besides descriptive statistics, probability distributions provide other 
useful information, such as the boundaries of an outcome. For example, 
the lower bound in figure 18 is 2 and the upper bound is 12. By 
examining the distribution, it is easy to see that both the upper and 
lower bounds have the lowest probability of occurring, while the 
chances of rolling a 6, 7, or 8 are much greater. 
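
The dice example can be reproduced exactly by enumerating the 36 
equally likely outcomes, as in this short sketch. 

from collections import Counter
from itertools import product

counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))

for total in sorted(counts):
    print(f"Sum {total:2d}: probability {counts[total] / 36:.3f}")

# Equal probability mass lies below and above 7, making 7 the median.
below = sum(c for t, c in counts.items() if t < 7) / 36
above = sum(c for t, c in counts.items() if t > 7) / 36
print(f"P(sum < 7) = {below:.3f}, P(sum > 7) = {above:.3f}")

[End of example] 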

It is difficult to pick an appropriate probability distribution for the 
point cost estimate as a whole, because it is composed of several 
subsidiary estimates based on the WBS. These WBS elements are often 
estimated with a variety of techniques, each with its own uncertainty 
distributions that may be asymmetrical. Therefore, simply adding the 
most likely WBS element costs does not result in the most likely cost 
estimate, because the risk distributions associated with the 
subelements differ. 

One way to resolve this issue is to create statistical probability 
distributions for each WBS element or risk by specifying the risk shape 
and bounds that reflect the relative spread and skewness of the 
distribution. The probability distribution represents the risk shape, 
and the tails of the distribution reflect the best and worst case 
outcomes. Even though the bounds are extremes and unlikely to occur, 
the distribution acknowledges the possibility and probability that they 
could happen. Probability distributions are typically determined using 
the 3-point estimates of optimistic, most likely, and pessimistic 
values to identify the amount of spread and skewness of the data. 
However, if risks are used directly, they will be assigned to specific 
cost elements or activities in a schedule and will perform 
appropriately in a simulation.[Footnote 57] 
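
The following sketch, using hypothetical 3-point estimates, shows why 
summing most likely values understates the expected total when element 
distributions are skewed to the right; for a triangular distribution, 
the mean is simply the average of the three points. 

elements = {            # (optimistic, most likely, pessimistic), $ millions
    "Air vehicle": (40.0, 45.0, 70.0),
    "Ground systems": (10.0, 12.0, 20.0),
    "Systems engineering": (6.0, 7.0, 12.0),
}

sum_of_modes = sum(m for _, m, _ in elements.values())
sum_of_means = sum((a + m + b) / 3 for a, m, b in elements.values())

print(f"Sum of most likely values: ${sum_of_modes:.1f}M")
print(f"Expected total cost:       ${sum_of_means:.1f}M")   # higher because of skew

[End of example] 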

Using a simulation tool such as Monte Carlo, a cost estimator can 
develop a statistical summation of all probable costs, allowing for a 
better understanding of how likely it is that the point estimate can be 
met. A Monte Carlo simulation also does a better job of capturing risk, 
because it takes into consideration that some risks will occur while 
others may not. Furthermore, the simulation can adjust the risks beyond 
the upper and lower bounds to account for the fact that experts do not 
typically think in extremes. Figure 19 shows why different WBS element 
distributions need to be statistically summed in order to develop the 
overall point estimate probability distribution. 

Figure 19: A Point Estimate Probability Distribution Driven by WBS: 

[Refer to PDF for image: illustrations] 

Inputs: 
Probability distributions for each cost element in a system’s work 
breakdown structure; 

Outputs: 
A cumulative probability distribution of the system’s total cost. 

Source: NASA. 

Note: RPE = reference point estimate. 

[End of figure] 

In figure 19, the sum of the reference point estimates has a low level 
of probability on the S curve. In other words, there is only a 20 
percent chance or less of meeting the point estimate cost. Therefore, 
in order to increase the confidence in the program cost estimate, it 
will be necessary to add more funding to reach a higher level of 
confidence. 

In addition to knowing the bounds or 3-point estimates for the 
uncertainty of a WBS element or risk, choosing the right probability 
distribution for each WBS element is important for capturing the 
uncertainty correctly. For any WBS element, selecting the probability distribution 
should be based on how effectively it models actual outcomes. Since 
different distributions model different types of risk, knowing the 
shape of the distribution helps in visualizing how the risk will affect 
the overall cost estimate uncertainty. A variety of probability 
distribution shapes are available for modeling cost risk. Table 24 
lists eight of the most common probability distributions used in cost 
estimating uncertainty analysis. 

Table 24: Eight Common Probability Distributions: 

Distribution: Bernoulli; 
Description: Assigns probabilities of “p” for success and “1 – p” for 
failure; mean = “p”; variance = “p(1 – p)”; 
Shape: [Refer to PDF for image]; 
Typical application: With likelihood and consequence risk cube models; 
good for representing the probability of a risk occurring but not for 
the impact on the program. 

Distribution: Beta; 
Description: Similar to normal distribution but does not allow for 
negative cost or duration, this continuous distribution can be 
symmetric or skewed; 
Shape: [Refer to PDF for image]; 
Typical application: To capture outcomes biased toward the tail ends of 
a range; often used with engineering data or analogy estimates; the 
shape parameters usually cannot be collected from interviewees. 

Distribution: Lognormal; 
Description: A continuous distribution positively skewed with a 
limitless upper bound and known lower bound; skewed to the right to 
reflect the tendency toward higher cost; 
Shape: [Refer to PDF for image]; 
Typical application: To characterize uncertainty in nonlinear cost 
estimating relationships; it is important to know how to scale the 
standard deviation, which is needed for this distribution. 

Distribution: Normal; 
Description: Used for outcomes likely to occur on either side of the
average value; symmetric and continuous, allowing for negative costs 
and durations. In a normal distribution, about 68% of the values fall 
within one standard deviation of the mean; 
Shape: [Refer to PDF for image]; 
Typical application: To assess uncertainty with cost estimating 
methods; standard deviation or standard error of the estimate is used 
to determine dispersion. Since data must be symmetrical, it is not as 
useful for defining risk, which is usually asymmetrical, but can be 
useful for scaling estimating error. 

Distribution: Poisson; 
Description: Peaks early and has a long tail compared to other 
distributions; 
Shape: [Refer to PDF for image]; 
Typical application: To predict all kinds of outcomes, like the number 
of software defects or test failures. 

Distribution: Triangular; 
Description: Characterized by three points (most likely, pessimistic, 
and optimistic values), can be skewed or symmetric and is easy to 
understand because it is intuitive; one drawback is the absoluteness of 
the end points, although this is not a limitation in practice since it 
is used in a simulation; 
Shape: [Refer to PDF for image]; 
Typical application: To express technical uncertainty, because it works 
for any system architecture or design; also used to determine schedule 
uncertainty. 

Distribution: Uniform; 
Description: Has no peaks because all values, including highest and 
lowest possible values, are equally likely; 
Shape: [Refer to PDF for image]; 
Typical application: With engineering data or analogy estimates. 

Distribution: Weibull; 
Description: Versatile, can take on the characteristics of other 
distributions, based on the value of the shape parameter “b”— e.g., 
Rayleigh and exponential distributions can be derived from it[A]; 
Shape: [Refer to PDF for image]; 
Typical application: In life data and reliability analysis, because it 
can mimic other distributions and has an objective relationship to 
reliability modeling. 

Source: DOD, NASA, SCEA, and Industry. 

[A] The Rayleigh and exponential distributions are continuous 
probability distributions. 

[End of table] 

The triangular, lognormal, beta, uniform, and normal distributions in 
table 24 are the most common distributions that cost estimators may use 
to perform an uncertainty analysis. They are generally sufficient, 
given the quality of the information derived from interviews and the 
granularity of the results. However, many other types of distributions 
are discussed in myriad literature sources and are available through a
variety of statistical tools. 

The point to remember is that the shape of a distribution is determined 
by the characteristics of the risks it represents. If distributions are 
applied to WBS elements, they may combine the impact of several risks, 
so it may take some thought to determine the most appropriate 
distribution to use. For a CER, it is a best practice to use prediction 
interval statistical analysis to determine the bounds of the 
probability distribution because it is an objective method for 
determining variability. The prediction interval captures the error 
around a regression estimate and results in a wider variance for the 
CER. 
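
A sketch of a prediction interval around a simple weight-based CER 
follows; the data points are hypothetical, and the interval uses the 
standard ordinary least squares formula for a new observation. 

import numpy as np
from scipy import stats

weight = np.array([100., 150., 200., 250., 300., 350.])   # cost driver (pounds)
cost = np.array([9.8, 14.1, 20.5, 24.0, 31.2, 34.9])       # $ millions

n = len(weight)
b1, b0 = np.polyfit(weight, cost, 1)                        # slope, intercept
resid = cost - (b0 + b1 * weight)
s = np.sqrt(np.sum(resid**2) / (n - 2))                     # standard error of estimate

x0 = 275.0                                                  # new weight to estimate
se_pred = s * np.sqrt(1 + 1/n + (x0 - weight.mean())**2 /
                      np.sum((weight - weight.mean())**2))
t = stats.t.ppf(0.975, df=n - 2)                            # 95 percent two-sided

y0 = b0 + b1 * x0
print(f"CER estimate at {x0:.0f} lbs: ${y0:.1f}M, 95% prediction interval "
      f"${y0 - t * se_pred:.1f}M to ${y0 + t * se_pred:.1f}M")

[End of example] 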

When there is no objective way to pick the distribution bounds, a cost 
estimator will resort to interviewing several people—especially 
experienced personnel both within and outside the program—about what 
the distribution bounds should be. Promising anonymity to the 
interviewees may help secure their unbiased thoughts. Separating the 
risk analysis function organizationally from the program and program 
manager often provides the needed independence to withstand political 
and other pressures for biased results. 

One way to avoid the potential for experts to be success oriented when 
choosing the upper and lower extremes of the distribution is to look 
for historical data that back up the distribution range. If historical 
data are not available, it may be necessary to adjust the tails to 
account for the fact that being overly optimistic usually results in 
programs costing more and taking longer than planned. Thus, it is 
necessary to skew the tails to account for this possibility in order to 
properly represent the overall risk. Organizations should, as a best 
practice, examine and publish default distribution bounds that cost 
estimators can use when the data cannot be obtained objectively. 

Once all cost element risks have been identified in step 1 and 
distributions have been chosen to model them in step 2, correlation 
between the cost elements must be examined in order to fully capture 
risk, especially risk related to level-of-effort cost elements. 

Step 3: Account for Correlation between Cost Elements: 

Because different WBS elements’ costs may be affected by the same 
external factors, some degree of correlation exists between them. 
Correlation identifies the relationship between WBS elements such that 
when one WBS element’s cost is high within its own probability 
distribution, the other WBS element will also show a high cost in its 
own probability distribution. Thus, correlated cost elements should 
rise and fall together. Without correlating the two elements, 
inconsistent scenarios where one is high and the other is low could 
occur during the simulation, causing erroneous results. Therefore, a 
change in one WBS element’s cost will usually be found with a change in 
the same direction (if positive correlation) in another element’s cost. 
If this is so for many elements, the cumulative effect tends to 
increase the range of possible costs. Consider the following examples: 

* If a supplier delivers an item late, other scheduled deliveries could 
be missed, resulting in additional cost. 

* If technical performance problems occur, unexpected design changes 
and unplanned testing may result, affecting the final schedule and 
cost. 

* If concurrence is great between activities, a slip in one activity 
could have a cascading effect on others, resulting in a high degree of 
schedule and cost uncertainty. 

* If the number of software lines of code depends heavily on the 
software language and the definition of what constitutes a line of 
code, a change in the counting definition or software language will 
change the number of lines of code affecting both schedule and cost. 

As these examples show, many parts of a cost estimate may move 
together, and when they do, summing their costs results in 
reinforcement in both negative and positive directions. Therefore, 
mitigating a risk that affects two or more WBS cost elements can reduce 
uncertainty on several cost elements. A case in point is the standing 
army effect, which occurs when a schedule slip in one element results 
in delays in many other cost elements as staff wait to complete their 
work. As such, interelement correlation must be addressed so that the 
total cost probability distribution properly reflects the risks. 

To properly capture functional correlation, the cost model should be 
structured with all dependencies intact. For instance, if the cost of 
training is modeled as a factor of hardware cost, then any uncertainty 
in the hardware cost will be positively correlated to the risk in 
training cost. Thus, when the simulation is run, risks fluctuating 
within main cost element probability distributions will accurately flow 
down to dependent WBS elements. 

One of the advantages of a cost estimating relationship based cost 
model is the manner in which the statistical analysis used to derive 
the CERs can also be drawn on to identify and, in some cases, quantify 
the correlations between various cost risk elements. It is also 
important to ensure that uncertain cost method inputs (weight, labor 
rates) are correlated appropriately. 

In some cases, however, it may be necessary to inject correlation into 
“below the line” dependent elements to account for correlated risk. 
These elements are typically level-of-effort support activities, like 
systems engineering and program management. In addition, correlation 
may have to be injected into the cost model to account for effects that 
the model may not capture. For example, a program risk may be that the 
length of an aircraft wing increases. If that happens, a larger engine 
than was originally estimated would then be required. Because this risk 
effect is not correlated in the cost model, it must be injected into 
the risk scenario. 

Estimators should examine the correlation coefficients from the 
simulation model to determine the amount of correlation that already 
exists in the cost model. As a rule of thumb, it is better to insert an 
overall nominal correlation of 0.25 than to show no correlation at all. 
This will prevent the simulation from drawing a low value for one 
element and a high value for another element, causing a cancellation of 
risk when both elements are positively correlated. 
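
One common way to inject the nominal correlation discussed above is to 
draw correlated normal values and map them onto each element’s 
distribution (a Gaussian copula). The sketch below does this for two 
hypothetical cost elements with triangular distributions and a 0.25 
correlation; the realized correlation of the transformed draws will be 
close to, though not exactly, 0.25. 

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
trials = 20_000
corr = np.array([[1.00, 0.25],
                 [0.25, 1.00]])

z = rng.standard_normal((trials, 2)) @ np.linalg.cholesky(corr).T
u = stats.norm.cdf(z)                       # correlated uniform draws

elems = [(8.0, 10.0, 16.0), (4.0, 5.0, 9.0)]   # (low, mode, high), $ millions
draws = np.column_stack([
    stats.triang.ppf(u[:, i], c=(m - a) / (b - a), loc=a, scale=b - a)
    for i, (a, m, b) in enumerate(elems)
])

total = draws.sum(axis=1)
print(f"Realized correlation: {np.corrcoef(draws.T)[0, 1]:.2f}")
print(f"Standard deviation of total cost: ${total.std():.2f}M")

[End of example] 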

Regardless of which approach is taken, it is important to note that 
correlation should never be ignored. Doing so can significantly affect 
the cost risk analysis, resulting in a dramatically understated 
probability distribution that can create a false sense of confidence in 
the resulting estimate. Therefore, highly risky programs should show a 
wide range of possible costs. (More information on correlation and how 
to account for schedule risk affecting the cost estimate is in appendix 
X.) 

Step 4: Perform Uncertainty Analysis with a Monte Carlo Simulation: 

The most common technique for combining the individual elements and 
their distributions is Monte Carlo simulation.[Footnote 58] In one 
approach, the distributions for each cost element are treated as 
individual populations from which random samples are taken. In another 
approach, each risk is modeled and assigned to the WBS elements it 
affects; in this approach, a risk may affect more than one WBS 
element’s cost, and a WBS element’s cost may be affected by more than 
one risk. In either case, during the simulation a cost model is 
recalculated thousands of times by repeatedly drawing random values 
from each WBS distribution or distribution of risk factors, so that 
many thousands of possible outcomes are taken into 
account. The simulation’s output illustrates (1) the likelihood of 
achieving the program’s cost objectives, given the current plan and 
risks as they are known and quantified; (2) the likelihood of other 
possible outcomes, which can be a way to determine the cost value that 
has an acceptable probability of being exceeded; and (3) through 
sensitivity analysis, the high-priority risks or WBS elements that can 
guide effective risk mitigation. 

Not a new concept, Monte Carlo simulation has been a respected method 
of analyzing risk in engineering and science for more than 60 years. 
Mathematicians working on the Manhattan Project used it during 
World War II, and the technique has been used to determine the value of 
pi (π) to within 6 decimal places. Developed by a mathematician who 
pondered the probabilities associated with winning a card game of 
solitaire, Monte Carlo simulation is used to approximate the 
probability outcomes of multiple trials by generating random numbers. 
In determining the uncertainty associated with a program’s point 
estimate, a Monte Carlo simulation randomly generates values for 
uncertain variables over and over to simulate a model. 

Without the aid of simulation, the analyst generally produces a single 
outcome for the total program cost, usually by adding up the 
individual WBS element cost estimates. This value is not necessarily 
the most likely or average scenario. In fact, without a risk analysis, 
it is not known how adequate this single-point estimate is likely to be 
for handling the program risks. But after hundreds or thousands of 
trials, one can view the frequency distribution of the results and 
determine the certainty of any outcome. Performing an uncertainty 
analysis using Monte Carlo simulation quantifies the amount of cost 
risk within a program. Only by quantifying the cost risk can management 
make informed decisions about risk mitigation strategies and provide a 
benchmark against which to measure progress. 

To perform an uncertainty analysis, each WBS element’s risk or risk 
factor is assigned a specific probability distribution of feasible 
values. In setting up the simulation, any identified causality may be 
modeled. Also, correlations are specified, including identified 
correlated elements and estimated strength of the correlation. These 
are automatically taken into account by the software during the 
simulation, where a random draw from each distribution is taken and the 
results are added up. This random drawing among distributions is 
repeated thousands of times with statistical software in order to 
determine the frequency distribution. Since the simulation’s inputs are 
probability distributions, the outputs are also distributions. The 
result is a distribution of possible total program costs, 
characterized by an overall mean and standard deviation. Rather than 
being normal, the 
total cost distribution is usually lognormal. This happens because the 
overall cost distribution is derived from the lower-level WBS elements, 
each of which has unique distributions. Since many of these underlying 
distributions tend to be skewed to the right, the overall distribution 
is typically lognormal. This makes sense since most cost estimates tend 
to overrun rather than underrun. This distribution can also be 
converted to an S curve like the S curves shown in figures 16 and 19. 
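
A minimal sketch of such a simulation, written in Python with the 
numpy library, is shown below. The four notional WBS elements, their 
3-point estimates, and the use of triangular distributions are 
illustrative assumptions; for simplicity the draws are independent, 
whereas a real analysis would also apply correlation as discussed 
above. 

import numpy as np

rng = np.random.default_rng(7)
n_trials = 10_000

# Assumed 3-point estimates (low, most likely, high) for four notional
# WBS elements, in base-year dollars (millions).
wbs = {
    "prime mission equipment": (80, 100, 150),
    "software":                (40,  60, 120),
    "system test":             (15,  20,  35),
    "program management":      (10,  12,  18),
}

# Draw each element from a triangular distribution and sum each trial.
draws = np.column_stack([rng.triangular(low, likely, high, n_trials)
                         for low, likely, high in wbs.values()])
total = draws.sum(axis=1)

# Empirical S curve: the cost at selected confidence levels.
for p in (10, 30, 50, 70, 90):
    print(f"{p}% confidence: {np.percentile(total, p):.1f}")
print(f"mean: {total.mean():.1f}")

Sorting the simulated totals and reading off their percentiles is what 
produces the cumulative S curve described in this chapter. 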

An advantage of using a Monte Carlo simulation is that both good and 
bad effects can be modeled, as well as any offsets that occur when both 
happen at the same time. In addition, Monte Carlo simulation not only 
recognizes the uncertainty inherent in the point estimate but also 
captures the uncertainty with all other possible estimates, allowing 
for a better analysis of alternatives. Using this technique, management 
can base decisions on cost estimate probabilities rather than relying 
on a single point estimate with no level of confidence attached. 

Step 5: Identify the Probability Associated with the Point Estimate: 

After the simulation has been run and causality and correlation have 
been accounted for, the next step is to determine the probability 
associated with the point estimate. The cumulative probability 
distribution resulting from the Monte Carlo simulation provides the 
cost estimator and management with risk-adjusted estimates and 
corresponding probabilities. The output of the simulation is useful 
for determining the probability of achieving the point estimate, along 
with a range of possible outcomes bounded by minimum and maximum 
costs. This probability can then be weighed against available funding 
to understand the confidence one can place in the program’s meeting its 
objectives. 
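
As a minimal illustration of this step, the sketch below evaluates the 
empirical cumulative distribution of simulated total costs at the 
point estimate. The simulated totals and the point estimate value are 
notional stand-ins, not program data. 

import numpy as np

rng = np.random.default_rng(7)

# Notional stand-in for the array of simulated total costs produced by
# the Monte Carlo run; in practice this array comes from the simulation.
total = rng.lognormal(mean=np.log(200.0), sigma=0.15, size=10_000)

point_estimate = 192.0  # assumed deterministic sum of WBS estimates

# The empirical cumulative distribution evaluated at the point estimate
# gives the probability of not exceeding it.
confidence = (total <= point_estimate).mean()
print(f"Probability of not exceeding the point estimate: {confidence:.0%}")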

Uncertainty analysis using a Monte Carlo simulation communicates to 
stakeholders how likely a program is to finish at the estimated cost 
and schedule, how much cost contingency reserve is needed to provide 
the desired degree of certainty that the estimate will be adequate, and 
the likely risks so that proactive responses can be developed.[Footnote 
59] It also determines how different two competing alternatives are in 
terms of cost. In addition, estimating future costs with probabilities 
is better than just relying on a point estimate, because informed 
decisions can be made regarding all possible outcomes. 

Because we can never know all the risks until the program is finally 
complete, the risk analysis and cost risk simulation exercise should be 
conducted periodically through the life of the program. Organizations 
often require such an analysis before major milestone decision points. 

Step 6: Recommend Sufficient Contingency Reserves: 

The main purpose of risk and uncertainty analysis is to ensure that a 
program’s cost, schedule, and performance goals can be met. The 
analysis also communicates to decision makers the specific risks that 
contribute to a program’s cost estimate uncertainty. Without this 
knowledge, a program’s estimated cost could be understated and subject 
to underfunding and cost overruns, putting it at risk of being reduced 
in scope or requiring additional funding to meet its objectives. 
Moreover, probability data from an uncertainty analysis can result in 
more equitable distribution of budget in an EVM system, ensuring that 
the most risky cost elements receive adequate budget up front. 

Using information from the S curve, management can determine the 
contingency reserves needed to reach a specified level of confidence. 
The difference in cost between the point estimate and the desired 
confidence level determines the required contingency reserve. Because 
cost distributions tend to be right skewed (that is, the tendency is 
for costs to overrun rather than underrun), the mean of the 
distribution tends to fall somewhere between the 55 percent and 65 
percent confidence levels. Therefore, if it is decided to fund a 
program at the 50 percent confidence level, there is still a chance 
that the program will need additional funding because the expected 
value is at a higher confidence level. Moreover, extremely risky 
programs will require funding at a level closer to the 65 percent 
confidence level or higher. Since each program is unique and so are its 
risks, there are no set rules as to what level of contingency is 
sufficient. Decision makers have to decide the level of confidence at 
which to set the budget. Having adequate funding is paramount for 
optimal program execution, since it can take many months to obtain 
necessary funding to address an emergent program issue. Without 
available risk funding, cost growth is likely. 
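
The sketch below illustrates this calculation. The simulated totals, 
the point estimate, and the 70 percent target confidence level are 
illustrative assumptions only; the choice of confidence level belongs 
to decision makers. 

import numpy as np

rng = np.random.default_rng(7)
total = rng.lognormal(np.log(200.0), 0.15, 10_000)  # notional simulated totals
point_estimate = 192.0                              # assumed point estimate
target_confidence = 70                              # assumed management choice

budget_at_target = np.percentile(total, target_confidence)
reserve = budget_at_target - point_estimate

print(f"Cost at the {target_confidence}% confidence level: {budget_at_target:.1f}")
print(f"Contingency reserve above the point estimate: {reserve:.1f}")
print(f"Mean (expected value) of the distribution: {total.mean():.1f}")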

We caution that the validity of the results depends on the knowledge, 
experience, and data regarding a program’s risks. When the uncertainty 
analysis has been poorly executed, management may have a false sense of 
security that all risks have been accounted for and that the analysis 
was based on sound data. When this happens, program decisions will be 
based on bad information. Thus, it is imperative that the cost 
estimators properly correlate cost elements and consider a broad range 
of potential program risks rather than simply focusing on the risks 
that most concern the program office or contractor. Furthermore, to 
ensure that best practices have been followed and to prevent errors 
such as not properly accounting for correlation between cost elements, 
it is a best practice to vet the uncertainty analysis through a core 
group of experts to ensure that results are robust and valid. 

In addition, to ensure that accurate information is available for 
performing uncertainty analysis, the estimate should be continually 
updated with actual costs and any variances recorded. This will enable 
organizations to identify areas where estimating was difficult or 
sources of risk were not considered. Doing so will guard against 
presenting misleading results to management and will result in 
continuous improvements in the uncertainty analysis process. 

A program’s early phases entail a lot of uncertainty, and the amount of 
contingency funding required may exceed acceptable levels. Management 
can use insight gained from the uncertainty analysis to act to reduce 
risk and keep the program affordable. It may also examine different 
levels of 
contingency reserve funds to understand what level of confidence the 
program can afford. Most importantly, management needs to understand 
that any uncertainty analysis or risk mitigation is only as good as the 
comprehensiveness of risks and uncertainties identified. Unknown risks 
could still cause problems, and these are difficult, if not impossible, 
to quantify. 

Step 7: Allocate, Phase, and Convert a Risk-Adjusted Cost Estimate to 
Then-Year Dollars and Identify High-Risk Elements: 

Uncertainty is calculated on the total cost estimate results, not year 
by year. Therefore, since a budget is requested in then-year dollars, 
it is necessary to convert the cost estimate into then-year dollars by 
phasing the WBS element costs over time. Because WBS element results at 
a specific confidence level will not sum to the parent levels, it will 
be necessary to pick the level in the WBS from which risk dollars are 
to be managed. The difference between the point estimate and the risk 
result at the selected confidence level is the amount of contingency 
reserve to be set aside for mitigating risks in lower WBS level 
elements. 

Once the total amount of contingency reserve has been determined, 
reserves need to be set aside for the WBS elements that harbor 
the most risk so that funding will be available to mitigate risks 
quickly. To identify which WBS elements may need contingency reserve, 
results from the uncertainty analysis are used to prioritize risks, 
based on probability and impact as they affected the cost estimate 
during the simulation. Knowing which risks are important will guide the 
allocation of contingency reserve. 
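
One simple way this conversion might be carried out is sketched below. 
The phasing shares, escalation rate, and base year are purely 
illustrative assumptions; actual budget exercises use approved agency 
inflation indices. 

# All figures (phasing shares, escalation rate, base year) are
# illustrative assumptions, not prescribed values.

risk_adjusted_estimate = 230.0   # base-year dollars (millions)
base_year = 2025
escalation_rate = 0.02           # assumed annual inflation
phasing = {2025: 0.15, 2026: 0.35, 2027: 0.30, 2028: 0.20}  # shares sum to 1.0

then_year_total = 0.0
for year, share in phasing.items():
    index = (1 + escalation_rate) ** (year - base_year)   # escalation index
    then_year_cost = risk_adjusted_estimate * share * index
    then_year_total += then_year_cost
    print(f"FY{year}: {then_year_cost:6.1f} then-year dollars")
print(f"Total budget request: {then_year_total:.1f} then-year dollars")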

Risk Management: 

Risk and uncertainty analysis is just the beginning of the overall risk 
management process. Risk management is a structured and efficient 
process for identifying risks, assessing their effect, and developing 
ways to reduce or eliminate risk. It is a continuous process that 
constantly monitors a program’s health. In this process, program 
management develops risk handling plans and continually tracks them to 
assess the status of program risk mitigation activities and abatement 
plans. In addition, risk management anticipates what can go wrong 
before it becomes necessary to react to a problem that has already 
occurred. Identifying and measuring risk by evaluating the likelihood 
and consequences of an undesirable event are key steps in risk 
management. The risk management process should address five steps (a 
simple watch-list sketch follows the list): 

1. identify risks, 

2. analyze risks (that is, assess their severity and prioritize them), 

3. plan for risk mitigation, 

4. implement a risk mitigation plan, and 

5. track risks. 
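
The watch-list sketch referenced above is a minimal illustration of 
steps 1 and 2; the risks, probabilities, and cost impacts are 
hypothetical, and the probability-times-impact scoring is only one 
common variant of the risk cube (P-I matrix) approach. 

# Risks, probabilities, and cost impacts are illustrative assumptions.

watch_list = [
    {"risk": "Immature software integration", "probability": 0.6, "impact": 12.0},
    {"risk": "Late vendor deliveries",        "probability": 0.4, "impact":  5.0},
    {"risk": "Requirements growth",           "probability": 0.3, "impact": 20.0},
]

# Score each risk by expected cost impact (probability times impact),
# the same idea as a probability-impact (risk cube) ranking.
for item in watch_list:
    item["score"] = item["probability"] * item["impact"]

# Highest-scoring risks are candidates for mitigation plans and reserve.
for item in sorted(watch_list, key=lambda r: r["score"], reverse=True):
    print(f'{item["risk"]:32s} expected impact = {item["score"]:5.1f}')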

Steps 1 and 2 should have already been taken during the risk and 
uncertainty analysis. Steps 3–5 should begin before the actual work 
starts and continue throughout the life of the program. Over time, some 
risks will be realized, others will be retired, and some will be 
discovered: Risk management never ends. Establishing a baseline of risk 
expectations early provides a reference from which actual cost risk can 
be measured. The baseline helps program managers track and defend the 
need to apply risk reserves to resolve problems. 

Integrating risk management with a program’s systems engineering and 
program management process permits enhanced root cause analysis and 
consequence management, and it ensures that risks are handled at the 
appropriate management level. Furthermore, successful risk mitigation 
requires communication and coordination between government and the 
contractor to identify and address risks. A common database of risks 
available to both is a valuable tool for mitigating risk so that 
performance and cost are monitored continually. 

Regular event-driven reviews are also helpful in defining a program 
that meets users’ needs while minimizing risk. Similarly, relying on 
technology demonstrations, modeling and simulation, and prototyping can 
be effective in containing risk. When risks materialize, risk 
management should provide a structure for identifying and analyzing 
root causes. 

Effective risk management depends on identifying and analyzing risk 
early, while there is still time to make corrections. By developing a 
watch list of risk issues that may cause future problems, management 
can monitor and detect potential risks once the program is under 
contract. An EVM system can give programs early warning of 
emerging risk items and worsening performance trends, allowing 
corrections to be implemented quickly. 

EVM systems also require the contractor to provide an estimate at 
completion and written corrective action plans for any variances; 
these can be assessed for realism using risk management data and 
techniques. 
Moreover, during an integrated baseline review (IBR), the joint 
government and contractor team evaluates program risks associated with 
work definition, schedule, and the adequacy of budgets. This review 
enhances mutual understanding of risks facing the program and lays the 
foundation for tracking them in the EVM system. It also establishes a 
realistic baseline from which to measure performance and identify risk 
early. 
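
As one illustration of how a contractor’s estimate at completion might 
be cross-checked, the minimal sketch below computes an independent EAC 
from assumed EVM data using the common CPI-based formula; other 
formulas (for example, ones that also weight schedule performance) are 
used in practice. 

# Input values are illustrative assumptions; agencies use several EAC
# formulas, and this sketch shows only the common CPI-based one.

bac = 500.0   # budget at completion
ev  = 200.0   # earned value (budgeted cost of work performed)
ac  = 240.0   # actual cost of work performed

cpi = ev / ac            # cost performance index
eac = bac / cpi          # independent EAC, assuming current efficiency continues
vac = bac - eac          # projected variance at completion

print(f"CPI = {cpi:.2f}")
print(f"Independent EAC = {eac:.1f}")
print(f"Variance at completion = {vac:.1f}")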

Risk management is continual because risks change significantly during 
a program’s life. A risk event’s likelihood and consequences may change 
as the program matures and more information becomes known. Program 
management needs to continually reevaluate the risk watch list to keep 
it current and examine new root causes. Successful risk management 
requires timely reporting to alert management to risks that are 
surfacing, so that mitigation action can be approved quickly. Having an 
active risk management process in place is a best practice: When it is 
implemented correctly, it minimizes risks and maximizes a program’s 
chances of being delivered on time, within budget, and with the 
promised functionality. 

11. Best Practices Checklist: Cost Risk and Uncertainty: 

* A risk and uncertainty analysis quantified the imperfectly understood 
risks that are in the program and identified the effects of changing 
key cost driver assumptions and factors. 
- Management was given a range of possible costs and the level of 
certainty in achieving the point estimate. 
- A risk-adjusted estimate that reflects the program’s risks was 
determined. 
- A cumulative probability distribution (an S curve) mapped various 
cost estimates to probability levels, and defensible contingency 
reserves were developed. 
- Periodic risk and uncertainty analysis was conducted to keep the 
assessment of estimate uncertainty current. 

* The following steps were taken in performing an uncertainty analysis: 
- Program cost drivers and associated risks were determined, including 
those related to changing requirements, cost estimating errors, 
business or economic uncertainty, and technology, schedule, program, 
and software uncertainty. 
-- All risks were documented for source, data quality and availability, 
and probability and consequence. 
-- Risks were collected from staff within and outside the program to 
counter optimism. 
-- Uncertainty was determined by cost growth factor, expert opinion 
(adjusted to consider a wider range of risks), statistics and Monte 
Carlo simulation, technology readiness levels, software engineering 
maturity models and risk evaluation methods, schedule risk analysis, 
risk cube (P-I matrix) method, or risk scoring. 
- A probability distribution modeled each cost element’s uncertainty 
based on data availability, reliability, and variability. 
-- A range of values and their respective probabilities were determined 
either based on statistics or expressed as 3-point estimates (best 
case, most likely, and worst case), and rationale for choosing which 
method was discussed. 
-- The rationale for choosing the probability distributions was 
documented. 
-- Probability distribution reflects the risk shape and the tails of 
the distribution reflect the best and worst case spread as well as any 
skewness. Distribution bounds were adjusted to account for stakeholder 
bias using organization default values when data specific to the 
program are not available. 
-- If the risk driver approach was used, the data collected, including 
probability of occurrence and impact, were applied to the risks 
themselves. 
-- Prediction interval statistical analysis was used for CER 
distribution bounds. 
- The correlation between cost elements was accounted for to capture 
risk. 
-- The correlation ensures that related cost elements move together 
during the simulation, resulting in reinforcement of the risks. 
-- Cost estimators examined the amount of correlation already existing 
in the model. If no correlation was present, a nominal correlation of 
0.25 was applied. 
- A Monte Carlo simulation model was used to develop a distribution of 
total possible costs and an S curve showing alternative cost estimate 
probabilities. 
-- High-priority risks were examined and identified for risk 
mitigation. 
-- The strength of correlation between cost elements was examined, and 
additional correlation was added if necessary to account for risk. 
- The probability associated with the point estimate was identified. 
- Contingency reserves were recommended for achieving the desired 
confidence level. 
-- The mean of the distribution tends to fall around the 55%–65% 
confidence level because the total cost distribution follows a 
lognormal trend (i.e., tendency to overrun rather than underrun costs). 
-- Budgeting to at least the mean of the distribution or higher is 
necessary to guard against potential risk. 
-- The cost risk and uncertainty results were vetted through a core 
group of experts to ensure that the proper steps were followed. 
-- The estimate is continually updated with actual costs and any 
variances recorded to identify areas where estimating was difficult or 
sources of risks were not considered. 
- The risk-adjusted cost estimate was allocated, phased, and converted 
to then-year dollars for budgeting, and high-risk elements were 
identified to mitigate risks. 
-- Results from the uncertainty analysis were used to prioritize risks 
based on probability and impacts as they affected the cost estimate. 

* A risk management plan was implemented jointly with the contractor to 
identify and analyze risk, plan for risk mitigation, and continually 
track risk. 
- A risk database watch list was developed, and a contractor’s EVM 
system was used for root cause analysis of cost and schedule variances, 
monitoring worsening trends, and providing early risk warning. 
- Event-driven reviews, technology demonstrations, modeling and 
simulation, and risk-mitigation prototyping were implemented. 

[End of Chapter 14] 

Chapter 15: Validating The Estimate: 

It is important that cost estimators and organizations independent of 
the program office validate that all cost elements are credible and can 
be justified by acceptable estimating methods, adequate data, and 
detailed documentation. This crucial step ensures that a high-quality 
cost estimate is developed, presented, and defended to management. This 
process verifies that the cost estimate adequately reflects the program 
baseline and provides a reasonable estimate of how much it will cost to 
accomplish all tasks. It also confirms that the program cost estimate 
is traceable and accurate and reflects realistic assumptions. 

Validating the point estimate is considered a best practice. One reason 
for this is that independent cost estimators typically rely on 
historical data and therefore tend to estimate more realistic program 
schedules and costs for state-of-the-art technologies. Moreover, 
independent cost estimators are less likely to automatically accept 
unproven assumptions associated with anticipated savings. That is, they 
bring more objectivity to their analyses, resulting in estimates that 
are less optimistic and higher in cost. An independent view provides a 
reality check of the point estimate and helps reduce the odds that 
management will invest in an unrealistic program that is bound to fail. 

The Cost Estimating Community’s Best Practices For Validating 
Estimates: 
 
OMB’s Circular No. A-94 and best practices established by professional 
cost analysis organizations, such as SCEA, identify four 
characteristics of a high-quality, reliable cost estimate.[Footnote 60] 
It is well-documented, comprehensive, accurate, and credible. 

By well documented is meant that an estimate is thoroughly documented, 
including source data and significance, clearly detailed calculations 
and results, and explanations of why particular methods and references 
were chosen. Data can be traced to their source documents. 

An estimate is comprehensive if it has enough detail to ensure that 
cost elements are neither omitted nor double counted. All cost-
influencing ground rules and assumptions are detailed in the estimate’s 
documentation. 

An estimate that is accurate is unbiased, is not overly conservative or 
overly optimistic, and is based on an assessment of most likely costs. 
Few, if any, mathematical mistakes are present, and any that are 
present are minor. 

As for credibility, any limitations of the analysis because of 
uncertainty or bias surrounding data or assumptions are discussed. 
Major assumptions are varied, and other outcomes are recomputed to 
determine how sensitive they are to changes in the assumptions. Risk 
and uncertainty analysis is performed to determine the level of risk 
associated with the estimate. The estimate’s results are crosschecked, 
and an independent cost estimate (ICE) conducted by a group outside the 
acquiring organization is developed to determine whether other 
estimating methods produce similar results. 

Table 25 shows how the 12 steps of a high-quality cost estimating 
process, described in table 2, can be mapped to these four 
characteristics of a high-quality, reliable cost estimate. 

Table 25: The Twelve Steps of High-Quality Cost Estimating, Mapped to 
the Characteristics of a High-Quality Cost Estimate: 

Cost estimate characteristic: Well documented; 
The estimate is thoroughly documented, including source data and
significance, clearly detailed calculations and results, and 
explanations for choosing a particular method or reference: 
* Data are traced back to the source documentation; 
* Includes a technical baseline description; 
* Documents all steps in developing the estimate so that a cost analyst 
unfamiliar with the program can recreate it quickly with the same
result; 
* Documents all data sources for how the data were normalized; 
* Describes in detail the estimating methodology and rationale used to 
derive each WBS element’s cost. 
Cost estimating step: 
1. Define the estimate’s purpose; 
3. Define the program; 
5. Identify ground rules and assumptions; 
6. Obtain the data; 
10. Document the estimate; 
11. Present the estimate to management. 
 
Cost estimate characteristic: Comprehensive: 
The estimate’s level of detail ensures that cost elements are neither
omitted nor double counted: 
* Details all cost-influencing ground rules and assumptions; 
* Defines the WBS and describes each element in a WBS dictionary; 
* A major automated information system program may have only a cost 
element structure. 
Cost estimating step: 
2. Develop the estimating plan; 
4. Determine the estimating approach. 

Cost estimate characteristic: Accurate: 
The estimate is unbiased, not overly conservative or overly optimistic, 
and based on an assessment of most likely costs: 
* It has few, if any, mathematical mistakes; its mistakes are minor; 
* It has been validated for errors like double counting and omitted 
costs; 
* Cost drivers have been cross-checked to see if results are similar; 
* It is timely; 
* It is updated to reflect changes in technical or program assumptions 
and new phases or milestones; 
* Estimates are replaced with EVM EAC and the independent EAC from 
the integrated EVM system. 
Cost estimating step: 
7. Develop the point estimate and compare it to an independent cost 
estimate; 
12. Update the estimate to reflect actual costs and changes. 

Cost estimate characteristic: Credible: 
Discusses any limitations of the analysis from uncertainty or biases
surrounding data or assumptions: 
* Major assumptions are varied and other outcomes recomputed to 
determine their sensitivity to changes in assumptions; 
* Risk and uncertainty analysis is performed to determine the level of
risk associated with the estimate; 
* An independent cost estimate is developed to determine if other 
estimating methods produce similar results. 
Cost estimating step: 
7. Develop the point estimate and compare it to an independent cost 
estimate; 
8. Conduct sensitivity analysis; 
9. Conduct risk and uncertainty analysis. 

Source: GAO. 

[End of table] 

It is important that cost estimates be validated, because lessons 
learned have shown that cost estimates tend to be deficient in this 
area (see case study 41). 

Case Study 41: Validating the Estimate, from Chemical Demilitarization, 
GAO-07-240R: 

GAO reviewed and evaluated the cost analyses that the U.S. Army used to 
prepare its cost-benefit report on the DuPont plan of treatment and 
disposal options for the VX nerve agent stockpile at the Newport, 
Indiana, depot. GAO also interviewed Army and contractor officials on 
the data and assumptions they had used to prepare their analyses. To 
determine the accuracy of the underlying data, GAO independently 
calculated values based on provided assumptions to compare with values 
in the supporting spreadsheets. GAO compared values from the supporting 
spreadsheets with summary data in the supporting posttreatment estimate 
report that the Shaw Environmental Group had prepared, Shaw being the 
contractor that helped perform the analysis for the U.S. Army Chemical 
Materials Agency report. 

GAO found, based on OMB criteria and criteria approved by the cost 
estimating community, that the underlying cost estimates in the Army’s 
report were not reliable and that the effect of this on the Army’s 
finding that the DuPont plan had “significant cost savings over the 
three considered alternatives” was uncertain. GAO’s finding of 
unreliable cost estimates included (1) the quantity and magnitude of 
errors, (2) quality control weaknesses, (3) questionable or inadequate 
supporting source data and documentation, and (4) the undetermined 
sensitivity of key assumptions. Neither the Army nor the contractor had 
a system for cross-checking costs, underlying assumptions, or technical 
parameters that went into the estimates. 

Moreover, GAO determined that the results from the Army’s program risk 
analysis were unreliable because they had been generated from 
previously discussed, unreliable cost estimates and because the Army 
attributed no risk to potential permit, legal, or other challenges to 
the DuPont plan. It was unclear whether the program risks of other 
alternatives were understated or overstated. 

Overall, GAO could not determine the cumulative effect of these 
problems on the outcome or results of the Army’s analysis, largely 
because GAO had no confidence in much of the supporting data, given 
these problems. Without reliable underlying cost estimates, the Army, 
the Congress, and the public could not have confidence that the most 
cost-effective solution had been selected. 

GAO’s recommendations were that the Army conduct its cost-benefit 
analysis again, using best practices, so that its data and conclusions 
would be comprehensive, traceable, accurate, and credible; that it 
correct any technical and mathematical errors in the cost estimate; 
that it establish quality control and independent review processes to 
check data sources, calculations, and assumptions; and that it perform 
a sensitivity analysis of key assumptions. 

Source: GAO, Chemical Demilitarization: Actions Needed to Improve the 
Reliability of the Army’s Cost Comparison Analysis for Treatment and 
Disposal Options for Newport’s VX Hydrolysate, GAO-07-240R, Washington, 
D.C.: Jan. 6, 2007. 

[End of case study] 

Too often, we have reported that program cost estimates are unrealistic 
and that, as a result, they cost more than originally promised. One way 
to avoid this predicament is to ensure that program cost estimates are 
both internally and externally validated—that is, that they are 
comprehensive, well documented, accurate, and credible. This increases 
the confidence that an estimate is reasonable and as accurate as 
possible. A detailed review of these characteristics follows.

1. Determine That the Estimate Is Well Documented: 

Cost estimates are considered valid if they are well documented to the 
point at which they can be easily repeated or updated and can be traced 
to original sources through auditing. Rigorous documentation also 
increases an estimate’s credibility and helps support an organization’s 
decision making. The documentation should explicitly identify the 
primary methods, calculations, results, rationales or assumptions, and 
sources of the data used to generate each cost element. 

Cost estimate documentation should be detailed enough to provide an 
accurate assessment of the cost estimate’s quality. For example, it 
should identify the data sources, justify all assumptions, and describe 
each estimating method (including any cost estimating relationships) 
for every WBS cost element. Further, schedule milestones and 
deliverables should be traceable and consistent with the cost estimate 
documentation. Finally, estimating methods used to develop each WBS 
cost element should be thoroughly documented so that their derivation 
can be traced to all sources, allowing for the estimate to be easily 
replicated and updated. 

2. Determine That the Estimate Is Comprehensive: 

Analysts should make sure that the cost estimate is complete and 
accounts for all possible costs. They should confirm its completeness, 
its consistency, and the realism of its information to ensure that all 
pertinent costs are included. Comprehensive cost estimates completely 
define the program, reflect the current schedule, and are technically 
reasonable. In addition, cost estimates should be structured in 
sufficient detail to ensure that cost elements are neither omitted nor 
double-counted. For example, if it is assumed that software will be 
reused, the estimate should account for all associated costs, such as 
interface design, modification, integration, testing, and 
documentation. 

To determine whether an estimate is comprehensive, an objective review 
must be performed to certify that the estimate’s criteria and 
requirements have been met, since they create the estimate’s framework. 
This step also infuses quality assurance practices into the cost 
estimate. In this effort, the reviewer checks that the estimate 
captures the complete technical scope of the work to be performed, 
using a logical WBS that accounts for all performance criteria and 
requirements. In addition, the reviewer must determine that all 
assumptions and exclusions the estimate is based on are clearly 
identified, explained, and reasonable. 

3. Determine That the Estimate Is Accurate: 
 
Estimates are accurate when they are neither overly conservative nor 
overly optimistic, are based on an assessment of most likely costs, 
are adjusted properly for inflation, and contain few, if any, minor 
mistakes. In 
addition, when schedules or other assumptions change, cost estimates 
should be revised to reflect their current status. 

Validating that a cost estimate is accurate requires thoroughly 
understanding and investigating how the cost model was constructed. For 
example, all WBS cost estimates should be checked to verify that 
calculations are accurate and account for all costs, including indirect 
costs. Moreover, proper escalation factors should be used to inflate 
costs so that they are expressed consistently and accurately. Finally, 
rechecking spreadsheet formulas and data input is imperative to 
validate cost model accuracy. 

Besides these basic checks for accuracy, the estimating technique used 
for each cost element should be reviewed. Depending on the analytical 
method chosen, several questions should be answered to ensure accuracy. 
Table 26 outlines typical questions associated with various estimating 
techniques. 

Table 26: Questions for Checking the Accuracy of Estimating Techniques: 

Technique: Analogy; 
Question: 
* What heritage programs and scaling factors were used to create the 
analogy? 
* Are the analogous data from reliable sources? 
* Did technical experts validate the scaling factor? 
* Can any unusual requirements invalidate the analogy? 
* Are the parameters used to develop an analogous factor similar to the 
program being estimated? 
* How were adjustments made to account for differences between existing 
and new systems? Were they logical, credible, and acceptable? 

Technique: Data collection; 
Question: 
* How old are the data? Are they still relevant to the new program? 
* Is there enough knowledge about the data source to determine if it 
can be used to estimate accurate costs for the new program? 
* Has a data scatter plot been developed to determine whether any 
outliers, relationships, and trends exist? 
* Were descriptive statistics generated to describe the data, including 
the historical average, mean, standard deviation, and coefficient of 
variation? 
* If data outliers were removed, did they fall outside three standard 
deviations? 
* Were comparisons made to historical data to show that the outliers 
were anomalies? 
* Were the data properly normalized so that comparisons and projections 
are valid? 
* Were the cost data adjusted for inflation so that they could be 
described in like terms? 

Technique: Engineering build-up; 
Question: 
* Was each WBS cost element defined in enough detail to use this method 
correctly? 
* Are data adequate to accurately estimate the cost of each WBS 
element? 
* Did experienced experts help determine a reasonable cost estimate? 
* Was the estimate based on specific quantities that would be ordered 
at one time, allowing for quantity discounts? 
* Did the estimate account for contractor material handling overhead? 
* Is there a definitive understanding of each WBS cost element’s 
composition? 
* Were labor rates based on auditable sources? Did they include all 
applicable overhead, general and administrative costs, and fees? Were 
they consistent with industry standards? 
* Is a detailed and accurate materials and parts list available? 

Technique: Expert opinion; 
Question: 
* Do quantitative historical data back up the expert opinion? 
* How did the estimate account for the possibility that bias influenced 
the results? 

Technique: Extrapolate from actuals (averages, learning curves, 
estimates at completion); 
Question: 
* Were cost reports used for extrapolation validated as accurate? 
* Was the cost element at least 25% complete before its data were used 
as a basis for extrapolation? 
* Were functional experts consulted to validate the reported percentage 
as complete? 
* Were contractors interviewed to ensure the cost data’s validity? 
* Were recurring and nonrecurring costs separated to avoid double 
counting? 
* How were first unit costs of the learning curve determined? What 
historical data were used to determine the learning curve slope? (A 
minimal learning-curve calculation is sketched after this table.) 
* Were recurring and nonrecurring costs separated when the learning 
curve was developed? 
* How were partial units treated in the learning curve equation? 
* Were production rate effects considered? How were production break 
effects determined? 

Technique: Parametric; 
Question: 
* Was a valid statistical relationship, or CER, between historical 
costs and program, physical, and performance characteristics 
established?
* How logical is the relationship between key cost drivers and cost? 
* Was the CER used to develop the estimate validated and accepted? 
* How old are the data in the CER database? Are they still relevant for 
the program being estimated? 
* Do the independent variables for the program fall within the CER data 
range? 
* What is the level of variation in the CER? How well does the CER 
explain the variation (R2) and how much of the variation does the model 
not explain? 
* Do any outliers affect the overall fit? 
* How significant is the relationship between cost and its independent 
variables? 
* How well does the CER predict costs? 

Technique: Software estimating; 
Question: 
* Was the software estimate broken into unique categories: new 
development, reuse, commercial off-the-shelf, modified code, glue code, 
integration? 
* What input parameters—programmer skills, applications experience, 
development language, environment, process—were used for commercial 
software cost models, and how were they justified? 
* How was the software effort sized? Was the sizing method reasonable? 
* How were productivity factors determined? 
* How were labor hours converted to cost? How many productive hours 
were assumed in each day? 
* How were savings from autogenerated code and commercial off-the-shelf 
software estimated? Are the savings reasonable? 
* What were the assumptions behind the amount of code reuse? Were they 
supported? 
* How was the integration between the software, commercial software, 
system, and hardware estimated, and what historical data supported the 
results? 
* Were software license costs based on actual or historical data? 
* Were software maintenance costs adequately identified and reasonable? 

Source: DOD, SCEA, and industry. 

[End of table]
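
The learning-curve calculation referenced in table 26 is sketched 
below as a minimal, unit-theory example; the first-unit cost and the 
90 percent slope are illustrative assumptions. 

import math

# First-unit cost and learning curve slope are illustrative assumptions.
first_unit_cost = 10.0   # cost of unit 1
slope = 0.90             # 90 percent curve: unit cost drops 10% per doubling

b = math.log(slope) / math.log(2)   # learning curve exponent

# Unit theory: cost of unit n = first_unit_cost * n ** b.
for n in (1, 2, 4, 8, 16):
    print(f"unit {n:2d}: {first_unit_cost * n ** b:5.2f}")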

CERs and cost models also need to be validated to demonstrate that they 
can predict costs within an acceptable range of accuracy. To do this, 
data from historical programs similar to the new program should be 
collected to determine whether the CER selected is a reliable predictor 
of costs. In this review, technical parameters for the historical 
programs should be examined to determine whether they are similar to 
the program being estimated. For the CER to be accurate, the new and 
historical programs should have similar functions, objectives, and 
program factors, like acquisition strategy, or results could be 
misleading. Equally important, CERs should be developed with 
established and enforced policies and procedures that require staff to 
have proper experience and training to ensure the model’s continued 
integrity. 

Before a parametric model is used to develop an estimate, the model 
should be calibrated and validated to ensure that it is based on 
current, accurate, and complete data and is therefore a good predictor 
of cost. Like a CER, a parametric model is validated by determining 
that its users have enough experience and training and that formal 
estimating system policies and procedures have been established. The 
procedures focus on the model’s background and history, identifying key 
cost drivers and recommending steps for calibrating and developing the 
estimate. To stay current, parametric models should be continually 
updated and calibrated. 

Validation with calibration gives confidence that the model is a 
reliable estimating technique. To evaluate a model’s ability to predict 
costs, a variety of assessment tests can be performed. One is to 
compare calibrated values with independent data that were not included 
in the model’s calibration. Comparing the model’s results to the 
independent test data’s “known value” provides a useful benchmark for 
how accurately the model can predict costs. An alternative is to use 
the model to prepare an estimate and then compare its result with an 
independent estimate based on another estimating technique. 

The accuracy of both CERs and parametric models can be verified with 
regression statistics that measure goodness of fit, such as the 
coefficient of determination (R2). A CER with an R2 equal to 1.0 would 
predict the sample data perfectly. While this is hardly ever the case, 
an R2 close to 1.0 is more accurate than an R2 that is less than 0.70, 
which means that more than 30 percent of the variation is unexplained. 
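
As an illustration only, the minimal sketch below fits a one-variable 
CER to notional historical data by ordinary least squares and computes 
R2; actual CER development and validation require far more data points 
and statistical testing than shown here. 

import numpy as np

# Notional historical data points for a one-variable CER (cost vs. a
# physical cost driver); real CER development requires far more rigor.
driver = np.array([100, 150, 200, 250, 300, 400], dtype=float)
cost   = np.array([ 55,  80, 100, 130, 150, 205], dtype=float)

slope, intercept = np.polyfit(driver, cost, 1)   # ordinary least squares
predicted = intercept + slope * driver

ss_res = np.sum((cost - predicted) ** 2)
ss_tot = np.sum((cost - cost.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"CER: cost = {intercept:.1f} + {slope:.3f} * driver")
print(f"R2 = {r_squared:.2f}")  # closer to 1.0 explains more of the variation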

4. Determine That the Estimate Is Credible: 

Credible cost estimates clearly identify limitations because of 
uncertainty or bias surrounding the data or assumptions. Major 
assumptions should be varied and other outcomes recomputed to determine 
how sensitive outcomes are to changes in the assumptions. In addition, 
a risk and uncertainty analysis should be performed to determine the 
level of risk associated with the estimate. Finally, the results of the 
estimate should be cross-checked and an ICE performed to determine 
whether alternative estimate views produce similar results. 

To determine an estimate’s credibility, key cost elements should be 
tested for sensitivity, and other cost estimating techniques should be 
used to cross-check the reasonableness of GR&As. It is also important 
to determine how sensitive the final results are to changes in key 
assumptions and parameters. A sensitivity analysis identifies key 
elements that drive cost and permits what-if analysis, often used to 
develop cost ranges and risk reserves. This enables management to know 
the potential for cost growth and the reasons behind it. 
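
A minimal what-if sketch follows; the simple cost model and its 
parameter values are hypothetical and serve only to show how varying a 
single key assumption changes the total cost. 

# The cost model and parameter values are illustrative assumptions.

def total_cost(labor_hours, labor_rate, material, overhead_factor):
    return (labor_hours * labor_rate + material) * (1 + overhead_factor)

baseline = dict(labor_hours=50_000, labor_rate=120.0,
                material=3_000_000.0, overhead_factor=0.35)
base = total_cost(**baseline)
print(f"baseline total cost: {base:,.0f}")

# Vary one key assumption (labor hours) by +/- 20 percent and observe
# the effect on the total; repeating this for each driver ranks them.
for change in (-0.20, 0.20):
    case = dict(baseline, labor_hours=baseline["labor_hours"] * (1 + change))
    delta = total_cost(**case) - base
    print(f"labor hours {change:+.0%}: total cost changes by {delta:,.0f}")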

Along with a sensitivity analysis, a risk and uncertainty analysis adds 
to the credibility of the cost estimate, because it identifies the 
level of confidence associated with achieving the cost estimate. Risk 
and uncertainty analysis produces more realistic results, because it 
assesses the variability in the cost estimate from such effects as 
schedules slipping, missions changing, and proposed solutions not 
meeting users’ needs. An uncertainty analysis gives decision makers 
perspective on the potential variability of the estimate should facts, 
circumstances, and assumptions change. By examining the effects of 
varying the estimate’s elements, a degree of uncertainty about the 
estimate can be expressed with a range of potential costs that is 
qualified by a factor of confidence. 

Another way to reinforce the credibility of the cost estimate is to see 
whether applying a different method produces similar results. In 
addition, industry rules of thumb can constitute a sanity check. The 
main purpose of cross-checking is to determine whether alternative 
methods produce similar results. If so, then confidence in the estimate 
increases, leading to greater credibility. If not, then the cost 
estimator should examine and explain the reason for the difference and 
determine whether it is acceptable. 

An ICE is considered one of the best and most reliable validation 
methods. ICEs are typically performed by organizations higher in the 
decision-making process than the office performing the baseline 
estimate. They provide an independent view of expected program costs 
that tests the program office’s estimate for reasonableness. Therefore, 
ICEs can provide decision makers with additional insight into a 
program’s potential costs—in part, because they frequently use 
different methods and are less burdened with organizational bias. 
Moreover, ICEs tend to incorporate adequate risk and, therefore, tend 
to be more conservative by forecasting higher costs than the program 
office. 

The ICE is usually developed from the same technical baseline 
description the program office used so that the estimates are 
comparable. An ICE’s major benefit is that it provides an objective and 
unbiased assessment of whether the program estimate can be achieved, 
reducing the risk that the program will proceed underfunded. It also 
can be used as a benchmark to assess the reasonableness of a 
contractor’s proposed costs, improving management’s ability to make 
sound investment decisions and accurately assess the contractor’s 
performance. 

In most cases, the ICE team does not have insight into daily program 
events, so it is usually forced to estimate at a higher level or use 
analogous estimating techniques. It is, in fact, expected that the ICE 
team will use different estimating techniques and, where possible, data 
sources from those used to develop the baseline estimate. It is 
important for the ICE team and the program’s cost estimate team to 
reconcile the two estimates. 

Two issues with ICEs are the degree of independence and the depth of 
the analysis. Degree of independence depends on how far removed the 
estimator is from the program office. The greater the independence, the 
more detached and disinterested the cost estimator is in the program’s 
success. The basic test for independence, therefore, is whether the 
cost estimator can be influenced by the program office. 

Thus, independence is determined by the position of the cost estimator 
in relation to the program office and whether there is a common 
superior between the two. For example, if an independent cost estimator 
is hired by the program office, the estimator may be susceptible to 
success-oriented bias. When this happens, the ICE can end up too 
optimistic. 

History has shown a clear pattern: the further removed from the 
program office the ICE is created, the higher the cost estimate tends 
to be. This is because 
the ICE team is more objective and less prone to accept optimistic 
assumptions. To be of value, however, an ICE must not only be performed 
by entities far removed from the acquiring program office but must also 
be accepted by management as a valuable risk reduction resource that 
can be used to minimize unrealistic expectations. The second issue with 
an ICE is the depth of the review. 

While an ICE reveals for decision makers any optimistic assumptions or 
items that may have been overlooked, in some cases management may 
choose to ignore it because the estimate is too high, as in case study 
42. 

Case Study 42: Independent Cost Estimates, from Space Acquisitions, GAO-
07-96: 

In a review of the Advanced Extremely High Frequency (AEHF) satellite 
program, the National Polar-orbiting Operational Environmental 
Satellite System (NPOESS), and the Space Based Infrared System (SBIRS) 
High program, GAO found examples of program decision makers’ not 
relying on independent cost estimates (ICE). Independent estimates had 
forecast considerably higher costs and lengthier schedules than program 
office or service cost estimates. To establish budgets for their 
programs, however, the milestone decision authorities had used program 
office estimates, or even lower estimates, rather than the independent 
estimates. 

DOD’s space acquisition policy required that ICEs be prepared outside 
the acquisition chain of command and that program and DOD decision 
makers consider them at key acquisition decision points. The policy did 
not require, however, that the independent estimates be relied on for 
setting budgets. 

In 2004, AEHF program decision makers relied on the program office cost 
estimate rather than the independent estimate the CAIG had developed to 
support the production decision. The program office had estimated that 
AEHF would cost about $6 billion; the CAIG had estimated $8.7 billion, 
some $2.7 billion more. 

The program office estimate was based on the assumption that AEHF would 
have ten times more capacity than Milstar, the predecessor satellite, 
at half the cost and weight. The CAIG believed that this assumption was 
overly optimistic, given that since AEHF began in 1999, its weight had 
more than doubled to obtain the desired increase in data rate. 

NPOESS was another example of large differences between program office 
and independent cost estimates. In 2003, government decision makers 
relied on the program office’s $7.2 billion cost estimate rather than 
the $8.8 billion independent cost estimate that the Air Force Cost 
Analysis Agency (AFCAA) had presented to support the development 
contract award. Program officials and decision makers had preferred the 
more optimistic assumptions and costs of the program office estimate, 
having viewed the independent estimate as too high. 

The SBIRS High program office and AFCAA predicted program cost growth 
as early as 1996, when the program began. While the two estimates, in 
2006 dollars, were close—$5.7 billion by the program office and $5.6 
billion by AFCAA—both were much more than the contractor’s estimate. 
Nevertheless, the program office budgeted SBIRS High at $3.6 billion, 
almost $2 billion less than either the program office or AFCAA had 
estimated. 

Source: GAO, Space Acquisitions: DOD Needs to Take More Action to 
Address Unrealistic Initial Cost Estimates of Space Systems, GAO-07-96, 
Washington, D.C.: Nov. 17, 2006. 

[End of case study] 

Table 27 lists eight types of ICE reviews and describes what they 
entail. 

Table 27: Eight Types of Independent Cost Estimate Reviews: 

Review: Document review; 
Description: It is an inventory of existing documentation to determine 
whether information is missing and an assessment of the available 
documentation to support the estimate.

Review: Independent cost assessment; 
Description: An outside evaluation of a program’s cost estimate that 
examines its quality and accuracy, with emphasis on specific cost and 
technical risks; it follows the same procedures as the program 
estimate but uses different methods and techniques. 

Review: Independent cost estimate; 
Description: Conducted by an organization outside the acquisition 
chain, using the same detailed technical information as the program 
estimate, it is a comparison with the program estimate to determine 
whether it is accurate and realistic. 

Review: Independent government cost estimate; 
Description: Analyzing contractors’ prices or cost proposals, it 
estimates the cost of activities outlined in the statement of work; 
does not include all costs associated with a program and can only 
reflect costs from a contractor’s viewpoint. Assumes that all technical 
challenges can be met as outlined in the proposal, meaning that it 
cannot account for potential risks associated with design problems. 

Review: Nonadvocate review; 
Description: Performed by experienced but independent internal 
nonadvocate staff, it ascertains the adequacy and accuracy of a 
program’s estimated budget; assesses the validity of program scope, 
requirements, capabilities, acquisition strategy, and estimated life-
cycle costs. 

Review: Parametric estimating technique; 
Description: Usually performed at the summary WBS level, it includes 
all activities associated with a reasonableness review and incorporates 
cross-checks using parametric techniques and factors based on 
historical data to analyze the estimate’s validity. 

Review: Reasonableness, or sufficiency, review;
Description: It is a review of all documentation by an independent cost 
team, meeting with staff responsible for developing the program 
estimate, to analyze whether the estimate is sufficient with regard to 
the validity of cost and schedule assumptions and cost estimate 
methodology rationale and whether it is complete. 

Review: Sampling technique; 
Description: It is an independent estimate of key cost drivers of major 
WBS elements whose sensitivity affects the overall estimate; detailed 
independent government cost estimates developed for these key drivers 
include vendor quotes and material, labor, and subcontractor costs. 
Other program costs are estimated using the program estimate, as long 
as a reasonableness review has been conducted to ensure their validity. 

Source: DOD, DOE, and NASA. 

[End of table] 

As the table shows, the most rigorous independent review is an ICE. 
Other independent cost reviews address only a program’s high-value, 
high-risk, and high-interest elements and simply pass through program 
estimate values for the other costs. While they are useful to 
management, not all provide the objectivity necessary to ensure that 
the estimate going forward for a decision is valid. 

After an ICE or independent review is completed, it is reconciled to 
the baseline estimate to ensure that both estimates are based on the 
same GR&As. A synopsis of the estimates and their differences is then 
presented to management. Using this information, decision makers use 
the ICE or independent review to validate whether the program estimate 
is reasonable. 

Since the ICE team is outside the acquisition chain, is not associated 
with the program, and has nothing at stake with regard to program 
outcome or funding decisions, its estimate is usually considered more 
accurate. Some ICEs are mandated by law, such as those for DOD’s major 
acquisition programs. Nevertheless, the history of myriad DOD programs 
clearly shows that ICEs are usually higher, and more accurate, than 
baseline estimates. Thus, if a program cost estimate is close to ICE 
results, one can be more confident that it is accurate and more likely 
to result in funding at a reasonable level. 

12. Best Practices Checklist: Validating the Estimate: 

The cost estimate was validated against four characteristics: 

* It is comprehensive, includes all possible costs, ensures that no 
costs were omitted or double-counted, and explains and documents key 
assumptions. 
- It completely defines the program, reflects the current schedule, and 
contains technically reasonable assumptions. 
- It captures the complete technical scope of the work to be performed, 
using a logical WBS that accounts for all performance criteria and 
requirements. 

* It was documented so well that it can easily be repeated or updated 
and traced to original sources by auditing. 
- Supporting documentation identifies data sources, justifies all 
assumptions, and describes all estimating methods (including 
relationships) for all WBS elements. 
- Schedule milestones and deliverables can be traced and are consistent 
with the documentation. 

* It is accurate, not too conservative or too optimistic; is based on 
an assessment of most likely costs, adjusted properly for inflation; 
and contains few minor mistakes. 
- WBS estimates were checked to verify that calculations were accurate 
and accounted for all costs and that proper escalation factors were 
used to inflate costs so they were expressed consistently and 
accurately. 
- Questions associated with estimating techniques were answered to 
determine the estimate’s accuracy. 
- CERs and parametric cost models were validated to ensure that they 
were good predictors of costs, their data were current and applied 
to the program, the relationships between technical parameters 
were logical and statistically significant, and results were tested 
with independent data. 

* Data limitations from uncertainty or bias were identified; results 
were cross-checked; and an ICE was developed to see if results were 
similar. 
- Major assumptions were varied and other outcomes recomputed to 
determine their sensitivity to changes in the assumptions. 
- Risk and uncertainty analysis was conducted. 

[End of Chapter 15] 

Chapter 16: Documenting The Estimate: 

Well-documented cost estimates are considered a best practice for high-
quality cost estimates, for several reasons. 

* First, thorough documentation is essential for validating and 
defending a cost estimate. That is, a well-documented estimate can 
present a convincing argument for its validity and can help 
answer decision makers’ and oversight groups’ probing questions. 

* Second, documenting the estimate in detail, step by step, provides 
enough information so that someone unfamiliar with the program could 
easily recreate or update it. 

* Third, good documentation helps with analyzing changes in program 
costs and contributes to the collection of cost and technical data that 
can be used to support future cost estimates. 

* Finally, a well-documented cost estimate is essential if an effective 
independent review is to ensure that it is valid and credible. It also 
supports reconciling differences with an independent cost estimate, 
improving understanding of the cost elements and their differences so 
that decision makers can be better informed. 

Documentation provides total recall of the estimate’s detail so that it 
can be replicated by someone other than those who prepared it. It also 
serves as a reference to support future estimates. Documenting the cost 
estimate makes available a written justification showing how it was 
developed and aiding in updating it as key assumptions change and more 
information becomes available. 

Estimates should be documented to show all parameters, assumptions, 
descriptions, methods, and calculations used to develop a cost 
estimate. A best practice is to use both a narrative and cost tables to 
describe the basis for the estimate, with a focus on the methods and 
calculations used to derive the estimate. With this standard approach, 
the documentation provides a clear understanding of how the cost 
estimate was constructed. Moreover, cost estimate documentation should 
explain why particular methods and data sets were chosen and why these 
choices are reasonable. It should also reveal the pros and cons of each 
method selected. Finally, there should be enough detail so that the 
documentation serves as an audit trail of backup data, methods, and 
results, allowing for clear tracking of a program’s costs as it moves 
through its various life-cycle phases. 

Estimates that lack documentation are not useful for updates or 
information sharing and can hinder understanding and proper use. 
Experience shows that poorly documented estimates can cause a program’s 
credibility to suffer because the documentation cannot explain the 
rationale of the underlying cost elements. Case study 43 takes a closer 
look at the effect a poorly documented cost estimate can have on 
decision making. 

Case Study 43: Documenting the Estimate, from Telecommunications, GAO-07-
268: 

The General Services Administration (GSA) provided GAO with 
documentation of its method, the calculations it used to derive each 
cost element, its results, and many of the previous transition costs 
for Networx—its program of governmentwide telecommunications contracts 
enabling agencies to make a transition to new, innovative services and 
operations. It had not, however, documented significant assumptions. 
Specifically, GSA had not documented the rationale behind its 76 
percent transition traffic factor or why it had chosen 30 months for 
the transition—two key assumptions of its analysis. 

GSA also did not provide documentation of certain data sources. 
Specifically, program officials could not provide supporting data for 
the estimate of an agency transition cost valued at $4.7 million. 
Likewise, GSA could not document the data sources used to estimate 
costs for contractor support in planning and implementing the 
transition. While many costs in its estimate were based on charges 
incurred during the previous transition, GSA officials stated that it 
was not appropriate to use previous costs as a basis for the contractor 
cost element. 

They explained that unlike the previous transition, GSA would not 
provide agencies with on-site contractor support. They had made this 
decision because, in part, the 2-1/2 years of transition planning that 
had taken place was expected to leave the agencies better prepared and 
able to make their transition without direct assistance from GSA or its 
contractors. 

Instead of basing the projection of contractor costs on prior charges, 
program officials told GAO that GSA management had decided that 
contractor support costs should not exceed $35 million. Program 
officials could not provide any data or analysis to support this 
decision. 

GSA had not used sound analysis when estimating the funds needed to 
meet its transition-related commitments. These weaknesses could be 
attributed, in part, to the lack of a cost estimation policy that 
reflected best practices. While GSA’s intentionally conservative 
approach minimized the risk that it would have inadequate funds to pay 
for committed transition costs, it increased the risk that GSA would 
retain excess funds that could be used for other purposes. 

Source: GAO, Telecommunications: GSA Has Accumulated Adequate Funding 
for Transition to New Contracts but Needs Cost Estimation Policy, GAO-
07-268, Washington, D.C.: Feb. 23, 2007. 

[End of case study] 

In addition to these requirements, good documentation is necessary to: 

* satisfy policy requirements for properly recording the basis of the 
estimate, 

* convince management and oversight staff that the estimate is 
credible, 

* provide supporting data that can be used to create a historical 
database, 

* help answer questions about the approach or data used to create the 
estimate, 

* record lessons learned and provide a history for tracking why costs 
changed, 

* define the scope of the analysis, 

* allow for replication so that an analyst unfamiliar with the program 
can understand the logic behind the estimate, and 

* help conduct future cost estimates and train junior analysts. 

Elements Of Cost Estimate Documentation: 

Two important criteria should be kept in mind when generating high-
quality cost estimate documentation. First, it should describe the cost 
estimating process, data sources, and methods and should be clearly 
detailed to allow anyone to easily reconstruct the estimate. Second, 
the results of the estimating process should be presented in a format 
that makes it easy to prepare reports and briefings to upper 
management. 

Cost estimators should document all the steps used to develop the 
estimate. As a best practice, the cost estimate documentation should 
address how the estimate satisfies the 12-step process and 
corresponding best practices identified in this guide for creating high-
quality cost estimates. Table 28 describes the various sections of 
proper documentation and what they should include. 

Table 28: What Cost Estimate Documentation Includes: 
 
Document section and cost estimating step: Cover page and table of 
contents; 2–3; 
Description: 
* Names the cost estimators, the organization they belong to, etc. 
* Gives the program’s name, date, and milestones; 
* Lists the document’s contents, including supporting appendixes. 

Document section and cost estimating step: Executive summary; 6–9; 
Description: 
* Summarizes clearly and concisely the cost estimate results, with 
enough information about cost drivers and high-risk areas for 
management to make informed decisions; 
* Presents a time-phased display of the LCCE in constant and current 
year dollars, broken out by major WBS cost elements; if an update, 
tracks the results and discusses lessons learned; 
* Identifies critical ground rules and assumptions; 
* Identifies data sources and methods used to develop major WBS cost 
elements and reasons for each approach; 
* Discusses ICE results and differences and explains whether the point 
estimate can be considered reasonable; 
* Discusses the results of a sensitivity analysis, the level of 
uncertainty associated with the point estimate, and any contingency 
reserve recommendations and compares them to the funding profile. 

Document section and cost estimating step: Introduction; 1–5; 
Description: 
* Gives a program overview: who estimated it, how cost was estimated, 
the date associated with the estimate; 
* Addresses the estimate’s purpose, need, and whether it is an initial 
estimate or update; 
* Names the requester, citing tasks assigned and related correspondence 
(in an appendix, if necessary); 
* Gives the estimate’s scope, describing major program phases and their 
estimated time periods, and what the estimate includes and excludes, 
with reasons; 
* Describes GR&As and technical and program assumptions, such as 
inflation rates. 

Document section and cost estimating step: System description; 5; 
Description: 
* Describes the program background and system, with detailed technical 
and program data, major system components, performance parameters, and 
support requirements; 
* Describes contract type, acquisition strategy, and other information 
in the technical baseline description. 

Document section and cost estimating step: Program inputs; 1–3; 
Description: 
* Gives the team composition—names, organizational affiliations, who 
was responsible for developing the estimate; 
* Details the program schedule, including master schedule and 
deliverables; 
* Describes the acquisition strategy. 

Document section and cost estimating step: Estimating method and data 
by WBS cost element; 6, 7, 10; 
Description: 
* The bulk of the documentation, describing in a logical flow how each 
WBS cost element in the executive summary was estimated; details each 
cost element enough that someone independent of the program recreating 
the estimate could arrive at the same results. Supporting information 
too detailed for this section is placed in an appendix; 
* Defines the cost element and describes how it was derived; 
* Summarizes costs spread by fiscal year in constant year dollars, 
matching the current program schedule; 
* Details the method, sources, models, and calculations for developing 
the estimate; fully documents CERs, including the rationale for the 
relationship between cost and the independent variables, the applicable 
range for independent variables, and the process for validating the 
CER, including descriptive statistics associated with the relationship; 
* If cost models were used, documents input and output data and any 
calibrations to the model; the cost model, data input, and results are 
in an appendix; 
* Documents the data in detail with a display of all database 
information used for parametric or analogy-based estimates; describes 
judgments about parametric variables, analogy scaling, or complexity 
factors and adjustments of the data; identifies data limitations and 
qualifies the data, based on sources (historical data, budget 
estimates), time periods they represent, and adjustments to normalize 
them or account for significant events like production breaks. 
* Identifies direct and indirect labor rates, labor hours, material and 
subcontractor costs, overhead rates, learning curves, inflation 
indexes, and factors, including their basis; 
* Shows the calculation of the cost estimate, with a logical link to 
input data; 
* Identifies and discusses significant cost drivers; identifies 
specialists whose judgments were used and their qualifications; 
* Discusses the cross-check approach for validating the estimate; 
* Discusses the ICE’s results and differences and whether it 
corroborates the point estimate as reasonable. 
 
Document section and cost estimating step: Sensitivity analysis; 8; 
Description: 
* Describes the effect of changing key cost drivers and assumptions 
independently; 
* Identifies the major cost drivers that should be closely monitored. 

Document section and cost estimating step: Risk and uncertainty 
analysis; 9; 
Description: 
* Discusses sources of risk and uncertainty, including critical 
assumptions, associated with the estimate; 
* The effect of uncertainty associated with the point estimate is 
quantified with probability distributions, and the resulting S curve is 
fully documented; the method for quantifying uncertainty is discussed 
and backed up by supporting data; 
* The basis for contingency reserves and how they were calculated is 
fully documented. 

Document section and cost estimating step: Management approval; 11; 
Description: 
* Includes briefings presenting the LCCE to management for approval, 
explaining the technical and program baseline, estimating approach, 
sensitivity analysis, risk and uncertainty analysis, ICE results and 
reasons for differences, and an affordability analysis to identify any 
funding shortfalls; 
* Presents the estimate’s limitations and strengths; 
* Includes management approval memorandums, recommendations for change, 
and feedback. 

Document section and cost estimating step: Updates reflecting actual 
costs and changes; 12; 
Description: 
* Reflects changes in technical or program assumptions or new program 
phases or milestones; 
* Replaces estimates with actual costs from the EVM system and reports 
progress on meeting cost and schedule estimates; 
* Includes results of post mortems and lessons learned, with precise 
reasons for why actual costs or schedules differ from the estimate. 

Source: DHS, DOD, DOE, NASA, SCEA, and industry. 

[End of table] 

While documentation of the cost estimate is typically in the form of a 
written document, the documentation can be completed in other 
acceptable ways. For example, some organizations rely on cost models 
that automatically develop documentation, while others use detailed MS 
Excel spreadsheets with cell notes and hyperlinks to other documents. 
The important thing to consider is whether the documentation allows 
someone to trace the data, calculations, modeling assumptions, and 
rationale back to a source document for verification and validation. In 
addition, it should also address the reconciliation with the 
independent cost estimate so that others can understand areas of risk. 
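
One illustrative way to preserve that traceability is to capture each 
WBS cost element as a structured record whose fields point back to the 
sources, assumptions, and calculations behind it. The sketch below is 
hypothetical; the field names and values are assumptions used only to 
show the kind of information an audit trail should retain. 

from dataclasses import dataclass

@dataclass
class CostElementRecord:
    """One WBS cost element with the audit-trail fields documentation should preserve."""
    wbs_id: str           # WBS element identifier
    description: str      # what the element covers
    method: str           # e.g., parametric, analogy, engineering build-up
    data_sources: list    # references to the source documents
    assumptions: list     # ground rules and assumptions applied
    calculation: str      # formula or steps used to derive the estimate
    point_estimate: float # result, constant-year dollars

# Hypothetical example record, for illustration only.
record = CostElementRecord(
    wbs_id="1.2.3",
    description="Ground station software development",
    method="Parametric (CER based on software size)",
    data_sources=["Historical database extract", "Contractor basis of estimate"],
    assumptions=["Inflation index per agency guidance", "No production breaks"],
    calculation="cost = a * (KSLOC ** b); coefficients documented in an appendix",
    point_estimate=12_400_000.0,
)
print(record.wbs_id, record.method, f"${record.point_estimate:,.0f}")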

Other Considerations: 

Documenting the cost estimate should not be a last-minute effort. If 
documentation is left untouched until the end of the estimating 
process, it will be much harder to recapture the rationale and 
judgments that formed the cost estimate and will increase the chance of 
overlooking important information that can cause credibility issues. 
Documentation should be done in parallel with the estimate’s 
development, so that the quality of the data, methods, and rationale 
are fully justified. More information is preferred over too little, 
since the purpose of documenting the estimate is to allow for 
recreating it or updating it by someone else who knows nothing about 
the program or estimate. Consequently, documentation should be written 
step by step and should include everything necessary for another 
analyst to easily and quickly replicate the estimate and arrive at the 
same results. In addition, access to an electronic copy of the cost 
model supporting the estimate should be available with the 
documentation so that updates can be performed efficiently. Finally, 
the cost estimate and documentation need to be stored so that 
authorized personnel can easily find it and use it for future 
estimates. 

13. Best Practices Checklist: Documenting the Estimate: 

* The documentation describes the cost estimating process, data 
sources, and methods step by step so that a cost analyst unfamiliar 
with the program could understand what was done and replicate it. 
- Supporting data are adequate for easily updating the estimate to 
reflect actual costs or program changes and using them for future 
estimates. 
- The documentation describes the estimate with narrative and cost 
tables. 
- It contains an executive summary, introduction, and descriptions of 
methods, with data broken out by WBS cost elements, sensitivity 
analysis, risk and uncertainty analysis, management approval, and 
updates that reflect actual costs and changes. 
- Detail addresses best practices and the 12 steps of high-quality 
estimates. 
- The documentation is mathematically sensible and logical. 
- It discusses contingency reserves and how they were derived from risk 
and uncertainty analysis and the LCCE funding profile. 

* It includes access to an electronic copy, and both are stored so that 
authorized personnel can easily find and use them for other cost 
estimates. 

[End of Chapter 16] 

Chapter 17: Presenting The Estimate To Management: 

A cost estimate is not considered valid until management has approved 
it. Since many cost estimates are developed to support a budget request 
or make a decision between competing alternatives, it is vital that 
management is briefed on how the estimate was developed, including 
risks associated with the underlying data and methods. Therefore, the 
cost estimator should prepare a briefing for management with enough 
detail to easily defend the estimate by showing how it is accurate, 
complete, and high in quality. The briefing should present the 
documented LCCE with an explanation of the program’s technical and 
program baseline. 

The briefing should be clear and complete, making it easy for those 
unfamiliar with the estimate to comprehend its level of competence. The 
briefing should focus on illustrating to management, in a logical 
manner, what the largest cost drivers are. Slides with visuals should 
be available to answer more probing questions. A best practice is to 
present the briefing in a consistent format to help management 
understand the completeness and quality of the cost estimate. Moreover, 
decision makers who are familiar with a standard 
briefing format will be better able to concentrate on the briefing’s 
contents, and on the cost estimate, rather than focusing on the format 
itself. 

The cost estimate briefing should succinctly illustrate key points that 
center on the main cost drivers and the final cost estimate’s outcome. 
Communicating results simply and clearly engenders management 
confidence in the ground rules, methods, and results and in the process 
that was followed to develop the estimate. The presentation must 
include program and technical information specific to the program, 
along with displays of budget implications, contractor staffing levels, 
and industrial base considerations, to name a few. These items should 
be included in the briefing: 
 
* The title page, briefing date, and the name of the person being 
briefed. 

* A top-level outline. 

* The estimate’s purpose: why it was developed and what approval is 
needed. 

* A brief program overview: its physical and performance 
characteristics and acquisition strategy, sufficient to understand its 
technical foundation and objectives. 

* Estimating ground rules and assumptions. 

* Life-cycle cost estimate: time-phased in constant-year dollars and 
tracked to any previous estimate. 

* For each WBS cost element, show the estimating method for cost 
drivers and high-value items; show a breakout of cost elements and 
their percentage of the total cost estimate to identify key cost 
drivers. 

* Sensitivity analysis, interpreting results carefully if there is a 
high degree of sensitivity. 

* Discussion of risk and uncertainty analysis: (1) cost drivers, the 
magnitude of outside influences, contingencies, and the confidence 
interval surrounding the point estimate and the corresponding S curve 
showing the range within which the actual estimate should fall; (2) 
other historic data for reality checks; and (3) how uncertainty, 
bounds, and distributions were defined. 
 
* Comparison to an independent cost estimate, explaining differences 
and discussing results. 

* Comparison of the LCCE, expressed in current-year dollars, to the 
funding profile, including contingency reserve based on the risk 
analysis and any budget shortfall and its effect. 

* Concerns or challenges the audience should be aware of. 

* Conclusions, recommendations, and associated level of confidence in 
the estimate. 

When briefing management on LCCEs, the presenter should include 
separate sections for each program phase—research and development, 
procurement, operations and support, disposal—and should provide the 
same type of information as the cost estimate documentation contains. 
In addition, the briefing should present the summary information, main 
conclusions, and recommendations first, followed by detailed 
explanations of the estimating process. 

This approach allows management to gain confidence in the estimating 
process and, thus, the estimate itself. At the conclusion of the 
briefing, the cost estimator should ask management whether it accepts 
the cost estimate. Acceptance, along with any feedback from management, 
should be acted on and documented in the cost estimate documentation 
package. 

14. Best Practices Checklist: Presenting the Estimate to Management: 

* The briefing to management: 
- was simple, clear, and concise enough to convey its level of 
competence. 
- illustrated the largest cost drivers, presenting them logically, with 
backup charts for responding to more probing questions. 
- was consistent, allowing management to focus on the estimate’s 
content. 

* The briefing contained: 
- A title page, outline, and brief statement of purpose of the 
estimate. 
- An overview of the program’s technical foundation and objectives. 
- LCCE results in time-phased constant-year dollars, tracked to 
previous estimates. 
- A discussion of GR&As. 
- The method and process for each WBS cost element, with estimating 
techniques and data sources. 
- The results of sensitivity analysis and cost drivers that were 
identified. 
- The results of risk and uncertainty analysis with confidence 
interval, S curve analysis, and bounds and distributions. 
- The comparison of the point estimate to an ICE with discussion of 
differences and whether the point estimate was reasonable. 
- An affordability analysis based on funding and contingency reserves. 
- Discussion of any other concerns or challenges. 
- Conclusions and recommendations. 

* Feedback from the briefing, including management’s acceptance of the 
estimate, was acted on and recorded in the cost estimate documentation. 

[End of Chapter 17] 

Chapter 18: Managing Program Costs: Planning: 

In this chapter, we review the importance of obtaining the best 
perspective on a program and its inherent risks by linking cost 
estimating and EVM. We describe a best practice for cost estimators and 
EVM analysts: sharing data to update program costs and examining 
differences between estimated and actual costs to present scope 
changes, risks, and other opportunities to management with sufficient 
lead time to plan for and mitigate their impact. Then we summarize the 
history and nature of EVM—its concepts, tools, and benefits. Finally, 
we describe EVM as managing program costs through proper planning. 

Linking Cost Estimation As The Foundation For EVM Analysis: 

A credible cost estimate lies at the heart of EVM analysis. Figure 20 
depicts how cost estimating supports the EVM process. It also lays out 
the specific flow of activity between key functions such as cost 
estimation, system development oversight, and risk management. 

Figure 20: Integrating Cost Estimation, Systems Development Oversight, 
and Risk Management: 

[Refer to PDF for image: illustration] 

Presystems acquisition planning: 

Program management: [Empty]; 
Earned value management: [Empty]; 
Cost analysis: 
* Life-cycle cost estimate: 
- Cost; 
- Schedule; 
- Technical; 
* Risk analysis: 
- Cost; 
- Schedule; 
- Technical; 
Systems engineering: 
* Concept definition; 
- Operational/functional concept; 
* Analysis of alternatives; 
* Requirements definition; 
* Risk analysis: 
- Cost; 
- Schedule; 
- Technical; 

Systems acquisition: 

Program management: 
* Integrated baseline review; 
* Surveillance: 
- Cost; 
- Schedule; 
- Technical;
Earned value management: 
* Risk analysis: 
- Cost; 
- Schedule; 
- Technical; 
* Performance measure baseline; 
- Cost; 
- Schedule; 
- Technical;
Cost analysis: 
* Acquisition cost estimate: 
- Cost; 
- Schedule; 
- Technical; 
* Risk analysis: 
- Cost; 
- Schedule; 
- Technical; 
Systems engineering: 
* Acquisition plan: 
-Work breakdown structure; 
* Risk analysis: 
- Cost; 
- Schedule; 
- Technical; 

Systems maintenance: 

Source: NDIA. 

[End of figure] 

As the lower left of figure 20 shows, a program’s life cycle begins 
with planning, where systems engineering defines the program’s concept, 
requirements, and WBS. When these activities are complete, the 
information is passed on to the cost analysis team so that they can 
develop the program’s LCCE. Before a system is acquired, however, a 
risk analysis examining cost, schedule, and technical impacts is 
performed. The results of the LCCE and risk analysis are presented to 
executive management for an informed decision on whether the program 
should proceed to systems acquisition. 

If management approves the program for acquisition, then systems 
engineering and cost analyses continue, in conjunction with the 
development of the program’s EVM performance measurement baseline. 
[Footnote 61] This baseline is necessary for defining the time-phased 
budget plan from which actual program performance is measured. After 
the performance measurement baseline has been established, the program 
manager and supplier participate in an IBR to ensure mutual 
understanding of all the risks. This review also validates that the 
program baseline is adequate and realistically portrays all authorized 
work according to the schedule. When appropriate, an IBR may begin 
before contract award to mitigate risk. The Federal Acquisition 
Regulation (FAR) provides for a pre-award IBR as an option, in 
accordance with agency procedures.[Footnote 62] 

Preparing for and managing program risk occurs during both planning and 
system acquisition. In planning, a detailed WBS is developed that 
completely defines the program and encompasses all risks from program 
initiation through assigning adequate resources to perform the work. 
During acquisition, risks are linked to specific WBS elements so that 
they can be prioritized and tracked through risk management, using data 
from systems engineering, cost estimating, risk analysis, and program 
management. These efforts should result in an executable program 
baseline that is based on realistic cost, schedule, and technical goals 
and that provides a mechanism for addressing risks. 

Cost Estimation and EVM in System Development Oversight: 

Government cost estimating and EVM are often conducted by different 
groups that barely interact during system development. As a result, 
program managers do not benefit from an integration of their efforts. 
Once the cost estimate has been developed and approved, cost estimators 
tend to move on to the next program, often not updating the cost 
estimate with actual costs after a contract has been awarded. In some 
cases, cost estimators do not update a cost estimate unless significant 
cost overruns or schedule delays have occurred or major requirements 
have changed. 

Also, EVM analysts are usually not very familiar with a program’s 
technical baseline document, GR&As, and cost estimate data or 
methodology. They tend to start monitoring programs without adequate 
knowledge of where and why risks are associated with the underlying 
cost estimate. Limited integration can mean that: 

* cost estimators may update the program estimate without fully 
understanding what the earned value data represent, 
 
* EVM analysts do not benefit from cost estimators’ insight into the 
possible cost and schedule risks associated with the program, and, 
 
* neither fully understands how risks identified with the cost estimate 
S curve (or cumulative probability distribution) translate into the 
program’s performance measurement baseline.

Therefore, it is considered a best practice to link cost estimating and 
EVM analysis. Joining forces, cost estimators and EVM analysts can use 
each other’s data to update program costs and examine differences 
between estimated and actual costs. Scope changes, risks, and 
opportunities can be presented to management in time to plan for or 
mitigate them. Program status can be compared to historical data to 
understand variances. Finally, cost estimators can help EVM analysts 
calculate a cumulative probability distribution to determine the 
confidence level in the baseline. 
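
As an illustrative sketch of that last point, the fragment below 
simulates a cumulative probability distribution (S curve) of total cost 
from hypothetical three-point ranges for a few WBS elements and reads 
off the confidence level of an assumed baseline. The element names, 
ranges, trial count, and baseline value are assumptions for 
illustration only. 

import random

random.seed(1)

# Illustrative (low, most likely, high) cost ranges by WBS element, in $M.
wbs_ranges = {
    "Air vehicle": (80, 100, 140),
    "Software":    (30, 45, 90),
    "Support":     (15, 20, 30),
}

baseline = 170.0   # hypothetical performance measurement baseline, $M
trials = 10_000

totals = []
for _ in range(trials):
    # random.triangular takes (low, high, mode), so the mode is passed third.
    total = sum(random.triangular(lo, hi, likely)
                for lo, likely, hi in wbs_ranges.values())
    totals.append(total)
totals.sort()

# Confidence level of the baseline: share of simulated outcomes at or below it.
confidence = sum(t <= baseline for t in totals) / trials

# A few points on the S curve (cumulative distribution of simulated total cost).
for p in (0.10, 0.50, 0.80):
    print(f"{int(p * 100)}th percentile: ${totals[int(p * trials) - 1]:.1f}M")
print(f"Baseline of ${baseline:.0f}M falls near the {confidence:.0%} confidence level")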

EVM and Acquisition: A Baseline for Risk Management: 

Using generally accepted risk management techniques, a program manager 
can decide how much management reserve budget to set aside to cover 
risks that were unknown at the program’s start. As the program develops 
according to the baseline plan, metrics from the EVM system can be 
analyzed to identify risks that have been realized, as well as emerging 
risks and opportunities. By integrating EVM data and risk management, 
program managers can develop EACs for all management levels, including 
OMB reporting requirements. In figure 21, EVM is integrated with risk 
management for a better program view. 

Figure 21: Integrating EVM and Risk Management: 

[Refer to PDF for image: illustration] 

Earned value management: 
1) Define and organize work. 
2) Establish management reserve; Issue target budgets; Authorize 
planning. 
3) Plan program; Schedule IMS; Establish performance management 
baseline. 
4) Authorize work; Update schedules; Measure performance; Compute 
variances (relates to #8). 
5) Analyze results; Plan corrective action; Update estimate at 
completion (relates back to #1: Revisions and change control). 

Integrated guidance: 
 
6) Use risks to establish a suitable management reserve; Ensure that 
management reserve is sufficient to handle high-probability risks 
(relates to #2). 
7) Incorporate risk mitigation plans into program schedules and budgets 
(relates to #3 and #13). 
8) Use earned value to monitor performance of risk mitigation plans; 
Identify newly developing risks and opportunities (relates to #14). 
9) Incorporate risk impacts into estimate at completion (relates to 
#5). 

Risk management: 

10) Plan risk management activities. 
11) Perform risk assessment; Identify and analyze risk; Determine risk 
exposure (relates to #6). 
12) Develop risk-handling plans (relates to #7). 
13) Assign responsibility; Execute risk-handling plans. 
14) Monitor and communicate risks, plans, and opportunities (relates to 
#9 and #10). 

Source: NDIA. 

[End of figure] 

Often, organizational barriers can keep the EVM and risk management 
processes independent of one another rather than tightly integrated. 
Senior management should encourage cross-organizational communication 
and training between these two disciplines to ensure that they are 
working together to better manage the risks facing a program. Doing so 
will promote a thorough understanding of program risks and help improve 
risk mitigation. Additionally, addressing risk in the formulation of 
the program EVM baseline will result in higher credibility and the 
greater likelihood of success. Risk identification and mitigation plans 
should be provided to the IBR team before the IBR and assessed as part 
of the IBR process. Next, we turn to what EVM is, what some of its 
concepts are, and how to use its tools and gain from its benefits. 

The Nature And History Of EVM: 
 
What EVM Is: 

Earned value management goes beyond simply comparing budgeted costs to 
actual costs. It measures the value of work accomplished in a given 
period and compares it with the planned value of work scheduled for 
that period and with the actual cost of work accomplished. By using the 
metrics derived from these values to understand performance status and 
to estimate cost and time to complete, EVM can alert program managers 
to potential problems sooner than expenditures alone can. 

Assume, for example, that a contract calls for 4 miles of railroad 
track to be laid in 4 weeks at a cost of $4 million. After 3 weeks of 
work, only $2 million has been spent. An analysis of planned versus 
actual expenditures suggests that the project is underrunning its 
estimated costs. However, an earned value analysis reveals that the 
project is in trouble because even though only $2 million has been 
spent, only 1 mile of track has been laid and, therefore, the contract 
is only 25 percent complete. Given the value of work done, the project 
will cost the contractor $8 million ($2 million to complete each mile 
of track), and the 4 miles of track will take a total of 12 weeks to 
complete (3 weeks for each mile of track) instead of the originally 
estimated 4 weeks. 
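
The same arithmetic can be restated with the standard earned value 
measures, as in the sketch below, which uses the example’s figures. The 
CPI-based estimate at completion and SPI-based duration forecast shown 
are simple, common forecasting forms used here only to reproduce the 
numbers above; they are not the only methods available. 

# Planned: 4 miles of track in 4 weeks for $4 million; status taken after 3 weeks.
BAC = 4_000_000              # budget at completion
planned_value = 3_000_000    # PV: work scheduled through week 3 (3 of 4 miles)
earned_value  = 1_000_000    # EV: planned cost of work actually done (1 mile laid)
actual_cost   = 2_000_000    # AC: what has been spent so far

cost_variance     = earned_value - actual_cost     # -$1.0M (over cost)
schedule_variance = earned_value - planned_value   # -$2.0M (behind schedule)
cpi = earned_value / actual_cost                   # 0.5: $0.50 of work per $1 spent
spi = earned_value / planned_value                 # 0.33: one-third of the planned pace

# Simple performance-based forecasts, reproducing the example's results.
estimate_at_completion = BAC / cpi                 # $8.0 million
estimated_duration     = 4 / spi                   # 12 weeks

print(f"CV={cost_variance:,}  SV={schedule_variance:,}  CPI={cpi:.2f}  SPI={spi:.2f}")
print(f"EAC=${estimate_at_completion:,.0f}  Estimated duration={estimated_duration:.0f} weeks")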

Thus, EVM is a means of cost and schedule performance analysis. By 
knowing what the planned cost is at any time and comparing that value 
to the planned cost of completed work and to the actual cost incurred, 
analysts can measure the program’s cost and schedule status. Without 
knowing the planned cost of completed work and work in progress (that 
is, earned value), true program status cannot be determined. Earned 
value provides the missing information necessary for understanding the 
health of a program; it provides an objective view of program status. 
Moreover, because EVM provides data in consistent units (usually labor 
hours or dollars), the progress of vastly different work efforts can be 
combined. For example, earned value can be used to combine feet of 
cabling, square feet of sheet metal, or tons of rebar with effort 
for systems design and development. That is, earned value can be 
employed as long as a program is broken down into well-defined tasks. 

EVM’s History: 

EVM is not a new concept. It has been around in one form or another 
since the early 1900s, when industrial engineers used it to assess 
factory performance. They compared physical work output—earned value, 
or something gained through some effort—to the planned physical work 
and subsequent actual costs. In the 1920s, General Motors used a form 
of EVM called flexible budgets; by the early 1960s, EVM had graduated 
to the Program Evaluation and Review Technique, which relied on 
resource-loaded, networked schedules and budgets to plan and manage 
work. 

In 1967, DOD adopted EVM as the Cost/Schedule Control Systems Criteria 
(C/SCSC). These criteria, based on the best management practices used 
in American industry since the early 1900s, defined for defense 
contractors the minimum acceptable standards for providing the 
government with objective program performance reporting. C/SCSC also 
required contractors to integrate effort, schedule, and cost into a 
single plan. This was a broad divergence from DOD’s typical analysis of 
“spend plans”—comparing planned costs to actual costs—which gave no 
insight into what was actually accomplished for the money spent. 

The earned value technique now required contractors to report progress on 
cost, schedule, and technical achievement, giving managers access for 
the first time to timely and accurate status updates. The data gave 
managers the ability to confidently predict how much money it would 
cost and how long it would take to complete a contract. Rather than 
enforcing a particular system for contractors to implement, however, 
C/SCSC required them to develop their own management control systems 
that could satisfy the criteria for using earned value effectively. 

Along with the many benefits of implementing C/SCSC came many problems. 
For instance, some programs found C/SCSC criteria overwhelming, causing 
them to maintain two sets of data—one for managing the program and one 
for reporting C/SCSC data. In other instances, EVM was viewed only as a 
financial management tool to be administered with audit-like rigor. A 
1997 GAO report found that while EVM was intended to serve many 
different groups, program managers often ignored the data even though 
they could have benefited from responding to impending cost and 
schedule overruns on major contracts. 

To try to resolve these problems, the Office of the Secretary of 
Defense encouraged industry to define new EVM criteria that were more 
flexible and useful to industry and government. In 1996, DOD accepted 
industry’s revamped criteria, stating that they brought EVM back to its 
intended purposes of integrating cost, schedule, and technical effort 
for management and providing reliable data to decision makers. 

EVM Guidelines in Practice Today: 

The new EVM approach encompasses 32 guidelines, organized into 5 
categories of effort: (1) organizing, (2) planning and budgeting, (3) 
accounting, (4) analysis, and (5) making revisions. The guidelines 
define the major principles for managing programs, including, among 
other things, 
 
* defining and planning the scope of work in detail, using a WBS, 

* identifying organizational responsibility for doing the work, 
 
* scheduling authorized work, 

* applying realistic resources and budget to complete the work, 

* measuring the progress of work by objective indicators, 

* developing a performance measurement baseline, 

* collecting the cost of labor and materials associated with the work 
performed, 

* analyzing variances from planned cost and schedules, 

* forecasting costs at completion, 

* taking management actions to control risk, and, 

* controlling changes. 

The EVM guidelines today are often viewed as common sense program 
management practices that would be necessary to successfully manage any 
development program, regardless of size, cost, or complexity. Moreover, 
they have become the standard for EVM and have been adopted by 
industry, major U.S. government agencies, and government agencies in 
Australia, Canada, Japan, Sweden, and the United Kingdom. Furthermore, 
when reviewing agencies’ annual budget requests, OMB uses agency-
reported EVM data to decide which acquisition programs to continue 
funding. Accordingly, government and industry consider EVM a worldwide 
best practice management tool for improving program performance. 

As a key management concept, EVM has evolved from an industrial 
engineering tool to a government and industry best practice, providing 
improved oversight of acquisition programs. Using EVM is like forming 
an intelligent plan that first identifies what needs to be done and 
then uses objective measures of progress to predict future effort. 
Commercial firms told us that they use the earned value concept to 
manage their programs because they believe that good up-front technical 
planning and scheduling not only make sense but are essential for 
delivering successful programs. 

Implementing EVM: 
 
For EVM to be effective, strong leadership from the top is necessary to 
create a shared vision of success that brings together areas often 
stove-piped by organizational boundaries. To accomplish this shared 
vision, senior management should set an expectation that reliable and 
credible data are key aspects of managing a successful program and show 
an active interest in program status to send a message to their staff 
that they are accountable and that results matter. Accordingly, 
stakeholders need to take an interest in and empower those doing the 
work and make sure that corporate practices are in place that allow 
them to know the truth about how a program is doing. Leadership must 
require information sharing in an open, honest, and timely fashion so 
it can provide resources and expertise immediately when problems begin 
to arise. 

To ingrain this expectation, agencies should set forth policies that 
clearly define and require disciplined program management practices for 
planning and execution. As part of that policy, the focus should be 
on integrating cost, schedule, and technical performance data so that 
objective program progress can be measured and deviations from the 
baseline acted upon quickly. Moreover, the policy should also address 
the importance of continuous training in cost estimating, EVM, 
scheduling, and risk and uncertainty analysis that will provide the 
organization with high-performing and accountable people who are 
experienced in these essential disciplines. Training should be provided 
and enforced for all program personnel needing such training, not just 
those with program management responsibilities. While program managers 
and staff need to be able to interpret and validate EVM data to 
effectively manage deliverables, costs, and schedules, oversight 
personnel and decision makers also need to understand EVM terms and 
analysis products in order to ask the right questions, obtain 
performance views into the program, and make sound investment 
decisions. 

The Purpose of Implementing an EVM System: 

Using the value of completed work for estimating the cost and time 
needed to complete a program should alert program managers to potential 
problems early in the program and reduce the chance and magnitude 
of cost overruns and schedule delays. EVM also provides program 
managers with early warning of developing trends—both problems and 
opportunities—allowing them to focus on the most critical issues. 

The two main purposes for implementing an EVM system are to (1) 
encourage the use of effective internal cost and schedule management 
control systems and (2) allow the customer to rely on timely and 
accurate data for determining product-oriented contract status. To be 
effective, an EVM system should constitute management processes that 
serve as a comprehensive tool for integrating program planning and 
execution across cost, schedule, and technical disciplines. In essence, 
an EVM system should provide the means for planning, reporting, and 
analyzing program performance. 

EVM as a Planning Tool: 

EVM imposes the discipline of planning all work in sufficient detail so 
that the cost, technical effort, and schedule dependencies are known at 
the outset. When EVM is used as a planning tool, all work is planned 
from the beginning—current work in detail, future work outlined at 
higher levels. As the work is planned to a manageable level of detail, 
it is broken into descriptive work packages that are allocated a 
portion of the program budget. These units are then spread across the 
program schedule to form the performance measurement baseline, which is 
used to detect deviations from the plan and give insight into problems 
and potential impacts. 
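
A minimal sketch of this planning step appears below, using hypothetical 
work packages and a level spread of each budget across its scheduled 
months; an actual baseline would time-phase budgets according to how the 
work is expected to be performed. 

# Each hypothetical work package has a budget ($K) and the months it spans.
work_packages = [
    {"name": "Requirements", "budget": 300, "start": 1, "end": 2},
    {"name": "Design",       "budget": 600, "start": 2, "end": 4},
    {"name": "Build",        "budget": 900, "start": 4, "end": 9},
]

months = range(1, 10)
monthly_plan = {m: 0.0 for m in months}

# Spread each budget evenly across its scheduled months (level spread for simplicity).
for wp in work_packages:
    span = wp["end"] - wp["start"] + 1
    for m in range(wp["start"], wp["end"] + 1):
        monthly_plan[m] += wp["budget"] / span

# The cumulative time-phased budget forms the performance measurement baseline (PMB).
cumulative = 0.0
for m in months:
    cumulative += monthly_plan[m]
    print(f"Month {m}: planned ${monthly_plan[m]:,.0f}K, cumulative PMB ${cumulative:,.0f}K")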

EVM as a Management Reporting Tool: 

EVM measures program status with objective methods, such as discrete 
units and weighted milestones, to determine work accomplished. 
These measures are based on specific criteria that are defined before 
the work starts. As work is accomplished, its value is measured against 
a time-phased schedule. While the guidelines require no specific 
scheduling technique, more complex programs typically use a networked 
schedule that highlights the program’s critical path.[Footnote 63] The 
earned value is measured in terms of the planned cost of work actually 
completed. Including earned value is what allows objective measurement 
of program status that other reporting systems cannot provide. 
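
The weighted milestone method can be illustrated with a short sketch; 
the work package, milestone weights, and completion status below are 
hypothetical, and in practice the weights and completion criteria would 
be defined before the work begins. 

# Hypothetical work package budgeted at $500K with predefined milestone weights.
budget = 500_000
milestones = [
    {"name": "Design review complete", "weight": 0.30, "done": True},
    {"name": "Unit built",             "weight": 0.50, "done": True},
    {"name": "Acceptance test passed", "weight": 0.20, "done": False},
]

# Earned value is the budgeted value of milestones actually completed,
# measured against criteria defined before the work started.
earned_value = budget * sum(m["weight"] for m in milestones if m["done"])
planned_value = budget * 1.0   # assume all milestones were scheduled to be done by now

print(f"EV = ${earned_value:,.0f} of a planned ${planned_value:,.0f}")
print(f"Schedule variance = ${earned_value - planned_value:,.0f}")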

EVM as an Analysis and Decision Support Tool: 

EVM indicates how past performance may affect future performance. For 
example, EVM data isolate cost and schedule variances by WBS element, 
allowing an understanding of technical problems that may be causing the 
variances. Problems can be seen and mitigated early. In addition, 
opportunity can be taken in areas that are performing well to 
reallocate available budgets for work that has not yet started. 
[Footnote 64] 
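
As a simple illustration of isolating variances by WBS element, the 
sketch below flags elements whose cost or schedule variance is negative 
so they can be investigated first; the element names and values are 
hypothetical. 

# Hypothetical cumulative status ($K) by WBS element: planned value, earned value, actual cost.
wbs_status = {
    "1.1 Airframe": {"pv": 800, "ev": 780, "ac": 760},
    "1.2 Software": {"pv": 500, "ev": 350, "ac": 520},
    "1.3 Training": {"pv": 200, "ev": 210, "ac": 190},
}

for element, d in wbs_status.items():
    cv = d["ev"] - d["ac"]   # negative: overrunning cost
    sv = d["ev"] - d["pv"]   # negative: behind schedule
    flag = "INVESTIGATE" if cv < 0 or sv < 0 else "on track"
    print(f"{element}: CV={cv:+}K  SV={sv:+}K -> {flag}")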

Key Benefits of Implementing EVM: 

Table 29 describes some of the key benefits that can be derived from 
successfully implementing an EVM system, and figure 22 shows the 
expected inputs and outputs associated with tracking earned value. 

Table 29: Key Benefits of Implementing EVM: 

Key benefit: Provides a single management control system; 
Description: 
* The criteria for developing an EVM system promote the integration of 
cost, schedule, and technical processes with risk management, improving 
the efficiency and effectiveness of program management; they require 
measuring progress, accumulating actual costs, analyzing variances, 
forecasting costs at completion, and incorporating changes in a timely 
manner; 
* Implemented correctly, EVM provides a single management control 
system that prevents organizations from managing with one system and 
reporting from another. The concept that all work should be scheduled 
and traceable from the master plan to the details demonstrates that no 
specific scheduling software is required. 

Key benefit: Improves insight into program performance; 
Description: 
* Enhanced insight into program performance results from the upfront 
planning, scheduling, and control EVM requires; this is important since 
the window of opportunity for correcting project problems occurs very 
early in a program; 
* Studies based on the performance of over 700 contracts show that 
performance trends indicate the final outcome once a contract is about 
15 to 20 percent complete; thus, programs operating within an EVM 
system can quickly 
uncover, address, and resolve problems before they become out of 
control. 

Key benefit: Reduces cycle time to deliver a product; 
Description: 
* EVM imposes discipline and objective measurement and analysis on 
cost, schedule, and technical processes; planning and analysis often 
address and prevent problems from surfacing later; 
* If costly and untimely rework can be circumvented, the time to 
deliver the end product may also be reduced. 

Key benefit: Promotes management by exception; 
Description: 
* EVM directs management attention to only the most critical problems, 
reducing information overload. Since EVM allows quick communication of 
cost and schedule variances relative to the baseline plan, management 
can focus on the most pressing problems first. 

Key benefit: Fosters accountability; 
Description: 
* EVM requires breaking a program down into sufficiently detailed tasks 
to clearly define what is expected and when; this allows those 
responsible for implementing specific tasks to better understand how 
their work fits into the overall program plan, establishes 
accountability, gives personnel a sense of ownership, and can result in 
more realistic estimates at completion of future tasks; 
* When technical staff are held accountable for their performance, they 
tend to better understand how it affects overall program success; 
managers held accountable for their planning are more 
likely to implement a disciplined process for estimating work and 
tracking it through completion. 

Key benefit: Allows comparative analysis against completed projects; 
Description: 
* Consistent reporting of projects with EVM processes (following 
established guidelines) has for many decades resulted in a database 
useful for comparative analysis, giving managers insight into how their 
programs perform compared to historical program data. 

* They can also use the data for planning programs, improving the cost 
estimating process, and determining which suppliers provided the best 
value in the past. 

Key benefit: Provides objective information for managing the program; 
Description: 
* Measuring program performance gives objective information for 
identifying and managing risk; it allows early detection and resolution 
of problems by anticipating what could go wrong, based on past trends; 
* Objective data obtained from an EVM system enable management to 
defend and justify decisions and determine the best course of action 
when problems arise. 
 
Source: GAO, DOD, NASA, SCEA, and industry. 

[End of table] 

Figure 22: Inputs and Outputs for Tracking Earned Value: 

[Refer to PDF for image: illustration] 

Input: 

Performance based specifications; 
Goal metric approach; 
Binary quality gates at the inch pebble level; and; 
Establish clear goals and decision points: 

All combine to: 

Input: Facilitate establishing and tracking earned value metrics. 

Acquisition process improvement: 

Input: Supports/encourages use of earned value metrics. 

People-aware management accountability; and 
Configuration management: 

Input: Are necessary for implementing: Track earned value. 

Formal inspections; and; 
Demonstration based reviews: 

Input: Provide objective data for determining earned value credits. 

All of the above are inputs to Track earned value. 

Output from Track earned value: 

Output: Alerts program managers to potential schedule and cost risks 
early; 
Formal risk management. 

Output: Provides a documented project performance trail: 
Acquisition process improvement; 
Best value awards.

Output: Provides quantitative data for decision making: 
Metrics-based scheduling and management; 
Quantitative program measurement. 

Output: Is a means of communicating project status: 
Demonstration-based reviews; 
Programwide visibility of progress vs. plan. 

Source: DOD and GAO. 

[End of figure] 

Obstacles to EVM: 

Obstacles, real or imagined, stop many programs and organizations from 
implementing EVM. Table 30 describes ten common concerns about EVM 
implementation and discusses the basis of each one. 

Table 30: Ten Common Concerns about EVM: 

Concern: 1. EVM is too expensive to implement; 
Basis for concern: 
* It is expensive to implement EVM when no formal EVM system is in 
place. Some companies spend $1 million to $2 million to put a good 
system in place from scratch; 
* Many have some elements in place and can get certified with less 
effort; even so, this is usually a significant investment, translating 
into several hundred thousand dollars. A simple spreadsheet workbook, 
with worksheets for the plan and each time-stamped snapshot of status 
to date, can serve as an effective EVM tool for smaller projects; 
* Companies that do establish a good EVM system realize better project 
management decision making, fewer cost and schedule overruns, and 
potentially greater repeat business. It is hard to measure those gains, 
but some experts have noted that the return on investment is 
reasonable. The smaller the company, the more difficult implementation 
is, because the upfront costs can be prohibitive; 
* While an EVM system is expensive to implement, not having one may 
cost a company future work because of the inability to compete with 
others that have a system; losing potential business is also expensive. 
A balance must be struck to implement what is required in a manner that 
is sensitive to the corporate bottom line. 

Concern: 2. EVM is not useful for short-term, small-dollar projects; 
Basis for concern: 
* A certain amount of judgment must be applied to determine the 
viability and utility of a full-blown EVM system for short-term or 
small-dollar projects. Because typical EVM reporting is monthly, a 
project of 6 months or less cannot use trends (at least three data 
points) effectively: it would be halfway completed before any trending 
could be adequately used, and then corrective action would take another 
data point or two to realize. Weekly reporting would impose 
significantly higher resource demands and costs that might not be 
acceptable for small-dollar contracts; 
* Even on shorter, less costly projects, a well-structured, planned, 
and executed project is desirable. Most projects do not trip a 
threshold of $20 million or $50 million, for example. In some cases, 
for every large and high visibility project there are between 10 and 20 
small projects. Failure to execute on time or within costs on these 
small projects is just as unacceptable as on large projects, even 
though the relative impact is smaller. Several small projects can add 
up to a substantial loss of money and unhappy customers and can result 
in the loss of larger projects or future awards if a pattern of 
overrunning is evident. 
* EVM can be tailored and ingrained into the culture to ensure that 
project cost and schedule goals are met for smaller or shorter 
projects; smaller projects will benefit from having the work scope 
defined by a WBS and having a detailed plan and schedule for 
accomplishing the work. Small-dollar projects still need to have a 
baseline in place to manage changes and variances and require risk 
management plans to address issues. 
* On the corporate side, losing money is not an acceptable option, even 
if the project’s visibility is lower. Poor performance on a smaller 
project can damage a company’s reputation just as much as poor 
performance on a large, highly visible project. So even though a full 
EVM system is not required for small, short-term projects, the need to 
apply the fundamentals of EVM may still pertain. EVM is good, practical 
project management. 

Concern: 3. EVM practices go above and beyond basic project management 
practices; 
Basis for concern: 
* Our experts noted project managers who claim that they have 
successfully managed their projects for years without using EVM; when 
pressed to explain how they ensure that cost and schedule goals are met 
and how they manage their baselines along with changes, however, they 
inevitably resort to EVM by other means. 
* The biggest difference for successful project managers is the 
formality and rigor of EVM. Our experts noted that project managers who 
do not use a formal EVM system generally do so because they are not 
required to. Those who are forced to use formal EVM practices often do 
so grudgingly but warm up to it over time. Those who have been using 
formal EVM for years often do not know how they got by without it in 
the past. 
* A second difference between formal EVM practices and basic project 
management practices is the uniformity of data and formatting of 
information that makes it possible to draw comparisons against other 
like projects. Successful project managers who do not use a formal EVM 
system invariably have their “own system” that works for them and does 
much the same things as a formal system. Unfortunately, it is very 
difficult to compare their systems to other projects, to do analysis, 
or to validate the data for good decision making. How much management 
visibility these systems provide for timely decision making is 
debatable. Many companies have limited management insight into their 
projects, which hinders problem identification and corrective action. 
* The rigor and discipline of a formal EVM system ensure a certain 
continuity and consistency that are useful, notwithstanding the 
availability and turnover of knowledgeable personnel. When staff leave 
the job for an extended time, the structure of the system makes it 
possible for another person to take over for those who left. The new 
staff may not have the personal knowledge of the specific project, 
schedule, or EVM data but may understand enough about EVM to know how 
to interpret the data and evaluate the processes because of this 
disciplined structure. 
* Thus, EVM practices go beyond the basics, have greater rigor and 
formality; the benefit is that this ensures uniform practices that are 
auditable and consistent with other entities for relative comparison 
and benchmarking. Without this formality, it would be much more 
difficult to draw industry standard benchmarks and comparisons for 
improvement. 

Concern: 4. EVM is merely a government reporting requirement; 
Basis for concern: 
* It is often viewed only as a reporting requirement. But the benefit 
of a formal EVM system in government reporting is that the end product 
emerges only after organizing, planning, authorizing, executing, change 
management, analysis, and controlling are complete. The reports give 
management, as well as the government, a view into the health of a 
project to make sure taxpayer money is being used judiciously. 
* While it gives the government visibility into the project, it is 
primarily intended as a systematic approach to help in managing a 
project or program. Reports are only as good as the data and the 
processes that support them; EVM serves more as a set of mandated 
government project management tools with reporting as a by-product. 

Concern: 5. Reports are a key product of EVM; 
Basis for concern: 
* Yes, they are, but it would be shortsighted to focus on reporting 
without recognizing the need for other subsets of an EVM system to 
provide reliable and auditable data. What comes out is a by-product of 
what goes in and how well it is maintained. 
* EVM reporting is intended to provide reliable information for timely 
decision making to maximize the probability of successfully executing a 
project; it is a project management “process tool set” that helps make 
certain that proven management techniques are used to run projects. 
* Where EVM is institutionalized, management uses reports to identify 
significant variances and drill down into areas of exception 
(management by exception) for corrective actions and decision making. 
When EVM is ingrained, reports are greatly anticipated and 
thoroughly discussed by senior management. 

Concern: 6. EVM is a financial management tool; 
Basis for concern: 
* Yes, to some degree, but in reality, it is an enhancement to 
traditional financial management; EVM requirements came about largely 
to reduce the high percentage of cost and schedule overruns on programs 
that still ended up delivering a product that was technically inferior 
to what the government required. Trying to do forensic analysis of a failed 
project is tough enough without a reliable and rigorous system in 
place. If one can prevent having a failed project in the first place, 
forensics may not be necessary. 
* EVM enhances the traditional financial management tool by adding 
visibility of actual performance for budgeted tasks; this dimension of 
information, coupled with the traditional planned budget vs. actual 
costs, allows for better forecasting of final costs, as well as early 
warning of performance variances for timely decision making and 
corrective actions. 
* Because EVM is a more accurate mechanism for predicting costs than 
the traditional financial models, it is more reliable for determining 
funding requirements and use. 

Concern: 7. EVM data are backward looking and too old to be useful; 
Basis for concern: 
* This is only partially true. Some metrics data an EVM system produces 
are backward looking and show performance to date, both cumulative and 
by period; they can help identify past trends that can reliably be used 
to predict costs and schedule performance, along with the final cost of 
a project. 
* Presenting standard graphics is a best practice for reporting EVM 
trends and status to senior management. 
* Using EVM, management has the ability to make timely decisions and 
adjustments as needed to affect the final outcome of a project and 
maximize profitability. 

Concern: 8. Variances EVM reveals are bad and should always be avoided; 
Basis for concern: 
* Variances are expected because programs are rarely performed to plan: 
neither good nor bad, they simply measure how much actual performance 
has varied from the plan. 
* Variance thresholds try to quantify an acceptable range of deviation; 
those that do not exceed a threshold are not usually a concern while 
those that do are worthy of further inspection to determine the best 
course of action to minimize any negative impacts to cost and schedule 
objectives. 
* Variances can indicate one or more of the following: how well the 
project was planned (statement of work definition, estimating and 
estimating assumptions, execution strategy, procurement strategy, risk 
management); how well changes to the baseline plan are being 
implemented; how much planned and unplanned change has occurred since 
inception; how well the project is being executed. 

Concern: 9. No one cares about EVM data; 
Basis for concern: 
* False. That is like saying that the pilot of a jet aircraft does not 
care about what the navigation instrumentation says. EVM data are the 
navigation instrumentation that tells the project manager how well the 
flight plan is working. 
* If line managers and the project manager ignore the EVM data, they 
may not arrive at cost and schedule goals; the data help them make the 
necessary midcourse adjustments so they can arrive at the planned 
destination on time. 

Concern: 10. EVM does not help with managing a program; 
Basis for concern: 
* False: Refer to previous 9 items, especially 3, 7, 8, and 9, which 
apply to both projects and programs. 
* When managing a program, it is very important to identify and manage 
resources to ensure that over- or underallocations do not exist; EVM 
helps identify these conditions. 
* It helps identify and manage program and project risks and program 
and project funding requirements to ensure that funding shortfalls do 
not surprise the program manager. 
 
Source: GAO. 

[End of table] 

Implementing EVM at the Program Level: 
 
Implementing EVM at the program rather than just the contract level is 
considered a best practice. Furthermore, it directly supports federal 
law requiring executive agency heads to approve or define the cost, 
performance, and schedule goals for major agency acquisition programs. 
Specifically, the Federal Acquisition Streamlining Act of 1994 
established the congressional policy that the head of each executive 
agency should achieve, on average, 90 percent of the agency’s cost, 
performance, and schedule goals established for major acquisition 
programs.[Footnote 65] To implement this policy, 
agency heads are to determine whether there is a continuing need for 
programs that are significantly behind schedule, over budget, or not in 
compliance with the performance or capability requirements and identify 
suitable actions to be taken, including termination. Additionally, OMB 
Circular A-11, part 7, section 300, addresses the use of EVM as an 
important part of a program’s management and decision making.[Footnote 
66] That policy requires the use of an integrated EVM system across the 
entire program to measure how well the government and its contractors 
are meeting a program’s approved cost, schedule, and performance goals. 
Integrating government and contractor cost, schedule, and performance 
status should result in better program execution through more effective 
management. In addition, integrated EVM data can be used to justify 
budget requests. 

Requiring EVM at the program level also makes government functional 
area personnel accountable for their contributions to the program. 
Further, it requires government agencies to plan for a risk-adjusted 
program budget so that time and funds are available when needed to meet 
the program’s approved baseline objectives. Continuous planning through 
program-level EVM also helps government program managers adequately 
plan for the receipt of material, like government furnished equipment, 
to ensure that the contractor can execute the program as planned. 
Finally, program-level EVM helps identify key decision points up front 
that should be integrated into both the contractor’s schedule and the 
overall program master schedule, so that significant events and 
delivery milestones are clearly established and known by all. IBRs 
should include all government and contractor organizations involved in 
performing the program, as well as those responsible for establishing 
requirements, performing tests, and monitoring performance. 

Federal And Industry Guidelines For Implementing EVM: 

The benefits of using EVM are singularly dependent on the data from the 
EVM system. Organizations must be able to evaluate the quality of an 
EVM system in order to determine the extent to which the cost, 
schedule, and technical performance data can be relied on for program 
management purposes. In recognition of this, the American National 
Standards Institute (ANSI) and the Electronic Industries Alliance (EIA) 
have jointly established a national standard for EVM systems—ANSI/EIA-
748-B. The National Defense Industrial Association (NDIA) is the 
subject matter expert for the standard.[Footnote 67] 

Soon after the standard was established, leading companies, including 
commercial businesses, began using it to manage their programs even 
when EVM was not mandated. They saw ANSI and EIA standards as 
best practices that provided a scalable approach to using EVM for any 
contract type, contract size, and duration. 

DOD adopted the ANSI guidelines for managing government programs with 
the expectation that program managers would be responsible for ensuring 
that industry-developed standards were being met by ongoing process 
surveillance. Other agencies soon followed DOD’s example. Recently, OMB 
imposed the use of EVM for all major capital acquisitions in accordance 
with OMB Circular A-11, Part 7—OMB stated in its 2006 Capital 
Programming Guide that all major acquisitions with development effort 
are to require that contractors use an EVM system that meets the ANSI 
guidelines.[Footnote 68] 

The ANSI guidelines were originally written for companies, but the EIA-
748-B version began introducing more generic terminology for government 
and other organizations. They consist of 32 guidelines in five basic 
categories: (1) organization; (2) planning, scheduling, and budgeting; 
(3) accounting considerations; (4) analysis and management reports; and 
(5) revisions and data maintenance (see table 31). In general, they 
define acceptable methods for organizations to define the contract or 
program scope of work using a WBS; identify the organizations 
responsible for performing the work; integrate internal management 
subsystems; schedule and budget authorized work; measure the progress 
of work based on objective indicators; collect the cost of labor and 
materials associated with the work performed; analyze variances from 
planned cost and schedules; forecast costs at contract completion; and 
control changes. 

Table 31: ANSI Guidelines for EVM Systems: 

Organization: 

Guideline: 1; 
Category and statement: Define the authorized work elements for the 
program. A WBS, tailored for effective internal management control, is 
commonly used in this process. 
 
Guideline: 2; 
Category and statement: Identify the program organizational structure, 
including the major subcontractors responsible for accomplishing the 
authorized work, and define the organizational elements in which work 
will be planned and controlled. 
 
Guideline: 3; 
Category and statement: Provide for the integration of the planning, 
scheduling, budgeting, work authorization, and cost accumulation 
processes with one another and, as appropriate, the program WBS and 
program organizational structure. 

Guideline: 4; 
Category and statement: Identify the organization or function 
responsible for controlling overhead (indirect costs). 

Guideline: 5; 
Category and statement: Provide for integration of the program WBS and 
the program organizational structure in a manner that permits cost and 
schedule performance measurement by elements of either structure or 
both, as needed. 

Planning, scheduling, and budgeting: 

Guideline: 6; 
Category and statement: Schedule the authorized work in a way that 
describes the sequence of work and identifies significant task 
interdependencies required to meet the program’s requirements. 

Guideline: 7; 
Category and statement: Identify physical products, milestones, 
technical performance goals, or other indicators that will be used to 
measure progress. 

Guideline: 8; 
Category and statement: Establish and maintain a time-phased budget 
baseline, at the control account level, against which program 
performance can be measured. Initial budgets established for 
performance measurement will be based on either internal management 
goals or the external customer-negotiated target cost, including 
estimates for authorized but undefinitized work.[A] Budget for 
far-term efforts may be held in higher-level accounts until an 
appropriate time for allocation at the control account level. If an 
overtarget baseline is used for performance measurement reporting 
purposes, prior notification must be provided to the customer. 

Guideline: 9; 
Category and statement: Establish budgets for authorized work with 
identification of significant cost elements (labor, material) as needed 
for internal management and control of subcontractors. 

Guideline: 10; 
Category and statement: To the extent it is practical to identify the 
authorized work in discrete work packages, establish budgets for this 
work in terms of dollars, hours, or other measurable units. Where the 
entire control account is not subdivided into work packages, identify 
the far-term effort in larger planning packages for budget and 
scheduling purposes. 

Guideline: 11; 
Category and statement: Provide that the sum of all work package 
budgets and planning package budgets within a control account equals 
the control account budget. 

Guideline: 12; 
Category and statement: Identify and control level-of-effort activity 
by time-phased budgets established for this purpose. Only effort not 
measurable or for which measurement is impractical may be classified as 
level of effort. 

Guideline: 13; 
Category and statement: Establish overhead budgets for each significant 
organizational component for expenses that will become indirect costs. 
Reflect in the program budgets, at the appropriate level, the amounts 
in overhead pools that are planned to be allocated to the program as 
indirect costs. 

Guideline: 14; 
Category and statement: Identify management reserves and undistributed 
budget. 

Guideline: 15; 
Category and statement: Provide that the program target cost goal is 
reconciled with the sum of all internal program budgets and management 
reserves. 

Accounting considerations: 

Guideline: 16; 
Category and statement: Record direct costs in a manner consistent with 
the budgets in a formal system controlled by the general books of 
account. 

Guideline: 17; 
Category and statement: When a WBS is used, summarize direct costs from 
control accounts into the WBS without allocating a single control 
account to two or more WBS elements. 

Guideline: 18; 
Category and statement: Summarize direct costs from the control 
accounts into the organizational elements without allocating a single 
control account to two or more organizational elements. 

Guideline: 19; 
Category and statement: Record all indirect costs that will be 
allocated to the program consistent with the overhead budgets. 

Guideline: 20; 
Category and statement: Identify unit costs, equivalent unit costs, or 
lot costs when needed. 

Guideline: 21; 
Category and statement: For the EVM system, the material accounting system 
will provide for (1) accurate cost accumulation and assignment of costs 
to control accounts in a manner consistent with the budgets using 
recognized, acceptable, costing techniques; (2) cost recorded for 
accomplishing work performed in the same period that earned value is 
measured and at the point in time most suitable for the category of 
material involved but no earlier than the time of actual receipt of 
material; (3) full accountability of all material purchased for the 
program, including the residual inventory. 
 
Analysis and management reports: 

Guideline: 22; 
Category and statement: At least monthly, generate the following 
information at the control account and other levels as necessary for 
management control, using actual cost data from, or reconcilable with, 
the accounting system: (1) comparison of the amount of planned budget 
and the amount of budget earned for work accomplished (this comparison 
provides the schedule variance); (2) comparison of the amount of the 
budget earned and the actual (applied where appropriate) direct costs 
for the same work (this comparison provides the cost variance). 

Guideline: 23; 
Category and statement: Identify, at least monthly, the significant 
differences between both planned and actual schedule performance and 
planned and actual cost performance and provide the reasons for the 
variances in the detail needed by program management. 

Guideline: 24; 
Category and statement: Identify budgeted and applied (or actual) 
indirect costs at the level and frequency needed by management for 
effective control, along with the reasons for any significant 
variances. 

Guideline: 25; 
Category and statement: Summarize the data elements and associated 
variances through the program organization or WBS to support management 
needs and any customer reporting specified in the contract. 

Guideline: 26; 
Category and statement: Implement managerial actions taken as the 
result of earned value information. 

Guideline: 27; 
Category and statement: Develop revised estimates of cost at completion 
based on performance to date, commitment values for material, and 
estimates of future conditions. Compare this information with 
the performance measurement baseline to identify variances at 
completion important to management and any applicable customer 
reporting requirements, including statements of funding requirements. 

Revisions and data maintenance: 

Guideline: 28; 
Category and statement: Incorporate authorized changes in a timely 
manner, recording their effects in budgets and schedules. In the 
directed effort before negotiating a change, base such revisions on the 
amount estimated and budgeted to the program organizations. 

Guideline: 29; 
Category and statement: Reconcile current budgets to prior budgets in 
terms of changes to authorized work and internal replanning in the 
detail needed by management for effective control. 

Guideline: 30; 
Category and statement: Control retroactive changes to records 
pertaining to work performed that would change previously reported 
amounts for actual costs, earned value, or budgets. Adjustments should 
be made only for correcting errors, making adjustments for routine 
accounting or the effects of customer or management directed changes, 
or improving the baseline integrity and accuracy of performance 
measurement data. 

Guideline: 31; 
Category and statement: Prevent revisions to the program budget except 
for authorized changes. 

Guideline: 32; 
Category and statement: Document changes to the performance measurement 
baseline. 

Source: Excerpts from Earned Value Management Systems (ANSI/EIA 748-
B), Copyright © (2007), Government Electronics and Information 
Technology Association. All Rights Reserved. Reprinted by Permission. 

[A] An undefinitized contract is one in which the contracting parties 
have not fully agreed on the terms and conditions. 

[End of table] 

As noted earlier, OMB requires the use of EVM on all major acquisition 
programs with development effort. Further, the EVM system must comply 
with the agency’s implementation of the ANSI guidelines. Several other 
guides are available to help agencies implement EVM systems. We 
outlined these guides in table 3 and list them again here in table 32. 

Table 32: EVM Implementation Guides: 

Guide: DOD, The Program Manager’s Guide to the Integrated Baseline 
Review Process (Washington, D.C.: OSD (AT&L), April 2003); 
Applicable agency: DOD; 
Description: Defines the IBR’s purpose, goals, and objectives; 
discusses how it leads to mutual understanding of risks inherent in 
contractors’ performance plans and management control systems; and 
explains the importance of formulating a plan to handle and mitigate 
these risks. 

Guide: NDIA, National Defense Industrial Association (NDIA) Program 
Management Systems Committee (PMSC) Surveillance Guide (Arlington, Va.: 
October 2004); 
Applicable agency: All; 
Description: Defines a standard industry approach for monitoring 
whether an EVM system satisfies the processes and procedures outlined 
in the ANSI guidelines. 

Guide: NDIA, National Defense Industrial Association (NDIA) Program 
Management Systems Committee (PMSC) Earned Value Management Systems 
Intent Guide (Arlington, Va.: January 2005); 
Applicable agency: All; 
Description: Defines in detail the management value and intent for all 
32 ANSI guidelines. Contractors use it to assess initial compliance and 
perform implementation surveillance. 

Guide: Defense Contract Management Agency, Department of Defense Earned 
Value Management Implementation Guide (Alexandria, Va.: October 2006); 
Applicable agency: DOD, FAA, NASA; 
Description: Provides guidance on the framework to follow during 
implementation and surveillance of an EVM system. 

Guide: National Defense Industrial Association, Program Management 
Systems Committee, “NDIA PMSC ANSI/EIA 748 Earned Value Management 
System Acceptance Guide,” draft, working release for user comment 
(Arlington, Va.: November 2006); 
Applicable agency: All; 
Description: Defines an EVM system acceptance process that would apply 
to industry and government. NDIA has expanded this proposal to a draft 
EVM process implementation guide that will connect its guides with more 
specific information on how they relate to one another. 

Guide: National Defense Industrial Association, Program Management 
Systems Committee, “NDIA PMSC Earned Value Management Systems 
Application Guide,” draft, working release for use; 
Applicable agency: All; 
Description: Defines a standard approach for all organizations 
implementing an EVM system through all phases of acquisition. 
 
Source: GAO. 

[End of table] 

The remainder of the Cost Guide assumes that readers understand basic 
EVM principles. Readers unfamiliar with EVM can also obtain such 
information from, for example, the Defense Acquisition University and 
the Project Management Institute (PMI).[Footnote 69] 

The Thirteen Steps In The EVM Process: 

The EVM process has thirteen fundamental steps, outlined and described 
in this section: 

1. define the scope of effort using a WBS; 

2. identify who in the organization will perform the work; 

3. schedule the work; 

4. estimate the labor and material required to perform the work and 
authorize the budgets, including management reserve; 

5. determine an objective measure of earned value; 

6. develop the performance measurement baseline; 

7. execute the work plan and record all costs; 

8. analyze EVM performance data and record variances from the PMB plan; 

9. forecast EACs using EVM; 

10. conduct an integrated cost-schedule risk analysis; 

11. compare EACs from EVM (step 9) with EAC from risk analysis (step 
10);[Footnote 70] 

12. take management action to mitigate risks; and 

13. update the performance measurement baseline as changes occur. 

Define the Scope with a WBS: 

The WBS, a critical component of EVM that defines the work to be 
performed, should be the basis of the cost estimate and the project 
schedule. In the schedule, the WBS elements are linked to one another 
with logical relationships and lead to the end product or final 
delivery. The WBS progressively deconstructs the deliverables of the 
entire effort through lower-level WBS elements and control accounts. 

Figure 23 shows how the overall program plan breaks down. The 
hierarchical WBS ensures that the entire statement of work accounts for 
the detailed technical tasks and, when completed, facilitates 
communication between the customer and supplier on cost, schedule, 
technical information, and the progress of the work. It is important 
that the WBS is comprehensive enough to represent the entire program to 
a level of detail sufficient to manage the size, complexity, and risk 
associated with the program. In addition, the WBS should be the basis 
of the program schedule. Furthermore, there should be only one WBS for 
each program, and it should match the WBS used for the cost estimate 
and schedule so that actual costs can be fed back into the estimate and 
there is a correlation between the cost estimate and schedule. 
Moreover, while costs are usually tracked at lower levels, what is 
reported in an EVM system is usually summarized at a higher level, 
perhaps matching the summary level of the schedule that is often used 
for a schedule risk analysis, facilitating the preparation of an 
integrated cost-schedule risk analysis. However, through the fluidity 
of the parent-child relationship, the WBS can be expanded to varying 
degrees of detail so that problems can be quickly identified and 
tracked. 
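
The parent-child roll-up just described can be illustrated with a few 
lines of code. The following minimal Python sketch uses hypothetical 
WBS numbers and dollar values (only loosely echoing figure 23 below) to 
sum lower-level cost estimates into their parent elements; it 
illustrates the numbering convention, not a method prescribed by this 
guide. 

# Minimal sketch: roll lower-level WBS cost estimates up to parent elements.
# WBS numbers and dollar values are hypothetical, for illustration only.
leaf_estimates = {
    "1110": 23_552,   # Component 1
    "1120": 18_400,   # Component 2 (hypothetical)
    "1210": 61_000,   # Subsystem B component (hypothetical)
}

def parent(wbs_code):
    """Return the parent WBS code, e.g. '1110' -> '1100' -> '1000'."""
    digits = list(wbs_code)
    for i in range(len(digits) - 1, 0, -1):
        if digits[i] != "0":
            digits[i] = "0"
            return "".join(digits)
    return None  # top of the WBS

rollup = dict(leaf_estimates)
for code, cost in leaf_estimates.items():
    node = parent(code)
    while node is not None:
        rollup[node] = rollup.get(node, 0) + cost
        node = parent(node)

for code in sorted(rollup):
    print(code, rollup[code])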

Figure 23: WBS Integration of Cost, Schedule, and Technical 
Information: 

[Refer to PDF for image: illustrations] 

Requirements: 
System specification; 
* 1000 Prime mission; 
- 1100 Subsystem A; 
- 1110 Component 1. 

WBS elements: 
1000 Prime mission product 
* 1100 Subsystem A; 
- 1110 Component 1 through, 
- 1189 Component n. 

Technical description: 
Subsystem (WBS 1100); 
Design, develop, produce, and verify, complete subsystem A, defined as 
component 1, component 2, and other elements. 

Program cost estimate: 
1000 Prime mission product: $1,234,567; 
1100 Subsystem A: $456,890; 
1110 Component 1: $23,552. 

Program plan: 

Program plan: 1. Preliminary design review (PDR); 
Events: PDR; 
Accomplishment criteria: 1.
a. Duty cycle defined 
b. Preliminary analysis complete. 

Program schedule: 
 
Detailed tasks: Program events: 
1. Preliminary design complete, duty cycle defined 
20 XX: PDR; 
20 XY: CDR;
20 XZ: 

Source: NDIA. 

Note: CDR = critical design review. 

[End of figure] 

Identify Who Will Do the Work: 

Once the WBS has been established, the next step is to assign someone 
to do the work. Typically, someone from the organization is assigned to 
perform a specific task identified in the WBS. To ensure that someone 
is accountable for every WBS element, it is useful to determine levels 
of accountability, or control accounts, at the points of intersection 
between the organizational breakdown structure and the WBS. The control 
account becomes the management focus of an EVM system and the focal 
point for performance measurement. 

It is at the control account level that actual costs are collected and 
variances from the baseline plan are reported in the EVM system. Figure 
24 shows how control accounts are determined. The WBS is shown at the 
top, including program elements and contract reporting elements and 
detailed elements. To the left is the organizational breakdown 
structure. The control accounts lie in the center of the figure, where 
the WBS and organizational breakdown structure intersect. As the box at 
the far right of the figure indicates, each control account is further 
broken down into work packages and planning packages. Each of these has 
staff who are assigned responsibility for managing and completing the 
work. 
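
One simple way to represent the intersection described above is to key 
each control account by its WBS element and organizational unit. The 
Python sketch below uses hypothetical element and organization names 
(only loosely echoing figure 24) to show how a single accountable 
manager and budget can be looked up for each intersection. 

# Minimal sketch: control accounts keyed by (WBS element, organizational unit).
# All names and budgets are hypothetical, for illustration only.
control_accounts = {
    ("Structural integration", "Tropical engineering"): {"manager": "CAM-1", "budget": 120_000},
    ("Biological integration", "Hydro engineering"):    {"manager": "CAM-2", "budget":  95_000},
}

def account_for(wbs_element, org_unit):
    """Return the control account at a WBS/OBS intersection, if one exists."""
    return control_accounts.get((wbs_element, org_unit))

ca = account_for("Structural integration", "Tropical engineering")
print(ca["manager"], ca["budget"])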

Figure 24: Identifying Responsibility for Managing Work at the Control 
Account: 

[Refer to PDF for image: illustration] 

Program WBS elements: 
Aquarium development program; 
Program management and systems engineering; 
Develop and integrate aquarium. 

Contract WBS reporting elements: 
Material acquisition; 
Material integration; 
Development documentation. 

Contract WBS detailed statements: 
Structural integration; 
Biological integration; 
Integration quality control; 
Environmental control; 
Plant life; 
Tropical fish. 

Organizational breakdown structure: 
Aquatics division: 
* Marketing; 
* Engineering: 
* Biological engineering: 
- Tropical engineering; 
-- Control account: work packages; planning packages: together form WBS 
data summary and CBS data summary; 
- Hydro engineering; 
-- Control account; 
- Hydrobotanical engineering; 
-- Control account; 
* Hardware engineering; 
* Operations. 
 
Source: © 2003 SCEA, “Earned Value Management Systems.” 

[End of figure] 

Control accounts represent the level by which actual costs are 
accumulated and compared to planned costs. A control account manager is 
responsible for managing, tracking, and reporting all earned value data 
defined within each control account. Thus, control accounts are the 
natural control point for EVM planning and management. 

Work packages—detailed tasks typically 4 to 6 weeks long—require 
specific effort to meet control account objectives and are defined by 
who authorizes the effort and how the work will be measured and 
tracked. They reflect near-term effort. Planning packages are far-term 
work and usually planned at higher levels. Budgets for direct labor, 
overhead, and material are assigned to both work and planning packages 
so that total costs to complete the program are identified at the 
outset. As time passes, planning packages are broken down into detailed 
work packages. This conversion of work from a planning to a work 
package, commonly known as “rolling wave” planning, occurs for the 
entire life of the program until all work has been planned in detail. A 
best practice is to plan the rolling wave to a design review, test, or 
other major milestone rather than to an arbitrary period such as 6 
months. 
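
Because work package and planning package budgets within a control 
account must sum to the control account budget (ANSI guideline 11 in 
table 31), converting planning packages into detailed work packages 
should not change the total. The Python sketch below, with hypothetical 
budget figures, illustrates that check and a simple rolling wave 
conversion. 

# Minimal sketch: work package and planning package budgets within a control
# account must sum to the control account budget (ANSI guideline 11).
# All budget figures are hypothetical.
control_account = {
    "budget": 500_000,
    "work_packages":     [{"name": "WP-01", "budget": 120_000},
                          {"name": "WP-02", "budget":  80_000}],
    "planning_packages": [{"name": "PP-01", "budget": 300_000}],
}

def distributed(ca):
    return sum(p["budget"] for p in ca["work_packages"] + ca["planning_packages"])

assert distributed(control_account) == control_account["budget"]

# Rolling wave: detail near-term planning-package scope into work packages
# without changing the control account total.
control_account["planning_packages"].pop(0)
control_account["work_packages"] += [{"name": "WP-03", "budget": 175_000},
                                     {"name": "WP-04", "budget": 125_000}]
assert distributed(control_account) == control_account["budget"]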

In planning the baseline, programs ought to consider the allocation of 
risk into the baseline up front—especially when addressing the issue of 
rework and retesting. Experts have noted that to set up a realistic 
baseline, anticipated rework could be a separate work package so as to 
account for a reasonable amount of rework but still have a way to track 
variances. Using this approach, programs do not exclude rework from 
the budget baseline, because they acknowledge that some efforts, such 
as design, are bound to involve a significant amount of revision. 

Schedule the Work to a Timeline: 

Developing a schedule provides a time sequence for the duration of the 
program’s activities and helps everyone understand both the dates for 
major milestones and the activities, often called “critical and near 
critical activities,” that drive the schedule. A program schedule also 
provides the vehicle for developing a time-phased budget baseline. The 
typical method of schedule analysis is the critical path method, 
implemented in standard scheduling software packages. 

Because some items such as labor, supervision, rented equipment and 
facilities, and escalation cost more if the program takes longer, a 
schedule can contribute to an understanding of the cost impact if the 
program does not finish on time. The program’s success also depends on 
the quality of its schedule. If it is well integrated, the schedule 
clearly shows the logical relationships between program activities, 
activity resource requirements and durations, and any constraints that 
affect their start or completion. The schedule shows when major events 
are expected as well as the completion dates for all activities leading 
up to them, which can help determine if the schedule is realistic and 
achievable. When fully laid out, a detailed schedule can be used to 
identify where problems are or could potentially be. Moreover, as 
changes occur within a program, a well-statused schedule will aid in 
analyzing how they affect the program. 

For these reasons, an integrated schedule is key in managing program 
performance and is necessary for determining what work remains and the 
expected cost to complete it. As program complexity increases, so must 
the schedule’s sophistication. To develop and maintain an integrated 
network schedule, 

* all activities must be defined (using the WBS) at some level of 
detail; 

* all activities must be sequenced and related using network logic. The 
schedule should be horizontally and vertically integrated;

* the activities must be resource-loaded with labor, material, and 
overhead; 

* the duration of each activity must be estimated, usually with 
reference to the resources to be applied and their productivity, along 
with any external factors affecting duration;
 
* the program master schedule and critical path must be identified; 

* float—the amount of time a task can slip before affecting the 
critical path—for activities must be calculated; 

* a schedule risk analysis must be run for larger, more complex, 
important, or risky programs; 

* the schedule should be continuously updated using logic and durations 
to determine dates; and 

* the schedule should be analyzed continuously for variances and 
changes to the critical path and completion date. 

We discuss each of these items next. 

The schedule should reflect all activities (steps, events, outcomes), 
including activities the government and its contractors are to perform, 
and should be derived from the program’s work breakdown structure. The 
schedule’s activities should also be traceable to the program statement 
of work to ensure all effort is included. Steps 1 and 2 of the EVM 
process define the activities and provide input for loading the
activities with labor costs. 

The schedule should line up all activities in the order that they are 
to be carried out. In particular, activities that must finish before 
the start of other activities (that is, predecessor activities) as well 
as activities that cannot begin until other activities are completed 
(successor activities) should be identified. In this way, dependencies 
among activities that lead to the accomplishment of events or 
milestones can be established and used as a basis for guiding work and 
measuring progress. When activities are sequenced, using dependencies 
between them that reflect the program’s execution plan, the result is a 
network of activity chains like those shown in figure 25. 

Figure 25: An Activity Network: 

[Refer to PDF for image: illustration] 

Illustrates: 
Start; 
Activities in the critical path; 
Activities not in the critical path. 

Source: © 2005 MCR LLC, “Schedule Risk Analysis.” 

[End of figure] 

A network diagram not only outlines the order of the activities and 
their dependencies; it also documents how the program measures progress 
toward certain milestones. By linking activities with finish-to-start 
logic, one can know which activities must finish before others (known 
as predecessor activities) begin and which activities may not begin 
until others (successor activities) have been completed. Other 
relationships such as start-to-start and finish-to-finish are used as 
well. Using this approach, a valid Critical Path Method (CPM) network 
of logically linked tasks and events begins to emerge, enabling the 
schedule network to calculate dates and to predict changes in future 
task performance. A valid CPM network should be the basis for any 
schedule so that it best represents the plan and can respond to 
changes. This information fosters communication between team members 
and better understanding of the program as a whole, identifies 
disconnects as well as hidden opportunities, and promotes efficiency 
and accuracy. Moreover, this also provides a method for controlling the 
program by comparing actual to planned progress. 

Schedules should be integrated horizontally and vertically. Integrated 
horizontally, the schedule links the products and outcomes associated 
with already sequenced activities. These links are commonly referred to 
as hand offs and serve to verify that activities are arranged in the 
order that will achieve aggregated products or outcomes. Horizontal 
integration also demonstrates that the overall schedule is rational, 
planned in a logical sequence, accounts for interdependencies between 
work and planning packages, and provides a way to evaluate current 
status. Being traceable horizontally, however, is not a simple matter 
of making sure that each activity has a successor. Activities need to 
have certain predecessor-successor relationships so that the schedule 
gives the correct results when it is updated or when durations change. 
Two logic requirements have to be provided: 

1. finish-to-start or start-to-start predecessors, so that if the 
activity is longer than scheduled it does not just start earlier 
automatically; and 

2. finish-to-start or finish-to-finish successors that will be “pushed” 
if they take longer or finish later. 

These logical requirements are needed to prevent “dangling logic,” 
which happens when activities or tasks are created without predecessors 
or successors. Fundamentally, although a start-to-start successor is 
proper and sometimes useful, it is not sufficient to avoid danglers. 
With dangling logic, risk in activities will not cascade down to their 
successors automatically when schedules are updated. Satisfying these 
requirements is not only good critical path method scheduling practice 
but is also crucial during Monte Carlo simulation, when activity 
durations are deliberately changed thousands of times. Without this 
logic, the simulation will not be able to identify the correct dates 
and critical paths when the durations change. 
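
A schedule health check for danglers can be automated. The Python 
sketch below uses a small hypothetical network to flag activities that 
lack a finish-to-start or start-to-start predecessor, or a 
finish-to-start or finish-to-finish successor (the two conditions 
listed above). 

# Minimal sketch: flag "dangling logic" in a schedule network.
# Tasks and links are hypothetical; each link is (predecessor, successor, type).
tasks = ["A", "B", "C", "D"]
links = [("A", "B", "FS"), ("B", "C", "SS"), ("C", "D", "FS")]

def danglers(tasks, links):
    driven, driving = set(), set()
    for pred, succ, kind in links:
        if kind in ("FS", "SS"):    # the successor's start is driven by logic
            driven.add(succ)
        if kind in ("FS", "FF"):    # the predecessor's finish pushes a successor
            driving.add(pred)
    no_start_logic  = [t for t in tasks if t not in driven]
    no_finish_logic = [t for t in tasks if t not in driving]
    return no_start_logic, no_finish_logic

print(danglers(tasks, links))
# (['A'], ['B', 'D']): 'A' and 'D' are the network start and finish;
# 'B' is a true dangler because a slip in its finish pushes nothing downstream.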

The schedule should also be integrated vertically, meaning that 
traceability exists among varying levels of activities and supporting 
tasks and subtasks. Such mapping or alignment within the layers of the 
schedule among levels—master, intermediate, detailed—enables different 
groups to work to the same master schedule. When schedules are 
vertically integrated, lower-level schedules are clearly traced to 
upper-tiered milestones, allowing for total schedule integrity and 
enabling different teams to work to the same schedule expectations. 

More risky or more complex programs should have resource-loaded 
schedules—that is, schedules with resources of staff, facilities, and 
materials needed to complete the activities that use them. Resource 
loading can assist in two ways: 

1. scarce resources can be defined and their limits noted, so that when 
they are added to the activities and “resource-leveled,” the resources 
in scarce supply will not be overscheduled in any time period; and 

2. all resources can be defined and have costs placed on them so that 
the program cost estimate can be developed within the scheduling 
package. 

The next step is estimating how long each activity will take—who will 
do the work, whether the resources are available and their 
productivity, and whether any external factors might affect the 
duration (funding or time constraints). It is crucial at this point in 
schedule development to make realistic assumptions and specify 
realistic durations for the activities. In determining the duration of 
each activity, the same rationale, data, and assumptions used for cost 
estimating should be used for schedule estimating. Further, these 
durations should be as short as possible and they should have specific 
start and end dates. Excessively long periods needed to execute an 
activity should prompt further decomposition of the activity so that 
shorter execution durations will result. 

Often the customer, management, or other stakeholder will ask to 
shorten the program schedule. Several strategies may help. Some activities can 
be shortened by adding more people to do the work, although others will 
take a fixed amount of time no matter what resources are available. 
Other strategies often require “fast track” or “concurrent” scheduling 
that schedules successor activities or phases to finish before their 
logical predecessors have completed. In this case, activities or phases 
that would, without the pressure for a shorter schedule, be scheduled 
in sequence are instead overlapped. This approach must be used with 
caution since shortening activity durations or overlapping activities 
may not be prudent or even possible. 

Further, schedules need to consider program calendars and any special 
calendars that may be more appropriate for shared resources (test 
facilities, for example, may work 24/7), and calendars should recognize 
holidays and other planned leave. If training is required, it should 
be provided for in the 
schedule. Also, since it is sometimes unwise to assume 100 percent 
productivity, many organizations routinely provide for sick leave in their 
estimates. Procurement time for ordering and receiving material and 
equipment must be added so it is available when needed—some material 
and equipment take time to obtain or produce and are often called long 
lead time items. Schedules need to recognize these items as critical, 
so they can be ordered before design is complete. 

It is useful to rely on historical data for scheduling information as 
much as possible when developing activity durations so they are as 
realistic as possible. Often parts of the program have no analogous 
estimates, so program participants will use expert judgment to estimate 
durations. Further, it is a best practice for schedule duration 
rationale to tie directly to the cost estimate documentation. Figure 26 
shows the typical output of the activity duration estimate. 

Figure 26: Activity Durations as a Gantt Chart: 

ID: 1; 
Name: Event 1; 
Start: 4/28/02; through 2002, Q3. 

ID: 2; 
Name: Accomplishment 1.1; 
Start: 4/28/02; through early 2002, Q3. 

ID: 3; 
Name: Criterion 1.1.1; 
Start: 4/28/02; through late 2002, Q2. 

ID: 4; 
Name: Task 1.1.1.1; 
Start: 4/28/02; through late 2002, Q2. 

ID: 5; 
Name: Criterion 1.1.2; 
Start: 5/12/02; through late 2002, Q2. 

ID: 6; 
Name: Task 1.1.2.1; 
Start: 5/12/02; through late 2002, Q2. 

ID: 7; 
Name: Accomplishment 1.2; 
Start: 5/5/02; through early 2002, Q3. 

ID: 8; 
Name: Criterion 1.2.1; 
Start: 5/5/02; through early 2002, Q3. 

ID: 9; 
Name: Task 1.2.1.1; 
Start: 5/5/02; through early 2002, Q3. 

ID: 10; 
Name: Event 2; 
Start: 5/3/02 

ID: 11; 
Name: Accomplishment 2.1; 
Start: 5/3/02; through late 2002, Q3. 

ID: 12; 
Name: Criterion 2.1.1; 
Start: 7/26/02; through late 2002, Q3. 

ID: 13; 
Name: Task 2.1.1.1; 
Start: 7/26/02; through late 2002, Q3. 

ID: 14; 
Name: Criterion 2.1.2; 
Start: 5/3/02; through middle 2002, Q3. 

Source: DOD. 

[End of figure] 

Historically, state-of-the-art technology development programs have 
taken longer than planned for the same reasons that costs often exceed 
the estimate: no point estimate for schedule duration is correct and 
risk is generally high in development programs. Instead, each estimate 
of activity duration has a range of possible outcomes, driven by 
various uncertainties such as lack of available technical capability, 
slow software development, integration problems, and test failures. 
Even if staff work overtime, schedule overruns may still occur, since 
overworked staff are less efficient. Understanding how program risks 
may affect durations is often accomplished by using 3-point estimate 
durations (optimistic, most likely, and pessimistic) to check the 
reasonableness of the durations used. A standard way to use these 
values to improve the accuracy of the schedule durations is to average 
them. The resulting single-point or “deterministic” durations are 
usually more accurate than simply the most likely durations without 
considering other possible scenarios. 
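
As a simple illustration of the averaging described above, the Python 
sketch below converts hypothetical three-point duration estimates into 
single deterministic durations; the weighted (PERT-style) average is 
shown only as a commonly used alternative, not as something this guide 
prescribes. 

# Minimal sketch: derive deterministic durations from three-point estimates.
# Task names and durations (in working days) are hypothetical.
three_point = {
    "Design review prep": (20, 25, 40),   # (optimistic, most likely, pessimistic)
    "Integration test":   (15, 22, 45),
}

for task, (opt, likely, pess) in three_point.items():
    simple_average = (opt + likely + pess) / 3
    pert_average   = (opt + 4 * likely + pess) / 6   # common alternative weighting
    print(f"{task}: simple {simple_average:.1f} days, PERT {pert_average:.1f} days")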

After the activity durations have been estimated, scheduling software 
can be used to determine the program’s overall schedule and critical 
path, which represents the chain of dependent activities with the 
longest total duration.[Footnote 71] Along the critical path—the shaded 
boxes in figure 25—if any activity slips, the entire program will be 
delayed. Therefore, management must focus not only on problems in 
activities along the critical path (activities with zero total float) 
but also on near-critical activities (activities with low total float), 
because these activities typically have the least time to slip before 
they delay the total program. Management should also identify whether 
the problems are associated with items being tracked on the program’s 
risk management list. This helps management develop workarounds, shift 
resources from noncritical path activities to cover critical path 
problems, and implement risk management actions to address problem 
areas. In addition, the critical path in the overall schedule is 
invaluable in helping determine where management reserve and unfunded 
contingencies may exist. 

The schedule should identify how long a predecessor activity can slip 
before the delay affects successor activities, known as float. As a 
general rule, activities along the critical path have the least amount 
of float. Therefore, critical path tasks have the least schedule 
flexibility. 

Also called slack or total float or total slack, float is the time an 
activity can slip before it impacts the end date of the program. The 
time a predecessor activity can slip before the delay affects the 
successor activities is called free-float, or free-slack. It is a 
subset of the total float and is calculated, for a finish-to-start 
relationship, as the early start of the successor minus the early 
finish of the predecessor. For other relationships, this calculation is 
similar, going with the “flow” of their relationship. This concept of 
free float is important, because some resources of the affected 
activities may be available only during certain time periods, which 
could be detrimental to the completion of the subsequent activities and 
even the entire program. 

As the schedule is statused, float will change and can be positive or 
negative. Positive float indicates the amount of time the schedule can 
fluctuate before affecting the end date. Negative float indicates 
critical path effort and may require management action such as 
overtime, second or third shifts, or resequencing of work. As a result, 
float should be continuously assessed. 
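
The critical path, total float, and free float discussed above all 
fall out of a standard forward and backward pass through the network. 
The Python sketch below uses a small hypothetical network of 
finish-to-start links (durations in days) to compute early and late 
dates, total and free float, and the critical path. 

# Minimal sketch: critical path method forward/backward pass with total and
# free float. Activities, durations, and finish-to-start links are hypothetical.
durations = {"A": 5, "B": 10, "C": 3, "D": 7}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
succs = {t: [s for s, ps in preds.items() if t in ps] for t in durations}

order = ["A", "B", "C", "D"]                       # topological order

es, ef = {}, {}                                    # early start / early finish
for t in order:
    es[t] = max((ef[p] for p in preds[t]), default=0)
    ef[t] = es[t] + durations[t]

finish = max(ef.values())
ls, lf = {}, {}                                    # late start / late finish
for t in reversed(order):
    lf[t] = min((ls[s] for s in succs[t]), default=finish)
    ls[t] = lf[t] - durations[t]

total_float = {t: ls[t] - es[t] for t in order}
free_float  = {t: min((es[s] for s in succs[t]), default=finish) - ef[t] for t in order}
critical    = [t for t in order if total_float[t] == 0]

print("critical path:", critical)                  # ['A', 'B', 'D']
print("total float:", total_float)                 # C has 7 days of total float
print("free float:", free_float)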

A schedule risk analysis should be performed using a good CPM schedule 
and data about project schedule risks, together with Monte Carlo 
simulation techniques, to predict the level of confidence in meeting a 
program’s completion date, the contingency time needed for a given 
level of confidence, and the high-priority risks. This analysis focuses 
not only on critical path activities but also on other schedule paths 
that may become critical, since they can potentially affect program 
status. A schedule and cost risk assessment recognizes the 
interrelationship between schedule and cost and captures the risk 
that schedule durations and cost estimates may vary because of, among 
other things, limited data, optimistic estimating, technical 
challenges, lack of qualified personnel, unrealistic durations, poor or 
inadequate logic, overuse of constraints, several parallel paths, 
multiple merge points, material lead times, and external factors 
(weather, funding); it identifies activities that most affect the 
finish date. This helps management focus on important risk mitigation 
efforts. As a result, the baseline schedule should include a reserve of 
extra time for contingencies based on the results of a schedule risk 
analysis. This reserve should be held by the project manager and 
applied as needed to activities that take longer than scheduled because 
of the identified risks. 

To determine the full effect of risks on the schedule, a schedule risk 
analysis should be conducted to determine the level of uncertainty. A 
schedule risk analysis can help answer three questions that are 
difficult for deterministic critical path method scheduling to address: 

1. How likely is it that the program will finish on or before the 
scheduled completion or baseline date? 

2. How much schedule reserve time is needed to provide a date that 
satisfies the stakeholders’ desires for certainty? 

3. Which activities or risks are the main drivers of schedule risk and 
the need for schedule reserve? 

This last type of information helps management mitigate schedule risk 
to improve the chances of finishing on time. In addition, an 11-point 
assessment should be conducted (more detail is in appendix X). 
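
A minimal Monte Carlo sketch of this kind of analysis is shown below in 
Python. It reuses the small hypothetical network from the earlier 
critical path sketch, draws each duration from a triangular 
distribution built from made-up three-point estimates, and reports the 
chance of meeting the deterministic finish date and the finish date at 
the 80th percentile; the percentile is illustrative only, since the 
appropriate confidence level depends on the organization's tolerance 
for overrun risk. 

# Minimal sketch: Monte Carlo schedule risk analysis on the hypothetical network
# used in the earlier critical path sketch. Three-point durations are made up.
import random

preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]
three_point = {"A": (4, 5, 8), "B": (8, 10, 18), "C": (2, 3, 6), "D": (6, 7, 12)}

def finish_date(durations):
    ef = {}
    for t in order:
        es = max((ef[p] for p in preds[t]), default=0)
        ef[t] = es + durations[t]
    return max(ef.values())

deterministic = finish_date({t: ml for t, (lo, ml, hi) in three_point.items()})

random.seed(1)
results = []
for _ in range(5_000):
    sampled = {t: random.triangular(lo, hi, ml) for t, (lo, ml, hi) in three_point.items()}
    results.append(finish_date(sampled))
results.sort()

prob_on_time = sum(r <= deterministic for r in results) / len(results)
p80 = results[int(0.8 * len(results)) - 1]
print(f"deterministic finish: day {deterministic}")
print(f"chance of finishing by then: {prob_on_time:.0%}")
print(f"80th percentile finish: day {p80:.1f}; schedule reserve ~ {p80 - deterministic:.1f} days")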

Risk inherent in a schedule makes it prudent to add in schedule reserve 
for contingencies—a buffer for the schedule baseline. Typically, 
schedule reserve is calculated by conducting a schedule risk analysis, 
choosing a percentile that represents the organization’s tolerance for 
overrun risk, and selecting the date that provides that degree of 
certainty. As a general rule, the reserve should be held by the project 
manager and applied as needed to activities that take longer than 
scheduled because of the identified risks. Reserves of time should not 
be apportioned in advance to any specific activity, since the risks 
that will actually occur and the magnitude of their impact are not 
known in advance. 

Schedule reserve is a management tool for dealing with risk and should 
be identified separately in the schedule baseline. It is usually 
defined as an activity at the end of the schedule that has no specific 
scope assigned, since it is not known which risks may materialize. Best 
practices call for schedule reserve to be allocated based on the 
results of the schedule risk analysis, so that high-risk activities 
have first priority for schedule reserve. 

Once this analysis is done, the schedule should use logic and durations 
in order to reflect realistic start and completion dates for program 
activities. Maintaining the integrity of the schedule logic is not only 
necessary to reflect true status, but is also required before 
conducting follow-on schedule risk analyses. The schedule should avoid 
logic overrides and artificial constraint dates that are chosen to 
create a certain result on paper. 

To ensure that the schedule is properly updated, individuals trained in 
critical path method scheduling should be responsible for statusing the 
schedule. The schedule should be continually monitored to determine 
when forecasted completion dates differ from planned dates, which can 
be used to determine whether schedule variances will affect downstream 
work. In this analysis, the schedule should be monitored and progress 
reported regularly so that the current status of the activities, total 
float, and the resulting critical path can be determined. Variances 
between the baseline and current schedule should be examined and 
assessed for impact and significance. Changes to the program scope 
should also be incorporated with the appropriate logic. 

From the analysis, management can make decisions about how best to 
handle poor schedule performance. For example, management could decide 
to move resources to critical path activities to improve status or 
allocate schedule reserve to immediately address a risk that is turning 
into an issue. Thus, schedule analysis is necessary for monitoring the 
adequacy of schedule reserve and determining whether the program can 
finish on time. It is also important for identifying problems early, 
when there is still time to act. 

Estimate Resources and Authorize Budgets: 

Budgets should be authorized as part of the EVM process, and they must 
authorize the resources needed to do the work. They should not be 
limited to labor and material costs. All required resources should be 
accounted for, such as the costs for special laboratories, facilities, 
equipment, and tools. It is imperative that staff with the right skills 
have access to the necessary equipment, facilities, and laboratories. 
In step 3, we discussed how the schedule is resource loaded. This feeds 
directly into the EVM process and should tie back to the cost estimate 
methodology so it can be considered reasonable. 

Management reserve should be included in the budget to cover 
uncertainties such as unanticipated effort resulting from accidents, 
errors, technical redirections, or contractor-initiated studies. When a 
portion of the management reserve budget is allocated to one of these 
issues, it becomes part of the performance measurement baseline that is 
used to measure and control program cost and schedule performance. 
Management reserve provides management with flexibility to quickly 
allocate budget to mitigate problems and control programs. However, it 
cannot be used to offset or minimize existing cost variances; it can be 
applied only to in-scope work. 

Programs with greater risk, such as development programs, usually 
require higher amounts of management reserve than programs with less 
risk, such as programs in production. The two associated issues are 
how much management reserve should be provided to the program and how 
it will be controlled. Regarding the first issue, research has found that 
programs typically set their contract value so they can set aside 5 to 
10 percent as management reserve. This amount may not be sufficient for 
some programs and may be more than others need. The best way to 
calibrate the amount of management reserve needed is to conduct a risk 
analysis for schedule (to determine the schedule reserve needed) and 
for cost (to determine the management reserve for cost). 

The second issue is very important because if budgets are not spread 
according to the amount of anticipated risk, then control accounts that 
are overbudgeted will tend to consume all the budget rather than return 
it to management reserve—“budget allocated equals budget spent.” If 
reserve is not set aside for risks further downstream, it tends to get 
consumed by early development activities, leaving inadequate reserve 
for later complex activities like integration and testing. 

Experts agree that some form of integration of the program risk 
management system with the EVM system should exist. As a best practice, 
therefore, management reserve should be linked to a program’s 
risk analysis so that WBS cost elements with the most risk are 
identified for risk mitigation (figure 21). Prioritizing and 
quantifying total management reserve this way helps ensure that 
adequate budget is available to mitigate the biggest risks that 
typically occur later in a program. Typically held at a high level, 
the management reserve budget may be controlled directly by the program 
manager or distributed among functional directors or team leaders. In 
any case, it must be identified and accounted for at all times. 

In addition, the risks from the cost estimate uncertainty analysis 
should be compared against the management reserve allocation. This 
practice further ties the cost estimating risk analysis with EVM (as 
noted in figure 21). It can also help avoid using management reserve 
whenever a part of the program encounters a problem, ensuring that as 
more complicated tasks occur later in the program there will still be 
management reserve left if problems arise. When uncertainty analysis is 
used to specify the probability that the work will be performed 
within its budget, the likelihood of meeting the budget can be 
increased by establishing a sufficient management reserve budget. Using 
this approach, the probability of achieving the budget as a whole can 
be understood up front. Moreover, using decision analysis tools, 
managers can use the overall probability of success as the basis for 
allocating budgets for each WBS element, increasing their ability to 
manage the entire program to successful completion. This method also 
allows allocating budget in a way that matches each control account’s 
expected cost distribution, which is imperative for minimizing cost 
overruns. 
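
The Python sketch below illustrates, with entirely hypothetical WBS 
elements and dollar figures, one way this can work in practice: 
simulate total cost from element-level uncertainty distributions, pick 
the funding level that corresponds to the desired confidence, and hold 
the difference between that level and the sum of the point estimates as 
management reserve. 

# Minimal sketch: sizing management reserve from a cost uncertainty analysis.
# WBS elements, point estimates, and spreads are hypothetical.
import random

elements = {            # point estimate and (low, high) multipliers on that estimate
    "Air vehicle":   (40_000_000, (0.95, 1.40)),
    "Software":      (25_000_000, (0.90, 1.60)),
    "Integration":   (15_000_000, (0.95, 1.50)),
}

point_total = sum(pe for pe, _ in elements.values())

random.seed(1)
totals = []
for _ in range(10_000):
    total = 0.0
    for pe, (lo, hi) in elements.values():
        total += pe * random.triangular(lo, hi, 1.0)   # mode at the point estimate
    totals.append(total)
totals.sort()

confidence = 0.70                                      # illustrative confidence level
budget_at_confidence = totals[int(confidence * len(totals)) - 1]
management_reserve = budget_at_confidence - point_total

print(f"sum of point estimates: ${point_total:,.0f}")
print(f"{confidence:.0%} confidence budget: ${budget_at_confidence:,.0f}")
print(f"management reserve to hold: ${management_reserve:,.0f}")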

Determine an Objective Measure for Earned Value: 

Performance measurement is key to earned value because performance 
represents the value of work accomplished. Before any work is started, 
the control account managers or teams should determine which 
performance measures will be used to objectively determine when work is 
completed. These measures are used to report progress in achieving 
milestones and should be integrated with technical performance 
measures. Examples of objective measures are requirements traced, 
reviews successfully completed, software units coded satisfactorily, 
and number of units fully integrated. Table 33 describes several 
acceptable, frequently used methods for determining earned value 
performance. 

Table 33: Typical Methods for Measuring Earned Value Performance: 

Method: 0/100; 
Description: No performance is taken until a task is finished; 
Types of tasks using this method: Take less than 1 month to complete; 
Advantages and disadvantages: Objective; commonly used for quick 
turnaround as in procuring material or brief meetings or trips; no 
partial credit is given. 

Method: 50/50; 25/75, etc. 
Description: Half the earned value is taken when the task starts, the 
other half when it is finished; other percentage combinations can be 
used; 
Types of tasks using this method: Usually completed within 2 months; 
Advantages and disadvantages: Objective; provides for some credit when 
the task starts. 

Method: Apportioned effort; 
Description: Effort that by itself is not readily divisible into short-
span work packages but is related in direct proportion to measured 
effort;
Types of tasks using this method: Historically depend on another task 
that can be measured discretely; 
Advantages and disadvantages: Provides more objective status 
information than the level-of-effort method.

Method: Level of effort; 
Description: Performance always equals planned cost;
Types of tasks using this method: Related to the passage of time with 
no physical products or defined deliverables, such as program 
management;
Advantages and disadvantages: Because performance always equals the 
scheduled amount, no schedule variances occur; cost variances may occur 
if actual costs are higher than planned. 

Method: Milestone; 
Description: Objective monthly milestones are established and the 
assigned budget is divided by the value assigned each milestone; earned 
value is taken as milestones are completed; 
Types of tasks using this method: Work packages exceed 2 months; 
Advantages and disadvantages: Best for accurately and objectively 
measuring performance but not always practical or possible. 

Method: Percent complete; 
Description: Performance is equal to the percent a task is complete. 
Percent complete should be based on underlying, quantifiable measures 
as much as possible (e.g., number of drawings completed) and can be 
measured by the statusing of the resource-loaded schedule;
Types of tasks using this method: Do not have obvious interim 
milestones;
Advantages and disadvantages: If truly based on underlying quantifiable 
measures, this method is actually the most objective. If that is not 
possible, it can be too subjective and a more objective method should 
be utilized. 
 
Method: Weighted milestone;
Description: Performance is taken as defined milestones are 
accomplished; objective milestones (weighted by importance) are 
established monthly and the budget is divided by milestone weights; 
as milestones are completed, value is earned; 
Types of tasks using this method: Tasks that can be planned using 
interim milestones—and the like; 
Advantages and disadvantages: Best method for work packages that exceed 
2 months; the most accurate and objective way to measure earned value. 

Source: DOD, © 2003 SCEA “Earned Value Management Systems Tracking Cost 
and Schedule Performance on Projects.” 

[End of table] 

No one method for measuring earned value status is perfect for every 
program. Several WBS elements may use different methods. What is 
important is that the method be the most objective approach for 
measuring true progress. Therefore, level of effort should be used 
sparingly: programs that rely heavily on level of effort for measuring 
earned value are not providing objective data, and the EVM 
system will not perform as expected. As a general rule, if more than 15 
percent of a program’s budget is classified as level of effort, then 
the amount should be scrutinized. When level of effort is used 
excessively for measuring status, the program is not really 
implementing EVM as intended and will fall short of the benefits EVM 
can offer. While the 15 percent benchmark is widely accepted as a 
trigger point for analysis, no given percentage should be interpreted 
as a hard threshold, because the nature of work on some programs and 
contracts does not always lend itself to more objective measurement. 
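
The 15 percent benchmark implies a simple check that a program office 
could automate. The sketch below, written in Python with hypothetical 
control account budgets, totals the budget classified as level of 
effort and flags it for scrutiny when it exceeds the trigger point. 

# Hypothetical control account budgets and their earned value methods.
control_accounts = [
    {"id": "1.1", "budget": 400_000, "ev_method": "milestone"},
    {"id": "1.2", "budget": 250_000, "ev_method": "percent complete"},
    {"id": "1.3", "budget": 150_000, "ev_method": "level of effort"},
    {"id": "1.4", "budget": 200_000, "ev_method": "level of effort"},
]

total_budget = sum(ca["budget"] for ca in control_accounts)
loe_budget = sum(ca["budget"] for ca in control_accounts
                 if ca["ev_method"] == "level of effort")
loe_share = loe_budget / total_budget

# The 15 percent benchmark is a trigger for analysis, not a hard threshold.
if loe_share > 0.15:
    print(f"Level of effort is {loe_share:.0%} of budget; scrutinize its use.")

[End of example] 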

The other methods provide a more solid means for objectively reporting 
work status. As work is performed, it is earned using the same units as 
it was planned with, whether dollars, labor hours, or other 
quantifiable units. Therefore, the budget value of the completed work 
is credited as earned value, which is then compared to the actual cost 
and planned value to determine cost and schedule variances. Figure 27 
shows how this works. 

Figure 27: Earned Value, Using the Percent Complete Method, Compared to 
Planned Costs: 

[Refer to PDF for image: illustration of a project plan] 

Source: GAO and Quentin W. Fleming at [hyperlink, 
http://www.quentinf.com]. 

[End of figure] 

Figure 27 displays how planned effort is compared with work 
accomplished. It also shows how earned value represents the budgeted 
value of the work completed and directly relates to the percentage 
complete of each activity. 

When earned value is compared to the planned value for the same work 
and to its actual cost, management has access to program status. This 
big picture provides management with a better view of program risks and 
better information for understanding what resources are needed to 
complete the program. 
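
As a simple illustration of that comparison, the following Python 
sketch credits earned value with the percent complete method and 
derives the standard cost and schedule variances. All of the dollar 
figures are hypothetical. 

# Hypothetical status for one activity, using the percent complete method.
bac = 500_000          # budget at completion for the activity
percent_complete = 0.40
bcws = 250_000         # planned value: budgeted cost for work scheduled
acwp = 230_000         # actual cost of work performed to date

bcwp = bac * percent_complete   # earned value: budgeted cost for work performed
cv = bcwp - acwp                # cost variance; negative means overrunning cost
sv = bcwp - bcws                # schedule variance; negative means behind schedule

print(f"BCWP = ${bcwp:,.0f}")
print(f"Cost variance = ${cv:,.0f}, schedule variance = ${sv:,.0f}")

[End of example] 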

Develop the Performance Measurement Baseline: 

The performance measurement baseline represents the cumulative value of 
the planned work over time. It takes into account that program 
activities occur in a sequenced order, based on finite resources, with 
budgets representing those resources spread over time. The performance 
measurement baseline is essentially the resource consumption plan for 
the program and forms the time-phased baseline against which 
performance is measured. Deviations from the baseline identify areas 
where management should focus attention. 

Figure 28 shows how it integrates cost, schedule, and technical effort 
into a single baseline. 

Figure 28: The Genesis of the Performance Measurement Baseline: 

[Refer to PDF for image: illustration] 

1. Define the work: 
* Identify statement of work; 
* Extend WBS to control account/work package. 

2. Schedule the work: 
* Arrange work packages in order; 
* Sequence over time. 

3. Allocate budgets: 
* Budget work packages; 
* Classify the work and select an earned value technique; 
* Aggregate cumulative BCWS. 

Source: © 2005 MCR, LLC, “Using Earned Value Data.” 

[End of figure] 

The performance measurement baseline includes all budgets for resources 
associated with completing the program, including direct and indirect 
labor costs, material costs, and other direct costs associated with the 
authorized work. It represents the formal baseline plan for 
accomplishing all work in a certain time and at a specific cost. It 
includes any undistributed budget, used as a short-term holding account 
for new work until it has been planned in detail and distributed to a 
particular control account. To help ensure timely performance 
measurement, it is important that undistributed budget be distributed 
to specific control accounts as soon as practicable. Some sources we 
reviewed stated that undistributed budget should be distributed within 
60 to 90 days of acquiring the new funds or authorization. 

The performance measurement baseline does not equal the program 
contract value, because it does not include management reserve or any 
fee. The budget for management reserve is accounted for outside the 
performance measurement baseline, since it cannot be associated with 
any particular effort until it is distributed to a particular control 
account when a risk occurs and leads to a recovery action. Together, 
the performance measurement baseline and the management reserve 
represent the contract budget base for the program, which in turn 
represents the total cost of the work. However, fee must be added to 
the contract budget base to reflect the total contract price. 
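
These relationships can be summarized in a short sketch. The Python 
example below uses hypothetical budget figures: the time-phased budgets 
for distributed work plus any undistributed budget make up the 
performance measurement baseline, adding management reserve gives the 
contract budget base, and adding fee gives the total contract price. 

# Hypothetical time-phased budgets (BCWS per month) for distributed work.
distributed_budget_by_month = [100_000, 250_000, 400_000, 350_000, 150_000]
undistributed_budget = 50_000   # new work not yet planned into control accounts
management_reserve = 120_000    # held outside the PMB for realized risks
fee = 90_000

pmb = sum(distributed_budget_by_month) + undistributed_budget
contract_budget_base = pmb + management_reserve
contract_price = contract_budget_base + fee

# Cumulative BCWS forms the time-phased baseline (the S curve in figure 29).
cumulative_bcws = []
running = 0
for monthly in distributed_budget_by_month:
    running += monthly
    cumulative_bcws.append(running)

print(f"Cumulative BCWS by month: {cumulative_bcws}")
print(f"PMB = ${pmb:,.0f}, CBB = ${contract_budget_base:,.0f}, "
      f"contract price = ${contract_price:,.0f}")

[End of example] 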

Figure 29 depicts a time-phased cumulative performance measurement 
baseline that typically follows the shape of an S curve, 
portraying a gradual build-up of effort in the beginning, followed by 
stabilization in the middle, and finally a gradual reduction of effort 
near program completion. The management reserve and performance 
measurement baseline values together make up the contract budget base. 

Figure 29: The Time-Phased Cumulative Performance Measurement Baseline: 

[Refer to PDF for image: s curve graph] 
 
Funds plotted vs. time. 

Source: © 2003 SCEA, “Earned Value Management Systems.” 

Note: 
BCWS = budgeted cost for work scheduled; 
CBB = contract budget base; 
PMB = performance measurement baseline. 

[End of figure] 

Common problems in developing and managing the performance measurement 
baseline are, first, that it may be front-loaded—that is, a 
disproportionate share of budget has been allocated to early tasks. In 
this case, budget is typically insufficient to cover far-term work. 
Front-loading tends to hide problems until it is too late to correct 
them. The program can severely overrun in later phases, causing 
everyone involved to lose credibility and putting the program at risk 
of being canceled. 

Second, the performance measurement baseline can have a rubber 
baseline—that is, a continual shift of the baseline budget to match 
actual expenditures in order to mask cost variances. Both problems 
result in deceptive baselines by covering up variances early in the 
program, delaying insight until they are difficult if not impossible to 
mitigate. Third, the performance measurement baseline can become 
outdated if changes are not incorporated quickly. As a result, 
variances do not reflect reality, and this hampers management in 
realizing the benefits of EVM. 

Execute the Work Plan and Record All Costs: 

For this step, program personnel execute their tasks according to the 
performance measurement baseline and the underlying detailed work 
plans. Actual costs are recorded by the accounting system and are 
reconciled with the value of the work performed so that effective 
performance measurement can occur. A program cost-charging structure 
must be set up before the work begins, to ensure that actual costs 
can be compared with the associated budgets for each active control 
account. In particular, accounting for material costs should be 
consistent with how the budget was established, to keep variances due 
to accounting accrual issues to a minimum. 

Analyze EVM Performance Data and Record Variances from the Performance 
Measurement Baseline Plan: 

Because programs all carry some degree of risk and uncertainty, cost 
and schedule variances are normal. Variances provide management with 
essential information on which to assess program performance and 
estimate cost and schedule outcomes. EVM guidelines provide for 
examining cost and schedule variances at the control account level at 
least monthly and for focusing management attention on variances with 
the most risk to the program. This means that for EVM data to be of any 
use, they must be regularly reviewed. In addition, management must 
identify solutions for problems early if there is any hope of averting 
degradation of program performance. 
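
A monthly review of this kind can be sketched as follows. The Python 
example uses hypothetical control account data and simply ranks 
accounts by the size of their unfavorable cost variances so that 
management attention goes to the areas carrying the most risk. 

# Hypothetical monthly EVM data by control account (dollars).
accounts = [
    {"id": "1.2.1", "bcws": 80_000,  "bcwp": 60_000,  "acwp": 75_000},
    {"id": "1.2.2", "bcws": 50_000,  "bcwp": 52_000,  "acwp": 49_000},
    {"id": "1.3.1", "bcws": 120_000, "bcwp": 100_000, "acwp": 140_000},
]

for ca in accounts:
    ca["cv"] = ca["bcwp"] - ca["acwp"]   # cost variance
    ca["sv"] = ca["bcwp"] - ca["bcws"]   # schedule variance

# Focus management attention on the largest unfavorable cost variances.
worst_cost = sorted(accounts, key=lambda ca: ca["cv"])[:2]
for ca in worst_cost:
    print(f"Control account {ca['id']}: "
          f"CV = ${ca['cv']:,.0f}, SV = ${ca['sv']:,.0f}")

[End of example] 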

Forecast Estimates at Completion Using EVM: 

As in step 8, managers should rely on EVM data to generate EACs at 
least monthly. EACs are derived from the cost of work completed along 
with an estimate of what it will cost to complete all unaccomplished 
work. A best practice is to continually reassess the EAC, obviating the 
need for periodic bottom-up estimating. It should be noted, however, 
that DOD requires an annual comprehensive EAC. 
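
The sketch below, in Python with hypothetical cumulative data, shows 
one commonly used family of index-based EAC formulas, in which the 
cost of the remaining work is adjusted by observed cost efficiency or 
by combined cost and schedule efficiency. The choice of index is an 
analyst's assumption, not a prescription of this guide. 

# Hypothetical cumulative EVM data for the program (dollars).
bac = 2_000_000     # budget at completion
bcwp = 800_000      # earned value to date
acwp = 900_000      # actual cost to date
bcws = 1_000_000    # planned value to date

cpi = bcwp / acwp   # cost performance index
spi = bcwp / bcws   # schedule performance index

# EAC = actual cost to date + estimated cost of remaining work.
eac_cpi = acwp + (bac - bcwp) / cpi              # remaining work at current cost efficiency
eac_cpi_spi = acwp + (bac - bcwp) / (cpi * spi)  # remaining work at combined efficiency

print(f"CPI = {cpi:.2f}, SPI = {spi:.2f}")
print(f"EAC (CPI basis) = ${eac_cpi:,.0f}")
print(f"EAC (CPI x SPI basis) = ${eac_cpi_spi:,.0f}")

[End of example] 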

Conduct an Integrated Cost-Schedule Risk Analysis: 

An integrated schedule can be used, in combination with risk analysis 
data (often including traditional 3-point estimates of duration) and 
Monte Carlo simulation software, to estimate schedule risk and the 
EAC. Using the results of the schedule risk analysis, the cost elements 
that relate to time uncertainty (labor, management, rented facilities, 
escalation) can be linked directly to the uncertainty in the schedule. 

In this approach, the schedule risk analysis provides—in addition to an 
estimate of when the program may finish and the key risk 
drivers—uncertainty in the schedule activities or summary tasks that 
relate to time-dependent cost elements. These results, which are 
probability distributions produced by the Monte Carlo simulation of the 
schedule, can be imported to a spreadsheet where cost models and 
estimates are often developed and stored. 

The cost risk analysis uses these schedule risks to link the 
uncertainty in cost to the uncertainty in schedule. This approach 
models the way labor cost will be determined and converts time to a 
cost estimate by using headcount and labor and overhead rates with any 
material costs added to the final result. (Appendix X has more details 
on performing a schedule risk analysis.) 
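
A highly simplified sketch of this linkage appears below. The Python 
example assumes a purely serial chain of three hypothetical activities 
with three-point duration estimates, a single time-dependent burn 
rate, and a fixed material cost; a real analysis would use the full 
networked schedule and commercial simulation tools. 

import random

# Hypothetical serial activities with three-point duration estimates (months).
activities = [
    {"name": "Design", "low": 5, "likely": 6,  "high": 9},
    {"name": "Build",  "low": 8, "likely": 10, "high": 15},
    {"name": "Test",   "low": 3, "likely": 4,  "high": 7},
]
burn_rate = 400_000       # time-dependent cost per month (labor, management, facilities)
material_cost = 3_000_000

totals = []
for _ in range(10_000):   # Monte Carlo iterations
    duration = sum(random.triangular(a["low"], a["high"], a["likely"])
                   for a in activities)
    totals.append(duration * burn_rate + material_cost)

totals.sort()
print(f"50th percentile cost: ${totals[len(totals) // 2]:,.0f}")
print(f"80th percentile cost: ${totals[int(len(totals) * 0.8)]:,.0f}")

[End of example] 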

Compare EACs from EVM with EAC from Risk Analysis: 

The integrated cost-schedule risk analysis produces a cumulative 
probability distribution for the program’s cost. This estimate can be 
compared to the estimate derived from EVM extrapolation techniques. The 
reason to compare the two is that they use quite different 
approaches. EVM uses baseline and actual data from the program. The 
variances are used to estimate future performance. Risk analysis uses 
data that represent the probability that risks will occur and, usually, 
3-point estimates of the risks’ impact on the schedule and cost. These 
data are projections. Although historical data can be used, much of the 
risk analysis data is derived from interviews and workshops and 
represents expert judgment. 

Because the two methods are so different in their approach, models, 
software, and input data, it makes sense to compare their forecasts of 
the EAC under the current plan. If their results are in general 
agreement, their conclusions are probably sound. If not, one or the 
other method (or both) should be reviewed for changes and revisions. 
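
One simple way to make the comparison is to locate the EVM-derived EAC 
within the cumulative cost distribution produced by the risk analysis. 
The Python sketch below uses a handful of hypothetical simulated 
costs; in practice the distribution would come from thousands of 
iterations. 

# Hypothetical outputs: an EVM-derived EAC and simulated total costs from
# the integrated cost-schedule risk analysis.
eac_from_evm = 2_250_000
simulated_costs = sorted([2_050_000, 2_120_000, 2_180_000, 2_200_000,
                          2_260_000, 2_290_000, 2_340_000, 2_410_000,
                          2_500_000, 2_600_000])

# Percentile of the risk-analysis distribution at or below the EVM EAC.
at_or_below = sum(1 for c in simulated_costs if c <= eac_from_evm)
percentile = at_or_below / len(simulated_costs)

print(f"The EVM EAC falls at roughly the {percentile:.0%} point of the "
      f"risk-analysis distribution.")
# A large disagreement between the two is a signal to review both methods.

[End of example] 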

Take Management Action to Mitigate Risk: 

Management should integrate the results of information from steps 8 
through 11 with the program’s risk management plan to address and 
mitigate emerging and existing risks. Management should focus 
on corrective actions and identify ways to manage cost, schedule, and 
technical scope to meet program objectives. It should also keep track 
of all risks and analyze EVM data trends to identify future problems. 
(Chapter 19 discusses this step further.) 

Update the PMB as Changes Occur: 

Because changes are normal, the ANSI guidelines allow for incorporating 
them; the one exception is retroactive changes to performance data, 
other than corrections of errors. However, it is imperative that 
changes be incorporated into the EVM system as soon as possible to 
maintain the validity of the performance measurement baseline. When 
they occur, both budgets and schedules are reviewed and updated so that 
the EVM data stay current. Furthermore, the EVM system should outline 
procedures for maintaining a log of all changes and for incorporating 
them into the performance measurement baseline, and the log should be 
maintained so that changes can be tracked. 

Integrated Baseline Reviews: 

Just as EVM supports risk management by identifying problems when there 
is still time to act, so an IBR helps program managers fully understand 
the detailed plan to accomplish program objectives and identifies risks 
so they can be included in the risk register and closely monitored. The 
purposes of the IBR are to verify as early as possible whether the 
performance measurement baseline is realistic and to ensure that the 
contractor and government (or implementing agency) mutually understand 
program scope, schedule, and risks. To do this, the IBR assesses the 
following risks: 
 
* Is the technical scope of the work fully included and consistent with 
authorizing documents? 

* Are key schedule milestones identified and does the schedule reflect 
a logical flow? 

* Are resources involving cost—budgets, facilities, skilled 
staff—adequate and available for performing assigned tasks?
 
* Are tasks well planned and can they be measured objectively relative 
to technical progress? 
 
* Are management processes in place and in use? 

OMB requires the government to conduct an IBR for all programs in which 
EVM is required. While agency procedures dictate when the IBR should be 
conducted, the FAR allows contracting officers the option of conducting 
an IBR before a contract is awarded—this is known as a preaward IBR. 
Preaward IBRs help ensure that cost, schedule, and performance goals 
have been thoroughly reviewed before the contractor is selected. 
[Footnote 72] 

Although not mandatory, a preaward IBR verifies that a realistic and 
fully inclusive technical and cost baseline has been established, 
which facilitates proposal analysis and negotiation. The benefits from 
doing an IBR (and, when appropriate, a preaward IBR) are that it: 

* ensures that both the government and offeror understand the statement 
of work as stated in the contract or request for proposals; 

* allows the government to determine if the offeror’s EVM system 
complies with agency implementation of the ANSI guidelines;

* ensures that the offeror’s schedule process adequately maintains, 
tracks, and reports significant schedule conditions to the government; 
 
* assesses the offeror’s risk management plans for the program; 

* assesses the offeror’s business system’s adequacy to maintain program 
control and report program performance objectively; and; 
 
* evaluates the adequacy of available and planned resources to support 
the program. 

Preaward IBRs support confidence in proposal estimates. However, 
caution must be taken to safeguard competition-sensitive or source 
selection information if multiple offerors are engaged in the 
competition. To lessen the risk of inadvertent disclosure of sensitive 
information, additional steps such as firewalls may be necessary. 

Although the pre- and postaward IBRs share the same overall goal, they 
are noticeably different in execution. On the contractor side, a 
preaward IBR requirement can involve the contract pricers, marketers, 
and EVM specialists working together to develop the proposal. On the 
government side, EVM specialists and cost analysts become members of 
the technical evaluation team. However, unlike a traditional IBR, 
the government’s EVM evaluation is limited to the proposal evaluation. 
Consequently, the government EVM or cost analysts cannot conduct 
control account manager interviews; instead, they submit technical 
evaluation questions to the contractors’ equivalent personnel about any 
issues found in a preaward IBR proposal.

In the preaward IBR, the government reviews the adequacy of the 
proposed performance measurement baseline and how it relates to the 
integrated master schedule and Integrated Master Plan (IMP) milestones 
and deliverables. In addition, the government reviews the amount of 
management reserve in relation to the risk identified in the proposal. 
A preaward IBR can also be used to determine potential critical path 
issues by comparing proposed staff levels and costs to these events and 
associated risks. 

The benefits of conducting a preaward IBR are numerous. First, it 
provides a new tool to the acquisition community that can give insight 
into contractor performance and management disciplines before contract 
award. Second, it requires the government and contractor to work 
together to determine the reasonableness of a contractor’s proposed 
baseline. Third, it can allow a contractor to showcase how it plans to 
use EVM to manage the proposed solution. Finally, a preaward IBR forces 
competing contractors to establish high-level baselines that the 
government can assess for risks before contract award. This analysis 
can help in choosing viable contractors and reducing baseline setup 
time after contract award. 

While there is a cost to both the government and contractor to perform 
a preaward IBR, the view on risks and proposed performance management 
is worth the extra effort. Subsequently, if a preaward IBR is 
performed, a less-detailed IBR will likely occur after award, resulting 
in a quicker postaward review. (The details of conducting IBRs are 
discussed in chapter 19.) 

Award Fees: 

Contracts with provisions for award fees allow the government to adjust 
the fee based on how the contractor is performing. The purpose of award 
fee contracting is to provide motivation to the contractor for 
excellence in such areas as quality, timeliness, technical ingenuity, 
and cost effective management. Before issuing the solicitation, the 
government establishes the award fee criteria. It is important that 
the criteria be selected to properly motivate the contractor to perform 
well and encourage improved management processes during the award fee 
period. 

It is bad management practice to use EVM measures, such as variances or 
indexes, as award fee criteria, because they put emphasis on the 
contractor’s meeting a predetermined number instead of achieving 
program outcomes. Award fees tied to reported EVM measures may 
encourage the contractor to behave in undesirable ways, such as 
overstating performance or changing the baseline budget to “make the 
number” and secure potential profit. These actions undermine the 
benefits to be gained from the EVM system and can result in a loss of 
program control. For example, contractors may front-load the 
performance measurement baseline or categorize discrete work as level 
of effort, with the result that variances are hidden until the last 
possible moment. Moreover, tying award fee criteria to specific dates 
for completing contract management milestones, such as the IBR, is also 
bad practice, because it may encourage the contractor to conduct the 
review before it is ready. 

Best practices indicate that award fee criteria should motivate the 
contractor to effectively manage its contract using EVM to deliver the 
best product possible. For example, criteria that reward the contractor 
for integrating EVM with program management, establishing realistic 
budgets and schedules and estimates of costs at completion, providing 
meaningful variance analysis, performing adequate cost control, and 
providing accurate and timely data represent best practices. In 
addition, experts agree that award fee periods should be tied to 
specific contract events like preliminary design review rather than 
monthly cycles. (More detail on award fee best practices criteria for 
EVM is in appendix XIII.) 

Progress And Performance-Based Payments Under Fixed-Price Contracts: 

The principles of EVM are best management practices that are applicable 
in the administration of certain fixed-price type contracts that 
typically involve non-commercial items. These contracts use performance-
based payments or progress payments based on a percentage or stage of 
completion. Applying relevant EVM principles is particularly useful in 
setting up the progress or performance-based payment structure at 
contract inception and in administering progress payments during 
contract performance. The informal use of EVM principles here does not 
involve applying the comprehensive “ANSI-compliant EVM” that is often 
used in large cost-reimbursement type contracts where the government 
faces more risks. 

The Federal Acquisition Regulation authorizes progress payments and 
performance-based payments in certain circumstances for fixed-price 
type contracts for non-commercial items.[Footnote 73] Progress payments 
are based on (1) costs incurred by the contractor as work progresses 
under the contract or (2) a percentage or stage of completion. Progress 
payments based on a percentage or stage of completion may be used as a 
payment method for work accomplished that meets the quality standards 
established under the contract. Performance-based payments are contract 
financing payments made on the basis of performance measured by 
objective and quantifiable methods, accomplishment of defined events, 
or other quantifiable measures or results. 

The FAR addresses in detail the use of progress payments based on costs 
incurred. However, it is the category of progress payments based on 
percentage or stage of completion that provides the opportunity 
to apply EVM principles. Specifically, a schedule of values is 
established between the contractor and the government that divides the 
contract value into quantifiable scope elements. In some cases the 
contract requires that this schedule of values be generated as an 
output of the resource-loaded, critical path method (CPM) schedule, 
thereby reinforcing the EVM concept of cost and schedule integration. 
The percent complete method, based on either quantifiable units of 
measure or statused schedule activities, can be used to assess 
partial progress before each scope element is complete. For this 
reason, progress payments are usually preferred by contractors over 
milestone-based payments (discussed next), since they allow for a more 
favorable cash flow position throughout the project's execution. 
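
A progress payment based on stage of completion might be computed as 
in the sketch below. The Python example uses a hypothetical schedule 
of values, hypothetical percent complete figures, and a 10 percent 
retainage withhold. 

# Hypothetical schedule of values: the contract value divided into
# quantifiable scope elements, each statused with a percent complete.
schedule_of_values = [
    {"element": "Foundations", "value": 300_000, "percent_complete": 1.00},
    {"element": "Structure",   "value": 500_000, "percent_complete": 0.60},
    {"element": "Outfitting",  "value": 200_000, "percent_complete": 0.00},
]
retainage_rate = 0.10
paid_to_date = 350_000

earned = sum(e["value"] * e["percent_complete"] for e in schedule_of_values)
payable_to_date = earned * (1 - retainage_rate)
current_payment = payable_to_date - paid_to_date

print(f"Earned to date: ${earned:,.0f}")
print(f"Payable after retainage: ${payable_to_date:,.0f}")
print(f"Current progress payment: ${current_payment:,.0f}")

[End of example] 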

The performance-based payments arrangement also provides opportunities 
to apply EVM principles. Performance-based payments differ from the 
more traditional progress payments in that they are based on the 0/100 
or milestone methods as shown in figure 30. Establishing the 
performance-based payments structure requires the government customer 
and contractor to agree on a set of milestones that will become the 
basis for the performance-based payments. Choosing the milestones 
usually results in selecting critical path activities that lead up to 
successfully achieving a significant event. This effort requires 
detailed planning to fully identify the work that needs to be 
accomplished and the relative dollar value of the milestones. After the 
parties have agreed on the performance plan, actual performance is 
monitored and payments are made according to the actual achievement of 
the established milestones. When properly planned and implemented, the 
performance-based payments approach can result in lower oversight costs 
for the government, compared to a progress payment arrangement, and 
enhanced technical and schedule focus for the contractor. However, as 
mentioned above, such an arrangement may not be preferred by the 
contractor because of the impact on cash flow. (See figure 30.) 

Figure 30: A Performance-Based Payments Structured Contract: 

[Refer to PDF for image: illustration] 

Source: GAO and Quentin W. Fleming at [hyperlink, 
http://www.quentinf.com]. 

Note: M/S = Milestone. 

[End of figure] 

In the example in figure 30, eight milestones will be used to determine 
payments. At this point in time, two milestones have been met at a cost 
of $110,000. However, under the performance-based payments arrangement, 
the government would pay the contractor only $99,000, since it will 
hold back the final 10 percent until the work is complete (this method 
of withholding “retainage” is also common practice in progress-based 
payment systems). 
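
The arithmetic behind the figure 30 example is straightforward and can 
be sketched directly in Python: 

# Two of eight milestones met, with an agreed value of $110,000.
milestones_met_value = 110_000
retainage_rate = 0.10   # government holds back 10 percent until completion

payment = milestones_met_value * (1 - retainage_rate)
print(f"Payment to contractor: ${payment:,.0f}")   # $99,000

[End of example] 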

Both progress payment and performance-based payment methods require 
detailed planning to fully identify the work that needs to be 
accomplished and the relative dollar value of the scope elements or 
milestones. While both are a simplified form of EVM in that physical 
progress is the basis for payments, the government does not have 
visibility into actual costs borne by the contractor because of the 
fixed-price nature of the contract. Therefore, care must be taken to 
ensure that in either method the contractor has not “front-loaded” the 
schedule of values or the performance baseline to increase early 
payments. In addition, progress and milestone events should represent 
measurable performance in terms of quality and technical performance as 
well as cost and schedule. This is why government review and approval 
is required in both cases at contract inception—the government can 
thereby guard against paying too much for work as it is actually 
accomplished (which is important should the initial contractor need to 
be replaced and the remaining work resolicited), while maintaining the 
protection against paying for any final cost overruns that a fixed-
price type contract normally provides. By focusing on these issues, 
government projects performed under fixed-price type contracts have 
reported improved ability to meet requirements, better focus on 
outcomes, and improved completion times.

Validating The EVM System: 

If EVM is to be used to manage a program, the contractor’s (and 
subcontractors’) EVM system should be validated to ensure that it 
complies with the agency’s implementation of the ANSI guidelines, 
provides reliable data for managing the program and reporting its 
status to the government, and is actively used to manage the program. 
This validation process is commonly referred to as system acceptance. 
The steps involved in the system acceptance process are shown in figure 
31. Sometimes these steps may overlap rather than go in sequence 
because of resource or capability constraints on the EVM system 
owner, the government customer, or both. However, all steps leading up 
to actual acceptance must be addressed for an EVM system owner or 
agency program to implement an ANSI-compliant EVM system.[Footnote 74] 

Figure 31: The EVM System Acceptance Process: 

[Refer to PDF for image: illustration] 

Process step: Establish EVM policy; 
By: EVM system owner; 
Phase: Design and implementation. 

Process step: Establish EVM system; 
By: EVM system owner; 
Phase: Design and implementation. 

Process step: Implement EVM program; 
By: EVM system owner; 
Phase: Review. 

Process step: Conduct compliance evaluation review (CER); 
By: Compliance evaluation team; 
Phase: Review. 

Process step: Prepare assessment report; 
By: Compliance evaluation team; 
Phase: Assessment. 

Process step: Comment on assessment report; 
By: EVM system owner; 
Phase: Assessment. 

Process step: Develop surveillance and system revision procedures; 
By: EVM system owner; 
Phase: Assessment. 

Process step: Implement surveillance and system revision procedures; 
By: EVM system owner; 
Phase: Assessment. 

Process step: Submit compliance evaluation review report; 
By: Compliance evaluation team; 
Phase: Assessment. 

Process step: Accept EVM system compliance; 
By: Acceptance authority; 
Phase: Assessment. 

Process step: Issue EVMS letter of acceptance; 
By: Acceptance authority; 
Phase: Acceptance. 

Or: 

Process step: Issue EVMS advance agreement; 
By: Acceptance authority; 
Phase: Acceptance. 

Source: Copyright 2004/2005 National Defense Industrial Association 
(NDIA) Program Management Systems Committee (PMSC). 

[End of figure] 

The system acceptance process has four phases. In system design and 
implementation, establishing the EVM policy (which includes documented 
processes and procedures) is followed by developing and implementing an 
EVM system. Once complete, the compliance evaluation review can begin. 
The purpose of this review is to verify that the EVM system meets the 
ANSI guidelines and has been fully implemented on selected contracts, 
projects, or programs. Data traces are necessary for verifying that
lower-level reporting aligns with higher levels and that the data 
provide accurate management information. Interviews verify that the EVM 
system is fully implemented and actively used to manage the program.
Additionally, the compliance review process and its results should be 
documented. 

The compliance evaluation review is an independent review conducted by 
an individual or organization that: 

* has no stake in the EVM system, project, or contract being reviewed; 
[Footnote 75] 

* has the knowledge, skills, and abilities to fairly evaluate the 
fitness of the EVM system's implementation or surveillance; and; 

* relies on the NDIA EVMS intent guide to determine whether the EVM 
system is compliant with the ANSI guidelines. 

Upon successful completion of EVM system acceptance, an acceptance 
recognition document should be prepared and released. When acceptance 
must be recognized across agencies, this is best accomplished by 
mutual agreements between agencies and organizations to recognize one 
another's EVM system ANSI-compliance acceptance or recognition 
documents. 

An agency can accept another organization’s EVMS acceptance with the 
understanding that it will need to instill a rigorous surveillance 
process (see chapter 20) to ensure that the written system description 
meets the intent of the 32 guidelines and is actively being followed. 
An alternative acceptance procedure is for a partner agency (or cross-
agency) to review the documentation from the EVM system owner’s 
compliance evaluation review. 

When no independent entity exists to perform EVM acceptance, the 
assessment may be performed by a qualified source that is independent 
from the program’s development, implementation, and direct 
supervision—for example, an agency’s inspector general. Moreover, 
civilian agencies may negotiate an interagency agreement to conduct 
acceptance reviews to satisfy the criteria for independence. For this 
arrangement to succeed, staff trained in EVM system reviews are 
required, and these resources are scarce in the government. 

Best practices call for centers of excellence that include staff who 
are experienced in EVM system design, implementation, and validation 
and have a strong knowledge of ANSI guidelines. In addition, these 
staff should have good evaluation skills, including the ability to 
review and understand EVM data and processes and the ability to 
interview personnel responsible for the EVM system implementation to 
determine how well they understand their own system description and 
processes. 

Case studies 44 and 45 highlight what can happen to a program when an 
EVM system has not been validated as being compliant with the ANSI 
guidelines. 

Case Study 44: Validating the EVM System, from Cooperative Threat 
Reduction, GAO-06-692: 

In September 2004, DOD modified its contract with Parsons Global 
Services, allocating about $6.7 million and requiring the company to 
apply EVM to the Shchuch’ye project. Parsons was expected to have a 
validated EVM system by March 2005, but by April 2006, it had not yet 
developed an EVM system that provided useful and accurate data to the 
chemical weapons destruction facility’s program managers. In addition, 
GAO found that the project’s EVM data were unreliable and inaccurate: 
in numerous instances, data had not been added properly for scheduled 
work. Parsons’ EVM reports, therefore, did not accurately capture data 
that project management needed to make informed decisions about the 
Shchuch’ye facility. 

For example, Parsons’ EVM reports from September 2005 through January 
2006 contained errors in addition that did not capture almost $29 
million in actual project costs. Such omissions and other errors may 
have caused DOD and Parsons project officials to overestimate the 
available project funding. GAO also found several instances in which 
the accounting data were not allocated to the correct cost accounts, 
causing large cost over- and underruns. Accounting data had been placed 
in the wrong account or Parsons’ accounting system was unable to track 
costs at all levels of detail within EVM. 

GAO concluded that until Parsons fixed its accounting system, manual 
adjustments would have to be made monthly to ensure that costs were 
properly aligned with the correct budget. Such adjustments meant that 
the system would consistently reflect inaccurate project status for 
Parsons and DOD managers. Parsons’ outdated accounting system had 
difficulty capturing actual costs for the Shchuch’ye project and 
placing them in appropriate cost categories. Parsons management should 
have discovered such accounting errors before the EVM report was 
released to DOD. 

The Defense Contract Audit Agency therefore questioned whether Parsons 
could generate correct accounting data and recommended that it update 
its accounting system. DOD expected Parsons to use EVM to estimate cost 
and schedule impacts and their causes and, most importantly, to help 
eliminate or mitigate identified risks. GAO recommended that DOD ensure 
that Parsons’ EVM system contained valid, reliable data and that the 
system reflect actual cost and schedule conditions. GAO also 
recommended that DOD withhold a portion of Parsons’ award fee until the 
EVM system produced reliable data. 

However, before GAO issued its report, Parsons had begun to improve its 
EVM processes and procedures. It had established a new functional lead 
position to focus on cost management requirements in support of 
government contracts. In addition, Parsons installed a new EVM focal 
point to address the lack of progress made in achieving validation of 
the EVM system for the Shchuch’ye project. 

Immediately after GAO’s report was issued, Parsons’ new EVM focal point 
was able to identify and correct the system problems that had led to 
the unreliable and inaccurate EVM data. The new focal point also found 
that the data integrity problems GAO had identified were not directly 
related to a need to update Parsons’ accounting system. First, the 
project’s work breakdown structure had not been developed to the level 
of detail required to support a validated EVM system before Parsons 
received the contract modification to implement the system, and the 
project’s original cost management practices, policies, and procedures 
had not been robust enough to effectively prevent the historical 
miscoding of actual costs against the existing WBS. Second, the more 
recent data quality issues GAO cited resulted from the lack of a 
reconcilable means of downloading actual cost information from Parsons’ 
accounting system into a cost processor that had not yet been 
optimized. 

Parsons’ accounting system was deemed adequate in an August 2006 
Defense Contract Audit Agency audit report. DOD chose not to withhold 
Parsons’ award fee, given the progress being made toward improving the 
data integrity issues GAO had identified. The Shchuch’ye project’s EVM 
system was formally validated in a May 2007 Defense Contract 
Management Agency letter. 

Source: GAO, Cooperative Threat Reduction: DOD Needs More Reliable Data 
to Better Estimate the Cost and Schedule of the Shchuch’ye Facility, 
GAO-06-692, Washington, D.C.: May 31, 2006. 

[End of case study] 
 
Case Study 45: Validating the EVM System, from DOD Systems 
Modernization, GAO-06-215: 

The Naval Tactical Command Support System (NTCSS) elected to use EVM, 
but Navy and DOD oversight authorities did not have access to the 
reliable and timely information they needed to make informed decisions. 
The EVM system that NTCSS implemented to measure program performance 
did not provide data for effectively identifying and mitigating risks. 
According to the NTCSS central design agency’s self-assessment of its 
EVM system, 17 of industry’s 32 best practices criteria were not being 
met. GAO also found 29 of the 32 criteria were not satisfied. 

Two NTCSS projects for which EVM activities were reportedly being 
performed were 2004 Optimized Organizational Maintenance Activity 
(OOMA) software development and 2004 NTCSS hardware installation and 
integration. GAO found several examples of ineffective EVM 
implementation on both projects. 

The estimate at completion for the 2004 OOMA software project—a 
forecast value expressed in dollars representing final projected costs 
when all work was completed—showed a negative cost for the 6 months 
November 2003 to April 2004. If EVM had been properly implemented, this 
amount, which is always a positive number, should have included all 
work completed. 

The cost performance index for the OOMA software project—which was to 
reflect the critical relationship between the actual work performed and 
the money spent to accomplish the work—showed program performance 
during a time when the program office stated that no work was being 
performed. 

The estimate at completion for the OOMA hardware installation project 
showed that almost $1 million in installation costs had been removed 
from the total sunk costs, but no reason for doing so was given in the 
cost performance report. 

The cost and schedule indexes for the OOMA hardware installation 
project showed improbably high program performance when the 
installation schedules and installation budget had been drastically cut 
because OOMA software failed operational testing. 

GAO concluded that because EVM was ineffectively implemented in these 
two projects, NTCSS program officials did not have access to reliable 
and timely information about program status or a sound basis for making 
informed program decisions. Therefore, GAO recommended that the NTCSS 
program implement effective program management activities, including 
EVM. 

Source: GAO, DOD Systems Modernization: Planned Investment in the Naval 
Tactical Command Support System Needs to Be Reassessed, GAO-06-215, 
Washington, D.C.: Dec. 5, 2005. 

[End of case study] 

15. Best Practices Checklist: Managing Program Costs: Planning: 

* A cost estimate was used to measure performance against the original 
plan, using EVM. 

* EVM and risk management were tightly integrated to ensure better 
program outcomes. 
- Strong leadership demands EVM be used to manage programs. 
- Stakeholders make it clear that EVM matters and hold staff 
accountable for results. 
- Management is willing to hear the truth about programs and relies on 
EVM data to make decisions on how to mitigate risk. 
- Policy outlines clear expectations for EVM as a disciplined 
management tool and requires pertinent staff to be continuously trained 
in cost estimating, scheduling, EVM, and risk and uncertainty analysis. 

* EVM is implemented at the program level so that both government and 
contractor know what is expected and are held accountable. 
- EVM relied on the cost of completed work to determine true program 
status. 
- EVM planned all work to an appropriate level of detail from the 
beginning. 
- It measured the performance of completed work with objective 
techniques. 
- It used past performance to predict future outcomes. 
- It integrated cost, schedule, and performance with a single 
management control system. 
- It directed management to the most critical problems, reducing 
information overload. 
- It fostered accountability between workers and management. 

* The EVM system complied with the agency’s implementation of ANSI’s 32 
guidelines. 

* The following steps in the EVM process were taken: 
- The work’s scope was defined with a WBS, and effort was broken into 
work and planning packages. 
- The WBS and organizational breakdown structure were cross-walked to 
identify control accounts that show who will do the work. 
- An acceptable scheduling technique was used to schedule the work and 
resource load the activities. 
-- All activities were identified and sequenced, logically networked, 
clearly showing horizontal and vertical integration. 
-- Activities were resource loaded with labor, material, and overhead 
and durations were estimated with historical data when available, and 
float was identified. 
-- A schedule risk analysis was performed based on an 11-point schedule 
assessment. 
-- Schedule reserve was chosen and prioritized for high-risk 
activities. 
-- The schedule was updated using logic and durations to determine 
dates and reflects accomplishments and is continuously analyzed for 
variances and changes to the critical path and completion date. 
- Resources were adequate to complete each activity and were estimated 
to do the work, authorize budgets, and identify management reserve for 
high-risk efforts. 
- Objective methods for determining earned value were used. 
- The performance measurement baseline was developed for assessing 
program performance; EVM performance data were analyzed and variances 
from the baseline plan were recorded; the performance measurement 
baseline was updated. 
- EACs were forecast using EVM. 
- An integrated cost-schedule risk analysis was conducted. 
- EACs from EVM were compared with an EAC from risk analysis. 
- Management took action to mitigate risk. 
- A preaward IBR was performed where provided for to verify the 
performance measurement baseline’s realism and compliance with ANSI 
guidelines. 
- Award fee criteria were developed to motivate the contractor to 
manage its contract with EVM to deliver the best possible product, were 
tied to specific contract events, and did not predetermine specific EVM 
measures. 
- A performance-based-payment contract was considered for fixed-price 
contracts where technical effort and risk are low. 
- The EVM system implemented was validated for compliance with the 
ANSI guidelines by independent and qualified staff and therefore can be 
considered to provide reliable and valid data from which to manage the 
program. 

[End of Chapter 18] 

Chapter 19: Managing Program Costs: Execution: 
 
Studies of more than 700 defense programs have shown limited 
opportunity for getting a wayward program back on track once it is more 
than 15 percent to 20 percent complete.[Footnote 76] EVM data allow 
management to quickly track deviations from a program’s plan for prompt 
understanding of problems. Proactive management results in better focus 
and increases the chance that a program will achieve its goals on time 
and within the expected cost. 

To rely on EVM data, an IBR must be conducted to ensure that the 
performance measurement baseline accurately captures all the work to be 
accomplished. Data from the CPR can then be used to assess program 
status—typically, monthly. Cost and schedule variances are examined and 
various estimates at completion are developed and compared to available 
funding. The results are shared with management for evaluating 
contractor performance. Finally, because EVM requires detailed planning 
for near-term work, as time progresses, planning packages are converted 
into detailed work packages. This cycle continues until all work has 
been planned and the program is complete. 

Validating The Performance Measurement Baseline With An IBR: 

An IBR is an evaluation of the performance measurement baseline to 
determine whether all program requirements have been addressed, risks 
identified, and mitigation plans put in place and all available and 
planned resources are sufficient to complete the work. Too often, 
programs overrun because estimates fail to account for the full 
technical definition, unexpected changes, and risks. Using poor 
estimates to develop the performance measurement baseline will result 
in an unrealistic baseline for performance measurement. 

The IBR concept to ensure comprehensive baselines for managing programs 
was developed in 1993 as a best practice after numerous DOD programs 
experienced significant cost and schedule overruns because their 
baselines were too optimistic. An IBR’s goal is to verify that the 
technical baseline’s budget and schedule are adequate for performing 
the work. Key benefits are that: 

* it lays a solid foundation for successfully executing the program, 

* it gives the program manager and contractor mutual understanding of 
the risks, 

* the program manager knows what to expect at the outset of the 
program, 

* planning assumptions and resource constraints are understood, 

* errors or omissions in the baseline plan can be corrected early in 
the program, 
 
* developing variances can be discovered sooner, and, 
 
* resources for specific challenges and risks can be identified. 

Conducting an IBR increases everyone’s confidence that the performance 
measurement baseline provides reliable cost and schedule data for 
managing the program and that it projects accurate estimated costs at 
completion. OMB has endorsed the IBR as a critical process for risk 
management on major investments and requires agencies to conduct IBRs 
for all contracts that require EVM. 

The IBR is the crucial link between cost estimating and EVM because it 
verifies that the cost estimate has been converted into an executable 
program plan. While the cost estimate provides an expectation of what 
could be, based on a technical description and assumptions, the 
baseline converts that expectation into a specific plan for achieving 
the desired outcome. Once the baseline is established, the IBR assesses 
whether its estimates are reasonable and risks have been clearly 
identified. 

OMB directs agencies to conduct IBRs in accordance with DOD’s Program 
Manager’s Guide to the Integrated Baseline Review Process, which 
outlines four activities to be jointly executed by the program 
manager and contractor staff: performance measurement baseline 
development, IBR preparation, IBR execution, and management processes. 
[Footnote 77] 

Experts agree that it is a best practice for the government and prime 
contractor to partner in conducting an IBR on every major subcontractor 
in conjunction with the prime contractor IBR. This practice cannot be 
emphasized enough, especially given that many major systems 
acquisitions are systems of systems with the prime contractor acting as 
the main integrator. The expert community has seen as much as 60 to 70 
percent of the work being subcontracted out. When this is paired with 
a lack of focus on systems engineering, many risks may go unnoticed 
until they are realized. Furthermore, the increasing roles and 
responsibilities assumed by subcontractors in these contracts make the 
accuracy of subcontractor EVM data that much more important. 

Performance Measurement Baseline Development: 

As the principal element of EVM, the performance measurement baseline 
represents the time-phased budget plan against which program 
performance is measured for the life of the program. This plan comes 
from the total roll-up of work that has been planned in detail through 
control accounts, summary planning packages, and work packages with 
their schedules and budgets. 

Performance measurement baseline development examines whether the 
control accounts encompass all contract requirements and are 
reasonable, given the risks. To accomplish this, the government and 
contractor management teams meet to understand whether the program plan 
reflects reality. They ask, 
 
* Have all tasks in the statement of work been accounted for in the 
baseline? 

* Are adequate staff and materials available to complete the work? 

* Have all tasks been integrated, using a well-defined schedule? 

Since it is not always feasible for the IBR team to review every 
control account, the team often samples control accounts to review. To 
ensure a comprehensive and value-added review, teams can consider: 

* medium to high technical risk control accounts, 

* moderate to high dollar value control accounts, 

* critical path activities, 

* elements identified in the program risk management plan, and 

* significant material subcontracts and non-firm-fixed-price 
subcontracts. 

The IBR team should ask the contractor for a list of all performance 
budgets in the contract. The contractor can typically provide a matrix 
of all control accounts, their managers, and approved budget amounts. 
Often called a dollarized responsibility assignment matrix, it is a 
valuable tool in selecting control accounts that represent the most 
risk. 
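
A sketch of how a team might use a dollarized responsibility 
assignment matrix to select control accounts for review follows; the 
Python example uses hypothetical matrix entries, and the selection 
criteria mirror the considerations listed above. 

# Hypothetical dollarized responsibility assignment matrix entries.
ram = [
    {"account": "1.1", "manager": "A. Lee",   "budget": 2_500_000,
     "technical_risk": "high",   "on_critical_path": True},
    {"account": "1.2", "manager": "B. Cruz",  "budget":   400_000,
     "technical_risk": "low",    "on_critical_path": False},
    {"account": "1.3", "manager": "C. Patel", "budget": 1_800_000,
     "technical_risk": "medium", "on_critical_path": False},
]

def selected_for_review(ca, dollar_threshold=1_000_000):
    """Flag control accounts that represent the most risk."""
    return (ca["technical_risk"] in ("medium", "high")
            or ca["budget"] >= dollar_threshold
            or ca["on_critical_path"])

for ca in ram:
    if selected_for_review(ca):
        print(f"Review control account {ca['account']} ({ca['manager']}), "
              f"budget ${ca['budget']:,.0f}")

[End of example] 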

At the end of the IBR, the team’s findings inform the program’s risk 
management plan and should give confidence in the quality of the 
contractor’s performance reports. If no IBR is conducted, confidence is 
less that monthly EVM reporting will be meaningful or accurate. 

IBR Preparation: 

An IBR is most effective if the focus is on areas of greatest risk to 
the program. Government and contractor program managers should try for 
mutual understanding of risks and formulate a plan to mitigate and 
track them through the EVM and risk management processes. In addition, 
developing cooperation promotes communication and increases the chance 
for effectively managing and containing program risks. 

Depending on the program, the time and effort in preparing for the IBR 
varies. Specific activities include: 

* identifying program scope to review, including appropriate control 
accounts, and associated documentation needs;

* identifying the size, responsibilities, and experience of the IBR 
team; 

* program management planning, such as providing training, obtaining 
required technical expertise, and scheduling review dates; 

* classifying risks by severity and developing risk evaluation 
criteria; and; 

* developing an approach for conveying and summarizing findings. 

Program managers should develop a plan for conducting the review by 
first defining the areas of the program scope the team will review. To 
do this, they should be familiar with the contract statement of work 
and request the appropriate documents, including the LCCE and program 
risk assessment, to decide areas that have the most risk. They should 
also have a clear understanding of management processes that will be 
used to support the program, including how subcontractors will be 
managed. 

Each IBR requires participation from specific program, technical, and 
schedule experts. Staff from a variety of disciplines—program 
management, systems engineering, software engineering, manufacturing, 
integration and testing, logistics support—should assist in the review. 
In addition, experts in functional areas like cost estimating, schedule 
analysis, EVM, and contracting should be members of the team. In 
particular, EVM specialists and contract management personnel should be 
active participants. The IBR team may at times also include 
subcontractor personnel. The team’s size should be determined by the 
program’s complexity and the risk associated with achieving its 
objectives. 

While IBRs have traditionally been conducted by government program 
offices and their contractors, OMB guidance anticipates that EVM will 
be applied at the program level. Therefore, program-level IBR teams 
should include participants from other stakeholder organizations, such 
as the program’s business unit, the agency’s EVM staff, and others, as 
appropriate. 

Team members must have appropriate training before the IBR is conducted 
to ensure that they can correctly identify and assess program risks. 
Team members should be trained so they understand the cost, schedule, 
and technical aspects of the performance measurement baseline and the 
processes that will be used to manage them. 

As we stated earlier, identifying potential program risk is the main 
goal of an IBR. Risks are generally categorized as cost, management 
process, resource, schedule, and technical (table 34). 

Table 34: Integrated Baseline Review Risk Categories: 

Category: Cost; 
Definition: Evaluates whether the program can succeed within budget, 
resource, and schedule constraints as depicted in the performance 
measurement baseline; cost risk is driven by the quality and 
reasonableness of the cost and schedule estimates, accuracy of 
assumptions, use of historical data, and whether the baseline covers 
all efforts outlined in the statement of work. 

Category: Management process; 
Definition: Evaluates how well management processes provide effective 
and integrated technical, schedule, cost planning, and baseline change 
control; it examines whether management processes are being implemented 
in accordance with the EVM system description. Management process risk 
is driven by the need for early view into risks, which can be 
hampered by inability to establish and maintain valid, accurate, and 
timely performance data, including subcontractors’ data. 

Category: Resource; 
Definition: Represents risk associated with the availability of 
personnel, facilities, and equipment needed to perform program-specific 
tasks; includes staff shortages caused by other company priorities and 
unexpected downtime that precludes or limits the use of specific 
equipment or facilities when needed. 

Category: Schedule; 
Definition: Addresses whether all work scope has been captured in the 
schedule and time allocated to lower-level tasks meets the program 
schedule; schedule risk is driven by the interdependency of scheduled 
activities and logic and the ability to identify and maintain the 
critical path. 

Category: Technical; 
Definition: Represents the reasonableness of the technical plan for 
achieving the program’s objectives and requirements; deals with issues 
such as the availability of technology, the capability of the software 
development team, and technology and design maturity. 
 
Source: Adapted from DOD, The Program Manager’s Guide to the Integrated 
Baseline Review Process (Washington, D.C.: Office of the Secretary of 
Defense (AT&L), April 2003). 

[End of table] 

Program managers should also outline the criteria for evaluating risks 
in table 34 and develop a method for tracking them within the risk 
management process. In addition, they should monitor the progress of 
all risks identified in the IBR and develop action plans for resolving 
them. 

IBR Execution: 

Because an IBR provides a mutual understanding of the performance 
measurement baseline and its associated risk, identifying potential 
problems early allows for developing a plan for resolving and 
mitigating them. Thus, the IBR should be initiated as early as 
possible—before award, when appropriate, and no later than 6 months 
after. To be most effective, maturity indicators should be assessed to 
ensure that a value-added assessment of the performance measurement 
baseline can be accomplished: 

1. Work definition: 
 
* a WBS should be developed; 
 
* specifications should flow down to subcontractors; 
 
* internal statement of work for work package definitions should be 
defined. 

2. Integrated schedule: 

* lowest and master level should be vertically integrated; 

* tasks should be horizontally integrated; 

* product handoffs should be identified; 

* subcontractor schedules should be integrated with the prime master 
schedule. 

3. Resources, labor, and material should be fully planned and 
scheduled; 
 
* constrained resources should be identified or rescheduled; 

* staffing resources should be leveled; 

* subcontractor baselines should be integrated with the prime baseline; 

* schedule and budget baselines should be integrated; 

* work package earned value measures should be defined; 

* the baseline should be validated at the lowest levels and approved by 
management. 

The absence of maturity indicators is itself an indication of risk. An 
IBR should not be postponed indefinitely; it should begin, with a small 
team, as soon as possible to clarify plans for program execution. In 
executing the IBR, the team assesses the adequacy, realism, and risks 
of the baseline by examining if: 

* the technical scope of work is fully included (an allowance for 
rework and retesting is considered), 

* key schedule milestones are identified, 
 
* supporting schedules reflect a logical flow to accomplish tasks, 

* the duration of each task is realistic and the network schedule logic 
is accurate, 

* the program’s critical path is identified, 

* resources—budgets, facilities, personnel, skills—are available and 
sufficient for accomplishing tasks, 

* tasks are planned to be objectively measured for technical progress, 

* the rationale supporting performance measurement baseline control 
accounts is reasonable, and 

* managers have appropriately implemented required management 
processes. 

After it has been determined that the program is defined at an 
appropriate level, interviewing control account managers is the next 
key IBR objective. Interviews should focus on areas of significant risk 
and management processes that may affect the ability to monitor risks. 
Discussions should take place among a small group of people, addressing 
how the baseline was developed and the supporting documentation. If the 
contractor has reasonably developed an integrated baseline, preparing 
for the IBR should require minimal time. 

During the interview process, the IBR team meets with specific control 
account managers to understand how well they use EVM to manage their 
work and whether they have expertise in their area of discipline. 
Typical discussion questions involve how the control account managers 
receive work authorization, how they ensure that the technical content 
of their effort is covered, and how they use the schedule to plan and 
manage their work. In addition, interviews are an excellent way to 
determine whether a control account manager needs additional training 
in EVM or lacks appropriate resources. A template gives interviewers a 
consistent guide to make sure they cover all aspects of the IBR 
objectives. Figure 32 is a sample template. 

Figure 32: IBR Control Account Manager Discussion Template: 

[Refer to PDF for image: illustration] 

Baseline discussion starter: 

Step 1: Introductions: 5 minutes. 
 
Step 2: Overview of control accounts; General description, work 
content: 5 minutes. 

Step 3: Describe control account or work packages, briefly describe 
performance to date: 5 minutes. 
 
Step 4: Evaluate baseline for each work package: 90 minutes; 

Work scope: 
All work included? 
Clear work description? 
Technical risk? 
Risk mitigation? 
Trace from scope of work to WBS to control account or work package 
descriptions. 

Schedule: 
Realistic? Complete? 
Subcontractors? 
Task durations? 
Network logic? 
Handoffs? 
Vertical and horizontal integration? 
Critical path? 
Concurrence? 
Developing schedule variance? 
Completion variance from schedule? 
Budget risk? 

Budget: 
Basis for estimate? 
Management challenges? 
Realistic budget? (focus on hours); 
Phasing? 
Developing cost variance? 
Variance at complete? 
Budget risk? 

BCWP method: 
Objective measures of work? 
Level of effort minimized? 
Subcontractor performance? 
Milestones defined? 
Method for calculating percentage complete? 

Documents to review (work scope): 
Statement of work, contractor WBS dictionary, work package 
descriptions, risk plans. 

Documents to review (schedule): 
IMS, work package schedules, staffing plans. 

Documents to review (budget): 
Control account plan, basis of estimate, variance reports, 
purchase order for material. 

Documents to review (BCWP method): 
Control account plan, back-up worksheets for BCWP, subcontractor 
reports. 

Step 5: Document. Complete control account risk evaluation sheet, reach 
concurrence on risk and action items: 10 minutes. 

Control account summary: No.; Title; Budget at completion; % complete; BCWP method. 

Source: DCMA. 

[End of figure] 

After completing the IBR, the program managers assess whether they have 
achieved its purpose—they report on their understanding of the 
performance measurement baseline and their plan of action for handling 
risks. They should develop a closure plan that assigns staff 
responsibility for each risk identified in the IBR. Significant risks 
should then be included in the program’s risk management plan, while 
lower-level risks are monitored by responsible individuals. An overall 
program risk summary should list each risk by category and severity in 
order to determine a final risk rating for the program. This risk 
assessment should be presented to senior management—government and 
contractors—to promote awareness. 

The IBR team should document how earned value will be assessed and 
whether the measurements are objective and reasonable. It should 
discuss whether management reserve will cover new risks identified in 
the IBR. Finally, if the team found deficiencies in the EVM system, it 
should record them in a corrective action request and ask the EVM 
specialist to monitor their status. 

Although a formal IBR report is not usually required, a memorandum for 
the record describing the findings with all backup documentation should 
be retained in the official program management files. And, while the 
IBR is not marked with an official pass or fail, a determination should 
be made about whether the performance measurement baseline is reliable 
and accurate for measuring true performance. 

Management Processes: 

When the IBR is complete, the focus should be on the ongoing ability of 
management processes to reveal actual program performance and detect 
program risks. The IBR risk matrix and risk management plan should give 
management a better understanding of risks facing the program, allowing 
them to manage and control cost and schedule impacts. The following 
management processes should continue after the IBR is finished: 
 
* the baseline maintenance process should continue to ensure that the 
performance measurement baseline reflects a current depiction of the 
plan to complete remaining work and follows a disciplined process for 
incorporating changes; and 
 
* the risk management process should continue to document and classify 
risks according to the probability that they will occur, their 
consequences, and their handling. 

Other typical business processes that should continue to support the 
management of the program involve activities like scheduling, 
developing estimates to complete, and EVM analysis so that risks may be 
monitored and detected throughout the life of the program. (Appendix 
XIV has a case study example on IBRs.) 

Contract Performance Reports: 

With the IBR completed and the PMB validated, EVM data can be used to 
assess performance and project costs at completion. EVM data are 
typically summarized in a standard CPR. This report becomes the primary 
source for program cost and schedule status and provides the 
information needed for effective program control. The CPR provides cost 
and schedule variances, based on actual performance against the plan, 
which can be further examined to understand the causes of any 
differences. Management can rely on these data to make decisions 
regarding next steps. For example, if a variance stems from an 
incorrect assumption in the program cost estimate, management may 
decide to obtain more funding or reduce the scope. 

Reviewing CPR data regularly helps track program progress, risks, and 
plans for activities. When variances are discovered, CPR data identify 
where the problems are and the degree of their impact on the program. 
Therefore, the ANSI guidelines specify that, at least monthly, cost and 
schedule variance data should be generated by the EVM system to give a 
view into causes and allow action. Since management may not be able to 
review every control account, relying on CPR data enables management to 
quickly assess problems and focus on the most important issues. 

CPR data come from monthly assessment of and reports on control 
accounts. Control account managers summarize the data to answer the 
following questions: 

* How much work should have been completed by now—or what is the 
planned value or BCWS? 

* How much work has been done—or what is the earned value or BCWP? 

* How much has the completed work cost—or what is the actual cost or 
ACWP? 

* What is the planned total program cost—or what is the BAC? 

* What is the program expected to cost, given what has been 
accomplished—or what is the EAC? 

Figure 33 is an example of this type of monthly assessment. It shows 
that the performance measurement baseline is calculated by summarizing 
the individual planned costs (BCWS) for all control accounts scheduled 
to occur each month. Earned value (BCWP) is represented by the amount 
of work completed for each active control account. Finally, actual 
costs (ACWP) represent what was spent to accomplish the completed work. 

Figure 33: Monthly Program Assessment Using Earned Value: 

[Refer to PDF for image: illustration] 

Source: Naval Air Systems Command (NAVAIR). 

[End of figure] 

According to the data in figure 33, by the end of April the control 
account for concrete has been completed, while the framing and roofing 
control accounts are only partially done—60 percent and 30 percent 
complete, respectively. Comparing what was expected to be done by the 
end of April—$39,000 worth of work—with what was actually 
accomplished—$27,000 worth of work—one can determine that $12,000 worth 
of work is behind schedule. Likewise, comparing what was 
accomplished—$27,000 worth of work—with what was spent—$33,000—one can 
see that the completed work cost $6,000 more than planned. These data 
can also be graphed to quickly obtain an overall program view, as in 
figure 34. 

Figure 34: Overall Program View of EVM Data: 

[Refer to PDF for image: multiple s curve graph] 

Source: © 2003 SCEA, “Earned Value Management Systems.” 

Note: 
ACWP = actual cost of work performed; 
BAC = budget at completion; 
BCWP = budgeted cost for work performed; 
BCWS = budgeted cost for work scheduled; 
CBB = contract budget baseline; 
EAC = estimate at completion; 
PMB = performance measurement baseline. 

[End of figure] 

Figure 34 shows that in October, the program is both behind schedule 
and overrunning cost. The EAC shows projected performance and expected 
costs at completion. Cost variance is calculated by taking the 
difference between completed work (BCWP) and its cost (ACWP), while 
schedule variance is calculated by taking the difference between 
completed work (BCWP) and planned work (BCWS). Positive variances 
indicate that the program is either underrunning cost or performing 
more work than planned. Conversely, negative variances indicate that 
the program is either overrunning cost or performing less work than 
planned. 

It is important to understand that variances are neither good nor bad. 
They are merely measures that indicate that work is not being performed 
according to plan and that it must be assessed further to understand 
why. From this performance information, various estimates at completion 
can be calculated. The difference between the EAC and the budget at 
completion (BAC) is the variance at completion, which represents either 
a final cost overrun or an underrun. 
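
These relationships are simple enough to verify with a short script. 
The following sketch is purely illustrative: it uses the April values 
from figure 33 for BCWS, BCWP, and ACWP, together with hypothetical BAC 
and EAC values, to show how the variances are derived: 

# Illustrative EVM variance calculations.
# BCWS, BCWP, and ACWP are the April values from figure 33;
# BAC and EAC are hypothetical values added for this sketch.
bcws = 39_000  # budgeted cost for work scheduled (planned value)
bcwp = 27_000  # budgeted cost for work performed (earned value)
acwp = 33_000  # actual cost of work performed

schedule_variance = bcwp - bcws  # -12,000: $12,000 of planned work not done
cost_variance = bcwp - acwp      # -6,000: completed work cost $6,000 more than planned

bac = 100_000  # hypothetical budget at completion
eac = 112_000  # hypothetical estimate at completion
variance_at_completion = bac - eac  # negative value signals a projected overrun

print(schedule_variance, cost_variance, variance_at_completion)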

Management should use the EVM data captured in the CPR to (1) 
integrate cost and schedule performance data with technical performance 
measures, (2) identify the magnitude and impact of actual and potential 
problem areas causing significant cost and schedule variances, and (3) 
provide valid and timely program status to higher management. As a 
management report, the CPR provides timely, reliable summary EVM data 
with which to assess current and projected contract performance. 

The primary value of the report is its ability to reflect current 
contract status and reasonably project future program performance. When 
the data are reliable, the report can facilitate informed, timely 
decisions by a variety of program staff—engineers, cost estimators, and 
financial management personnel, among others. CPR data are also used to 
confirm, quantify, and track known or emerging problems and to 
communicate with the contractor. As long as the CPR data accurately 
reflect how work is being planned, performed, and measured, they can be 
relied on for analyzing actual program status. The five formats within 
a CPR are outlined in figure 35. 

Figure 35: A Contract Performance Report’s Five Formats: 

[Refer to PDF for image: illustration] 

Format 1: Work breakdown structure: 
* WBS level 2; 
* WBS level 3. 
 
Format 2: Functional categories. 

Format 3: Baseline: 
* Changes; 
* Undistributed budget; 
* Management reserve. 

Format 4: Staff loading. 

Format 5: Explanation of variances: 
Identify: 
* Nature of the problem; 
* Reason for cost or schedule variance; 
* Impact on total program; 
* Corrective action taken; 
* Amounts attributed to rate changes; 
* Undistributed budget application; 
* Management reserve application; 
* Baseline changes. 

Source: Naval Air Systems Command (NAVAIR). 

[End of figure] 

All five formats in a CPR should be tailored to ensure that only 
information essential to management on cost and schedule is required 
from contractors. Format 1 provides cost and schedule data for each 
element in the program’s product-oriented WBS—typically, hardware, 
software, and other services necessary for completing the program. Data 
in this format are usually reported to level three of the WBS, but high-
cost or high-risk elements may be reported at lower levels to give 
management an appropriate view of problems. 

Format 2 provides the same cost and schedule data as format 1 but 
breaks them out functionally, using the contractor’s organizational 
breakdown structure. Format 2 may be optional for agencies other than 
DOD. It need not be obtained, for example, when a contractor does not 
manage along functional lines. 

Format 3 shows the budget baseline plan, against which performance is 
measured, as well as any changes that have occurred. It also displays 
cumulative, current, and forecasted data, usually in detail for the 
next 6 months and in larger increments beyond 6 months. This format 
forecasts the time-phased budget baseline cost to the end of the 
program—in other words, the reported data primarily look forward—and 
should be correlated with the cost estimate. 

Format 4 forecasts the staffing levels by functional category required 
to complete the contract and is an essential component of evaluating 
the EAC. This format—also forward looking—allows the analyst to 
correlate the forecast staffing levels with contract budgets and cost 
and schedule estimates. 

Format 5 is a detailed, narrative report explaining significant cost 
and schedule variances and other contract problems and topics. 

The majority of EVM analysis comes from the CPR’s format 1—that is, 
from examining lower-level control account status to determine lower-
level variances—and format 5—that is, from explanations for what is 
causing the variances in format 1. Table 35 describes some of the major 
data elements in format 1. 

Table 35: Contract Performance Report Data Elements: Format 1: 

Contract data: 

Data element: Contract budget base; 
Description: Includes the negotiated contract cost plus the estimated 
cost of any authorized, unpriced work. 

Data element: Negotiated cost; 
Description: Includes the dollar value (excluding fee or profit) of the 
contractually agreed-to program cost, typically the definitized 
contract target cost for an incentive-type contract;[A] excludes costs 
for changes that have not been priced and incorporated into the 
contract through a modification or supplemental agreement. 
 
Data element: Estimated cost of authorized, unpriced work; 
Description: Excludes fee or profit; represents work that has been 
authorized but the contract price for it has not been definitized by 
either a contract change order or supplemental agreement[A]. 
 
Data element: Budget at completion (BAC); 
Description: The sum of all estimated budgets, representing at the 
program level the cumulative value of BCWS over the life of the 
program; at lower levels, such as a control account or WBS element, it 
represents a roll-up of total estimated cost for the individual element 
(within a contract, the summary BAC is, in effect, the official spend 
plan for the contract). 
 
Data element: Estimated cost at completion (EAC); 
Description: Represents a range of estimated costs at completion so 
that management has flexibility to analyze possible outcomes; it should 
be as accurate as possible, consider known or anticipated risks, and be 
reported without regard to the contract ceiling cost; it is derived by 
adding to actual costs the forecasted cost of work remaining (budgeted 
cost for work remaining), using a statistically based forecasting 
method. 

Data element: Variance at completion; 
Description: Representing the entire program overrun or underrun, it is 
calculated by taking the difference between the BAC and EAC. 
 
Performance data: 
 
Data element: Budgeted cost for work scheduled (BCWS); 
Description: Representing the amount of work set aside for a specific 
effort over a stated period of time, it specifically describes the 
detailed work that was planned to be accomplished according to the 
program schedule; it is the sum of the budgets for all the work 
packages, planning packages, etc., scheduled to be accomplished within 
a given time period; it is the monthly spread of the BAC at the 
performance measurement level. 

Data element: Budgeted cost for work performed (BCWP); 
Description: Representing the earned value for the work accomplished, 
it is the prime schedule item in the CPR; as earned value, it is the 
sum of the budgets for completed work packages and completed portions 
of open work packages, plus the applicable portion of the budgets for 
apportioned effort and level of effort; BCWP represents that portion of 
BCWS earned. 

Data element: Actual cost of work performed (ACWP); 
Description: Represents actual or accrued costs of the work performed. 
 
Data element: Cost variance; 
Description: The difference between BCWP and ACWP represents the cost 
position—a positive number means that work cost less than planned, a 
negative number that it cost more.

Data element: Schedule variance; 
Description: The difference between BCWP and BCWS represents the 
schedule status—a positive number means that planned work was completed 
ahead of schedule, a negative number that it was not completed as 
planned. Although it is expressed in dollars and not time, one needs to 
consider that work takes time to complete and requires resources such 
as money; therefore, schedule variance is reported as a dollar amount 
to reflect the fact that scheduled work has a budget; it does not 
always translate into an overall program schedule time delay; if it is 
caused by activities on the critical path, then it may cause a time 
delay in the program. 

Data element: Budgeted cost for work remaining; 
Description: Represents the planned work that still needs to be done; 
its value is determined by subtracting budgeted cost for work performed 
from budget at completion. 

Source: DOD and SCEA. 

[A] Definitized cost or price = contract cost or contract price that 
has been negotiated. 

[End of table] 

Using the measures in format 1 at the control account level, management 
can easily detect problems. The sooner a problem is detected, the 
easier it will be to reduce its effects or avoid it in the future. However, 
it is not enough just to know there is a problem. It is also critical 
to know what is causing it. The purpose of format 5 of the CPR is to 
provide necessary insight into problems. This format focuses on how the 
control account manager will make corrections to avoid future cost 
overruns and schedule delays or change cost and schedule forecasts when 
corrective action is not possible. In addition, format 5 reports on 
what is driving past variances and what risks and challenges lie ahead. 
It is an option, though, to focus the format 5 analyses on the top 
problems of the program instead of looking at each significant variance 
found in format 1 or 2. Thus, to be useful for providing good insight 
into problems, the format 5 variance report should discuss: 
 
* changes in management reserve; 

* differences in various EACs; 

* performance measurement milestones that are inconsistent with 
contractual dates, perhaps indicating an over target schedule;
 
* formal reprogramming or over target baseline; 

* significant staffing estimate changes; and 

* a summary analysis of the program. 

It should also discuss in detail significant problems for each cost or 
schedule variance, including their nature and reason, the effect on 
immediate tasks and the total program, corrective actions taken or 
planned, the WBS number of the variance, and whether the variance is 
driven primarily by labor or material. 

That is, the format 5 variance report should provide enough information 
for management to understand the reasons for variances and the 
contractor’s plan for fixing them. Good information on what is causing 
variances is critical if EVM data are to have any value. If the format 
5 is not prepared in this manner, then the EVM data will not be 
meaningful or useful as a management tool, as case study 46 
illustrates. 

Case Study 46: Cost Performance Reports, from Defense Acquisitions, GAO-
05-183: 

The quality of the Navy’s cost performance reports, whether submitted 
monthly or quarterly, was inadequate in some cases—especially with 
regard to the variance analysis section describing the shipbuilder’s 
actions on problems. The Virginia class submarine and the Nimitz class 
aircraft carrier variance analysis reports discussed the root causes of 
cost growth and schedule slippage and described how the variances were 
affecting the shipbuilders’ projected final costs. However, the 
remaining ship programs tended to report only high-level reasons for 
cost and schedule variances, giving little to no detail regarding root 
cause analysis or mitigation efforts. For example, one shipbuilder did 
not provide written documentation on the reasons for variances, making 
it difficult for managers to identify risk and take corrective action. 

Variance analysis reporting was required and being conducted by the 
shipbuilders, but the quality of the reports differed greatly. DOD 
rightly observed that the reports were one of many tools the 
shipbuilders and DOD used to track performance. To be useful, however, 
the reports should have contained detailed analyses of the root causes 
and impacts of cost and schedule variances. CPRs that consistently 
provided a thorough analysis of the causes of variances, their 
associated cost impacts, and mitigation efforts would have allowed the 
Navy to more effectively manage, and ultimately reduce, cost growth. 

Therefore, to improve management of shipbuilding programs and promote 
early recognition of cost issues, GAO recommended that the Navy require 
shipbuilders to prepare variance analysis reports that identified root 
causes of reported variances, associated mitigation efforts, and 
estimated future cost impacts. 

Source: GAO, Defense Acquisitions: Improved Management Practices Could 
Minimize Cost Growth in Navy Shipbuilding Programs, GAO-05-183, 
Washington, D.C.: Feb. 28, 2005. 

[End of case study] 

The level of detail for format 5 is normally determined by specific 
variance analysis thresholds, which, if exceeded, require problem 
analysis and narrative explanations. Therefore, each program has its 
own level of detail to report. Thresholds should be periodically 
reviewed and adjusted to ensure that they continue to provide 
management with the necessary view on current and potential problems. 
In addition, because the CPR should be the primary means of documenting 
ongoing communication between program manager and contractor, it should 
be detailed enough that cost and schedule trends and their likely 
effect on program performance are transparent. 

Monthly EVM Analysis: 

EVM data should be analyzed and reviewed at least monthly so that 
problems can be addressed as soon as they occur and cost and schedule 
overruns can be avoided or at least their effect can be lessened. Some 
labor-intensive programs review the data weekly, using labor hours as 
the measurement unit, to spot and proactively address specific problems 
before they get out of control. 

Using data from the CPR, a program manager can assess cost and schedule 
performance trends. This information is useful because trends can be 
difficult to reverse. Studies have shown that once programs are 
15 percent complete, performance indicators can predict the final 
outcome. For example, a CPR showing an early negative trend for 
schedule status would mean that work is not being accomplished and the 
program is probably behind schedule. By analyzing the CPR and the 
schedule, one could determine the cause of the schedule problem, such 
as delayed flight tests, changes in requirements, or test problems. A 
negative schedule variance can be a predictor of later cost problems, 
because additional spending is often necessary to resolve problems. CPR 
data also provide the basis for independent assessments of a program’s 
cost and schedule status and can be used to project final costs at 
completion, in addition to determining when a program should be 
completed. 

Analyzing past performance provides great insight into how a program 
will continue to perform and can offer important lessons learned. 
Effective analysis involves communicating to all managers and 
stakeholders what is causing significant variances and developing 
trends and what corrective action plans are in place so informed 
decisions can be made. Analysis of the EVM data should be a team effort 
that is fully integrated into the program management process so results 
are visible to everyone. Finally, while the analysis focuses on the 
past and what can be learned from variances, it also projects into the 
future by relying on historical performance to predict where a program 
is heading. The principal steps for analyzing EVM data are: 
 
1. Analyze performance: 

* check data to see if they are valid, 

* determine what variances exist, 

* probe schedule variances to see if activities are on the critical 
path, 

* develop historical performance data indexes, 

* graph the data to identify any trends, and, 

* review the format 5 variance analysis for explanations and corrective 
actions. 

2. Project future performance: 

* identify the work that remains, 

* calculate a range of EACs and compare the results to available 
funding, 

* determine if the contractor’s EAC is feasible, and, 

* calculate an independent date for program completion. 

3. Formulate a plan of action and provide analysis to management. 

These steps should be taken in sequence, since each step builds on 
findings from the previous one. Skipping the analysis steps to start 
off with projecting independent EACs would be dangerous if the EVM data 
have not been checked to see if they are valid. In addition, it is 
important to understand what is causing problems before making 
projections about final program status. For example, if a program is 
experiencing a negative schedule variance, it may not affect the final 
completion date if the variance is not associated with an activity on 
the critical path or if the schedule baseline represents an early 
“challenge” date. Therefore, it is a best practice to follow the 
analysis steps in the right order so that all information is known 
before making independent projections of costs at completion. 

Analyze Performance: 

Check to See If the Data Are Valid: 

It is important to make sure that the CPR data make sense and do not 
contain anomalies that would make them invalid. If errors are not 
detected, then the data will be skewed, resulting in bad decision-
making. To determine if the data are valid, they should be checked at 
all levels of the WBS, focusing on whether there are errors or data 
anomalies such as: 
 
* negative values for ACWP, BAC, BCWP, BCWS, or EAC; 

* unusually large performance swings (BCWP) from month to month; 

* BCWP and BCWS data with no corresponding ACWP; 

* BCWP with no BCWS; 

* BCWP with no ACWP; 

* ACWP with no BCWP; 

* ACWP that is significantly above or below the planned value; 

* inconsistency between EAC and BAC—for example, no BAC but an EAC or a 
BAC with no EAC; 

* ACWP exceeds EAC; 

* BCWP or BCWS exceeds BAC. 

If the CPR data contain anomalies, the performance measurement data 
will be distorted. For example, a CPR reporting actual costs (ACWP) 
with no corresponding earned value (BCWP) could indicate that 
unbudgeted work is being performed but not captured in the CPR. When 
this happens, the performance measurement data will not reflect true 
status. 

In addition to checking the data for anomalies, the EVM analyst should 
check whether the CPR data are consistent. For instance, the analyst 
should review whether the data reported at the bottom line in format 
1 match the total in format 2. The analyst should also assess whether 
program cost is consistent with the authorized budget. 
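
Such validity checks lend themselves to simple automation. The sketch 
below is a minimal example, assuming CPR format 1 data have already 
been extracted into simple records; the field names and sample values 
are assumptions made for illustration, not a standard CPR file layout: 

# Illustrative screening of CPR format 1 records for common data anomalies.
# The record layout, field names, and sample values are assumptions for this sketch.
def find_anomalies(element):
    """Return a list of anomaly descriptions for one WBS element."""
    issues = []
    for name in ("acwp", "bac", "bcwp", "bcws", "eac"):
        if element[name] < 0:
            issues.append("negative " + name.upper())
    if element["bcwp"] and not element["bcws"]:
        issues.append("BCWP reported with no BCWS")
    if element["bcwp"] and not element["acwp"]:
        issues.append("BCWP reported with no ACWP")
    if element["acwp"] and not element["bcwp"]:
        issues.append("ACWP reported with no BCWP")
    if element["acwp"] > element["eac"]:
        issues.append("ACWP exceeds EAC")
    if max(element["bcwp"], element["bcws"]) > element["bac"]:
        issues.append("BCWP or BCWS exceeds BAC")
    if (element["bac"] == 0) != (element["eac"] == 0):
        issues.append("BAC without EAC, or EAC without BAC")
    return issues

wbs_element = {"wbs": "1.2.3", "bcws": 500, "bcwp": 0, "acwp": 450,
               "bac": 2_000, "eac": 2_100}
print(wbs_element["wbs"], find_anomalies(wbs_element))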

Determine What Variances Exist: 

Cost and schedule deviations from the baseline plan give management at 
all levels information about where corrective actions are needed to 
bring the program back on track or to update completion dates and EACs. 
While variances are often perceived as something bad, they provide 
valuable insight into program risk and its causes. Variances empower 
management to make decisions about how best to handle risks. For 
example, management may decide to allocate additional resources or hire 
technical experts, depending on the nature of the variance. 

Because negative cost variances are predictive of a final cost overrun 
if performance does not change, management needs to focus on containing 
them as soon as possible. A negative schedule variance, however, does 
not automatically mean program delay; it means that planned work was 
not completed. 

To know whether the variance will affect the program’s completion date, 
the EVM analyst also needs to analyze the time-based schedule, 
especially the critical path. Because EVM data cannot provide this 
information, data from the contractor’s scheduling system are needed. 
Therefore, EVM data alone cannot provide the full picture of program 
status. Other program management tools and information are also needed 
to better understand variances. 

Probe Schedule Variances for Activities on the Critical Path: 

Schedule variances should be investigated to see if the effort is on 
the critical path. If it is, then the whole program will be delayed. 
And, as we mentioned before, any delay in the program will result in 
additional cost unless other measures are taken. The following methods 
are often used to mitigate schedule problems: 
 
* consuming schedule reserve if it is available, 

* diverting staff to work on other tasks while dealing with unforeseen 
delays, 

* preparing for follow-on activities early so that transition time can 
be reduced, 

* consulting with experts to see if process improvements can reduce 
task time, 

* adding more people to speed up the effort, and 

* working overtime. 

Caution should be taken with adding more people or working overtime, 
since these options cost money. In addition, when too many people work 
on the same thing, communication tends to break down. Similarly, 
working excessive overtime can make staff less efficient. Therefore, 
careful analysis should precede adding staff or instituting overtime. 

A good network schedule that is kept current is a critical tool for 
monitoring program performance. Carefully monitoring the contractor’s 
network schedule will allow for quickly determining when forecasted 
completion dates differ from the planned dates. Tasks may be 
resequenced or resources realigned to recover the schedule. It 
is also important to determine whether schedule variances are affecting 
downstream work. For example, a schedule variance may compress 
remaining activities’ duration times or cause “stacking” of activities 
toward the end of the program, to the point at which it is no longer 
realistic to predict success. If this happens, then an over target 
schedule may be necessary (discussed in chapter 20). 

Various schedule measures should be analyzed to better understand the 
impact of schedule variances. For example, the amount of float, as well 
as the number of tasks with lags, constraints, or lack of progress, 
should be examined each month. Excess float usually indicates that the 
schedule logic is flawed, broken, or absent. Large float values should 
be checked to determine if they are real or a consequence of incomplete 
scheduling. Similarly, a large number of tasks with constraints (such 
as limitations on when an activity can start or finish) typically 
indicates that constraints are being used as substitutes for logic and 
can mean that the schedule is not well planned. Lags are often reserved 
for time that is unchanging, does not 
require resources, and cannot be avoided (as in waiting for concrete to 
cure), but lags are often inappropriately used instead of logic to put 
activities on a specified date. Similarly, if open work packages are 
not being statused regularly, it may be that the schedule and EVM are 
not really being used to manage the program. Analyzing these issues can 
help assess the schedule’s progress. 

In addition to monitoring tasks on the critical path, close attention 
should be paid to near-critical tasks and near-term critical path 
effort, as these may alert management to potential schedule problems. 
If a task is not on the critical path but is experiencing a large 
schedule variance, the task may be turning critical. Therefore, 
schedule variances should be examined for their causes. For instance, 
if material is arriving late and the variance will disappear once the 
material is delivered, its effect is minimal. But if the late material 
is causing tasks to slip, then its effect is much more significant. 

Remember that while a negative schedule variance eventually disappears 
when the full scope of work is ultimately completed, a negative cost 
variance is not corrected unless work that has been overrunning begins 
to underrun—a highly unlikely occurrence. Schedule variances are 
usually followed by cost variances; as schedule increases, costs such 
as labor, rented tools, and facilities increase. Additionally, 
management tends to respond to schedule delays by adding more resources 
or authorizing overtime. 

Develop Historical Performance Data Indexes: 

Performance indexes are necessary for understanding the effect a cost 
or schedule variance has on a program. For example, a $1 million cost 
variance in a $500 million program is not as significant as it is in 
a $10 million program. Because performance indexes are ratios, they 
provide a measure of program efficiency that easily shows how a program 
is performing. 

The cost performance index (CPI) and schedule performance index (SPI) 
in particular can be used independently or together to forecast a range 
of statistical cost estimates at completion. They also give managers 
early warning of potential problems that need correcting to avoid 
adverse results. Table 36 explains what the values of three performance 
indexes indicate about program status. 

Table 36: EVM Performance Indexes: 

Index: Cost performance index (CPI), the ratio of work performed (or 
earned value) to actual costs for work performed; 
Formula: CPI = BCWP/ACWP; 
Indicator: Like a negative cost variance, a CPI less than 1 is 
unfavorable, because work is being performed less efficiently than 
planned; a CPI greater than 1 is favorable, implying that work is being 
performed more efficiently than planned. CPI can be expressed in 
dollars: 0.9 means 
that for every dollar spent, the program has received 90 cents worth of 
completed work. 
 
Index: Schedule performance index (SPI), the ratio of work performed 
(or earned value) to the initial planned schedule; 
Formula: SPI = BCWP/BCWS; 
Indicator: Like a negative schedule variance, an SPI less than 1 
indicates that work is not being completed as planned and the program 
may be behind schedule if the incomplete work is on the critical path; 
an SPI greater 
than 1 means work has been completed ahead of the plan. An SPI can be 
thought of as describing work efficiency: 0.9 means that for every 
dollar planned, the program is accomplishing 90 cents worth of work. 

Index: To complete performance index (TCPI), the cost performance that 
must be achieved on remaining work to meet the contractor’s EAC; 
Formula: TCPI = BCWR/(EAC – ACWP)[A]; 
Indicator: CPI takes into account what the contractor has done and can 
be compared to TCPI to test the EAC’s reasonableness; if TCPI is higher 
than CPI, the contractor expects productivity to improve, which may not 
be feasible given past performance. 
 
Source: DOD and SCEA. 

[A] BCWR = budgeted cost for work remaining. 

[End of table] 

Just like variances, performance indexes should be investigated. An 
unfavorable CPI—one less than 1.0—may indicate that work is being 
performed less efficiently or that material is costing more than 
planned. Or it could mean that more expensive labor is being employed, 
unanticipated travel was necessary, or technical problems were 
encountered. Similarly, a mistake in how earned value was taken or 
improper accounting could cause performance to appear to be less 
efficient. The bottom line: more analysis is needed to know what is 
causing an unfavorable condition. Likewise, favorable cost or schedule 
performance may stem from errors in the EVM system, not necessarily 
from work’s taking less time than planned or overrunning its budget. 
Thus, not assessing the full meaning behind the indexes runs the risk 
of basing estimates at completion on unreliable data. 

Further, when using the CPI as a sanity check against the TCPI, if the 
TCPI is much greater than the current or cumulative CPI, then the 
analyst should discover whether this gain in productivity is even 
possible. If not, then the contractor is most likely being optimistic. 
A rule of thumb is that if the TCPI is more than 5 percent higher than 
the CPI, it is too optimistic. In addition, a CPI less than 1 is a 
cause for concern because the cumulative CPI tends not to improve but, 
rather, to decline after a program is 15 percent complete. 
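
The index calculations in table 36 and the TCPI sanity check just 
described can be illustrated with a few lines of arithmetic; the dollar 
inputs below are hypothetical: 

# Illustrative computation of CPI, SPI, and TCPI; all inputs are hypothetical.
bcws, bcwp, acwp = 1_000_000, 900_000, 1_100_000
bac, contractor_eac = 5_000_000, 5_200_000

cpi = bcwp / acwp                      # about 0.82: 82 cents of work per dollar spent
spi = bcwp / bcws                      # 0.90: 90 cents of work per dollar planned
bcwr = bac - bcwp                      # budgeted cost for work remaining
tcpi = bcwr / (contractor_eac - acwp)  # efficiency needed to achieve the contractor's EAC

# A TCPI more than about 5 percent above the CPI suggests the contractor's EAC
# assumes a productivity gain that past performance does not support.
if tcpi > cpi * 1.05:
    print("TCPI of {:.2f} exceeds CPI of {:.2f} by more than 5 percent".format(tcpi, cpi))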

An SPI different from 1.0 warrants more investigation to determine what 
effort is behind or ahead of schedule. To do this, one needs to examine 
the WBS to identify issues at the activity level associated with 
completing the work. Using this information, management could decide to 
reallocate resources, where possible, from activities that might be 
ahead of schedule (SPI greater than 1.10) to help activities that are 
struggling (SPI less than 0.90) to get back on track. The free float of 
activities that are slipping should also be analyzed to determine 
whether proactive action is needed so that resources are not lost in 
future activities. 

Performance reported early in a program tends to be a good predictor of 
how the program will perform later, because early control account 
budgets tend to have a greater probability of being achieved than those 
scheduled to be executed later. DOD’s contract analysis experience 
suggests that all contracts are frontloaded to some degree, simply 
because more is known about near-term work than far-term. To the extent 
possible, the IBR should check for this condition. 

In addition to the performance indexes, three other simple and useful 
calculations for assessing program performance are: 
 
* % planned = BCWS/BAC, 

* % complete = BCWP/BAC, and, 

* % spent = ACWP/BAC. 

Examining these formulas, one can see quickly whether a program is 
doing well or is in trouble. For example, if percent planned is much 
greater than percent complete, the project is significantly behind 
schedule. Similarly, if percent spent is much greater than percent 
complete, the project is significantly overrunning its budget. 
Moreover, if the percent of management reserve consumed is much higher 
than percent complete, the program is likely not to have sufficient 
budget to mitigate all risks. For example, if a program is 25 percent 
complete but has spent more than 50 percent of its management reserve, 
there may not be enough management reserve budget to cover remaining 
risks because, this early in the program, it is being consumed at twice 
the rate at which work is being accomplished. 
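
A short sketch of these percentage checks, using hypothetical 
cumulative values, shows how the comparisons described above can be 
made: 

# Illustrative percentage checks; all values are hypothetical.
bac = 10_000_000                      # budget at completion
bcws = 3_500_000                      # cumulative planned value to date
bcwp = 2_500_000                      # cumulative earned value to date
acwp = 3_200_000                      # cumulative actual costs to date
mr_total, mr_used = 800_000, 450_000  # management reserve budget and amount consumed

pct_planned = bcws / bac          # 35 percent of the work should have been done
pct_complete = bcwp / bac         # 25 percent of the work has been earned
pct_spent = acwp / bac            # 32 percent of the budget has been spent
pct_mr_used = mr_used / mr_total  # about 56 percent of management reserve consumed

if pct_planned > pct_complete:
    print("behind schedule")
if pct_spent > pct_complete:
    print("overrunning budget")
if pct_mr_used > pct_complete:
    print("management reserve consumed faster than work is accomplished")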

Graph the Data to Discover Trends: 

For reasons we discussed in chapter 10, EVM data should be analyzed 
graphically to see what trends are apparent. Performance trends provide 
valuable information about how a program has been doing in terms 
of cost and schedule. They also help in understanding performance, 
important for accurately predicting costs at completion. Knowing what 
has caused problems in the past can help determine whether they will 
continue in the future. 

Trend analysis should plot current and cumulative EVM data and track 
the use of management reserve for a complete view of program status and 
an indication of where problems exist. Typical EVM data trend plots 
that can help managers know what is happening in their programs are: 

* BAC and contractor EAC over the life of the contract; 

* historical, cumulative, and current cost and schedule variance 
trends; 

* CPI and SPI (cumulative and current); 

* monthly burn rate, or current ACWP; 

* TCPI versus CPI (cumulative and current); 

* format 3 baseline data; 

* projected versus actual staffing levels from format 4; and 

* management reserve allocations and burn rate. 
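
As one illustration of this kind of trend plotting, the sketch below 
graphs cumulative BCWS, BCWP, and ACWP with the matplotlib library; the 
monthly values are hypothetical: 

# Illustrative S-curve plot of cumulative EVM data; the monthly values are hypothetical.
import matplotlib.pyplot as plt

months = list(range(1, 9))
bcws = [10, 25, 45, 70, 100, 135, 175, 220]  # cumulative planned value ($ thousands)
bcwp = [8, 20, 38, 58, 82, 110, 140, 172]    # cumulative earned value ($ thousands)
acwp = [11, 27, 50, 78, 110, 148, 190, 235]  # cumulative actual cost ($ thousands)

plt.plot(months, bcws, label="BCWS (planned)")
plt.plot(months, bcwp, label="BCWP (earned)")
plt.plot(months, acwp, label="ACWP (actual)")
plt.xlabel("Month")
plt.ylabel("Cumulative dollars (thousands)")
plt.title("Cumulative EVM trend")
plt.legend()
plt.show()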

Plotting the BAC over the life of the contract will quickly show any 
contract rebaselines or major contract modifications. BACs that follow 
a stairstep trend mean that the program is experiencing changes or 
major overruns. Both should be investigated to see if the EVM data are 
still reliable. For example, if the contract has been modified, then an 
IBR may be necessary to ensure that the changes were incorporated and 
flowed down to the right control accounts. In figure 36, BAC for an 
airborne laser program has been plotted over time to show the effect of 
major contract modifications and program rebaselines. 

Figure 36: Understanding Program Cost Growth by Plotting Budget at 
Completion Trends: 

[Refer to PDF for image: line graph] 

Dollars plotted vs. time. The line represents budget at completion over 
time, with rebaselining and restructuring efforts specifically indicated. 

Source: GAO. 

Note: The trend examples in figures 36–38, shown for learning purposes, 
are drawn from GAO, Uncertainties Remain Concerning the Airborne 
Laser’s Cost and Military Utility, GAO-04-643R (Washington, D.C.: May 
17, 2004), pp. 17–20. 

[End of figure] 

The figure reveals a number of contract modifications, program 
restructurings, and rebaselines in the airborne laser program over the 
7 years 1997 to 2004. Looking at the plot line, one can quickly see 
that the program more than doubled in cost. The trend data also show 
instances of major change, making it easy to pinpoint exactly which 
CPRs should be examined to best understand the circumstances. 

In this example, cost growth occurred when the program team encountered 
major problems with manufacturing and integrating advanced optics and 
laser components. Initial cost estimates underestimated the complexity 
in developing these critical technologies, and funding was insufficient 
to cover these risks. To make matters worse, the team was relying on 
rapid prototyping to develop these technologies faster, and it 
performed limited subcomponent testing. These shortcuts resulted in 
substantial rework when parts failed during integration. 

Besides examining BAC trends, it is helpful to plot cumulative and 
current cost and schedule variances for a high-level view of how a 
program is performing. If downward trends are apparent, the next step 
is to isolate where these problems are in the WBS. Figure 37 shows 
trends of increasing cost and schedule variance associated with the 
airborne laser program. 

Figure 37: Understanding Program Performance by Plotting Cost and 
Schedule Variances: 

[Refer to PDF for image: multiple line graph] 

Dollars plotted vs. time. Lines represent schedule performance and cost 
performance. 

Source: GAO. 

Note: The trend examples in figures 36–38, shown for learning purposes, 
are drawn from GAO, Uncertainties Remain Concerning the Airborne 
Laser’s Cost and Military Utility, GAO-04-643R (Washington, D.C.: May 
17, 2004), pp. 17–20. 

[End of figure] 
 
In figure 37, cost variance steadily declined over fiscal year 2003, 
from an unfavorable $50 million to an almost $300 million overrun. At 
the same time, schedule variance also declined, but during the first 
half of the year it leveled off, after the program hired additional 
staff in March to meet schedule objectives. While the additional staff 
helped regain the schedule, they also caused the cost variance to 
worsen. 

Plotting various EACs along with the contractor’s estimate at 
completion is a very good way to see if the contractor’s estimate is 
reasonable. Figure 38, for example, shows expected cost overruns at 
contract completion for the airborne laser program. 

Figure 38: Understanding Expected Cost Overruns by Plotting Estimate at 
Completion: 

[Refer to PDF for image: multiple line graph] 

Dollars plotted vs. time. Lines represent: 
contractor variance at completion; 
GAO best case; 
GAO most likely case; 
GAO worst case. 

Source: GAO. 

Note: The trend examples in figures 37 and 38, shown for learning 
purposes, are drawn from GAO, Uncertainties Remain Concerning the 
Airborne Laser’s Cost and Military Utility, GAO-04-643R (Washington, 
D.C.: May 17, 2004), pp. 17–20. 

[End of figure] 

Figure 38 plots various EACs that GAO generated from the contractor’s 
EVM data. GAO’s independent EACs showed that an overrun of between $400 
million and almost $1 billion could be expected from recent program 
performance. The contractor, in contrast, was predicting no overrun at 
completion—despite the fact that the program had already incurred a 
cost overrun of almost $300 million (seen in figure 37). 

That the program was facing huge technology development problems made 
it highly unlikely that the contractor could finish the program with no 
additional cost variances. In fact, there was no evidence that the 
contractor could improve its performance enough to erase the almost 
$300 million cumulative cost variance. Knowing this, the reasonable 
conclusion was that the contractor’s estimate at completion was not 
realistic, given that it was adding more personnel to the contract and 
still facing increasing amounts of uncompleted work from prior years. 

Another way to check the reasonableness of a contractor’s estimate at 
completion is to compare the CPI, current and cumulative, with the TCPI 
to see if historical trends support the contractor’s EAC. 

Other trends that can offer insight into program performance include 
plotting the monthly burn rate, or ACWP. If the plotting shows a rate 
of increase, the analyst needs to determine whether the growth stems 
from the work’s becoming more complex as the program progresses or from 
overtime’s being initiated to make up for schedule delays. Reviewing 
monthly ACWP and BCWP trends can also help determine what is being 
accomplished for the amount spent. In the data in figures 37 and 38, 
for example, it was evident that the program was paying a large staff 
to make a technological breakthrough rather than paying its staff 
overtime just to meet schedule goals. It is important to know the 
reasons for variances, so management can make decisions about the best 
course of action. For the program illustrated in the figures, we 
recognized that since the airborne laser program was in a period of 
technology discovery that could not be forced to a specific schedule, 
any cost estimate would be highly uncertain. Therefore, we recommended 
that the agency develop a new cost estimate for completing technology 
development and perform an uncertainty analysis to quantify its level 
of confidence in that estimate. 

Other trend analyses include plotting CPR format 3 data over time to 
show whether budget is being moved to reshape the baseline. Comparing 
planned to actual staffing levels—using a waterfall chart to analyze 
month-to-month profiles—can help determine whether work is behind 
schedule for lack of available staff.[Footnote 78] This type of trend 
analysis can also be used to determine whether projected staffing 
levels shown in CPR format 4 represent an unrealistic expectation of 
growth in labor resources. 

Finally, plotting the allocation and burn rate of management reserve is 
helpful for tracking and analyzing risk. Since management reserve is a 
budget tool to help manage risks, analyzing its rate of allocation is 
important because when management reserve is consumed, any further risk 
that is realized can only be manifested as unfavorable cost variance. 
Accordingly, risks from the cost estimate uncertainty analysis should 
be compared against the management reserve allocation to understand 
where in the WBS risks are turning into issues. This analysis is a best 
practice because it further ties the cost estimating risk analysis with 
EVM. It can also prevent the handing out of budget whenever a program 
encounters a problem, ensuring that as more complicated tasks occur 
later in the program, management reserve will be available to mitigate 
any problems. Therefore, to meet this best practice, risks in the cost 
estimate should be identified up front and conveyed to the EVM 
analysts, so they can keep a lookout for risks in specific WBS 
elements. Thus, it is absolutely necessary to integrate cost estimating 
and EVM in order to have the right information to make good judgments 
about when to allocate management reserve. 

Review the Format 5 Variance Analysis: 

After determining which WBS elements are causing cost or schedule 
variances, examining the format 5 variance analysis can help determine 
the technical reasons for variances, what corrective action plans are 
in place, and whether or not the variances are recoverable. Corrective 
action plans for cost and schedule variances should be tracked through 
the risk mitigation process. In addition, favorable cost variances 
should be evaluated to see if they are positive as a result of 
performance without actual cost having been recorded. This can happen 
when accounting accruals lag behind invoice payments. Finally, the 
variance analysis report should discuss any contract rebaselines and 
whether any authorized unpriced work exists and what it covers. 

Examining where management reserve has been allocated within the WBS is 
another way to identify potential issues early on. An alarming 
situation arises if the CPR shows that management reserves are being 
used faster than the program is progressing toward completion. For 
example, management should be concerned if a program has used 80 
percent of its management reserves but has completed only 40 percent of 
its work. EVM experts agree that a program’s management reserves should 
be sufficient to mitigate identified program risk so that budget will 
always be available to cover unexpected problems. 

This is especially important toward the latter half of a program, when 
adequate management reserve is needed to cover problems during testing 
and evaluation. When management reserve is gone, any work that could 
have been budgeted from it can only manifest as additional cost 
overrun. And, when it is gone, the analyst should be alert to 
contractor requests to increase the contract value to avoid variances.

Project Future Performance: 
 
Identify The Work That Remains: 

Two things are needed to project future performance: the actual costs 
spent on completed work and the cost of remaining work. Actual costs 
spent on completed work are easy to determine because they are captured 
by the ACWP. The remaining work is determined by subtracting BCWP from 
BAC to derive the budgeted cost of work remaining. However, to be 
accurate, the EAC should take into account performance to date when 
estimating the cost of the remaining work. 

Calculate a Range of EACs and Compare to Available Funding: 

It is a best practice to develop more than one EAC, but determining an 
accurate EAC is difficult because EVM data can be used to develop a 
multitude of EACs. Picking the right EAC is challenging since the 
perception is that bad news about a contract’s performance could put a 
program and its management in jeopardy. By calculating a range of EACs, 
management can know a likely range of costs for completing the program 
and take action in response to the results. 

While plenty of EACs can be generated from the EVM data, each EAC is 
calculated with a generic index-based formula, similar to: 

EAC = ACWP (cumulative) + (BAC – BCWP (cumulative)) / efficiency index. 

The difference in EACs is driven by the efficiency index that is used 
to adjust the remaining work according to the program’s past cost and 
schedule performance. The idea in using the efficiency index is that 
how a program has performed in the past will indicate how it will 
perform in the future. The typical performance indexes include the CPI 
and SPI, but these could represent cumulative, current, or average 
values over time. In addition, the indexes could be combined to form a 
schedule cost index—as in CPI x SPI—which can be weighted to emphasize 
either cost or schedule impact. Further, EACs can be generated with 
various regression analyses in which the dependent variable is ACWP and 
the independent value is BCWP, a performance index, or time. Thus, many 
combinations of efficiency indexes can be applied to adjust the cost of 
remaining work. 
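
Applying the generic formula with several common efficiency factors 
produces a range of EACs mechanically. The sketch below uses 
hypothetical cumulative values and a hypothetical 3-month average CPI: 

# Illustrative range of index-based EACs; all inputs are hypothetical.
acwp = 4_000_000   # cumulative actual cost of work performed
bcwp = 3_600_000   # cumulative earned value
bcws = 4_200_000   # cumulative planned value
bac = 10_000_000   # budget at completion
cpi_3month = 0.85  # hypothetical 3-month average CPI

cpi = bcwp / acwp  # cumulative cost efficiency (0.90)
spi = bcwp / bcws  # cumulative schedule efficiency (about 0.86)
bcwr = bac - bcwp  # budgeted cost for work remaining

def eac(efficiency_index):
    """Generic index-based EAC: actual costs plus remaining work adjusted for efficiency."""
    return acwp + bcwr / efficiency_index

estimates = {
    "cumulative CPI (best case)": eac(cpi),
    "3-month average CPI": eac(cpi_3month),
    "CPI x SPI (worst case)": eac(cpi * spi),
}
for label, value in estimates.items():
    print("{}: ${:,.0f}".format(label, value))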

Table 37 summarizes findings from studies of which EAC efficiency 
factors are the best predictors, depending on where the program is in 
relation to its 
completion. The findings are based on extensive research that compared 
efficiency factors that appeared to best predict program costs. The 
conclusion was that no one factor was superior. Instead, the best EAC 
efficiency factor changes by the stage of the program. For example, 
the research found that assigning a greater weight to SPI is 
appropriate for predicting costs in the early stage of a program but 
not appropriate later on. SPI loses its predictive value as a program 
progresses and eventually returns to 1.0 when the program is complete. 
The research also found that averaging performance over a shorter 
period of time—3 months, for example—was more accurate for predicting 
costs than longer periods of time—such as 6 to 12 months—especially in 
the middle of a program, when costs are being spent at a greater rate. 

Table 37: Best Predictive EAC Efficiency Factors by Program Completion 
Status: 

EAC efficiency factor: CPI Cumulative; 
Percent complete: Early: 0%–40%: [Check]; 
Percent complete: Middle: 20%–80%: [Check]; 
Percent complete: Late: 60%–100%: [Check]; 
Comment: Assumes the contractor will operate at the same efficiency for 
remainder of program; typically forecasts the lowest possible EAC. 
 
EAC efficiency factor: CPI 3-month average; 
Percent complete: Early: 0%–40%: [Check]; 
Percent complete: Middle: 20%–80%: [Check]; 
Percent complete: Late: 60%–100%: [Check]; 
Comment: Weights current performance more heavily than cumulative past 
performance. 

EAC efficiency factor: CPI 6-month average; 
Percent complete: Early: 0%–40%: [Empty]; 
Percent complete: Middle: 20%–80%: [Check]; 
Percent complete: Late: 60%–100%: [Check]; 
Comment: Weights current performance more heavily than cumulative past 
performance. 

EAC efficiency factor: CPI 12-month average; 
Percent complete: Early: 0%–40%: [Empty]; 
Percent complete: Middle: 20%–80%: [Check]; 
Percent complete: Late: 60%–100%: [Check]; 
Comment: Weights current performance more heavily than cumulative past 
performance. 

EAC efficiency factor: CPI x SPI Cumulative; 
Percent complete: Early: 0%–40%: [Check]; 
Percent complete: Middle: 20%–80%: [Check]; 
Percent complete: Late: 60%–100%: [Empty]; 
Comment: Usually produces the highest EAC. 
 
EAC efficiency factor: CPI x SPI 6-month average; 
Percent complete: Early: 0%–40%: [Empty]; 
Percent complete: Middle: 20%–80%: [Check]; 
Percent complete: Late: 60%–100%: [Check]; 
Comment: A variation of this formula (CPI6 x SPI) has also proven 
accurate[A]. 

EAC efficiency factor: SPI Cumulative; 
Percent complete: Early: 0%–40%: [Check]; 
Percent complete: Middle: 20%–80%: [Empty]; 
Percent complete: Late: 60%–100%: [Empty]; 
Comment: Assumes schedule will affect cost also but is more accurate 
early in the program than later. 

EAC efficiency factor: Regression; 
Percent complete: Early: 0%–40%: [Check]; 
Percent complete: Middle: 20%–80%: [Empty]; 
Percent complete: Late: 60%–100%: [Empty]; 
Comment: Using CPI that decreases within 10% of its stable value can be 
a good predictor of final costs and should be studied further. 

EAC efficiency factor: Weighted; 
Percent complete: Early: 0%–40%: [Check]; 
Percent complete: Middle: 20%–80%: [Empty]; 
Percent complete: Late: 60%–100%: [Check]; 
Comment: Weights cost and schedule based on .x(CPI) + .x(SPI); 
statistically the most accurate, especially when using 50% CPI x 50% 
SPI[A]. 
 
Source: Industry. 

[A] According to DOD comments based on the work of David S. 
Christensen. 

[End of table] 

Other methods, such as the Rayleigh model, rely on patterns of manpower 
build-up and phase-out to predict final cumulative cost. This model 
uses a nonlinear regression analysis of ACWP against time to predict 
final cumulative cost and duration; it is known to produce a high-end 
EAC forecast. One benefit of using this model is that as long as actual 
costs are available, they can be used to forecast cumulative cost at 
completion and to assess overall cost and schedule risk. 

Relying on the CPI and SPI performance factors usually results in 
higher EACs if their values are less than 1.0. How much the cost will 
increase depends on the specific index and how many months are included 
in determining the factor. Research has also shown that once a program 
is 20 percent complete, the cumulative CPI does not change by more than 
10 percent from its value at that point and most often tends to get 
worse as completion nears. Therefore, projecting an EAC by using the 
cumulative CPI efficiency factor tends to generate a best-case EAC. 

In contrast, the schedule cost index—some form of CPI x SPI—takes the 
schedule into account to forecast future costs. This index produces an 
even higher EAC by compounding the effect of the program’s being behind 
schedule and over cost. The theory behind this index is that to get 
back on schedule will require more money because the contractor will 
either have to hire more labor or pay for overtime. As a result, the 
schedule cost index forecast is often referred to as a worst-case 
predictor.
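
The index-based approach described above is commonly written as EAC = 
ACWP + (BAC - BCWP) / efficiency factor. The following sketch, written 
in Python with purely hypothetical BAC, BCWP, ACWP, and BCWS values, 
shows how the cumulative CPI yields a best-case EAC while the CPI x SPI 
schedule cost index yields a worst-case EAC. 

# Minimal sketch of index-based EAC formulas (hypothetical values).
# EAC = ACWP + (BAC - BCWP) / efficiency factor

BAC = 1000.0   # budget at completion ($ millions, hypothetical)
BCWP = 400.0   # budgeted cost for work performed (earned value)
ACWP = 450.0   # actual cost of work performed
BCWS = 440.0   # budgeted cost for work scheduled

CPI = BCWP / ACWP   # cost performance index
SPI = BCWP / BCWS   # schedule performance index
work_remaining = BAC - BCWP

# Best-case EAC: future work performed at the cumulative CPI.
eac_cpi = ACWP + work_remaining / CPI

# Worst-case EAC: the schedule cost index compounds cost and
# schedule inefficiency.
eac_sci = ACWP + work_remaining / (CPI * SPI)

print(f"CPI = {CPI:.2f}, SPI = {SPI:.2f}")
print(f"Best-case EAC (CPI):        {eac_cpi:,.1f}")
print(f"Worst-case EAC (CPI x SPI): {eac_sci:,.1f}")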

A more sophisticated EAC method adds to the actual costs to date the 
cost of the remaining work adjusted by a cost growth factor, plus a 
cost impact for probable schedule delays. This method takes into 
account cost, schedule, and technical risks, such as test failures or 
other external factors that have occurred in past programs, and relies 
on simulation to determine their probable effect. 
Finally, an integrated schedule can be used, in combination with risk 
analysis data and Monte Carlo simulation software, to estimate schedule 
risk and the EAC (chapter 18, step 10, has more details). 
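
A minimal sketch of such a risk-adjusted EAC follows; the cost growth 
factor range, delay distribution, and monthly delay cost are 
hypothetical assumptions used only to illustrate how a growth factor on 
remaining work can be combined with a simulated schedule delay, in the 
spirit of the simulation-based approach described above. 

# Minimal sketch: EAC = actual costs to date
#   + remaining work x cost growth factor
#   + cost impact of a probable schedule delay (simulated).
import random

random.seed(1)

ACWP = 450.0    # actual cost of work performed to date (hypothetical)
BAC = 1000.0    # budget at completion
BCWP = 400.0    # earned value to date
monthly_delay_cost = 25.0   # hypothetical cost of each month of delay

trials = []
for _ in range(10000):
    # Hypothetical distributions standing in for program risk data.
    growth = random.triangular(1.0, 1.4, 1.1)   # growth on remaining work
    delay_months = random.triangular(0, 6, 2)   # probable schedule delay
    eac = ACWP + (BAC - BCWP) * growth + delay_months * monthly_delay_cost
    trials.append(eac)

trials.sort()
print(f"50th percentile EAC: {trials[len(trials) // 2]:,.0f}")
print(f"80th percentile EAC: {trials[int(len(trials) * 0.8)]:,.0f}")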

EACs should be created not only at the program level but also at lower 
levels of the WBS. By doing this, areas that are performing poorly will 
not be masked by other areas doing well. If the areas performing worse 
represent a large part of the BAC, then this method will generate a 
higher and more realistic EAC. Once a range of EACs has been developed, 
the results should be analyzed to see if additional funding is 
required. Independent EACs provide a credible rationale for requesting 
additional funds to complete the program, if necessary. This 
information is critical for better program planning and avoiding a 
situation in which work must be stopped because funds have been 
exhausted. Early warning of impending funding issues enables management 
to take corrective action to avoid any surprises. 

Determine Whether the Contractor’s EAC Is Feasible: 

While EVM data are useful for predicting independent EACs, the 
contractor should also look at other information to develop its EAC. In 
particular, the contractor should: 
 
* evaluate its performance on completed work and compare it to the 
remaining budget, 

* assess commitment values for material needed to complete remaining 
work, and, 

* estimate future conditions to generate the most accurate EAC. 
 
Further, the contractor should periodically develop a comprehensive 
EAC, using all information available to develop the best estimate 
possible. This estimate should also take into account an assessment of 
risk based on technical input from the team. Once the EAC is developed, 
it can be compared for realism against other independent EACs and 
historical performance indexes. 

A case in point is the Navy’s A-12 program, cancelled in January 1991 
by the Secretary of Defense because estimates based on EVM of the cost 
to complete it showed substantial overruns. Many estimates had been 
developed for the program. The program manager had relied on the lower 
EAC, even though higher EACs had been calculated. The inquiry into the 
A-12 program cancellation concluded that management tended to suppress 
bad news and that this was not a unique problem but common within DOD. 

Since a contractor typically uses methods outside EVM to develop an 
EAC, EVM and risk analysis results can be used to assess the EAC’s 
reliability. While the contractor’s EAC tends to account for special 
situations and circumstances that cannot be accurately captured by 
looking only at statistics, it also tends to include optimistic views 
of the future. One way to assess the validity of the EAC is to compare 
the TCPI to the CPI. Because the TCPI represents the ratio of remaining 
work to remaining funding and indicates the level of performance the 
contractor must achieve and maintain to stay within funding goals, it 
can be a good benchmark for assessing whether the EAC is reasonable. 
If the TCPI is greater than the CPI, the contractor is assuming that 
productivity will be higher in the future. To 
determine whether this is a reasonable assumption, analysts should look 
for supporting evidence that backs up this claim. 

A typical rule of thumb is that if the CPI and TCPI differ by more than 
5 percent to 10 percent, and the program is more than 20 percent 
complete, the contractor’s EAC is too optimistic. For example, if a 
program’s TCPI is 1.2 and the cumulative CPI is 0.9, it is not 
statistically expected that the contractor can improve its performance 
that much through the remainder of the program. To meet the EAC cost, 
the contractor must produce $1.20 worth of work for every $1.00 spent. 
Given the contractor’s historical performance of $0.90 worth of work 
for every $1.00 spent, it is highly unlikely that it can improve its 
performance that much. One could conclude that the contractor’s EAC is 
unrealistic and that it underestimates the final cost. 
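
This realism check is easy to mechanize. The sketch below, using the 
hypothetical figures from the example above, computes the TCPI implied 
by the contractor’s EAC and flags the estimate as optimistic when the 
program is more than 20 percent complete and the TCPI exceeds the CPI 
by more than about 10 percent. 

# Minimal sketch of the TCPI-versus-CPI realism check (hypothetical values).
BAC = 1000.0            # budget at completion
BCWP = 300.0            # earned value to date
ACWP = 333.3            # actual costs to date (CPI of about 0.9)
contractor_eac = 916.6  # contractor's estimate at completion (hypothetical)

cpi = BCWP / ACWP
tcpi = (BAC - BCWP) / (contractor_eac - ACWP)  # remaining work / remaining funds
percent_complete = BCWP / BAC
gap = (tcpi - cpi) / cpi

print(f"CPI = {cpi:.2f}, TCPI = {tcpi:.2f}, gap = {gap:.0%}")
if percent_complete > 0.20 and gap > 0.10:
    print("Contractor EAC appears optimistic: the required future "
          "efficiency far exceeds demonstrated performance.")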

Another finding from more than 500 studies is that once a contract is 
more than 15 percent complete, the overrun at completion will usually 
be more than the overrun already incurred.[Footnote 79] Looking again 
at the example of the airborne laser program discussed around figures 
37–38, we see that while the contractor predicted no overrun at 
completion, there was a cumulative unfavorable cost variance of almost 
$300 million. According to this research, one could conclude 
that the program would overrun by $300 million or more. Using EVM data 
from the program, we predicted that the final overrun could be anywhere 
between $400 million and almost $1 billion by the time the program was 
done. 

Calculate an Independent Date for Program Completion: 

Dollars can be reallocated to future control accounts by management, 
but time cannot. If a cost underrun occurs in one control account, the 
excess budget can be transferred to a future account. But if a control 
account is 3 months ahead and another is 3 months behind, time cannot 
be shifted from the one account to the other to fix the schedule 
variance. Given this dynamic, the schedule variance should be examined 
in terms of the network schedule’s critical and near-critical paths to 
determine what specific activities are behind schedule, and a schedule 
risk analysis should determine which activities may cause the schedule 
to extend in the future. 

In the simplest terms, the schedule variance describes what was or was 
not accomplished but does not provide an accurate assessment of 
schedule progress. To project when a program will finish, management 
must know whether the activities that are contributing to a schedule 
variance are on the critical path or may ultimately be on the path, if 
mitigation is not pursued. If they are, then any slip in the critical 
path activities will result in a slip in the program, and sufficient 
slippage in near-critical paths may ultimately have the same result. 
Therefore, the program manager should analyze the activities undergoing 
delay to see if they may ultimately delay the program. If they may, 
then the program may be in danger of not finishing on schedule and an 
analysis, generally a schedule risk analysis, should be conducted to 
determine the most likely completion date. In addition, a schedule risk 
analysis (described in appendix X) should be made periodically to 
assess changes to the critical path and explain schedule reserve 
erosion and mitigation strategies for keeping the program on schedule. 

Provide Analysis To Management: 

The ability to act quickly to resolve program problems depends on 
having an early view of what is causing them. When management has 
accurate progress assessments, it has a better picture of program 
status and can make better decisions. When problems are identified, 
they should be captured within the program’s risk management process so 
that someone can be assigned responsibility for tracking and correcting 
them. 

In addition, using information from the independent EACs and the 
contractor’s EAC, management should decide whether additional program 
funding should be requested and, if so, make a convincing case for more 
funds. Management should also be sure to link program outcomes to award-
fee objectives. For example, management can look back to earlier CPRs 
to see if they objectively depicted contract status and predicted 
certain problems. This approach supports performance-based reporting 
and rewards contractors for managing effectively and reporting actual 
conditions, reducing the need for additional oversight. 

Continue EVM until The Program Is Complete: 

EVM detail planning is never ending and continues until the program is 
complete. Converting planning packages into detailed work packages so 
that near-term effort is always detailed is called “rolling wave” 
planning. This approach gives the contractor flexibility for planning 
and incorporating lessons learned. 

Rolling-wave planning that is based solely on calendar dates is an 
arbitrary practice that may result in insufficient detail. When this 
approach is used, work is planned in 6-month increments; all effort 
beyond a 6-month unit is held in a planning package. Each month, near-
term planning packages are converted to detailed work packages to 
ensure that 6 months of detailed planning are always available to 
management. This continues until all work has been planned in detail 
and the program is complete. A better method is to plan work in detail 
through a significant technical event, such as the preliminary design 
review. By 
using technical milestones rather than calendar dates, better cost, 
schedule, and technical performance integration can be achieved, as 
depicted in figure 39. 

Figure 39: Rolling Wave Planning: 

[Refer to PDF for image: s curve graph] 

Resources plotted vs. time. 

Indicated on the graph are: 
Preliminary design review; 
Critical design review; 
Project end; 
Initial planning detail; 
Future work in planning packages; 
Detail planning based on technical objectives; 
Facilitates event-based reporting incentives; 
Better integration of technical, schedule, and cost performance and 
risk management. 

Source: Abba Consulting. 

[End of figure] 

The unwritten rule that 1 month of detailed planning should be added to 
previously detailed planning is related more to creating a baseline 
than to managing to one, and managing to the baseline is the heart of 
EVM. Therefore, managing to a technical event is the best practice and 
yields the best EVM benefits. 

Continually planning the work supports an EVM system that will help 
management complete the program within the planned cost and proposed 
schedule. This is important, since EVM data are essential to effective 
program management and can be used to answer basic program management 
questions such as those in table 38. 

Table 38: Basic Program Management Questions That EVM Data Help Answer: 

Question: How much progress has the program made so far? 
Answer: Percent complete. 

Question: What are the significant deviations from the plan? 
Answer: 
* Cost variance; 
* Schedule variance; 
* Variance at completion. 

Question: How efficiently is the program meeting cost and schedule 
objectives? 
Answer: 
* Cost performance index (CPI); 
* Schedule performance index (SPI). 

Question: Are cost and schedule trends getting better or worse? 
Answer: Plotting cost and schedule variance, CPI, SPI, etc. 

Question: Will the program be completed within the budget? 
Answer: To complete performance index (TCPI) for the budget at 
completion (BAC). 

Question: Is the contractor’s estimate at completion (EAC) reasonable? 
Answer: TCPI for the contractor’s EAC. 

Question: What other estimates are reasonable for completing the 
authorized scope of work? 
Answer: Independent EACs using statistical forecasting techniques based 
on various efficiency factors. 

Question: What action will bring the program back on track? 
Answer: Acting on format 5 variance analysis information. 

Source: © 2003, Society of Cost Estimating and Analysis (SCEA), “Earned 
Value Management Systems”. 

From questions such as those in table 38, reliable EVM data can help 
inform the most basic program management needs. The questions also 
provide an objective way of measuring progress so that accurate 
independent assessments of EACs can be developed and presented to 
stakeholders. 
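
Most of the answers in table 38 reduce to a few arithmetic 
relationships among the budgeted cost for work scheduled (BCWS), the 
budgeted cost for work performed (BCWP), the actual cost of work 
performed (ACWP), and the BAC. The following sketch, using hypothetical 
monthly values, computes the core metrics an analyst would plot or 
tabulate. 

# Minimal sketch of the core EVM metrics behind table 38 (hypothetical values).
BAC = 500.0    # budget at completion
BCWS = 200.0   # budgeted cost for work scheduled (planned value)
BCWP = 180.0   # budgeted cost for work performed (earned value)
ACWP = 210.0   # actual cost of work performed

cost_variance = BCWP - ACWP        # negative means over cost
schedule_variance = BCWP - BCWS    # negative means behind schedule
cpi = BCWP / ACWP                  # cost efficiency
spi = BCWP / BCWS                  # schedule efficiency
percent_complete = BCWP / BAC
eac = ACWP + (BAC - BCWP) / cpi    # simple index-based EAC
vac = BAC - eac                    # variance at completion
tcpi_bac = (BAC - BCWP) / (BAC - ACWP)  # efficiency needed to meet the BAC

print(f"CV = {cost_variance:.1f}, SV = {schedule_variance:.1f}, "
      f"CPI = {cpi:.2f}, SPI = {spi:.2f}")
print(f"{percent_complete:.0%} complete, EAC = {eac:.1f}, "
      f"VAC = {vac:.1f}, TCPI(BAC) = {tcpi_bac:.2f}")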

16. Best Practices Checklist: Managing Program Costs: Execution: 
 
* An IBR verified that the baseline budget and schedule captured the 
entire scope of work, risks were understood, and available and planned 
resources were adequate. 
- Separate IBRs were conducted at the prime contractor and all major 
subcontractors. 
- A performance measurement baseline assessment made a 
comprehensive and value-added review of control accounts. 
-- Before award, or not more than 6 months after, an IBR categorized 
risks by severity and provided team training. 
-- Work definition (including provisions for rework and retesting), 
schedule integration, resource identification, earned value 
measures, and baseline validation were matured and reviewed. 
-- Interviewers used a template in discussions with control account 
managers and identified where additional training was needed. 
-- An action plan for assigning responsibility for handling risks was 
developed, and a final program risk rating was based on a summary 
of all identified risks. 
-- Management reserve was set aside that covered identified risks and 
care was taken to include risks identified during the IBR in the risk 
management plan. 
-- An EVM analyst monitored corrective action requests for closure. 
-- A memorandum for the record described the IBR findings. 

* A contract performance report summarized EVM data. 
- The data were reviewed monthly to track program progress, risks, and 
plans. 
- Management used the data to 
-- integrate cost and schedule performance data with technical 
measures; 
-- identify the magnitude and effect of problems causing significant 
variances; 
-- inform higher management of valid and timely program status and 
project future performance. 
- Format 1 of the CPR reported data to at least level 3 of the WBS, and 
format 5 explained variances and the contractor’s plans for fixing 
them. 

* Program managers analyzed EVM data monthly and sequentially for 
variances and EACs. 
- The EVM data were checked for validity and anomalies. 
- Performance indexes were analyzed and plotted for trends and 
variances. 
- Schedule variances were analyzed against the most recently statused 
schedule to see if problems were occurring on or near the critical 
path. 
- Management reserve allocations in the WBS were examined and 
compared against risks identified in the cost estimate. 
- A range of EACs was developed, using a generic index-based formula or 
relying on probable cost growth factors on remaining work, combined 
with an integrated cost schedule risk analysis. 
- An independent date for program completion was determined, using 
schedule risk analysis that identifies which activities need to be 
closely 
monitored. 
- Senior management used EVM data to answer basic program questions. 

[End of Chapter 19] 

Chapter 20: Managing Program Costs: Updating: 

Programs should be monitored continuously for their cost effectiveness 
by comparing planned and actual performance against the approved 
program baseline. In addition, the cost estimate should be updated with 
actual costs so that it is always relevant and current. The continual 
updating of the cost estimate as the program matures not only results 
in a higher-quality estimate but also gives the opportunity to incorporate 
lessons learned. Future estimates can benefit from the new knowledge. 
For example, cost or schedule variances resulting from incorrect 
assumptions should always be thoroughly documented so as not to repeat 
history. Finally, actual cost and technical and historic schedule data 
should be archived in a database for use in supporting future 
estimates. 

Most programs, especially those in development, do not remain static; 
they tend to change in the natural evolution of a program. Developing a 
cost estimate should not be a one-time event but, rather, a recurrent 
process. Before changes are approved, however, they should be examined 
for their advantages and effects on the program cost. If changes are 
deemed worthy, they should be managed and controlled so that the cost 
estimate baseline continuously represents the new reality. Effective 
program and cost control requires ongoing revisions to the cost 
estimate, budget, and projected estimates at completion. 

Incorporating Authorized Changes Into The Performance Measurement 
Baseline: 

While the overarching goal of the 32 ANSI guidelines is to maintain the 
integrity of the baseline and the resulting performance measurement 
data, changes are likely throughout the life of the program, so the 
performance measurement baseline should be updated to always reflect 
current requirements and changes in scope. Some 
changes may be simple, such as modifying performance data to correct 
for accounting errors or other issues that can affect the accuracy of 
the EVM data. Other changes can be significant, as when major events or 
external factors beyond the program manager’s control result in changes 
that will greatly affect the performance measurement baseline. Key 
triggers for change include: 

* contract modifications, including engineering change proposals; 

* shifting funding streams; 

* restricting funding levels; 

* major rate changes, including overhead rates; 
 
* changes to program scope or schedule; 

* revisions to the acquisition plan or strategy; and 

* executive management decisions. 

Since the performance measurement baseline should always reflect the 
most current plan for accomplishing authorized work, incorporating 
changes accurately and in a timely manner is especially important for 
maintaining the effectiveness of the EVM system. Table 39 describes the 
ANSI guidelines with regard to correctly revising the performance 
measurement baseline. 

Table 39: ANSI Guidelines Related to Incorporating Changes in an EVM 
System: 
 
Guideline: Incorporate authorized changes in a timely manner, recording 
their effects in budgets and schedules; in the directed effort before 
negotiating a change, base the changes on the amount estimated 
and budgeted to the program organizations; 
Description: Incorporating authorized changes quickly maintains the 
performance measurement baseline’s effectiveness for managing and 
controlling the program; therefore, authorized changes in the baseline 
should be incorporated in a documented, disciplined, and timely manner 
so that budget, schedule, and work remain coupled for true performance 
measurement. The contractor will develop its best estimate for planning 
and budgeting into changes not yet negotiated; when changes are 
incorporated, existing cost and schedule variances should not be 
arbitrarily eliminated, but economic price and rate adjustments may be 
made as appropriate. 
 
Guideline: Reconcile current budgets to prior budgets in terms of 
changes to the authorized work and plan the effort in the detail needed 
by management for effective control; 
Description: When budget revisions can be reconciled, the integrity of 
the performance measurement baseline can be verified; budget changes 
should be controlled and understood in terms of scope, resources, and 
schedule so the baseline reflects current levels of authorized work. 
Budget revisions should also be traceable to authorized control account 
budgets; if additional in-scope work has been identified, management 
reserve can augment existing control account budgets. 
 
Guideline: Control retroactive changes to records pertaining to work 
performed that would change previously reported amounts for actual 
costs, earned value, or budgets; 
Description: To avoid masking historic variance trends needed to 
project estimates at completion, retroactive changes need to be 
controlled; retroactive adjustments to costs should happen only as a 
result of routine accounting adjustments—e.g., change orders that have 
not been priced, rate changes, and economic price adjustments—customer-
directed changes, or data entry corrections. Limiting retroactive 
changes to these conditions ensures baseline integrity and accurate 
performance measurement data. 
 
Guideline: Prevent revisions to the program budget except for 
authorized changes; 
Description: If changes are not made within a controlled process, the 
integrity of performance trend data may be compromised and 
understanding of overall program status will be delayed; to maintain 
baseline integrity, unauthorized revisions to the performance 
measurement baseline should be prevented. All changes must be approved 
and implemented following a well-defined baseline management control 
process; this avoids implementing a budget baseline that is greater 
than the program budget. Only in the situation of an overtarget 
baseline should the performance budget or schedule objectives exceed 
the program plan. 
 
Guideline: Document changes to the performance measurement baseline; 
Description: Properly maintaining the performance measurement baseline 
enables control account managers to accurately measure performance; it 
should always reflect the most current plan for accomplishing the work. 
All authorized changes should be quickly incorporated; before any new 
work begins, all planning documents should be updated to maintain 
the EVM system’s integrity. 
 
Source: Adapted from National Defense Industrial Association (NDIA) 
Program Management Systems Committee (PMSC), ANSI/EIA-748-A Standard 
for Earned Value Management Systems Intent Guide (Arlington, Va.: 
January 2005). 

[End of table] 

It is also important to note that a detailed record of the changes made 
to the performance measurement baseline should be established and 
maintained. Doing so makes it easy to trace all changes to the program 
and lessens the burden on program personnel when compiling this 
information for internal and external program audits, EVM system 
surveillance reviews, and updates to the program cost estimate. If 
changes are not recorded and maintained, the program’s performance 
measurement baseline will not reflect reality. The performance 
measurement baseline will become outdated and the data from the EVM 
system will not be meaningful. Case study 47 highlights a program in 
which this occurred. 

Case Study 47: Maintaining Performance Measurement Baseline 
Data, from National Airspace System, GAO-03-343: 
 
The Federal Aviation Administration (FAA) obtained monthly cost 
performance reports from the contractor on the Standard Terminal 
Automation Replacement System (STARS). The agency should have been able 
to use the reports for overseeing the contractor’s performance and 
estimating the program’s remaining development costs. FAA did not use 
these reports, however, because they were not current. Their central 
component, the performance measurement baseline—which established 
performance, cost, and schedule milestones for the contract—had not 
been updated since May 2000 and therefore did not incorporate the 
effects of later contract modifications. 

For example, the September 2002 cost performance report did not reflect 
FAA’s March 2002 reduction in STARS’ scope from 188 systems to 74 
systems, and it did not include the cost of new work that FAA 
authorized between May 2000 and September 2002. Consequently, the 
report indicated that STARS was on schedule and within 1 percent of 
budget, even though—compared to the program envisioned in May 2000—FAA 
was now under contract to modernize fewer than half as many facilities 
at more than twice the cost per facility. 

FAA had not maintained and controlled the baseline because, according 
to program officials, the program was “schedule driven.” Without a 
current, valid performance management baseline, FAA could not compare 
what the contractor had done with what the contractor had agreed to do. 
And, because the baseline had not been maintained and was not aligned 
with the program’s current status, the reports were not useful for 
evaluating the contractor’s performance or for projecting the 
contract’s remaining costs. Therefore, FAA lacked accurate, valid, 
current data on the STARS program’s costs and progress. Without such 
data, FAA was limited in its ability to effectively oversee the 
contractor’s performance and reliably estimate future costs. 

Source: GAO, National Airspace System: Better Cost Data Could Improve 
FAA’s Management of the Standard Terminal Automation Replacement 
System, GAO-03-343, Jan. 31, 2003. 

[End of case study] 

The performance measurement baseline should be the official record of 
the current program plan. If it is updated in a timely manner to 
reflect inevitable changes, it can provide valuable management 
information that yields all the benefits discussed in chapter 18. 

Using EVM System Surveillance To Keep the Performance Measurement
Baseline Current: 

Surveillance is reviewing a contractor’s EVM system as it is applied to 
one or more programs. Its purpose is to focus on how well a contractor 
is using its EVM system to manage cost, schedule, and technical 
performance. For instance, surveillance checks whether the contractor’s 
EVM system: 

* summarizes timely and reliable cost, schedule, and technical 
performance information directly 
from its internal management system; 
 
* complies with the contractor’s implementation of ANSI guidelines; 

* provides timely indications of actual or potential problems by 
performing spot checks, sample data traces, and random interviews; 

* maintains baseline integrity; 

* gives information that depicts actual conditions and trends; 

* provides comprehensive variance analyses at the appropriate levels, 
including corrections for cost, schedule, technical, and other problem 
areas; 

* ensures the integrity of subcontractors’ EVM systems; 

* verifies progress in implementing corrective action plans to mitigate 
EVM system deficiencies; and, 

* discusses actions taken to mitigate risk and manage cost and schedule 
performance. 

Effective surveillance ensures that the key elements of the EVM process 
are maintained over time and on subsequent applications. The two goals 
of EVM system surveillance are to ensure that the contractor is 
following its own corporate processes and procedures and to confirm 
that those processes and procedures continue to satisfy the ANSI 
guidelines. OMB has endorsed the NDIA surveillance guide we listed in 
tables 3 and 32 to assist federal agencies in developing and 
implementing EVM system surveillance practices, which include: 

* establishing a surveillance organization, 

* developing an annual corporate-level surveillance plan, 

* developing a program-level surveillance plan, 

* executing the program surveillance plan, and, 
 
* managing system surveillance based on program results[Footnote 80]. 

Establishing a Surveillance Organization: 
 
An organization must have designated authority and accountability for 
EVM system surveillance to assess how well a contractor applies its EVM 
system relative to the ANSI guidelines. Surveillance organizations 
should be independent of the programs they assess and should have 
sufficient experience in EVM. These requirements apply to all 
surveillance organizations, whether internal or external to the agency, 
such as consultants. Table 40 further describes the elements of an 
effective surveillance organization. 

Table 40: Elements of an Effective Surveillance Organization: 

Element: Independent organizational structure; 
Description: The surveillance organization reports to a management 
level different from that of the programs it surveys; it is 
independent to ensure that its findings are objective and that it will 
identify systemic issues on multiple programs; it has sufficient 
authority to resolve issues and typically rests at an agency’s higher 
levels. 

Element: Organizational charter; 
Description: The organization’s charter is defined through agency 
policy, outlining its role, responsibilities, resolution process, and 
membership; responsibilities include developing annual surveillance 
plans, appointing surveillance review team leaders, assigning resources 
for reviews, communicating surveillance findings, tracking findings to 
closure, developing and maintaining databases of surveillance measures, 
and recommending EVM system process and training to fix systemic 
findings. 
 
Element: Membership consistent with chartered responsibilities; 
Description: The organization’s staff are consistent with its chartered 
responsibilities; their key attributes include multidisciplinary 
knowledge of the agency and its programs, practical experience in using 
EVM, good relationships with external and internal customers, and 
strong support of EVM systems compliance. 
 
Source: Adapted from National Defense Industrial Association (NDIA) 
Program Management Systems Committee (PMSC), Program Management Systems 
Committee Surveillance Guide (Arlington, Va.: October 2004). 

[End of table] 

OMB states that full implementation of EVM includes performing periodic 
system surveillance reviews to ensure that the EVM system continues to 
meet the ANSI guidelines. Periodic surveillance therefore subjects 
contractors’ EVM systems to ongoing government oversight. 

DCMA, a DOD support agency that provides a range of acquisition 
management services, monitors contractor performance through data 
tracking and analysis, onsite surveillance, and tailored support to 
program managers. DCMA also leads EVM system validation reviews before 
contract award, supports programs with monthly predictive EVM analysis, 
and participates in IBRs as requested. 

Unlike DOD, however, nonmilitary agencies do not have the equivalent of 
a DCMA, and since DCMA does not have enough staff to cover all DOD 
demands, it is not possible for all nonmilitary agencies to ask DCMA to 
provide their surveillance. Therefore, they often hire outside 
organizations or establish an independent surveillance function, such 
as an inspector general. Without an independent surveillance function, 
agencies’ abilities to use EVM as intended may be hampered, since 
surveillance monitors problems with the performance measurement 
baseline and EVM data. If these kinds of problems go undetected, EVM 
data may be distorted and may not be meaningful for decision making. 

Developing a Corporate Surveillance Plan: 

An annual corporate-level surveillance plan should contain a list of 
programs for review. The plan’s objective is to address, over the 
course of the year, the question of whether the contractor is applying 
the full content of its EVM system relative to the 32 ANSI guidelines. 
The surveillance organization therefore should have the utmost 
flexibility to schedule its reviews so as not to interfere with major 
program events. Surveillance findings may also rely on the results of 
other related reviews, such as reviews by DCMA or DCAA or other 
external organizations. Table 41 lists the key processes for each of 
the 32 ANSI guidelines. 

Table 41: Key EVM Processes across ANSI Guidelines for Surveillance: 
 
Process: Organizing; 
Applicable ANSI guideline: 1, 2, 3, 5. 

Process: Scheduling; 
Applicable ANSI guideline: 6, 7. 

Process: Work and budget authorization; 
Applicable ANSI guideline: 8, 9, 10, 11, 12, 14, 15. 

Process: Accounting; 
Applicable ANSI guideline: 16, 17, 18, 20, 22, 30. 

Process: Indirect management; 
Applicable ANSI guideline: 4, 8, 13, 19, 24, 27. 

Process: Managerial analysis; 
Applicable ANSI guideline: 22, 23, 25, 26, 27. 

Process: Change incorporation; 
Applicable ANSI guideline: 28, 29, 30, 31, 32. 

Process: Material management; 
Applicable ANSI guideline: 21 (9, 10, 12, 22, 23, 27). 

Process: Subcontract management; 
Applicable ANSI guideline: (2, 9, 10, 12, 16, 22, 23, 27). 

Source: DCMA. 

Note: Guidelines in parentheses are cross process guidelines. 

[End of table] 

In addition to addressing the 32 ANSI guidelines, senior management may 
ask the surveillance organization to focus its review on specific 
procedures arising from government program office concerns, interest in 
a particular process application, or risks associated with remaining 
work. This enables the surveillance organization to concentrate on 
processes that are the most relevant to the program phase. For example, 
 
* a surveillance review of the change incorporation process would be 
more appropriate for a program in which a new baseline had recently 
been implemented than for a program that had just started and had not 
undergone any changes (reviewing the work authorization process would 
be more beneficial); 

* a surveillance review of the EAC process would yield better insight 
to a development program in which technological maturation was the 
force behind growing EAC trends than it would to a production program 
that had stable EAC trends; 

* although the goal is to review all 32 ANSI guidelines each year, if a 
program were almost complete, it would not make sense to focus on work 
authorization, since this process would not then be relevant. 

In line with the approach for selecting EVM processes to concentrate 
on, the surveillance organization should select candidate programs by 
the risk associated with completing the remaining work, so that 
surveillance can be value-added. To facilitate selection, it is 
important to evaluate the risks associated with each program. Table 42 
outlines some risk factors that may warrant program surveillance. 

Table 42: Risk Factors That Warrant EVM Surveillance: 
 
Risk factor: Baseline resets; 
Description: Programs experiencing frequent baseline resets need 
additional monitoring, since they often result from poor planning or a 
change in work approach that is causing significant schedule or 
technical challenges; surveillance of change control and EAC benefits 
such programs by ensuring that changes are correctly implemented and 
EVM data are reliable for making EAC projections. 

Risk factor: Contract phase and type; 
Description: Development contracts tend to be higher-risk and are 
therefore often good candidates for surveillance; production or follow-
on contracts are usually lower-risk and therefore benefit less from 
surveillance. 
 
Risk factor: Contract value; 
Description: The higher the contract dollar value, the more appropriate 
the program for frequent EVM surveillance. 
 
Risk factor: Significant cost or schedule variance; 
Description: Programs with significant, unfavorable cost or schedule 
variances should be reviewed often; surveillance can help identify 
problems with baseline planning that may give insight into how to take 
effective corrective action. 
 
Risk factor: Nature of remaining work; 
Description: The technical content of remaining work should be reviewed 
to ensure 
that the most value-added EVM processes and guidelines are selected for 
surveillance. 

Risk factor: Volume or amount of remaining work; 
Description: New efforts tend to benefit more from surveillance than 
those that are near completion. 

Risk factor: Program office experience; 
Description: Program office experience in implementing and using EVM 
processes may influence the selection of programs to survey; program 
offices lacking 
experience may implement the processes incorrectly, increasing the risk 
of generating unreliable program data. 

Risk factor: Time since last review; 
Description: If it has been a long time since the last surveillance 
review, the program should be selected for surveillance. 

Risk factor: Findings or concerns from prior reviews; 
Description: Results from prior surveillance reviews may justify 
additional monitoring. 
 
Risk factor: Effectiveness of suppliers’ and subcontractors’ 
surveillance process; 
Description: How well a program’s supplier or subcontractor implements 
its EVM process may influence the selection of programs to review. 
 
Source: © 2004 National Defense Industrial Association (NDIA) Program 
Management Systems Committee (PMSC), Program Management Systems 
Committee Surveillance Guide (October 2004). 

[End of table] 

Using an algorithm that assigns relative weights and scales to each 
risk area and classifies risk as low, medium, or high can help 
determine which programs would most benefit from surveillance. Table 43 
shows how an algorithm can be used to evaluate a candidate program. 

Table 43: A Program Surveillance Selection Matrix: 

Risk factor: Contract value; 
Weight: 0.05; 
Risk level: High = 3: More than 20% of business base; 
Risk level: Medium = 2: 5%–20%; 
Risk level: Low = 1: Less than 5%; 
Risk score: 3. 
 
Risk factor: Nature of work; 
Weight: 0.05; 
Risk level: High = 3: High-risk, many unknowns; 
Risk level: Medium = 2: [Empty]; 
Risk level: Low = 1: Low-risk content; 
Risk score: 3. 
 
Risk factor: Program office experience; 
Weight: 0.05; 
Risk level: High = 3: Inexperienced staff; 
Risk level: Medium = 2: [Empty]; 
Risk level: Low = 1: Very experienced staff; 
Risk score: 1. 
 
Risk factor: Program type; 
Weight: 0.05; 
Risk level: High = 3: Development; 
Risk level: Medium = 2: Production; 
Risk level: Low = 1: Operations and maintenance; 
Risk score: 3. 

Risk factor: Baseline resets; 
Weight: 0.10; 
Risk level: High = 3: Many per year; 
Risk level: Medium = 2: Once a year; 
Risk level: Low = 1: Less than one a year; 
Risk score: 3. 

Risk factor: Historic trends; 
Weight: 0.10; 
Risk level: High = 3: Worsening; 
Risk level: Medium = 2: [Empty]; 
Risk level: Low = 1: Trends are improving 
Risk score: 3. 

Risk factor: Previous findings; 
Weight: 0.10; 
Risk level: High = 3: Many unresolved; 
Risk level: Medium = 2: [Empty]; 
Risk level: Low = 1: Few or easily closed; 
Risk score: 1. 

Risk factor: Variance percent; 
Weight: 0.10; 
Risk level: High = 3: Worse than –10%; 
Risk level: Medium = 2: –5% to –10%; 
Risk level: Low = 1: Better than –5%; 
Risk score: 3. 

Risk factor: Management interest; 
Weight: 0.40; 
Risk level: High = 3: High visibility;
Risk level: Medium = 2: [Empty];
Risk level: Low = 1: Low visibility; 
Risk score: 3. 

Total: 
Risk score: 2.6. 

Source: © 2004 National Defense Industrial Association (NDIA) Program 
Management Systems Committee (PMSC), Program Management Systems 
Committee Surveillance Guide (October 2004). 

[End of table] 

For the sample program assessed with the algorithm in table 43, we can 
quickly determine that it is a high-risk program because it received a 
risk score of 2.6 out of a possible 3.0. This risk reflects the fact 
that the program has a high contract value, the work is high-risk, and 
high variances have led to several baseline resets. Once a risk 
score has been calculated for all candidate programs, the scores can be 
used to decide which programs should be reviewed more often. The number 
of programs that can be reviewed each year, however, depends on 
available resources. 
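
Such an algorithm is simply a weighted sum of factor scores. The 
sketch below illustrates the mechanics with a hypothetical subset of 
factors, weights, and cut points; an agency would substitute the 
factors, weights, and thresholds from its own surveillance procedures. 

# Minimal sketch of a weighted program-selection risk score
# (hypothetical factors, weights, scores, and cut points).
factors = {
    # factor: (weight, score), where scores run 1 (low) to 3 (high)
    "Contract value":      (0.10, 3),
    "Nature of work":      (0.15, 2),
    "Baseline resets":     (0.20, 3),
    "Variance percent":    (0.15, 2),
    "Management interest": (0.40, 3),
}

# Weights should sum to 1.0 so the total falls between 1.0 and 3.0.
assert abs(sum(w for w, _ in factors.values()) - 1.0) < 1e-9

score = sum(weight * rating for weight, rating in factors.values())
category = "high" if score >= 2.4 else "medium" if score >= 1.7 else "low"
print(f"Weighted risk score: {score:.1f} of a possible 3.0 ({category} risk)")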

Developing a Program Surveillance Plan: 

The surveillance team designated to perform program reviews should 
consist of a few experienced staff who fully understand the 
contractor’s EVM system and the processes being reviewed. The 
surveillance organization should appoint the team leader and ensure 
that all surveillance team members are independent. This means that 
they should not be responsible for any part of the programs they 
assess. 

Key activities on the surveillance team’s agenda include reviewing 
documents, addressing government program office concerns, and 
discussing prior surveillance findings and any open issues. Sufficient 
time should be allocated to all these activities to complete them. The 
documents for review should give the team an overview of the program’s 
implementation of the EVM process. Recommended documents include: 

* at least 2 months of program EVM system reports; 

* EVM variance analyses and corrective actions; 

* program schedules; 

* risk management plan and database; 

* program-specific instructions or guidance on implementing the EVM 
system; 

* WBS with corresponding dictionary;

* organizational breakdown structure;

* EAC and supporting documentation; 

* correspondence related to the EVM system; 
 
* contract budget baseline, management reserve, and undistributed 
budget log; 
 
* responsibility assignment matrix identifying control account 
managers; 

* work authorization documentation; 
 
* staffing plans; 

* rate applications used; and 

* findings from prior reviews and status. 

Additionally, it is recommended that if there are any concerns 
regarding the validity of the performance data, the government program 
office be notified. Finally, inconsistencies identified in prior 
reviews should be discussed to ensure that the contractor has rectified 
them and continues to comply with its EVM system guidelines. 

Executing the Program Surveillance Plan: 

Surveillance should be approached in terms of mentoring or coaching the 
contractor on where there are deficiencies or weaknesses in its EVM 
process and offering possible solutions. The contractor can then view 
the surveillance team as a valuable and experienced asset that helps 
determine whether the contractor can demonstrate that it is continuing 
to use the accepted EVM system to manage the program. 

Successful surveillance is predicated on access to objective 
information that verifies that the program team is using EVM 
effectively to manage the contract and complies with company EVM 
procedures. Objective information includes program documentation 
created in the normal conduct of business. Besides collecting 
documentation, the surveillance team should interview control account 
managers and other program staff to see if they can describe how they 
comply with EVM policies, procedures, or processes. During interviews, 
the surveillance team should ask them to verify their responses with 
objective program documentation such as work authorizations, cost and 
schedule status data, variance analysis reports, and back-up data for 
any estimates at completion. Finally, to ensure a common exposure to 
the program’s content and quicker consolidation of findings, the 
surveillance team should stay together as much as possible. 

The interview is a key review effort because it enables the 
surveillance team to gauge the EVM knowledge of the program staff. This 
is especially important because control account managers are the source 
of much of the information on the program’s EVM system. Interviews also 
enable the surveillance team to monitor program personnel’s awareness 
of and practice in complying with EVM guidelines. In particular, 
interviews help the surveillance team determine whether the control 
account managers see EVM as an effective management tool. The following 
subjects should be covered in an interview: 

* work authorization;

* organization;

* EVM methodologies, knowledge of the EVM process, use of EVM 
information, and EVM system program training; 

* scheduling and budgeting, cost and schedule integration, and cost 
accumulation; 

* EACs; 

* change control process; 

* variance analysis; 

* material management; 

* subcontract management and data integration; and 

* risk assessment and mitigation. 

When all the documentation has been reviewed and interviews have been 
conducted, the surveillance team should provide appropriate feedback to 
the program team. Specifically, surveillance team members and program 
personnel should clarify any questions, data requests, and responses to 
be sure everything is well understood. The surveillance team leader 
should present all findings and recommendations to the program staff so 
that any misunderstandings can be clarified and corrected. In addition, 
a preliminary report should be prepared, once program personnel have 
provided their preliminary feedback, that addresses findings and 
recommendations: 

Findings fall into two broad categories: compliance with the accepted 
EVM system description and consistency with EVM system guidelines. 
Some local practices may comply with the system description, while 
others may fall short of the intent of an EVM guideline because of 
discrepancies in the system description. If findings cannot be 
resolved, confidence in program management’s ability to effectively 
use the EVM system will be lowered, putting the program at risk of not 
meeting its goals and objectives. Open findings may also result in 
withdrawing advance agreements and acceptance of the company’s EVM 
system. 

Team members may recommend EVM implementation enhancements, such as 
sharing successful practices or tools. Unlike findings, however, 
recommendations need not be tracked to closure. 

In addition to findings and recommendations, the final team report 
should outline an action plan that includes measurable results and 
follow-up verification, to resolve findings quickly. It should present 
the team’s consensus on the follow-up and verification required to 
address findings resulting from the surveillance review. An effective 
corrective action plan must address how program personnel should 
respond to each finding and it must set realistic dates for 
implementing corrective actions. The surveillance review is complete 
when the leader confirms that all findings have been addressed and 
closed. 

Managing System Surveillance Based on Program Results: 

After a program’s surveillance is complete, the results are collected 
and tracked in a multiprogram database. This information is transformed 
into specific measures for assessing the overall health of a 
contractor’s EVM system process. They should be designed to capture 
whether the EVM data are readily available, accurate, meaningful, and 
focused on desirable corrective action. The types of measure may vary 
from contractor to contractor, but each one should be well defined, 
easily understood, and focused on improving the EVM process and 
surveillance capability. They should have the following 
characteristics: 
 
* surveillance results measures identify where there are deviations 
from documented EVM application processes and, 
 
* system surveillance measures are EVM system process measures that 
indicate whether the surveillance plan is working by resolving systemic 
issues.

To develop consistent measures, individual program results can be 
summarized by a standard rating system that uses color categories to 
identify findings. Table 44 shows a standard color-category rating 
system. 

Table 44: A Color-Category Rating System for Summarizing Program Findings: 

Related to: Organization: 

1. 
EVM system rating: Low = green: One WBS used and authorized for the 
program; 
EVM system rating: Moderate = yellow: One WBS used for the program; 
EVM system rating: High = red: More than one WBS used for the program. 

2. 
EVM system rating: Low = green: WBS dictionary available and traceable 
to the contract WBS and statement of work; 
EVM system rating: Moderate = yellow: WBS dictionary available but 
cannot be traced to the contract WBS and is inconsistent with the 
statement of work; 
EVM system rating: High = red: WBS dictionary not developed. 

3. 
EVM system rating: Low = green: Organizational breakdown system, 
including major subcontractors, defined; 
EVM system rating: Moderate = yellow: More than one organizational 
breakdown system used; not all are identified or some contain errors or 
omissions;
EVM system rating: High = red: Organizational breakdown system not 
defined. 
 
4. 
EVM system rating: Low = green: Program WBS and organizational 
breakdown system integrated and identified by the responsibility 
assignment matrix; 
EVM system rating: Moderate = yellow: Program WBS and organizational 
breakdown system identified but 
the responsibility assignment matrix is incomplete or outdated; 
EVM system rating: High = red: Responsibility assignment matrix process 
is not implemented. 

Related to: Budget: 
 
1. 
EVM system rating: Low = green: Budgets for authorized work identified; 
EVM system rating: Moderate = yellow: Budgets for authorized work have 
omissions; 
EVM system rating: High = red: Budgets for authorized work not 
developed. 

2. 
EVM system rating: Low = green: Sum of work package budgets equals 
control account budgets; 
appropriate EVM techniques deployed; 
EVM system rating: Moderate = yellow: Sum of work package budgets 
equals control account budgets, 
but appropriate EVM techniques not applied; 
EVM system rating: High = red: Sum of work package budgets does not 
equal control account budgets. 

3. 
EVM system rating: Low = green: Management reserve and undistributed 
budget identified; management reserve not used for cost growth or 
contract changes; 
EVM system rating: Moderate = yellow: Management reserve and 
undistributed budget identified but do not adequately cover existing 
program scope and risk; 
EVM system rating: High = red: Management reserve used for cost growth 
or contract changes. 

4. 
EVM system rating: Low = green: Time-phased budget established, against 
which performance can be measured; 
EVM system rating: Moderate = yellow: Not applicable; 
EVM system rating: High = red: Baseline cannot be used for accurate 
performance measurement. 

5. 
EVM system rating: Low = green: Authorized work identified in 
measurable units; 
EVM system rating: Moderate = yellow: Authorized work identified in 
measurable units but has omissions. 
EVM system rating: High = red: Authorized work not identified in 
measurable units. 
 
Source: © 2004 National Defense Industrial Association (NDIA) Program 
Management Systems Committee (PMSC), Program Management Systems 
Committee Surveillance Guide (October 2004). 

[End of table] 

Summarizing individual program findings by a standard measure can help 
pinpoint systemic problems in a contractor’s EVM system and can 
therefore be useful for highlighting areas for correction. This may 
result in more training or changing the EVM system description to 
address a given weakness by improving a process. Without the benefit of 
standard measures, it would be difficult to diagnose systemic problems; 
therefore, it is a best practice to gather them and review them often. 

Overtarget Baselines And Schedules: 

At times, an organization may conclude that the remaining budget and 
schedule targets for completing a program are significantly 
insufficient and that the current baseline is no longer valid for 
realistic performance measurement. The purpose of an overtarget 
baseline or overtarget schedule is to restore management’s control of 
the remaining effort by providing a meaningful basis for performance 
management. Working to an unrealistic baseline could make an 
unfavorable cost or schedule condition worse. 

For example, if variances become too big, they may obscure newer 
problems that management could still mitigate. To 
quickly identify new variances, an overtarget baseline normally 
eliminates historic variances and adds budget for future work. The 
contractor then prepares and submits a request to implement a recovery 
plan—in the form of an overtarget baseline or overtarget schedule—that 
reflects the needed changes to the baseline. 

The Rebaseline Rationale: 

The focus during a rebaseline is ensuring that the estimated cost of 
work to complete is valid, remaining risks are identified and tracked, 
management reserve is identified, and the new baseline is adequate and 
meaningful for future performance measurement. 

An overtarget baseline is established by formally reprogramming the 
performance measurement baseline to include additional budget that is 
above and beyond the contract’s negotiated cost.[Footnote 81] This 
additional budget is believed necessary to finish work that is in 
process and becomes part of the recovery plan for setting new 
objectives that are achievable. 

An overtarget baseline does not always affect all remaining work in the 
baseline; sometimes only a portion of the WBS needs more budget. 
Similarly, an overtarget baseline may or may not reset cost and 
schedule variances, although in most cases the variances are 
eliminated. 

An overtarget baseline or overtarget schedule should be rare. 
Therefore, if a program is experiencing recurrent overtarget baselines, 
it may be that the scope is not well understood or simply that program 
management lacks effective EVM discipline and is unable to develop 
realistic estimates. 

Moreover, a program that frequently changes its baseline can appear to 
be trying to “get well” by hiding its real performance, which distorts 
EVM data reporting. When this happens, decision 
makers tend to lose confidence in the program, as evidenced in case 
study 48. 

Case Study 48: Maintaining Realistic Baselines, from Uncertainties 
Remain, GAO-04-643R: 
 
From the contract’s award in 1996 to 2003, the cost of the Airborne 
Laser’s (ABL) research and development contract increased from about $1 
billion to about $2 billion. In fiscal year 2003 alone, work the 
contractor completed cost about $242 million more. Besides these cost 
overruns, the contractor was unable to complete $28 million worth of 
work planned for the fiscal year. GAO estimated from the contractor’s 
2003 cost and schedule performance that the prime contract would 
overrun by $431 million to $943 million. 

The program had undergone several major restructurings and contract 
rebaselines from 1996 on, primarily because of unforeseen complexity in 
manufacturing and integrating critical technology. According to program 
officials, rapid prototyping resulted in limited subcomponent testing, 
causing rework and changing requirements. At the time of GAO’s review, 
the program faced a rapidly growing backlog of incomplete work from 
previous years, even though the prime contractor had increased the 
number of people devoted to the program and had added shifts to bring 
the work back on schedule. In addition, unanticipated difficulties in 
software coding and integration, as well as difficulty in manufacturing 
advanced optics and laser components, caused cost growth. 

Good investment decisions depend on understanding the total funds 
needed to obtain an expected benefit, but the Missile Defense Agency 
(MDA) had been unable to assure decision makers that its cost 
projections to complete technology development could be relied on. 
Decision makers would have been able to make more informed decisions 
about further program investments if they had understood the likelihood 
and confidence associated with MDA’s cost projections. Therefore, GAO 
recommended that MDA complete an uncertainty analysis of the 
contractor’s new cost estimate. 

Source: GAO, Uncertainties Remain Concerning the Airborne Laser’s Cost 
and Military Utility, GAO-04-643R, Washington, D.C.: Mar. 17, 2004. 

[End of case study] 

The end result of an overtarget baseline is that its final budget 
always exceeds the contract budget base, which includes the negotiated 
contract cost plus any authorized, unpriced work. In EVM system 
terminology, the sum of all budgets (performance measurement baseline, 
undistributed budget, and management reserve) is known as the total 
allocated budget; when it exceeds the contract budget base, the 
difference between the two is the overtarget budget. Figure 40 
illustrates the effect an overtarget 
baseline has on a contract. 

Figure 40: The Effect on a Contract of Implementing an Overtarget 
Budget: 

[Refer to PDF for image: illustration] 

Before overrun: 

Total allocated budget; 
Contract budget base; 
Performance measurement baseline: Management reserve. 
 
After overrun: 

Total allocated budget; 
Contract budget base: Overtarget budget; 
Performance measurement baseline: Management reserve. 

Source: DCMA. 

[End of figure] 
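
To make the arithmetic in figure 40 concrete, the short sketch below (a 
minimal illustration with hypothetical dollar values, not part of the 
guide's methodology) restates the relationship described above: the 
total allocated budget is the sum of the performance measurement 
baseline, undistributed budget, and management reserve, and any amount 
by which it exceeds the contract budget base is the overtarget budget. 

# Illustrative only: hypothetical values, in millions of dollars.
def overtarget_budget(pmb, undistributed_budget, management_reserve,
                      negotiated_cost, authorized_unpriced_work):
    """Return (total allocated budget, contract budget base, overtarget budget)."""
    total_allocated_budget = pmb + undistributed_budget + management_reserve
    contract_budget_base = negotiated_cost + authorized_unpriced_work
    # The overtarget budget is the amount by which the total allocated
    # budget exceeds the contract budget base (zero if it does not).
    otb = max(total_allocated_budget - contract_budget_base, 0.0)
    return total_allocated_budget, contract_budget_base, otb

tab, cbb, otb = overtarget_budget(pmb=540.0, undistributed_budget=20.0,
                                  management_reserve=40.0,
                                  negotiated_cost=525.0,
                                  authorized_unpriced_work=25.0)
print(f"TAB = {tab}, CBB = {cbb}, overtarget budget = {otb}")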

Like an overtarget budget, an overtarget schedule occurs when the 
remaining work and its associated budgets are replanned over time such 
that work is scheduled beyond the contract completion date. The new 
schedule becomes the basis for performance measurement. Typically, an 
overtarget schedule precipitates the need for an overtarget budget, 
because most increases in schedule also require additional budget. As 
mentioned above, the contractor submits an overtarget budget and 
overtarget schedule request to the government program office for 
evaluation. It should contain the following key elements: 

* an explanation of why the current plan is no longer feasible, 
identifying the problems that led to the need to make a new plan of the 
remaining work and discussing measures in place to prevent recurrence; 
 
* a bottom-up estimate of remaining costs and schedule that accounts 
for risk and includes management reserve;

* a realistic schedule for the remaining work that has been validated 
and time-phased to the new plan; 

* a report on the overtarget budget in the CPR—the government program 
office needs to come to an agreement with the contractor on how it is 
to be reported in the CPR, how decisions are to be made on handling 
existing cost and schedule variances, and how perspectives on new 
budget allocations will be reported (whether variances are to be 
retained or eliminated or both); 

* the overtarget budget’s implementation schedule, to be accomplished 
as soon as possible once approved; usually, it is established in one to 
two full accounting periods, with reporting continuing against the 
existing baseline in the meantime. 

In determining whether implementing an overtarget budget and overtarget 
schedule is appropriate, the program office should consider the 
program’s health and status and should decide whether the benefits 
outweigh the costs. An overtarget budget should be planned with the 
same rigor as planning for the original program estimate and 
performance measurement baseline. While overtarget budget and 
overtarget schedule can restore program confidence and control by 
establishing an achievable baseline, with meaningful performance 
metrics, the time and expense required must be carefully considered. 
Contract type is a key factor to consider when rebaselining a program, 
because each contract has its own funding implications when an 
overtarget budget is implemented. Table 45 describes two common types 
of contracts and considerations for overtarget budget implementation. 

Table 45: Overtarget Budget Funding Implications by Contract Type: 

Contract type: Fixed price incentive; 
Description: Negotiated target cost plus estimated cost of authorized 
unpriced work equals the cost of the contract budget base; government 
program office 
liability is established up to a specified ceiling price 
Considerations: 
* Although additional performance budget is allocated to the 
performance measurement baseline, the overtarget budget does not change 
the customer’s funding liability or any contract terms; the contractor 
has liability for a portion of costs above target and all actual costs 
over the ceiling price, because the work’s scope has not changed and 
the contract has not been modified; 
* An overtarget budget is established on a fixed price incentive 
contract without regard to profit, cost sharing, or ceiling 
implications. 

Contract type: Cost reimbursement; 
Description: Provides for payment of allowable incurred costs to the 
contractor to the extent provided in the contract and, where included, 
for contractor’s fee or profit; the new contract budget base is based 
on the updated cost target; 
Considerations: 
* The customer must be notified of the need for an overtarget budget, 
having agreed to pay for actual costs incurred to the extent provided 
in the contract; the customer may have to commit or seek additional funds to
address the changing program condition and must therefore be aware of 
and involved in the overtarget budget implementation; 
* While the government normally has full cost responsibility if this is 
a cost plus incentive fee contract, the contractor may lose the fee; 
* A cost growth contract modification results in obligating additional 
funds to cover in-scope effort; this involves real dollars, so the 
performance measurement budget does not increase and the cost growth 
variance continues to be reported in the CPR; when a contract 
modification includes a new scope, the modification should clearly 
state the portion of the new estimated cost that is for new scope and 
the portion that is to provide funds for an acknowledged cost overrun. 

Source: GAO and Ivan Bembers and others, Over Target Baseline and Over 
Schedule Handbook (n.p., n.p.: May 7, 2003), p. 7. 

[End of table] 

The program office and the contractor should also consider whether 
resetting the baseline is worth the effort and time required and the 
loss of valuable historical performance variances and trends. Table 46 
identifies common problems and indicators that may warn that a program 
needs an overtarget budget or schedule. 

Table 46: Common Indicators of Poor Program Performance: 

Indicator: Cost; 
Description: 
* Estimated cost to complete and budget for remaining work differ 
significantly; 
* Significant difference between cumulative CPI and TCPI; 
* Significant lack of confidence in the EAC; 
* Frequent allocation of management reserve to the performance 
measurement baseline for newly identified in-scope effort; 
* Inadequate control account budgets for remaining work; 
* Work packages with no budget left; 
* No reasonable basis for achieving the EAC; 
* EACs that are too optimistic and do not adequately account for risks. 

Indicator: Schedule; 
Description: 
* High level of concurrent activities in the integrated schedule; 
* Significant negative float in the integrated schedule’s critical 
path; 
* Unrealistic activity durations; 
* Unrealistic logic and relationships between tasks; 
* Significant number of activities with constrained start or finish 
dates; 
* No horizontal or vertical integration in the schedule; 
* No basis for schedule reserve reductions except to absorb the effect 
of schedule delays. 

Indicator: Project execution risk; 
Description: 
* Risk management analysis that shows significant changes in risk 
levels; 
* Lack of correlation between budget phases and baseline schedule; 
* No correlation between estimate to complete time periods and current 
program schedule; 
* Program management’s reliance on ineffective performance data. 

Indicator: Data accuracy; 
Description: 
* Frequent or significant current or retroactive changes; 
* Actual costs exceeding the EAC; 
* Work scope transferred without associated budget; 
* An apparently front-loaded performance measurement baseline; 
* Inadequate planning for corrective action; 
* Repetitive reasons for variances; 
* No reflection of progress in earned value; 
* Late booking of actual costs that cause lagging variances; 
* Frequent data errors. 

Source: Ivan Bembers and others, Over Target Baseline and Over Schedule 
Handbook (n.p., n.p.: May 7, 2003). 

[End of table] 
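
Several of the cost indicators in table 46 can be screened directly 
from cumulative EVM data. The sketch below is a minimal illustration, 
using hypothetical values and standard index formulas (CPI = BCWP / 
ACWP; an index-based EAC = ACWP + (BAC - BCWP) / CPI; TCPI = work 
remaining divided by estimated cost remaining); the 0.10 threshold for 
flagging a gap between CPI and TCPI is an assumption for illustration, 
not a criterion from this guide. 

# Illustrative only: hypothetical cumulative values, in millions of dollars.
bcwp, acwp = 100.0, 125.0      # earned value and actual cost of work performed
bac = 400.0                    # budget at completion
reported_eac = 430.0           # the program's reported estimate at completion

cpi = bcwp / acwp                              # cost efficiency achieved to date
index_eac = acwp + (bac - bcwp) / cpi          # one common index-based EAC
tcpi = (bac - bcwp) / (reported_eac - acwp)    # efficiency needed to meet the EAC

print(f"Cumulative CPI = {cpi:.2f}")
print(f"Index-based EAC = {index_eac:.0f} vs. reported EAC = {reported_eac:.0f}")
if tcpi - cpi > 0.10:  # the 0.10 threshold is illustrative, not from the guide
    print("Warning: the reported EAC assumes much better efficiency than achieved to date.")

A gap of this size between achieved and required efficiency is the kind 
of warning sign listed under the cost indicator above. 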

Establishing a revised performance measurement baseline to incorporate 
significant variances should be a major wake-up call for program 
management, sending a serious message about the amount of risk a 
program is undertaking. Therefore, in conjunction with evaluating the 
indicators in table 46, program management should consider other 
aspects before deciding to implement an overtarget budget and schedule. 

Work Completion Percentage: 
 
The contract should typically be 20 percent to 85 percent complete. A 
contract that is less than 20 percent complete may not be mature enough 
yet to benefit from the time and expense of implementing overtarget 
budget and schedule. A contract that is more than 85 percent complete 
gives management limited time to significantly change the program’s 
final cost. 

Projected Growth: 
 
A projected growth of more than 15 percent may warrant an overtarget 
budget and schedule. The projection is made by comparing the estimated 
cost to complete with the budget allocated for the remaining work. An 
overtarget budget’s most important criterion is whether it is necessary 
to restore meaningful performance measurement. 

Remaining Schedule: 
 
If less than a year is required to complete the remaining work, the 
benefit of overtarget budget and schedule will most likely be 
negligible because of the time it typically takes to implement the new 
baseline. 
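
Taken together, the work completion, projected growth, and remaining 
schedule considerations above lend themselves to a simple screening 
check. The sketch below applies those thresholds (20 to 85 percent 
complete, projected growth above 15 percent, and at least a year of 
work remaining) to hypothetical contract data; it is an illustration 
only, and the deciding question remains whether an overtarget budget is 
necessary to restore meaningful performance measurement. 

# Illustrative screening of the quantitative considerations; hypothetical data ($M).
def otb_screening(bcwp_cum, bac, etc_remaining, months_remaining):
    percent_complete = bcwp_cum / bac * 100.0
    budget_remaining = bac - bcwp_cum
    projected_growth = (etc_remaining - budget_remaining) / budget_remaining * 100.0
    return {
        "mature enough (20-85 percent complete)": 20.0 <= percent_complete <= 85.0,
        "projected growth over 15 percent": projected_growth > 15.0,
        "at least a year of work remaining": months_remaining >= 12,
    }

results = otb_screening(bcwp_cum=180.0, bac=400.0,
                        etc_remaining=270.0, months_remaining=18)
for criterion, met in results.items():
    print(f"{criterion}: {met}")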

Benefit Analysis: 

A benefit analysis should determine whether the ultimate goal of 
implementing overtarget budget and overtarget schedule gives management 
better control and information. With this analysis, the government 
program office and contractor should ensure that the benefits will 
outweigh the cost in both time and resources. If better management 
information is expected and the program team is committed to managing 
within the new baseline, then it should be implemented. 

Rebaselining History: 

A history of several overtarget budget requests suggests severe 
underlying management problems, which should be investigated before a 
new budget is implemented. 

Key Steps of the Overtarget Budget–Overtarget Schedule Process: 

While it is the primary responsibility of the contractor to ensure that 
a meaningful performance measurement baseline is established, every 
control account manager must develop new work plans that can be 
reasonably executed. The program manager and supporting business staff 
must have open lines of communication and a clear review process to 
ensure that the baseline is reasonable and accurate, reflecting known 
risks and opportunities. 

Thus, overtarget budget–overtarget schedule implementation involves 
multiple steps and processes toward establishing a new performance 
measurement baseline, illustrated in figure 41. 

Figure 41: Steps Typically Associated with Implementing an Overtarget 
Budget: 

[Refer to PDF for image: illustration] 

1. Statement of need for OTB; 
2. Consult with customer; 
3. Consensus on remaining scope; 
4. Develop revised integrated master schedule; 
5. Schedule review and concurrence; 
6. Consult with customer; 
7. Issue guidance to revise cost account plans; 
8. Revise detail schedules and prepare estimates to complete; 
9. Input estimate to complete into EVM system; 
10. Control account manager reviews and estimate to complete 
“scrubbing”; 
11. Make final OTB cost and schedule; 
12. Consult with customer; 
13. Senior management review cost and schedule; 
14. Establish new PMB. 
 
Source: Ivan Bembers and others, Over Target Baseline and Over 
Schedule Handbook (n.p., n.p.: May 7, 2003). 

[End of figure] 

The key steps we describe here include (1) planning the approach, (2) 
developing the new schedule and making new cost account plans, and (3) 
senior management’s reviewing the costs and schedule. Each step assumes 
early involvement and frequent interaction between the contractor and 
government program office. 
 
Planning the Overtarget Budget–Overtarget Schedule Approach: 
 
When developing a plan for an overtarget budget, certain factors should 
be considered: 

* What issues or problems resulted in the need for one? How will the 
new plan address them? 

* Can the overtarget budget be accomplished within the existing 
schedule? If not, then an overtarget schedule must also be performed. 
Conversely, does an overtarget schedule require an overtarget budget or 
can the schedule be managed within the existing budget? 
 
* How realistic is the estimate to complete? Does it need to be 
updated? 

* Are cost and schedule variances being eliminated or retained? Will 
future reporting include historical data or begin again when the new 
plan is implemented?

* What is the basis for the overtarget budget management reserve 
account? Is it adequate for the remaining work? 

* To what extent are major subcontractors affected by the overtarget 
budget? How will it affect their target cost and schedule dates? 

* Were any EVM system discipline issues associated with the need for an 
overtarget budget? If so, how were they resolved? 

If the new baseline is to provide management with better program 
status, a decision about whether to eliminate variances will have to be 
made. A single point adjustment—that is, eliminating cumulative 
performance variances, replanning the remaining work, and reallocating 
the remaining budget to establish a new performance measurement 
baseline—results in a new performance measurement baseline that 
reflects the plan of the remaining work and budget. Since existing 
variances can significantly distort progress toward the new baseline, a 
single point adjustment is a common and justifiable adjunct to an 
overtarget budget. Table 47 describes options for treating historical 
cost and schedule variances when performing a single point adjustment. 

Table 47: Options for Treating Variances in Performing a Single Point 
Adjustment: 

Variance option: Eliminate: All variances; 
Description: Eliminate cost and schedule variances for all WBS elements 
by setting BCWS and BCWP equal to ACWP; the most common type of 
variance adjustment, this normally generates an increase in BCWP and 
sometimes results in an adjustment to BCWS. 

Variance option: Eliminate: Schedule variance only; 
Description: Cost variance is considered a valid performance 
measurement; the new performance measurement baseline retains the cost 
variance history but eliminates schedule variance by setting BCWS equal 
to BCWP, allowing revised planning for the remaining work and budgets. 

Variance option: Eliminate: Cost variance only; 
Description: When, infrequently, cost variance impels an overtarget 
budget but schedule information is valid, the cost variance is 
eliminated by setting BCWP equal to ACWP; that is, the cumulative BCWP 
value is adjusted by the amount of the cumulative cost variance. To 
preserve the existing schedule variance, the cumulative BCWS should be 
changed by the same amount as the BCWP; the CPR will reflect positive 
adjustments to both in the current period following the overtarget 
budget. 

Variance option: Eliminate: Selected variances; 
Description: If one WBS element or a subcontractor shows performance 
out of line with the baseline, management may implement an overtarget 
budget for only that portion of the contract; all other variances 
remain intact. 

Variance option: Retain: All variances; 
Description: A contractor may have been performing fairly well to the 
baseline plan with no significant variances, but additional budget is 
necessary to complete the work; or the contractor has large variances 
warranting an overtarget budget, but management wants to retain them. 
In both situations, cost and schedule variances are left alone but 
budget is added to cover future work in the overtarget budget process. 

Source: Ivan Bembers and others, Over Target Baseline and Over Schedule 
Handbook (n.p., n.p.: May 7, 2003). 

[End of table] 
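
The variance options in table 47 amount to different rules for 
resetting cumulative BCWS and BCWP while leaving ACWP untouched. The 
sketch below is a minimal illustration, with hypothetical cumulative 
values, of the most common option, eliminating all variances by setting 
BCWS and BCWP equal to ACWP; after the adjustment, both the cost and 
schedule variances are zero. 

# Illustrative only: hypothetical cumulative values, in millions of dollars.
bcws, bcwp, acwp = 220.0, 200.0, 240.0

def eliminate_all_variances(bcws, bcwp, acwp):
    # "Eliminate all variances" option: set BCWS and BCWP equal to ACWP.
    # ACWP is never changed; it must reconcile to the accounting records.
    return acwp, acwp, acwp

bcws, bcwp, acwp = eliminate_all_variances(bcws, bcwp, acwp)
cost_variance = bcwp - acwp        # CV = BCWP - ACWP
schedule_variance = bcwp - bcws    # SV = BCWP - BCWS
print(f"After adjustment: CV = {cost_variance}, SV = {schedule_variance}")  # both 0.0

The other options in the table work the same way but reset only one of 
the two values, so that either the cost or the schedule variance 
history is retained. 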

It is important to understand that while cost and schedule variances 
can be adjusted in various ways, under no circumstances should the 
value of ACWP be changed in the overtarget budget process. The value of 
ACWP should always be reconcilable to the amount shown in the 
contractor’s accounting records. In addition, management reserve to be 
included in the final overtarget budget should be addressed in the 
overtarget budget planning step: The amount will depend on how much 
work and risk remain. Historic management reserve consumption before 
the overtarget budget may offer important insights into the amount to 
set aside. The bottom line is that a realistic management reserve 
budget should be identified and available for mitigating future risks. 
These two issues—keeping ACWP integrity and setting aside adequate 
management reserve—must be considered in making the new plan, 
regardless of whether the single point adjustment option is used. 
Figure 42 shows how a single point adjustment results in a change 
to the performance measurement baseline. 

Figure 42: Establishing a New Baseline with a Single Point Adjustment: 

[Refer to PDF for image: s curve graph] 

Program budget (resources) plotted vs. time. 

Source: Abba Consulting. 

[End of figure] 

In figure 42, the performance measurement baseline—that is, BCWS—is 
shifted upward to align with actual costs to date—that is, with ACWP. 
The new baseline continues from this point forward, and all new work 
performed and corresponding actual costs will be measured against this 
new baseline. The revised budget is also at a higher level than the 
original budget; the schedule has slipped 4 months from May to 
September. Finally, all variances up to the overtarget budget date have 
been eliminated and the management reserve amount has risen above the 
new performance measurement baseline. 

As work is performed against this new baseline, reliable performance 
indicators can be used to identify problems and implement corrective 
actions. However, because all variances have been eliminated, it may 
take several months after the single point adjustment for trends to 
emerge against the new baseline. 

During the next few months, monitoring the use of management reserve 
can help show whether realistic budgets were estimated for the 
remaining work or whether new risks emerged after the overtarget budget 
was implemented.

A note of caution: single point adjustments should not be made 
routinely, and never solely to improve contract performance metrics, 
especially when attempting to meet OMB’s “Get to Green” capital 
planning initiative to show favorable program performance status. 
Because a single point adjustment masks true performance, frequent use 
tends to cause varied and significant problems such as: 

* distorting earned value cost and schedule metrics, resulting in 
unreliable index-based EAC calculations;

* turning attention away from true cost and schedule variances; and 

* hindering the ability of EVM data to predict performance trends. 

In other words, single point adjustments should be used sparingly in 
order not to inhibit successful use of EVM information to manage 
programs. 

Planning the New Schedule and Control Accounts: 
 
Even if only an overtarget budget is required, some level of schedule 
development or analysis should always be performed. The revised 
schedule should be complete, integrated, realistic in length, and 
coordinated among key vendors and subcontractors. Further, the schedule 
logic and activity durations should be complete and should represent 
the effort associated with the remaining work. Any effect on government-
furnished equipment schedules or availability of government test ranges 
should also be considered before the schedule is validated and 
considered realistic. 

The government program office and the contractor should review, and 
come to a mutual understanding of, the remaining scope, resources, and 
risk in the new schedule. They should agree that it is integrated 
vertically and horizontally, task durations are backed by historic 
data, schedule reserve is adequate, and achieving the overall schedule 
is likely. 

Once the revised schedule for the remaining work has been established, 
it is used to determine the budget for the remaining cost accounts. A 
detailed estimate to complete the remaining work should be based on a 
bottom-up estimate to reflect all costs—staffing, material, travel. 
Control account managers should also consider the remaining cost and 
schedule risks and their probability of occurring. 
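
Such a bottom-up estimate to complete is, in essence, the sum across 
control accounts of the remaining labor, material, travel, and other 
costs, plus whatever allowance the control account managers judge 
necessary for remaining risk. The sketch below rolls up hypothetical 
control account data; the account names and values are illustrative, 
not a prescribed format. 

# Illustrative bottom-up estimate to complete; hypothetical control accounts ($M).
control_accounts = [
    {"name": "Airframe integration", "labor": 14.0, "material": 6.5,
     "travel": 0.3, "risk_allowance": 2.1},
    {"name": "Software build 3",     "labor": 9.0,  "material": 0.2,
     "travel": 0.1, "risk_allowance": 1.4},
    {"name": "System test",          "labor": 5.5,  "material": 1.0,
     "travel": 0.6, "risk_allowance": 0.9},
]

etc_by_account = {
    ca["name"]: ca["labor"] + ca["material"] + ca["travel"] + ca["risk_allowance"]
    for ca in control_accounts
}
total_etc = sum(etc_by_account.values())

for name, etc in etc_by_account.items():
    print(f"{name}: {etc:.1f}")
print(f"Total bottom-up estimate to complete: {total_etc:.1f}")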

Senior Management Review of Cost and Schedule: 
 
While an overriding goal of the overtarget budget–overtarget schedule 
process is to allow the contractor to implement an effective baseline 
in a timely manner, the government program office plays a key role in 
determining whether the contract can be executed within the constraints 
of program funding and schedule. Three key activities the government 
program office should consider in the final review of the new baseline 
are: 

1. perform an IBR to verify that the value and associated schedule 
determined in the overtarget budget–overtarget schedule process have 
been established in the new baseline; 

2. determine to what extent EVM reporting requirements will be 
suspended or reduced, given the time needed to implement the new 
baseline; a best practice is to continue reporting against the old 
baseline until the new one is established, keeping EVM reporting rhythm 
in place and maintaining a record of the final change;

3. select meaningful performance indicators (such as those in table 46) 
to monitor contractor efforts to implement and adhere to the new 
baseline. 

One key indicator is management reserve usage, which should not be 
heavy in the near term; another is EVM performance trends, although if 
a single point adjustment was made, the government program office 
should be aware of its effect on the subsequent trend chart. 

Update The Program Cost Estimate with Actual Costs: 
 
Regardless of whether changes to the program result from a major 
contract modification or an overtarget budget, the cost estimate should 
be regularly updated to reflect all changes. Not only is this a sound 
business practice; it is also a requirement outlined in OMB’s Capital 
Programming Guide.[Footnote 82] The purpose of updating the cost 
estimate is to check its accuracy, defend the estimate over time, 
shorten turnaround time, and archive cost and technical data for use in 
future estimates. After the internal agency and congressional budgets 
are prepared and submitted, it is imperative that cost estimators 
continue to monitor the program to determine whether the preliminary 
information and assumptions remain relevant and accurate. 

Keeping the estimate fresh gives decision makers accurate information 
for assessing alternative decisions. 

Cost estimates must also be updated whenever requirements change, and 
the results should be reconciled and recorded against the old estimate 
baseline. Several key activities are associated with updating the cost 
estimate: 

* documenting all changes that affect the overall program estimate so 
that differences from past estimates can be tracked; 
 
* updating the estimate as requirements change, or at major milestones, 
and reconciling the results with the program budget and EVM system; 

* updating the estimate with actual costs as they become available 
during the program’s life cycle; 

* recording reasons for variances so that the estimate’s accuracy can 
be tracked; 

* recording actual costs and other pertinent technical 
information—source line of code sizing, effort, schedule, risk items—so 
they can be used for estimating future programs; and 

* obtaining government program office feedback, assessing lessons 
learned on completion, and recording the lessons so they are available 
for the next version of the estimate. 

After these activities are completed, the estimator should document the 
results in detail, including reasons for all variances. This critical 
step allows others to track the estimates and to identify when, how 
much, and why the program cost more or less than planned. Further, the 
documented comparison between the current estimate (updated with actual 
costs) and old estimate allows the cost estimator to determine the 
level of variance between the two estimates. In other words, it allows 
estimators to see how well they are estimating and how the program is 
changing over time. 
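
As a simple illustration of that comparison, the sketch below computes 
element-level and total variances between an earlier estimate and the 
current estimate updated with actual costs, so the reasons for each 
difference can be documented; the WBS elements and dollar values are 
hypothetical. 

# Illustrative comparison of an old estimate with the current, updated estimate ($M).
old_estimate     = {"Air vehicle": 250.0, "Software": 90.0, "Systems test": 60.0}
current_estimate = {"Air vehicle": 285.0, "Software": 115.0, "Systems test": 58.0}

total_variance = 0.0
for element, old_value in old_estimate.items():
    variance = current_estimate[element] - old_value
    total_variance += variance
    print(f"{element}: {old_value:.0f} -> {current_estimate[element]:.0f} "
          f"(variance {variance:+.0f})")
print(f"Total variance: {total_variance:+.0f}")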

Keep Management Updated: 

Part of agency capital planning and investment control is reporting 
updated program EACs to management during senior executive program 
reviews. With EVM data, a variety of EACs can be generated solely for 
this purpose. In addition, continuous management reviews of the EVM 
data not only allow insight into how a specific program is performing 
but also help depict a company’s financial condition accurately for 
financial reporting purposes. 

EVM data provide a clear picture of what was scheduled, accomplished, 
and spent in a given month so that program status can be known at any 
time. Likewise, cost and schedule performance trends derived from the 
CPR are objective data that allow management to identify where 
potential problems and cost overruns can occur. This information should 
be presented at every program manager review, since it is essential for 
managing a program effectively. 

In addition, DOD requires that contractors submit a quarterly contract 
funds status report that provides time-phased funding requirements and 
execution plans and identifies requirements for work agreed to but not 
yet under contract. Other agencies require similar documents. For 
example, NASA requires form 533, which reports the data necessary for 
projecting costs and hours to ensure that resources realistically 
support program schedules. NASA also uses the form to evaluate 
contractors’ actual cost and fee data and compare them with the 
negotiated contract value, estimated costs, and budget forecast data. 

Data from the DOD report or a similar report are important for knowing 
whether the government has adequate funding to complete the program, 
based on the contractor’s historic performance trends. Therefore, both 
that report and the CPR should be used regularly to monitor contractor 
performance and update the cost estimate. Doing so will provide 
valuable information about problems early on, when there is still time 
to act. It also makes everyone more accountable and answerable to basic 
program management questions, such as: 

* Can the EVM data be trusted? 

* Is there really a problem? 

* How much risk is associated with this program? 

* What is causing a problem and how big is it? 

* Are other risks associated with this problem? 

* What is likely to happen? 

* What are the alternatives? 

* What should the next course of action be? 

* Who is responsible for major parts of the contract? 

* What were the major changes since the contract began? 

* How long have similar programs taken? 

* How much work has been completed and when will the program finish? 

* When should results start materializing? 

While EVM offers many benefits, perhaps the greatest benefit of all is 
the discipline of planning the entire program before starting any work. 
This planning brings forth better visibility and accountability, which 
add clarity to risks as well as opportunities. Further, EVM offers a 
wealth of data and lessons that can be used to project future program 
estimates. To reap these benefits, however, EVM requires strong 
partnership between the government program office and the contractor to 
foster a sense of ownership and responsibility on both sides. This 
shared accountability is a major factor in bringing programs to 
successful completion and makes good program management possible. 

17. Best Practices Checklist: Managing Program Costs: Updating: 
 
* The cost estimate was updated with actual costs, keeping it current 
and relevant. 
- Actual cost, technical, and schedule data were archived for future 
estimates. 

* Authorized changes to the EVM performance measurement baseline were 
incorporated in a timely manner. 
- It reflected current requirements. 
- These changes were incorporated in a documented, disciplined, and 
timely manner so that budget, schedule, and work stayed together for 
true performance measurement. 
- Changes were approved and implemented in a well-defined baseline 
control process. 

* Regular EVM system surveillance ensured the contractor’s effective 
management of cost, schedule, and technical performance and compliance 
with ANSI guidelines. 
- The surveillance organization was independent and had authority to 
resolve issues. 
- Surveillance staff had good knowledge about EVM and agency 
programs.
- An annual surveillance plan was developed and programs were chosen 
objectively. 
- Findings and recommendations were presented to the program team 
for clarification, and the final surveillance report had an action plan 
to resolve findings quickly. 

* The contractor’s overtarget baseline or overtarget schedule was 
detailed, reasonable, and realistic; planned for costs, schedule, and 
management review; and described measures in place to prevent another 
OTB. 

* Updated EACs and other EVM data were continually reported to 
management. 

* EVM and CFSR–like data were examined regularly to identify problems 
and act on them quickly. 

[End of Chapter 20] 

Appendixes: 

Appendix 1: Auditing Agencies And Their Web Sites:
 
GAO frequently contacts the audit agencies in this appendix at the 
start of a new audit. This list does not represent the universe of 
audit organizations in the federal government. 

Auditing agency: 

Air Force Audit Agency: 
Defense Contract Audit Agency: 
District of Columbia, Office of the Inspector General: 
Federal Trade Commission, Office of Inspector General: 
National Aeronautics and Space Administration, Office of Inspector 
General: 
National Archives, Office of the Inspector General: 
Navy Inspector General: 
Social Security Administration, Office of the Inspector General: 
U.S. Army Audit Agency: 
U.S. Department of Commerce, Office of Inspector General: 
U.S. Department of Defense, Office of Inspector General: 
U.S. Department of Education, Office of Inspector General: 
U.S. Department of Health and Human Services, Office of Inspector 
General: 
U.S. Department of Housing and Urban Development, Office of Inspector 
General: 
U.S. Environmental Protection Agency, Office of Inspector General: 
U.S. General Services Administration, Office of Inspector General: 
U.S. House of Representatives, Office of Inspector General: 
United States Postal Service, Office of Inspector General: 

[End of Appendix 1] 

Appendix 2: Case Study Backgrounds: 

We drew the material in the guide’s 48 case studies from the 16 GAO 
reports described in this appendix. Table 48 shows the relationship 
between reports, case studies, and the chapters they illustrate. The 
table is arranged by the order in which we issued the reports, earliest 
first. Following the table, paragraphs that describe the reports are 
ordered by the numbers of the case studies in this Cost Guide. 

Table 48: Case Studies Drawn from GAO Reports Illustrating This Guide: 

Case study: 2, 5, 18, 30, 35; 
GAO report: GAO/AIMD-99-41: Customs Service Modernization; 
Chapters illustrated: 1, 2, 4, 9, 11. 

Case study: 47; 
GAO report: GAO-03-343: National Airspace System; 
Chapters illustrated: 20. 

Case study: 17; 
GAO report: GAO-03-645T: Best Practices; 
Chapters illustrated: 5. 

Case study: 1, 3, 4, 11, 13, 23; 
GAO report: GAO-04-642: NASA; 
Chapters illustrated: 1, 2, 5, 8. 

Case study: 48; 
GAO report: GAO-04-643R: Uncertainties Remain; 
Chapters illustrated: 20. 

Case study: 8, 10, 14, 27, 28, 33, 36, 38, 40, 46;
GAO report: GAO-05-183: Defense Acquisitions; 
Chapters illustrated: 2, 4, 9–11, 13, 14, 19. 
 
Case study: 19, 21, 45; 
GAO report: GAO-06-215: DOD Systems Modernization; 
Chapters illustrated: 5, 7, 18. 

Case study: 24; 
GAO report: GAO-06-296: Homeland Security; 
Chapters illustrated: 8. 

Case study: 9; 
GAO report: GAO-06-327: Defense Acquisitions; 
Chapters illustrated: 2. 

Case study: 7; 
GAO report: GAO-06-389: Combating Nuclear Smuggling; 
Chapters illustrated: 2. 

Case study: 20; 
GAO report: GAO-06-623: United States Coast Guard; 
Chapters illustrated: 7. 

Case study: 12, 32, 44; 
GAO report: GAO-06-692: Cooperative Threat Reduction; 
Chapters illustrated: 2, 10, 18. 

Case study: 6, 16, 25, 26, 29, 31, 34, 37, 39, 42; 
GAO report: GAO-07-96: Space Acquisitions; 
Chapters illustrated: 2, 4, 9–12, 14, 15. 

Case study: 15; 
GAO report: GAO-07-133R: Combating Nuclear Smuggling; 
Chapters illustrated: 4. 

Case study: 41; 
GAO report: GAO-07-240R: Chemical Demilitarization; 
Chapters illustrated: 15. 

Case study: 43; 
GAO report: GAO-07-268: Telecommunications; 
Chapters illustrated: 16. 

Case study: 22; 
GAO report: GAO-08-756: Air Traffic Control; 
Chapters illustrated: 8. 

Note: Full bibliographic data for the reports in this table (listed in 
the order in which GAO issued them) are given below their headings in 
this appendix and in the case studies in the text. 

Case Studies 1, 3, 4, 11, 13, and 23: From NASA, GAO-04-642, May 28, 
2004: 
 
For more than a decade, GAO has identified the National Aeronautics and 
Space Administration’s (NASA) contract management as a high-risk area. 
Because of NASA’s inability to collect, maintain, and report the full 
cost of its programs and projects, it has been challenged to manage its 
programs and control program costs. The scientific and technical 
expectations inherent in NASA’s mission create even greater 
challenges—especially if meeting those expectations requires NASA to 
reallocate funding from existing programs to support new efforts. 

Because cost growth has been a persistent problem in a number of NASA’s 
programs, GAO was asked to examine NASA’s cost estimating for selected 
programs, assess its cost estimating processes and methods, and 
describe any barriers to improving its cost estimating processes. 
Accordingly, in NASA: Lack of Disciplined Cost Estimating Processes 
Hinders Effective Program Management (May 28, 2004), GAO reported its 
analysis of 27 NASA programs, 10 of which it reviewed in detail. 

Case Studies 2, 5, 18, 30, and 35: From Customs Service Modernization, 
GAO/AIMD-99-41, February 26, 1999: 
 
Title VI of the 1993 North American Free Trade Agreement Implementation 
Act, Public Law 103-182, enabled the U.S. Customs Service to speed the 
processing of imports and improve compliance with trade laws. Customs 
refers to this legislation as the Customs Modernization and Informed 
Compliance Act, or “Mod Act.” The act’s primary purpose was to 
streamline and automate Customs’ commercial operations. According to 
Customs, modernized commercial operations would permit it to more 
efficiently handle its burgeoning import workloads and expedite the 
movement of merchandise at more than 300 ports of entry. Customs 
estimated that the volume of import trade would increase from $761 
billion in 1995 to $1.1 trillion through 2001, with the number of 
commercial entries processed increasing in those years from 13.1 
million to 20.6 million. 

The Automated Commercial Environment (ACE) program was Customs’ system 
solution to a modernized commercial environment. In November 1997, 
Customs estimated that it would cost $1.05 billion to develop, operate, 
and maintain ACE between fiscal year 1994 and fiscal year 2008. Customs 
planned to develop and deploy ACE in increments. The first four were 
known collectively as the National Customs Automation Program (NCAP). 
The first increment, NCAP 0.1, was deployed for field operation and 
evaluation in May 1998. At the end of fiscal year 1998, Customs 
reported that it had spent $62.1 million on ACE. GAO issued its report 
on these programs, Customs Service Modernization: Serious Management 
and Technical Weaknesses Must Be Corrected, on February 26, 1999. 

Case Studies 6, 16, 25, 26, 29, 31, 34, 37, 39, and 42: From Space 
Acquisitions, GAO-07-96, November 17, 2006: 

Estimated costs for major space acquisition programs in the Department 
of Defense (DOD) have increased about $12.2 billion—or nearly 44 
percent—above initial estimates for fiscal years 2006–2011. In some 
cases, current estimates of costs are more than double the original 
estimates. For example, the Space Based Infrared System High program 
was originally estimated to cost about $4 billion but is now estimated 
to cost over $10 billion. The National Polar-orbiting Operational 
Environmental Satellite System program was originally estimated to cost 
almost $6 billion but is now over $11 billion. Such growth has had a 
dramatic effect on DOD’s overall space portfolio. To cover the added 
costs of poorly performing programs, DOD has shifted scarce resources 
away from other programs, creating cascading cost and schedule 
inefficiencies. As a result, GAO was asked to examine (1) in what areas 
space system acquisition cost estimates have been unrealistic and (2) 
what incentives and pressures have contributed to the quality and 
usefulness of cost estimates for space system acquisitions. GAO 
reported its findings on November 17, 2006, in Space Acquisitions: DOD 
Needs to Take More Action to Address Unrealistic Initial Cost Estimates 
of Space Systems. 

Case Study 7: From Combating Nuclear Smuggling, GAO-06-389, March 22, 
2006: 
 
Since September 11, 2001, combating terrorism has been one of the 
nation’s highest priorities. Preventing the smuggling of radioactive 
material into the United States—perhaps for use by terrorists in a 
nuclear weapon or in a radiological dispersal device (a dirty bomb)—has 
become a key national security objective. The Department of Homeland 
Security (DHS) is responsible for providing radiation detection 
capabilities at U.S. ports of entry. In September 2003, GAO reported on 
the department’s progress in completing domestic deployments. In 
particular, GAO found that certain aspects of its installation and use 
of equipment diminished its effectiveness and that agency coordination 
on long-term research issues was limited. 

After GAO issued that report, questions arose about the deployed 
detection equipment’s efficacy—in particular, its purported inability 
to distinguish naturally occurring radioactive materials from a nuclear 
bomb. GAO was asked to review DHS’s progress in (1) deploying radiation 
detection equipment, (2) using radiation detection equipment, (3) 
improving the equipment’s capabilities and testing, and (4) increasing 
cooperation between DHS and other federal agencies in conducting 
radiation detection programs. GAO reported these findings on March 22, 
2006, in Combating Nuclear Smuggling: DHS Has Made Progress Deploying 
Radiation Detection Equipment at U.S. Ports-of-Entry, but Concerns 
Remain. 

Case Studies 8, 10, 14, 27, 28, 33, 36, 38, 40, and 46: From Defense 
Acquisitions, GAO-05-183, February 28, 2005: 

The U.S. Navy makes significant investments to maintain the 
technological superiority of its warships. It devoted $7.6 billion in 
2005 alone to new ship construction in six ship classes: 96 percent of 
this was allocated to the Arleigh Burke class destroyer, Nimitz class 
aircraft carrier, San Antonio class amphibious transport dock ship, and 
Virginia class submarine. Cost growth in the Navy’s shipbuilding 
programs has been a long-standing problem. Over the few preceding 
years, the Navy had used “prior year completion” funding—that is, 
additional appropriations for ships already under contract—to pay for 
cost overruns. Responding to a congressional request, GAO’s 
review—Defense Acquisitions: Improved Management Practices Could Help 
Minimize Cost Growth in Navy Shipbuilding Programs (Feb. 28, 2005)—(1) 
estimated the current and projected cost growth on construction 
contracts for eight case study ships, (2) broke down and examined the 
components of the cost growth, and (3) identified funding and 
management practices that contributed to cost growth. 

Case Study 9: From Defense Acquisitions, GAO-06-327, March 15, 2006: 

DOD has spent nearly $90 billion since 1985 to develop a Ballistic 
Missile Defense System. The developer, the Missile Defense Agency 
(MDA), plans to invest about $58 billion more in the next 6 years. 
MDA’s overall goal is to produce a system that can defeat enemy 
missiles launched from any range during any phase of their flight. Its 
approach is to field new capabilities in 2-year blocks. Block 2004, the 
first block, was to provide some protection by December 2005 against 
attacks out of North Korea and the Middle East. 

The Congress requires GAO to assess MDA’s progress annually. Its 2006 
report assessed (1) MDA’s progress during fiscal year 2005 and (2) 
whether capabilities fielded under Block 2004 met their goals. In 
Defense Acquisitions: Missile Defense Agency Fields Initial Capability 
but Falls Short of Original Goals (Mar. 15, 2006), GAO identified 
reasons for shortfalls and discussed corrective actions that should be 
taken. 

Case Studies 12, 32, and 44: From Cooperative Threat Reduction, GAO-06-
692, May 31, 2006: 

Until Russia’s stockpile of chemical weapons is destroyed, it will 
remain not only a proliferation threat but also vulnerable to theft and 
diversion. The U.S. Congress has authorized DOD since 1992 to provide 
more than $1 billion for the Cooperative Threat Reduction program to 
help the Russian Federation build a chemical weapons destruction 
facility at Shchuch’ye to eliminate about 14 percent of its stockpile. 
DOD has faced numerous challenges over the past several years that have 
increased the facility’s estimated cost from about $750 million to more 
than $1 billion and that delayed its operation from 2006 to 2009. DOD 
has attributed these increases to a variety of factors. Asked to assess 
the facility’s progress, schedule, and cost and to review the status of 
Russia’s efforts to destroy all its chemical weapons, GAO reported its 
findings in Cooperative Threat Reduction: DOD Needs More Reliable Data 
to Better Estimate the Cost and Schedule of the Shchuch’ye Facility 
(May 31, 2006). 

Case Study 15: From Combating Nuclear Smuggling, GAO-07-133R, October 
17, 2006: 

DHS is responsible for providing radiation detection capabilities at 
U.S. ports of entry. Current portal monitors, costing about $55,000 
each, detect the presence of radiation. They cannot distinguish between 
harmless radiological materials, such as naturally occurring 
radiological material in some ceramic tile, and dangerous nuclear 
material, such as highly enriched uranium. Portal monitors with new 
identification technology designed to distinguish between the two types 
of material currently cost $377,000 or more. In July 2006, DHS 
announced that it had awarded contracts to three vendors to further 
develop and purchase $1.2 billion worth of new portal monitors over 5 
years. GAO’s report on these developments is in Combating Nuclear 
Smuggling: DHS’s Cost-Benefit Analysis to Support the Purchase of New 
Radiation Detection Portal Monitors Was Not Based on Available 
Performance Data and Did Not Fully Evaluate All the Monitors’ Costs and 
Benefits (Oct. 17, 2006). 

Case Study 17: From Best Practices, GAO-03-645T, April 11, 2003: 

DOD’s modernizing its forces competes with health care, homeland 
security, and other demands for federal funds. Therefore, DOD must 
manage its acquisitions as cost efficiently and effectively as 
possible. As of April 2003, DOD’s overall investments to modernize and 
“transition” U.S. forces were expected to average $150 billion a year 
through 2008. 

In 2003, DOD’s newest acquisition policy emphasized evolutionary, 
knowledge-based concepts that had produced more effective and efficient 
weapon system outcomes. However, most DOD programs did not employ such 
concepts and, as a result, experienced cost increases, schedule delays, 
and poor product quality and reliability. 

In a hearing before the Subcommittee on National Security, Emerging 
Threats, and International Relations of the House Committee on 
Government Reform, GAO’s testimony—Best Practices: Better Acquisition 
Outcomes Are Possible If DOD Can Apply Lessons from F/A-22 Program 
(Apr. 11, 2003)—compared best practices for developing new products 
with the experiences of the F/A-22 program. 

Case Studies 19, 21, and 45: From DOD Systems Modernization, GAO-06-
215, December 5, 2005: 

The Naval Tactical Command Support System (NTCSS) was started in 1995 
to help U.S. Navy personnel effectively manage ship, submarine, and 
aircraft support activities. The Navy expected to spend $348 million on 
the system between fiscal years 2006 and 2009. As of December 2005, 
about $1 billion had been spent to partially deploy NTCSS to about half 
its intended sites. It is important that DOD adhere to disciplined 
information technology acquisition processes to successfully modernize 
its business systems. Therefore, GAO was asked to determine whether 
NTCSS was being managed according to DOD’s acquisition policies and 
guidance, as well as other relevant acquisition management best 
practices. GAO issued its report on December 5, 2005, under the title, 
DOD Systems Modernization: Planned Investment in the Naval Tactical 
Command Support System Needs to Be Reassessed. 

Case Study 20: From United States Coast Guard, GAO-06-623, May 31, 
2006: 

Search and rescue is one of the U.S. Coast Guard’s oldest missions and 
highest priorities. The search and rescue mission includes minimizing 
the loss of life, injury, and property damage by aiding people and 
boats in distress. The National Distress and Response System is the 
legacy communications component of the Coast Guard’s search and rescue 
program. However, the 30-year-old system had several deficiencies 
and was difficult to maintain, according to agency officials. In 
September 2002, the Coast Guard contracted to replace its search and 
rescue communications system with a new system known as Rescue 21. 
However, the acquisition and initial implementation of Rescue 21 had 
resulted in significant cost overruns and schedule delays. Therefore, 
GAO was asked to assess the (1) reasons for the significant cost 
overruns and implementation delays, (2) viability of the revised cost 
and schedule estimates, and (3) impact of the implementation delays. 
GAO issued its report on May 31, 2006, under the title, United States 
Coast Guard: Improvements Needed in Management and Oversight of Rescue 
System Acquisition. 

Case Study 22: From Air Traffic Control, GAO-08-756, July 18, 2008: 

In fiscal year 2008, the Federal Aviation Administration (FAA) planned 
to spend over $2 billion on information technology (IT) 
investments—many of which support FAA’s air traffic control 
modernization. To more effectively manage such investments, in 2005 the 
Office of Management and Budget (OMB) required agencies to use earned 
value management (EVM). EVM is a project management approach that, if 
implemented appropriately, provides objective reports of project status, 
produces early warning signs of impending schedule delays and cost 
overruns, and provides unbiased estimates of a program’s total costs. 

Among other objectives, GAO was asked to assess FAA’s policies for 
implementing EVM on its IT investments, evaluate whether the agency is 
adequately using these techniques to manage key IT acquisitions, and 
assess the agency’s efforts to oversee EVM compliance. To do so, GAO 
compared agency policies with best practices, performed four case 
studies, and interviewed key FAA officials. GAO issued its report, FAA 
Uses Earned Value Techniques to Help Manage Information Technology 
Acquisitions, but Needs to Clarify Policy and Strengthen Oversight, on 
July 18, 2008. 

Case Study 24: From Homeland Security, GAO-06-296, February 14, 2006: 
 
DHS’s U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) 
program was designed to collect, maintain, and share information, 
including biometric identifiers, on selected foreign nationals entering 
and exiting the United States. US-VISIT uses the identifiers—digital 
finger scans and photographs—to match persons against watch lists and 
to verify that a visitor is the person who was issued a visa or other 
travel documents. Visitors are also to have their departure confirmed 
by having their visas or passports scanned and by undergoing finger 
scanning at selected air and sea ports of entry. GAO has made many 
recommendations to improve the program’s management, all of which DHS 
has agreed to implement. GAO was asked to report in February 2006 on 
DHS’s progress in responding to 18 of those recommendations. Homeland 
Security: Recommendations to Improve Management of Key Border Security 
Program Need to Be Implemented (Feb. 14, 2006) was the result. 

Case Study 41: From Chemical Demilitarization, GAO-07-240R, January 26, 
2007: 

The U.S. stockpile of 1,269 tons of a lethal nerve agent (called VX) 
stored at the Newport Chemical Depot, Indiana, is one of nine 
stockpiles that DOD must destroy in response to congressional direction 
and the requirements of the Chemical Weapons Convention. The stockpile 
at Newport will be destroyed by neutralization—mixing hot water and 
sodium hydroxide with VX to change the chemical composition to a less 
toxic form. The resulting by-product is a liquid wastewater commonly 
referred to as hydrolysate that consists mostly of water but needs 
further treatment for disposal. At the time of GAO’s review, none of 
the generated hydrolysate—which was expected to be about 2 million 
gallons at the completion of the neutralization process—had been 
treated. Instead, the hydrolysate was being stored onsite until a post-
treatment plan could be implemented. 

The House Committee on Armed Services Report on the National Defense 
Authorization Act for Fiscal Year 2006 (H.R. Rep. No. 109-89) directed 
the Secretary of the Army to conduct and provide the congressional 
defense committees with a detailed cost-benefit analysis to include an 
analysis comparing the proposed off-site treatment option with eight on-
site options. In response, the Army published its cost-benefit report 
in April 2006, which concluded that only three of the eight 
technologies were feasible for treating Newport’s hydrolysate. In the 
cost-effectiveness analysis contained in the report, the Army 
determined that the cost of off-site treatment of the hydrolysate would 
be less expensive than the on-site options. The Army also concluded 
that the off-site treatment option would allow the disposal to be 
accomplished in the shortest amount of time and would minimize the 
amount of time that the hydrolysate must be stored at Newport. GAO was 
asked to (1) assess the reasonableness of the Army’s rationale to 
eliminate five of the eight technologies for treating Newport’s 
hydrolysate; (2) determine what other options the Army considered, such 
as incineration; and (3) evaluate the adequacy of the cost comparison 
analysis presented for the three remaining technologies considered as 
alternatives to the Army’s proposed plan. GAO issued its report on 
January 26, 2007, under the title, Chemical Demilitarization: Actions 
Needed to Improve the Reliability of the Army’s Cost Comparison 
Analysis for Treatment and Disposal Options for Newport’s VX 
Hydrolysate. 

Case Study 43: From Telecommunications, GAO-07-268, February 23, 2007: 
 
The mission of General Services Administration (GSA) technology 
programs is to provide federal agencies with acquisition services and 
solutions at best value, including offering agencies options for 
acquiring needed telecommunications services. With the current set of 
governmentwide telecommunications contracts approaching expiration, GSA 
and its customer agencies will have to see the services acquired under 
these contracts through their transition to their replacements, known 
collectively as Networx. GSA will incur program management costs 
associated with planning and executing this transition. It has also 
made a commitment to absorb certain agency transition costs. To ensure 
that it would have the funds necessary to pay for these costs, GSA 
estimated that it would need to set aside about $151.5 million. GAO was 
asked to determine (1) the soundness of GSA’s analysis in deriving the 
estimate of funding that would be required for the transition and (2) 
whether GSA will have accumulated adequate funding to pay for its 
transition management costs. GAO issued its report on February 23, 
2007, under the title, Telecommunications: GSA Has Accumulated Adequate 
Funding for Transition to New Contracts but Needs Cost Estimation 
Policy. 

Case Study 47: From National Airspace System, GAO-03-343, January 31, 
2003: 

The Standard Terminal Automation Replacement System (STARS) was to 
replace outdated computer equipment used to control air traffic within 
5 to 50 nautical miles of an airport. At the time of this review, FAA’s 
plan was to procure 74 STARS systems, including 70 for terminal 
facilities and 4 for support facilities. With STARS, air traffic 
controllers at these facilities would receive new hardware and software 
that would produce color displays of aircraft position and flight 
information. In the future, FAA would be able to upgrade the software 
to provide air traffic control tools to allow better spacing of 
aircraft as they descend into airports. STARS was complex, costly, and 
software-intensive. Since 1996, when FAA initiated STARS, the number of 
systems scheduled to be procured ranged from as many as 188 to as 
few as 74, and the program’s cost and schedule also varied 
considerably. GAO’s report, covering cost and performance issues 
related to this procurement, is in National Airspace System: Better 
Cost Data Could Improve FAA’s Management of the Standard Terminal 
Automation Replacement System (Jan. 31, 2003). 

Case Study 48: From Uncertainties Remain, GAO-04-643R, May 17, 2004: 

In 1996, the Air Force launched an acquisition program to develop and 
produce a revolutionary laser weapon system, the Airborne Laser (ABL), 
capable of defeating an enemy ballistic missile during the boost phase 
of its flight. Over the 8 years preceding GAO’s review, the program’s 
efforts to develop this technology resulted in significant cost growth 
and schedule delays. The prime contractor’s costs for developing ABL 
nearly doubled from the Air Force’s original estimate and were still 
growing. The cost growth occurred primarily because the program did not 
adequately plan for and could not fully anticipate the complexities of 
developing the system. The Missile Defense Agency continued to face 
significant challenges in developing the ABL’s revolutionary 
technologies and in achieving cost and schedule stability. From 1996 
through 2003, the value of the prime contract, which accounted for the 
bulk of the program’s cost, increased from about $1 billion to $2 
billion. According to our analysis, costs could increase between $431 
million and $943 million more through the first full demonstration of 
the ABL system. GAO’s report, covering cost and performance issues 
related to this procurement, is in Uncertainties Remain Concerning 
the Airborne Laser’s Cost and Military Utility (May 17, 2004). 

[End of Appendix 2] 

Appendix 3: Experts Who Helped Develop This Guide: 

The two lists in this appendix name the experts in the cost estimating 
community, with their organizations, who helped us develop this guide. 
This first list names significant contributors to the Cost Guide. They 
attended and participated in numerous expert meetings, provided text or 
graphics, and submitted comments. 
 
Organization: ABBA Consulting; 
Expert: Wayne Abba. 

Organization: BAE Systems; 
Expert: Shobha Mahabir. 

Organization: David Consulting Group; 
Expert: Michael Harris. 

Organization: Defense Contract Management Agency; 
Expert: Bob Keysar. 

Organization: Defense Acquisition University; 
Expert: David Bachman; 
Expert: Todd Johnston. 

Organization: Department of Defense; 
Expert: Debbie Tomsic. 

Organization: Department of Treasury; 
Expert: Kimberly Smith. 

Organization: Federal Aviation Administration; 
Expert: Lewis Fisher; 
Expert: Wayne Hanley; 
Expert: Fred Sapp. 

Organization: Fleming Management Consultancy; 
Expert: Quentin Fleming. 

Organization: Galorath Incorporated; 
Expert: Dan Galorath. 

Organization: Hulett & Associates LLC; 
Expert: David Hulett. 

Organization: Independent Consultant; 
Expert: John Pakiz. 

Organization: Internal Revenue Service; 
Expert: Donald Moushegian. 

Organization: KM Systems Group; 
Expert: Kim Hunter. 

Organization: Lockheed Martin Corporation; 
Expert: Jamie Fieber. 

Organization: Ludwig Consulting Services LLC; 
Expert: Joyce Ludwig. 

Organization: Management Concepts; 
Expert: Gregory T. Haugan. 

Organization: MCR Federal LLC; 
Expert: Neil Albert. 

Organization: Missile Defense Agency; 
Expert: David Melton; 
Expert: Peter Schwarz. 

Organization: MITRE; 
Expert: David Crawford. 

Organization: MITRE and National Oceanic and Atmospheric 
Administration; 
Expert: Richard Riether. 

Organization: National Aeronautics and Space Administration; 
Expert: Glenn Campbell; 
Expert: David Graham. 

Organization: PRICE Systems; 
Expert: Bruce Fad; 
Expert: William Mathis. 

Organization: Project and Program Management Systems, International; 
Expert: Eric Marantoni. 

Organization: Social Security Administration; 
Expert: Otto Immink. 

Organization: The Analytical Sciences Corporation; 
Expert: Peter Braxton; 
Expert: Greg Hogan. 

Organization: Technomics; 
Expert: Rick Collins; 
Expert: Robert Meyer; 
Expert: Jack Smuck. 

Organization: Tecolote Research, Incorporated; 
Expert: Lew Fichter; 
Expert: Greg Higdon; 
Expert: Bill Rote; 
Expert: Alf Smith. 

Organization: United Kingdom Ministry of Defense; 
Expert: Andy Nicholls. 

Organization: U.S. Air Force, Air Force Cost Analysis Agency; 
Expert: John Cargill; 
Expert: Rich Hartley; 
Expert: John Peterson; 
Expert: William Seeman; 
Expert: Wilson Rosa. 

Organization: U.S. Army, Army Cost Center; 
Expert: Sean Vessey. 

Organization: U.S. Army, Corps of Engineers; 
Expert: Kim Callan. 

Organization: U.S. Navy, Center for Cost Analysis; 
Expert: Susan Wileman. 

Organization: U.S. Navy, Naval Air Systems Command; 
Expert: Brenda Bizier; 
Expert: Susan Blake; 
Expert: Fred Meyer; 
Expert: John Scapparo. 

Organization: U.S. Navy, Naval Sea Systems Command; 
Expert: Hershel Young. 

This second list names those who generously donated their time to 
review the Cost Guide in its various stages and to provide feedback. 

Organization: Association for the Advancement of Cost Engineering; 
Expert: Osmund Belcher; 
Expert: Bill Kraus. 

Organization: Business Growth Solutions Ltd.; 
Expert: Keith Gray. 

Organization: Center for Naval Analyses; 
Expert: Dan Davis; 
Expert: Richard Sperling. 

Organization: CGI Federal; 
Expert: Sameer Rohatgi. 

Organization: Comcast; 
Expert: Tamara Garcia. 

Organization: Data Systems Analysts Inc.; 
Expert: Aubrey Jones. 

Organization: Department of Defense; 
Expert: John Leather. 

Organization: Department of Homeland Security; 
Expert: Jim Manzo; 
Expert: Michael Zaboski. 

Organization: Department of Homeland Security, Domestic Nuclear 
Detection Office; 
Expert: Richard Balzano; 
Expert: Lisa Bell; 
Expert: Andrew Crisman. 

Organization: Federal Aviation Administration; 
Expert: Scott Allard; 
Expert: Dan Milano; 
Expert: William Russell. 

Organization: Hutchins & Associates; 
Expert: Pam Shepherd. 

Organization: Independent Consultant; 
Expert: Steven Deal; 
Expert: Dee Dobbins; 
Expert: Jan Kinner; 
Expert: David Muzio; 
Expert: Max Wideman. 

Organization: Lockheed Martin Corporation; 
Expert: Walt Berkey; 
Expert: Bill Farmer; 
Expert: Kathleen McCarter; 
Expert: Chitra Raghu; 
Expert: Tony Stemkowski. 

Organization: MITRE; 
Expert: Raj Agrawal. 

Organization: National Geospatial Intelligence Agency; 
Expert: Ivan Bembers. 

Organization: Northrop Grumman; 
Expert: Gay Infanti; 
Expert: Beverly Solomon. 

Organization: Office of Management and Budget; 
Expert: Patricia Corrigan. 

Organization: Parsons; 
Expert: Karen Kimball; 
Expert: Michael Nosbisch; 
Expert: Jon Tanke; 
Expert: Sandy Whyte. 

Organization: PRICE Systems; 
Expert: Didier Barrault. 

Organization: PT Mitratata Citragraha (PTMC); 
Expert: Paul D. Giammalvo. 

Organization: Robbins Gioia; 
Expert: Wei Tang. 

Organization: Social Security Administration; 
Expert: Alan Deckard. 

Organization: SRA International; 
Expert: David Lyons. 

Organization: SRS Technologies; 
Expert: Tim Sweeney. 

Organization: The Analytical Sciences Corporation; 
Expert: Samuel Toas. 

Organization: University of Colorado; 
Expert: Shekhar Patil. 

Organization: UQN and Associates; 
Expert: Ursula Kuehn. 

Organization: U.S. Air Force; 
Expert: Harold Parker. 

Organization: U.S. Army; 
Expert: Robert Dow. 

Organization: U.S. Army, Army Cost Center; 
Expert: Mort Anvari. 

Organization: U.S. Navy; 
Expert: Mark Gindele. 

Organization: U.S. Navy, Naval Air Systems Command; 
Expert: Jeff Scher. 

Organization: Wyle Labs;
Expert: Katrina Brown. 

[End of Appendix 3] 

Appendix 4: The Federal Budget Process: 

Each year in January or early February, the president submits budget 
proposals for the year that begins October 1. They include data for the 
most recently completed year, the current year, the budget year, and 
at least the 4 years following the budget year. 

The budget process has four phases: (1) executive budget formulation, 
(2) congressional budget process, (3) budget execution and control, and 
(4) audit and evaluation. Budget cycles overlap—the formulation of one 
budget begins before action has been completed on the previous one. 
Tables 49 and 50 present information from OMB’s Circular A-11 about the 
main phases of the budget cycle and the steps—and time periods—within 
each phase. 

Table 49: Phases of the Budget Process: 

Phase: Executive budget formulation; 
Description: OMB and the federal agencies begin preparing one budget 
almost as soon as the president has sent the last one to the Congress. 
OMB officially starts the process by sending planning guidance to 
executive agencies in the spring. The president completes this phase by 
sending the budget to the Congress on the first Monday in February, as 
specified in law. 

Phase: Congressional budget process; 
Description: Begins when the Congress receives the president’s budget. 
The Congress does not vote on the budget but prepares a spending and 
revenue plan that is embedded in the Congressional Budget Resolution; 
the Congress also enacts regular appropriations acts and other laws 
that control spending and receipts. 
 
Phase: Budget execution; 
Description: This phase lasts for at least 5 fiscal years and has two 
parts: 
* Apportionment pertains to funds appropriated for that fiscal year and 
to balances of appropriations made in prior years that remain available 
for obligation. At the beginning of the fiscal year, and at other times 
as necessary, OMB apportions funds to executive agencies; that is, it 
specifies the amounts they may use by time period, program, project, or 
activity. Throughout the year, agencies hire people, enter into 
contracts, enter into grant agreements, and so on, to carry out their 
programs, projects, and activities. These actions use up the available 
funds by obligating the federal government to make immediate or future 
outlays; 
* Reporting and outlay last until funds are canceled (1-year and 
multiple-year funds are canceled at the end of the fifth year, after 
the funds expire for new obligations) or until funds are totally 
disbursed (for no-year funds). 
 
Phase: Audit and evaluation; 
Description: 
* While OMB does not specify times, each agency is responsible for 
ensuring that its obligations and outlays adhere to the provisions in 
the authorizing and appropriations legislation, as well as other laws 
and regulations governing the obligation and expenditure of funds. OMB 
provides guidance for, and federal laws are aimed at, controlling and 
improving agency financial management. Agency inspectors general 
give policy direction for, and agency chief financial officers oversee, 
all financial management activities related to agency programs and 
operations. 
* The 1993 Government Performance and Results Act requires each agency 
to submit an annual performance plan and performance report to OMB and 
the Congress; the report must establish goals defining the level of 
performance each program activity in the agency’s budget is to achieve 
and describing the operational processes and resources required to meet 
those goals. The Congress oversees agencies through the legislative 
process, hearings, and investigations. GAO audits and evaluates 
government programs and reports its findings and recommendations for 
corrective action to the Congress, OMB, and the agencies. 
 
Source: GAO and OMB. 

[End of table] 

Table 50: The Budget Process: Major Steps and Time Periods: 

Phase: Formulation; 
Major step: OMB issues planning guidance to executive agencies. OMB’s 
Director issues to agency heads policy guidance for budget requests; if 
no more specific guidance is given, the previous budget’s out-year 
estimates serve as the starting point for the next budget. This begins 
the process of formulating the budget the president will submit next 
February; 
Time: Spring. 
 
Phase: Formulation; 
Major step: OMB issues Circular No. A–11 to all federal agencies, 
providing detailed instructions for submitting budget data and 
materials; 
Time: July.
 
Phase: Formulation; 
Major step: Executive agencies, except those not subject to review, 
submit budgets; OMB provides specific deadlines; 
Time: September. 

Phase: Formulation; 
Major step: The fiscal year begins. The just completed budget cycle 
focused on this fiscal year, which was the budget year in that cycle 
and is the current year in this cycle; 
Time: October 1. 

Phase: Formulation; 
Major step: OMB conducts its fall review, analyzing agency budget 
proposals in light of presidential priorities, program performance, and 
budget constraints; 
Time: October–November. 

Phase: Formulation; 
Major step: OMB informs executive agencies of decisions on their budget 
requests; 
Time: Late November. 

Phase: Formulation; 
Major step: Agencies enter computer data and submit printed material 
and additional data; this begins immediately after passback and 
continues until OMB “locks” agencies out of the database to meet the 
printing deadline; 
Time: Late November to early January. 
 
Phase: Formulation; 
Major step: Agencies prepare, and OMB reviews, the justification 
materials they need to explain their budget requests to congressional 
subcommittees; 
Time: January. 
 
Phase: Formulation; 
Major step: The president transmits the budget to the Congress; 
Time: First Monday in February. 
 
Phase: Congressional; 
Major step: The Congressional Budget Office (CBO) reports to budget 
committees on the economic and budget outlook; 
Time: January. 
 
Phase: Congressional; 
Major step: CBO reestimates the President’s Budget, based on its 
economic and technical assumptions; 
Time: February. 

Phase: Congressional; 
Major step: Committees submit “views and estimates” to House and Senate 
budget committees, indicating preferences on matters they are 
responsible for; 
Time: Within 6 weeks of budget transmittal. 

Phase: Congressional; 
Major step: The Congress completes action on the concurrent resolution 
on the budget and commits to broad spending and revenue levels by 
passing a budget resolution; 
Time: April 15. 

Phase: Congressional; 
Major step: The Congress completes action on appropriations bills for 
the coming fiscal year or passes a continuing resolution (stop-gap 
appropriations); 
Time: September 30. 

Phase: Execution; 
Major step: The fiscal year begins; 
Time: October 1. 

Phase: Execution; 
Major step: OMB apportions funds made available in the annual 
appropriations process and other available funds. Agencies submit to 
OMB apportionment requests for each budget account by August 21 or 
within 10 calendar days after the approval of the appropriation, 
whichever is later. OMB approves or modifies apportionments, specifying 
the funds agencies may use by time period, program, project, or 
activity; 
Time: September 10, or within 30 days after approval of a spending 
bill. 

Phase: Execution; 
Major step: Agencies incur obligations and make outlays for funded 
programs, projects, and activities, hiring people and entering into 
contracts and agreements. They record obligations and outlays according 
to control procedures, report to Treasury, and prepare financial 
statements; 
Time: Throughout the fiscal year. 

Phase: Execution; 
Major step: The fiscal year ends; 
Time: September 30. 

Phase: Execution; 
Major step: Agencies disburse against obligated balances and adjust 
them to reflect actual obligations, continuing to record obligations 
and outlays, report to Treasury, and prepare financial statements; 
Time: Until September 30, fifth year after funds expire. 

Source: OMB. 

[End of table] 

[End of Appendix 4] 

Appendix 5: Federal Cost Estimating And EVM Legislation, Regulations, 
Policies, And Guidance: 

The material in this appendix, keyed to table 3 in the body of the Cost 
Guide, describes criteria related to cost estimating and EVM. 

Legislation and Regulations: 

1968: DOD Selected Acquisition Reports: 

Before selected acquisition reports (SAR) were introduced, with DOD 
Instruction 7000.3 in 1968, no recurring reports on major acquisitions 
summarized cost, schedule, and performance data for comparison with 
earlier and later estimates. The original purpose of SARs was to keep 
the Assistant Secretary of Defense (Comptroller) informed of the 
progress of selected acquisitions and to compare this progress with 
the planned technical, schedule, and cost performance. When the 
Secretary of Defense and the Congress began to require regular reports 
early in 1969, SARs became key recurring summaries advising the 
Congress on the progress of major acquisition programs.[Footnote 83] 

For the purpose of oversight and decision making, legislation (10 
U.S.C. § 2432 (2006)) now requires DOD to submit SARs annually to the 
Congress. The reports present the latest cost and schedule estimates 
and technical status for major defense programs. The comprehensive 
annual SARs are prepared in conjunction with the president’s budget. 

Quarterly exception reports are required only for programs with unit 
cost increases of at least 15 percent or schedule delays of at least 6 
months. They are also submitted for initial reports, final reports, and 
programs that are rebaselined at major milestone decisions. 

For each major defense acquisition program, an SAR contains program 
quantities; program acquisition cost and acquisition unit cost; current 
procurement cost and procurement unit cost; reasons for any changes in 
these costs from the previous SAR; reasons for any significant changes 
from the previous SAR in total program cost, software schedule 
milestones, or performance; any major contract changes and reasons for 
cost or schedule variances since the last SAR; and program highlights 
for the current reporting period. 

1982: DOD Unit Cost Reports: 

Recognizing the need to establish a cost growth oversight mechanism for 
DOD’s major defense acquisition programs, the Congress requires DOD to 
report on program cost growth that exceeds certain thresholds. This 
requirement is commonly called Nunn-McCurdy, after the congressional 
leaders responsible for it. It became permanent law in 1982 with the 
Department of Defense Authorization Act, 1983. The law (10 U.S.C. § 
2433 (2006)) now provides for oversight of cost growth in DOD’s major 
defense acquisition programs by requiring DOD to notify the Congress 
when a program’s unit cost growth exceeds (or breaches) the original or 
the latest approved acquisition program baseline by certain thresholds. 
[Footnote 84] If unit cost growth exceeds certain percentages over the 
baseline, the Secretary of Defense must carry out an assessment that 
includes the projected costs of completing the program both if current 
requirements are not modified and if they are reasonably modified. The 
assessment is also to include a rough order of magnitude estimate of 
the costs of any reasonable alternative system or capability. Further, 
the Secretary of Defense is to certify to the Congress that: 

1. the program is essential to national security, 

2. no alternatives will provide equal or greater military capability at 
less cost, 

3. new program acquisition or procurement unit cost estimates are 
reasonable, and 

4. the management structure is adequate to control unit cost. 
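
The breach test itself is simple arithmetic: unit cost growth is 
measured as a percentage over the original or latest approved baseline 
and compared with the statutory thresholds. The short sketch below is 
an illustrative aid only; the function names, dollar figures, and the 
15 and 25 percent comparison values are assumptions chosen for the 
example, not the thresholds set in 10 U.S.C. § 2433 and footnote 84. 

# Illustrative unit cost breach check (not the statutory thresholds).
def unit_cost_growth(baseline_unit_cost, current_unit_cost):
    """Percentage growth of the current estimate over the baseline."""
    return (current_unit_cost - baseline_unit_cost) / baseline_unit_cost * 100.0

def breaches(baseline_unit_cost, current_unit_cost, threshold_percent):
    """True if growth meets or exceeds the given reporting threshold."""
    return unit_cost_growth(baseline_unit_cost, current_unit_cost) >= threshold_percent

# Made-up figures: a $50 million baseline unit cost that has grown to $62 million.
growth = unit_cost_growth(50.0, 62.0)
print(round(growth, 1), breaches(50.0, 62.0, 15.0), breaches(50.0, 62.0, 25.0))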

1983: DOD Independent Cost Estimates: 
 
Section 2434 of title 10 of the U.S. Code requires the Secretary of 
Defense to consider an independent life-cycle cost estimate (LCCE) 
before approving system development and demonstration, or production 
and deployment, of a major defense acquisition program. Under DOD’s 
acquisition system policy, this function is delegated to a program’s 
milestone decision authority. The statute requires that DOD prescribe 
regulations governing the content and submission of such estimates and 
that the estimates be prepared: 

1. by an office or other entity not under the supervision, direction, 
or control of the military department, agency, or other component 
directly responsible for the program’s development or acquisition; or 
 
2. if the decision authority has been delegated to an official of a 
military department, agency, or other component, by an office or other 
entity not directly responsible for the program’s development or 
acquisition. 

The statute specifies that the independent estimate is to include all 
costs of development, procurement, military construction, and 
operations and support, without regard to funding source or management 
control. 

1993: Government Performance and Results Act: 

The Government Performance and Results Act of 1993 (GPRA), Pub. L. No. 
103-62, requires agencies to prepare multiyear strategic plans that 
describe mission goals and methods for reaching them. It also requires 
agencies to develop annual performance plans that OMB uses to prepare a 
federal performance plan that is submitted to the Congress, along with 
the president’s annual budget submission. The agencies’ plans must 
establish measurable goals for program activities and must describe the 
methods for measuring performance toward those goals. The act also 
requires agencies to prepare annual program performance reports to 
review progress toward annual performance goals. 

1994: Federal Acquisition Streamlining Act: 

The Federal Acquisition Streamlining Act of 1994 (Pub. L. No. 103-355, 
§§ 5001(a)(1), 5051(a), as amended) established a congressional policy 
that the head of each executive agency should achieve, on average, 90 
percent of cost, performance, and schedule goals established for major 
acquisition programs of the agency. The act requires an agency to 
approve or define cost, performance, and schedule goals for its major 
acquisition programs. To implement the 90 percent policy, the act 
requires agency heads to determine whether there is a continuing need 
for programs that are significantly behind schedule, over budget, or 
not in compliance with performance or capability requirements and to 
identify suitable actions to be taken, including termination, with 
respect to such programs. This provision is codified at 41 U.S.C. § 263 
(2000) for civilian agencies. A similar requirement in 10 U.S.C. § 2220 
applied to DOD but was amended to remove the 90 percent measure. DOD 
has its own major program performance oversight requirements, such as 
the Nunn-McCurdy cost reporting process at 10 U.S.C. § 2433. OMB 
incorporated the 90 percent measure into the Capital Programming Guide 
Supplement to Circular A-11.[Footnote 85] 
 
1996: Clinger-Cohen Act: 

The Clinger-Cohen Act of 1996 (codified, as relevant here, at 40 U.S.C. 
§§ 11101–11704 (Supp. V 2005)) is intended to improve the productivity, 
efficiency, and effectiveness of federal programs by improving the 
acquisition, use, and disposal of information technology resources. 
Among its provisions, it requires federal agencies to: 

1. establish capital planning and investment control processes to 
maximize the value and manage the risks of information technology 
acquisitions, through quantitative and qualitative assessment of 
investment costs, benefits, and risks, among other ways; 

2. establish performance goals and measures for assessing and improving 
how well information technology supports agency programs, by 
benchmarking agency performance against public and private sector best 
practices; 

3. appoint chief information officers to be responsible for carrying 
out agency information resources management activities, including the 
acquisition and management of information technology, to improve agency 
productivity, efficiency, and effectiveness; and 

4. identify in their strategic information resources management plans 
any major information technology acquisition program, or any phase or 
increment of such a program, that has significantly deviated from the 
cost, performance, or schedule goals established for the program. 

2006: DOD Major Automated Information System Programs: 
 
Section 816 of the John Warner National Defense Authorization Act for 
Fiscal Year 2007 (Pub. L. No. 109-364) added new oversight requirements 
for DOD’s major automated information system programs. These 
requirements, codified at 10 U.S.C. §§ 2445a–2445d (2006), include 
estimates of development costs and full life-cycle costs, as well as 
the establishment of a program baseline, variance reporting, and 
reports on significant or critical changes in the program (these 
include estimated program cost increases over certain thresholds). 

2006: Federal Acquisition Regulation—EVM Policy Added: 

The government’s earned value management system policy is spelled out 
in subpart 34.2 of the Federal Acquisition Regulation (FAR, 48 C.F.R.). 
The Civilian Agency Acquisition Council and the Defense Acquisition 
Regulations Council promulgated a final rule amending the FAR to 
implement EVM policy on July 5, 2006.[Footnote 86] The rule was 
necessary to help standardize EVM use across the government where 
developmental effort under a procurement contract is required. It 
implements EVM system policy in accordance with OMB Circular A–11, Part 
7, and its supplement, the Capital Programming Guide.[Footnote 87] 

It requires that EVM be used for major acquisitions for development. 
The rule defines an EVM system as a project management tool that 
effectively integrates the project’s scope of work with cost, schedule, 
and performance elements for optimum project planning and control (see 
FAR, 48 C.F.R. § 2.101). It also states that the qualities and 
characteristics of an EVM system are described in ANSI/EIA Standard 
748, Earned Value Management Systems.[Footnote 88] 

The rule stipulates that when an EVM system is required, the government 
is to conduct an integrated baseline review (IBR) to verify the 
technical content and realism of the related performance budgets, 
resources, and schedules. Through the IBR, agencies are to attain 
mutual understanding of the risks inherent in contractors’ performance 
plans and the underlying management control systems. The rule 
contemplates that the IBR results in the formulation of a plan to 
handle these risks. 

2008: Defense Federal Acquisition Regulation Supplement: 

DOD issued a final rule (73 Fed. Reg. 21,846 (April 23, 2008), 
primarily codified at 48 C.F.R. subpart 234.2, and part 252 (sections 
252.234-7001 and 7002)), amending the Defense Federal Acquisition 
Regulation Supplement (DFARS) to update requirements for DOD 
contractors to establish and maintain EVM systems. The rule also 
eliminated requirements for DOD contractors to submit cost-schedule 
status reports. 

This final rule updates DFARS text addressing EVM policy for DOD 
contracts, supplements the final FAR rule published at 71 Fed. Reg. 
38,238 on July 5, 2006, and establishes DOD-specific EVM requirements, 
as permitted by the FAR. The DFARS rule follows up on the policy in the 
memorandum the Under Secretary of Defense (Acquisition, Technology, and 
Logistics) issued on March 7, 2005, entitled “Revision to DOD Earned 
Value Management Policy.” 

The DFARS changes in this rule include the following: For cost or 
incentive contracts and subcontracts valued at $20 million or more, the 
rule requires an EVM system that complies with the guidelines in the 
American National Standards Institute/Electronic Industries Alliance 
Standard 748, Earned Value Management Systems (ANSI/EIA–748). For cost 
or incentive contracts and subcontracts valued at $50 million or more, 
the rule requires an EVM system that the cognizant federal agency (as 
defined in FAR 2.101) has determined to be in compliance with the 
guidelines in ANSI/EIA–748. For cost or incentive contracts and 
subcontracts valued at less than $20 million, the rule provides that 
application of EVM is optional and is a risk-based decision. For firm-
fixed-price contracts and subcontracts of any dollar value, the rule 
discourages applying EVM. DCMA is assigned responsibility for 
determining EVM compliance when DOD is the cognizant federal agency. 
Requirements for contractor cost-schedule status reports are 
eliminated. 
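
To illustrate the applicability rules just described, the short sketch 
below expresses them as a simple decision function. It is an 
illustrative aid only, not a restatement of the regulation; the 
function name and contract-type labels are assumptions chosen for the 
example, and DFARS subpart 234.2 remains the authoritative source. 

# Illustrative sketch of the EVM applicability thresholds described above.
def evm_requirement(contract_type, value_millions):
    """Return the EVM treatment for a DOD contract under the rule described above."""
    if contract_type == "firm-fixed-price":
        return "EVM application discouraged"
    if contract_type in ("cost", "incentive"):
        if value_millions >= 50:
            return "EVM system determined compliant with ANSI/EIA-748 by the cognizant federal agency"
        if value_millions >= 20:
            return "EVM system that complies with ANSI/EIA-748 guidelines"
        return "EVM optional; risk-based decision"
    return "Outside the cases described in this sketch"

print(evm_requirement("cost", 75))       # agency-determined compliance required
print(evm_requirement("incentive", 30))  # compliant system required
print(evm_requirement("cost", 10))       # optional, risk-based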

Policies: 

1976: OMB Circular Major Systems Acquisitions: 

OMB’s 1976 Circular A-109, Major Systems Acquisitions, establishes 
policies for agencies to follow when acquiring major systems. It 
requires agencies to ensure that their major system acquisitions 
fulfill mission needs, operate effectively, and demonstrate a level of 
performance and reliability that justifies the use of taxpayers’ funds. 
The policy also states that agencies need to maintain the ability to 
develop, review, negotiate, and monitor life-cycle costs. Moreover, 
agencies are expected to assess cost, schedule, and performance 
progress against predictions and inform agency heads of any variations 
at key decision points. When variations occur, the circular requires 
agencies to develop new assessments and use independent cost estimates, 
where feasible, for comparing results. 

1992: OMB Guidelines and Discount Rates for Benefit-Cost Analysis: 

In 1992, OMB issued Circular No. A-94, Guidelines and Discount Rates 
for Benefit-Cost Analysis of Federal Programs, to help agencies 
support government decisions to initiate, review, or expand programs 
that would result in measurable costs or benefits extending for 3 or 
more years into the future. It is general guidance for conducting 
benefit-cost and cost-effectiveness analyses. It also gives specific 
guidance on discount rates for evaluating federal programs whose 
benefits and costs are distributed over time. 

The guidance serves as a checklist for whether an agency has considered 
and properly dealt with all the elements of sound benefit-cost and cost-
effectiveness analyses, including, among other things, identifying 
assumptions, analyzing alternatives, applying inflation, discounting 
for net present value, characterizing uncertainty, and performing 
sensitivity analysis. 
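
The discounting step in such analyses is a simple present value 
calculation: each future year’s net benefit is divided by a discount 
factor before summing. The short sketch below is an illustrative aid, 
not guidance from the circular; the 7 percent rate and the cash flow 
amounts are assumptions chosen only for the example. 

# Illustrative net present value calculation (rate and cash flows are made up).
def net_present_value(cash_flows, discount_rate):
    """Discount a stream of yearly net benefits (year 0 first) to present value."""
    return sum(amount / (1.0 + discount_rate) ** year
               for year, amount in enumerate(cash_flows))

# An up-front cost of 1,000 followed by net benefits of 400 a year for 3 years.
flows = [-1000.0, 400.0, 400.0, 400.0]
print(round(net_present_value(flows, 0.07), 2))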

1995: DOD’s Economic Analysis for Decisionmaking Instruction: 

Economic Analysis for Decisionmaking, DOD’s 1995 Instruction No. 
7041.3, implements policy and updates responsibilities and procedures 
for conducting cost-effectiveness economic analysis. It states that 
economic analysis is an important tool for planning and budgeting for 
DOD systems, and it helps decision makers obtain insight into the 
economic factors of various alternatives. The instruction outlines 
procedures for estimating the life-cycle costs and benefits of each 
feasible alternative and for adjusting all costs and benefits to 
present value by using discount factors to account for the time value 
of money. These procedures provide decision makers with the information 
associated with each alternative’s size and the timing of costs and 
benefits so that the best alternative can be selected. The instruction 
discusses the following elements of an economic analysis: a statement 
of the objective, assumptions, alternative ways of satisfying the 
objective, costs and benefits for each alternative considered, a 
comparison of alternatives ranked by net present value, sensitivity and 
uncertainty analysis, and results and recommendations. It also contains 
guidance on choosing alternatives and providing sensitivity analysis 
and proper discounting. 

2003: DOD’s Defense Acquisition System Directive: 
 
DOD’s Directive No. 5000.1, The Defense Acquisition System, outlines 
the management processes DOD is to follow to provide effective, 
affordable, and timely systems to users. It stipulates that the Defense 
Acquisition System exists to manage the nation’s investment in 
technologies, programs, and product support necessary to achieve the 
National Security Strategy and support the armed forces. Among other 
things, the policy requires every program manager to establish life-
cycle cost, schedule, and performance goals that will determine the 
acquisition program baseline. These goals should be tracked and any 
deviations in program parameters and exit criteria should be reported. 
The directive discusses how programs should be funded to realistic 
estimates and states that major drivers of total ownership costs should 
be identified. It requires program managers to use knowledge-based 
acquisition for reducing risk by requiring that new technology be 
demonstrated before it is incorporated into a program. 

DOD’s Directive No. 5000.1, The Defense Acquisition System, has been 
redesignated 5000.01 and certified current as of Nov. 20, 2007. 

2003: DOD’s Operation of the Defense Acquisition System Instruction: 

DOD’s Instruction No. 5000.2, Operation of the Defense Acquisition 
System, establishes a framework for translating requirements into 
stable and affordable programs that can be managed effectively. It 
describes the standard framework for defense acquisition systems, which 
is to define the concept and analyze alternatives, develop the 
technology, develop the system and demonstrate that it works, produce 
the system and deploy it to its users, and operate and support the 
system throughout its useful life. The instruction also discusses in 
great detail the three milestones and what entrance and exit criteria 
must be met for each one. It explains the concept of evolutionary 
acquisition and how DOD prefers this strategy for acquiring technology, 
because it allows for the delivery of increased technical capability to 
users in the shortest time. The instruction identifies technology 
readiness assessments as a way to manage and mitigate technology 
risk. It discusses the different kinds of acquisition categories and 
their cost thresholds and decision authorities. In addition, it defines 
the role of the Cost Analysis Improvement Group (CAIG) in developing 
independent cost estimates. 

DOD’s Instruction No. 5000.2, Operation of the Defense Acquisition 
System, was cancelled and reissued by Instruction No. 5000.02 on Dec. 
8, 2008. 

2004: National Security Space Acquisition Policy 03-01: 

This document provides acquisition process guidance for DOD entities 
that are part of the National Security Space team. The Under Secretary 
of the Air Force is the DOD Space Milestone Decision Authority for all 
DOD Space Major Defense Acquisition Programs (MDAP). National Security 
Space is defined as the combined space activities of DOD and the 
National Intelligence Community. This policy describes the streamlined 
decision making framework for all such DOD programs. 

A DOD Space Major Defense Acquisition Program is an acquisition program 
that the DOD Space Milestone Decision Authority or the Defense 
Acquisition Executive designates as special interest or that the Space 
Milestone Decision Authority estimates will require an eventual total 
expenditure for research, development, test, and evaluation of more 
than $365 million in fiscal year 2000 constant dollars or, for 
procurement, more than $2.19 billion in fiscal year 2000 constant 
dollars. Highly sensitive classified programs as defined by 10 U.S.C. § 
2430 are not included. 

2005: DOD’s Earned Value Management Policy: 
 
Stating that EVM had been “an effective management control tool in the 
Department for the past 37 years,” DOD revised its policy—with its 
March 7, 2005, memorandum, “Revision to DOD Earned Value Management 
Policy”—to streamline, improve, and increase consistency in EVM’s 
application and implementation. The memorandum requires contracts equal 
to or greater than $20 million to implement EVM systems in accordance 
with ANSI/EIA Standard 748. It also requires contractors with contracts 
equal to or greater than $50 million to have formally validated EVM 
systems approved by the cognizant contracting officer. The revised 
policy also requires contract performance reports, an integrated master 
schedule, and an IBR whenever EVM is required. The new policy also 
calls for, among other things, a common WBS structure for the CPR and 
IMS. 

2005: OMB’s Memorandum on Improving Information Technology Project 
Planning and Execution: 

OMB’s 2005 Improving Information Technology (IT) Project Planning and 
Execution Memorandum for Chief Information Officers discusses how it 
expects agencies to ensure that cost, schedule, and performance goals 
are independently validated for reasonableness before beginning 
development. In addition, it requires agencies to fully implement EVM 
on all major capital acquisition projects. Full implementation occurs 
when agencies have shown that they have: 
 
1. a comprehensive agency policy for EVM; 

2. included EVM system requirements in contracts or agency in-house 
project charters; 

3. held compliance reviews for agency and contractor EVM systems; 

4. a policy of performing periodic system surveillance reviews to 
ensure that the EVM system continues to meet ANSI/EIA Standard 748 
guidelines; and 

5. a policy of conducting IBRs for making cost, schedule, and 
performance goals final. 

The memorandum gives further guidance and explanation for each of these 
five key components. For example, OMB states that compliance reviews 
should confirm that a contractor’s EVM system processes and procedures 
have satisfied ANSI/EIA Standard 748 guidelines and that surveillance 
reviews should show that agencies are using EVM to manage their 
programs. The memorandum stresses the importance of an IBR as a way of 
assessing program performance and understanding risk. 

2006: OMB’s Capital Programming Guide: 

The Capital Programming Guide—the part 7 supplement to OMB’s Circular 
No. A-11—sets forth the requirements for how OMB manages and oversees 
agency budgets. In the budget process, agencies must develop and submit 
to OMB for review an exhibit 300, also known as the Capital Asset Plan 
and Business Case. Under OMB’s Circular A-11, agencies must analyze and 
document their decisions on proposed major investments. Exhibit 300 
functions as a reporting mechanism that enables an agency to 
demonstrate to its own management, as well as OMB, that it has used the 
disciplines of good project management, developed a strong business 
case for investment, and met other administration priorities in 
defining the cost, schedule, and performance goals proposed for the 
investment. Exhibit 300 has eight key sections on spending, performance 
goals and measures, analysis of alternatives, risk inventory and 
assessment, acquisition strategy, planning for project investment and 
funding, enterprise architecture, and security and privacy. When 
considering investments to recommend for funding, OMB relies on the 
accuracy and completeness of the information reported in exhibit 300. 
It also states that credible cost estimates are vital for sound 
management decision making and for any program or capital project to 
succeed. To that end, OMB notes that following the guidelines in the 
GAO Cost Estimating and Assessment Guide (GAO-09-3SP) will help 
agencies meet most cost estimating requirements. 

2006: DOD’s Cost Analysis Improvement Group Directive: 

DOD’s Directive 5000.04 states that the CAIG is the principal advisory 
body on cost for milestone decision authorities. CAIG estimates that 
support milestone decisions are to include costs for research and 
development, prime hardware and its major subcomponents, procurement, 
initial spares, military construction, and all operations and support, 
regardless of funding source or management control. The CAIG is 
to provide its assessments in a formal report addressed to milestone 
decision authorities. In addition to describing the cost estimate, the 
CAIG report is to include a quantitative assessment of the associated 
risks. The risks should include the validity of program assumptions, 
such as the reasonableness of program schedules and technical 
uncertainty and any errors associated with the cost estimating methods. 

The directive describes other CAIG responsibilities, including 
reporting on the reasonableness of unit costs for programs breaching 
specific cost thresholds, assessing the validity of costs in 
acquisition program baselines, and independently assessing Defense 
Acquisition Executive Summary program costs. The CAIG is also 
responsible for giving guidance on preparing cost estimates, 
sponsoring cost research, establishing standard definitions of cost 
terms, and developing and implementing policy for collecting, storing, 
and exchanging cost information and data to improve cost estimating. 

Guidance: 

1992: CAIG’s Operating and Support Cost-Estimating Guide: 

The 1992 Operating and Support Cost-Estimating Guide, prepared by the 
Cost Analysis Improvement Group in the Office of the Secretary of Defense, is 
intended to help DOD components prepare, document, and present 
operating and support cost estimates to the CAIG. It discusses the 
requirements for the cost estimates, provides instructions for 
developing them, and presents standard cost element structures and 
definitions for specific categories of weapon systems. Documentation 
and presentation requirements are provided to help prepare for CAIG 
reviews. The guide’s primary objective is to achieve consistent, well 
documented operating and support cost estimates that an independent 
party can replicate and verify. 

1992: DOD’s Cost Analysis Guidance and Procedures: 

DOD’s 1992 Directive 5000.4-M, Cost Analysis Guidance and Procedures, 
is a manual for preparing the Cost Analysis Requirements Document, 
which the program office is to develop, describing the program in 
enough detail for cost estimators to develop an LCCE. The manual 
contains information on preparing and presenting LCCEs to the CAIG, 
including the scope of the estimate and the analytical methods to be 
used. It defines seven high-level cost terms—development cost, flyaway 
(sailaway, rollaway) cost, weapons system cost, procurement cost, program 
acquisition cost, operating and support cost, and life cycle cost— 
and how they relate to WBS elements and appropriations. 

2003: DOD’s Program Manager’s Guide to the Integrated Baseline Review 
Process: 

DOD developed the April 2003 Program Manager’s Guide to the Integrated 
Baseline Review Process to improve the consistency of the IBR process. 
The intent was to ensure that the IBR would provide program managers 
with an understanding of the risks involved with a contractor’s 
performance plans and corresponding EVM systems. Since DOD’s 
acquisition policy requires IBRs on contracts with EVM requirements, 
the guide identifies the purpose of the IBR process and stresses the 
need for the process to continue even after the IBR has been conducted. 
Program managers are strongly encouraged to follow this guidance in 
training for, preparing for, and conducting IBRs. 

2004: NDIA PMSC Surveillance Guide: 

The NDIA PMSC Surveillance Guide—the short title of the 2004 edition of 
this document—is intended for the use of government and contractor 
communities in determining whether EVM systems are being used to 
effectively manage program cost, schedule, and technical performance. 
The guide gives an overview of what EVM system surveillance entails, 
including ensuring that company processes and procedures are followed 
to satisfy the ANSI/EIA 748-A Standard. It discusses the activities in 
proper system surveillance, including organization, planning, 
execution, results, management control, and corrective action. It 
provides a standard industry surveillance approach to ensuring a common 
understanding of expectations and the use of a uniform process. 

2005: NDIA PMSC EVM Systems Intent Guide: 

The 2005 Earned Value Management Systems Intent Guide, issued by NDIA 
and its Program Management Systems Committee, is intended for the use 
of government analysts and contractors, wherever ANSI/EIA Standard 748 
is required. The guide defines the management value and intent for each 
of the standard’s guidelines and lists the attributes and objective 
evidence that can be used to verify compliance with a given guideline. 
The objective of compliance is to demonstrate that a contractor has 
thought through each guideline and can describe how its business 
process complies with it. A customer, independent reviewer, or auditor 
can use the intent, typical attributes, and objective evidence of 
typical outputs that the guide describes as the basis for verifying 
compliance. The guide’s five sections are (1) organization; (2) 
planning, scheduling, and budgeting; (3) accounting considerations; (4) 
analysis and management reports; and (5) revisions and data 
maintenance. It recommends that: 

1. contract or business processes and system documentation be mapped 
and verified against the guideline’s intent, typical attributes, and 
objective evidence of typical outputs described in the document by the 
process owner; 

2. someone independent of the documenting party verify the compliance 
assessment; 

3. the verifier be versed in ANSI/EIA 748 EVM system guidelines; 

4. the customer recognize this method as being applicable and 
meaningful to compliance assessment verification; and 

5. the customer consider past acceptance of compliance with ANSI/EIA 
748 EVM system guidelines, business organization application policy, 
and surveillance activity in management decisions to perform a 
compliance assessment.[Footnote 89] 

2006: DOD Earned Value Management Implementation Guide: 
 
DCMA issued the Department of Defense Earned Value Management 
Implementation Guide in 2006 to serve as the central EVM guidance 
during implementation and surveillance of EVM systems in compliance 
with DOD guidelines. The guide has two parts. The first contains basic 
EVM information, describes an EVM system’s objectives, and provides 
guidance for interpreting EVM guidelines as they apply to government 
contracts. The second part describes procedures and processes 
government staff must follow in evaluating the implementation of EVM 
systems. It also provides guidance on tailoring the guidelines, 
analyzing EVM performance, determining the effectiveness of the 
baseline and its maintenance, and performing other activities that must 
be followed after contracts have been awarded. 

2006: NDIA System Acceptance Guide: 

NDIA’s Program Management Systems Committee’s working draft of its EVM 
System Acceptance Guide was released for comment in November 2006. The 
guide defines a process in which a government or industry owner of an 
EVM system that has a first-time requirement to comply with the 
ANSI/EIA 748-A standard can: 
 
1. understand the need for and effectively design the system, 

2. implement the system on the acquisition, 

3. evaluate its compliance and implementation, 

4. prepare and provide documentation that substantiates evaluation and 
implementation, and 

5. receive approval and documentation that satisfies current and future 
requirements for the system’s approval.[Footnote 90] 

2007: ANSI/EIA 748-B: 

ANSI/EIA 748-B is an update of ANSI/EIA 748-A. This document provides 
basic guidelines for companies to use in establishing and applying an 
integrated EVM system. The guidelines are expressed in fundamental 
terms and provide flexibility in their use. They are grouped into five 
major categories. 

They incorporate best business practices to provide strong benefits for 
program or enterprise planning and control. The processes include 
integrating program scope, schedule, and cost objectives; establishing 
a baseline plan for accomplishing program objectives; and using earned 
value techniques for performance measurement during the execution of a 
program. The system provides a sound basis for identifying problems, 
taking corrective actions, and managing replanning as required. 

The guidelines in this document are purposely high level and goal 
oriented, since they are intended to state the qualities and 
operational considerations of an integrated management system using 
earned value analysis methods without mandating detailed system 
characteristics. Different organizations must have the flexibility to 
establish and apply a management system that suits their management 
style and business environment. The system must, first and foremost, 
meet the organization’s needs and good business practices. 

2007: NDIA Systems Application Guide: 

NDIA’s Program Management Systems Committee’s working draft of its EVM 
Systems Application Guide was published in March 2007. For all 
organizations implementing ANSI/EIA 748-A, Earned Value Management 
Systems (the current version of the standard), it describes the 
importance of planning the EVM application through all phases of the 
acquisition life cycle. It elaborates on the 
performance-based management requirements in OMB’s Capital Programming 
Guide. The Systems Application Guide also provides the context for the 
application of EVM within a federal agency’s acquisition life cycle, 
along with government acquisition terminology.[Footnote 91] 

[End of Appendix 5] 

Appendix 6: Data Collection Instrument: 

Job title: 
Job code: 

Explain the job, identify the requester, and provide any other relevant 
information. 

Data Request. Please provide copies of the following: 

1. Program life-cycle cost estimates and supporting documentation, 
showing the basis of the estimates (methodology, data sources, risk 
simulation inputs and results, software cost model inputs and results, 
etc.) 

2. Program management review briefings from the past year’s budget 
documentation, including projected budget and OMB 300 reports. 

3. The program’s contract. 

4. A short contract history, with a description of contract line item 
numbers, contract number and type, award date, and performance period 
and a summary of significant modifications (with cost and description). 

5. Award fee determination (or incentive) letters and any presentations 
by the contractor regarding award fee determination (e.g., self-
evaluations). 

6. Price negotiation memos, also known as business clearance letters. 

7. Independent cost estimate briefings and supporting documentation. 

8. Nunn-McCurdy unit cost breach program reporting and certification 
documentation, if applicable. 

9. Work breakdown structure (WBS) or cost element structure (CES), with 
dictionary. 

10. The latest approved technical baseline description (TBD), also 
known as the cost analysis requirements description (CARD) in DOD and 
the cost analysis data requirement (CADRe) at NASA. 

11. Current acquisition program baseline. 

12. Selected acquisition reports (SAR), if applicable. 

13. If DOD, cost and software data reporting (CSDR), or contract 
critical design review (CCDR) if NASA. 

14. Technology readiness assessments, if applicable. 

15. Design review reports, preliminary and critical. 

16. The acquisition decision memorandum. 
 
17. EVM contract performance reports (CPR), Formats 1-5, for the past 
12 months, year-end for all prior years, and monthly thereafter during 
the audit, preferably electronic. 

18. All integrated baseline review (IBR) reports. 

19. EVM surveillance reports for the past 12 months and a standing 
request for monthly reports during the audit. 

20. The integrated master schedule (IMS) in its native software format 
(e.g., *.mpp). 

21. The integrated master plan (IMP). 

Contract Questions. Please answer the following questions: 

1. Break down the program’s budget by contract, government in-house, 
and other costs. What percent of the program’s budget do the prime 
contract, major subcontracts, and government costs subsume? 
Identify the quantities of the system to be procured, including planned 
options and foreign military sales, if applicable. 

2. Discuss any major contract modifications and how long it took to 
make the changes final. 

3. Discuss the award fee structure, if applicable. Does the program 
use cost performance as a basis for determining award fee? Are contract 
performance report (CPR) data used? If not used, what is examined to 
determine award fees? 

4. Describe any applicable teaming arrangements. 

Program Management And Cost. Please answer the following questions: 

1. Who was responsible for developing the program’s life-cycle cost 
estimate? If a support contractor prepared the estimates, what 
requirements and guidelines were provided to the support contractor 
regarding the development of the estimate? What qualifications and 
experience do the cost analysts have? Was the estimate prepared by a 
centralized cost team outside of the program office? What types of cost 
data are available to the cost team? Are centralized databases and 
experts available to the cost team to support the development of the 
estimate? 

2. How often does the program present program management review 
briefings? How are decisions made and documented? 

3. What are the program’s current risk drivers and associated rankings 
(high, medium, or low)? Please describe the effect of each risk. Is there a 
risk mitigation plan? If so, please describe it. 

4. Describe significant cost and schedule drivers. Are there corrective 
action plans to address them? 

5. Has an independent cost estimate (ICE) been performed on the 
program’s life-cycle costs? If so, how much higher or lower was the 
ICE? How were the differences between the ICE and the program cost 
estimate reconciled? Who was briefed on the ICE? 

6. Have any Monte Carlo simulations been run to determine the risk 
level associated with cost estimates? What were the results and how did 
they influence program decisions regarding risk and funding? 

7. How does the program procure equipment furnished by the government? 
Are there separate contracts for such items? If so, what is the value? 
How is such equipment accounted for in the program’s cost estimate? 

8. Who is responsible for absorbing cost overruns associated with 
equipment furnished by the government: the program or the program 
developing the item?

9. Please describe the program’s software requirements. How was the 
effort estimated in regard to size requirements and productivity rates? 
Were any software cost models used? What were the associated inputs? 

10. Please discuss any effects inflation has had on the program and 
whether inflation has played a role in cost overruns. 

[End of Appendix 6] 

Appendix 7: Data Collection Instrument: Data Request Rationale: 

The items in this appendix are keyed to the “Data Request” items in 
appendix 6. 

1. Program life-cycle cost estimates and supporting documentation, 
showing the basis of the estimates (methodology, data sources, risk 
simulation inputs and results, software cost model inputs and results, 
etc.). 

Rationale: Only by assessing the estimate’s underlying data and 
methodology can the auditor determine its quality. This information 
will answer important questions such as, How applicable are the data? 
Were the data normalized correctly? What method was used? What 
statistics were generated? 

2. Program management review briefings from the past 2 years’ budget 
documentation, including budget and OMB 300 reports. 

Rationale: This information tells the auditor what senior management 
was told and when the presentations were made—what problems were 
revealed, what alternative actions were discussed. Budget documentation 
assures the auditor that agencies are properly employing capital 
programming to integrate the planning, acquisition, and management of 
capital assets into the budget decisionmaking process. Agencies are 
required to establish cost, schedule, and measurable performance goals 
for all major acquisition programs and should achieve, on average, 90 
percent of those goals. 

3. The program’s contract. 

Rationale: This tells the auditor what the contractor was required to 
deliver at a given time. It also provides price or cost information, 
including the negotiated price or cost, as well as the type of contract 
(such as fixed-price, cost-plus-fixed-fee, cost-plus-award, or 
incentive fee). 

4. A short contract history, with a description of contract line item 
numbers, contract number and type, award date, and performance period 
and a summary of significant contract modifications (with cost and 
description). 

Rationale: This provides important context for the current contract. 
Only with a detailed knowledge of program history can the auditor 
effectively determine the program’s present status and future 
prospects. 

5. Award fee determination (or incentive) letters and any contractor 
presentations regarding award fee determination (e.g., self-
evaluations). 

Rationale: This obviously applies only to contracts with award (or 
incentive) fees. For such contracts, the auditor needs to know the 
basis on which fees were awarded, whether it was strictly followed, and 
reasons for any deviations. 

6. Price negotiation memos, also known as business clearance letters. 

Rationale: The price negotiation memorandum summarizes for the auditor 
the contract price negotiations, including documentation of fair and 
reasonable pricing. 

7. Independent cost estimate briefings and supporting documentation. 

Rationale: This information is important because, first, it provides 
the auditor with the data needed to assess the quality of the LCCE and, 
second, it reveals what information was independently briefed to senior 
management about the quality of the baseline cost estimate. 

8. Nunn-McCurdy unit cost breach program reporting and certification 
documentation, if applicable. 

Rationale: This will not apply at all to non-DOD programs and applies 
only to certain DOD programs. For DOD programs (major defense 
acquisition programs), it is important that the auditor know the nature 
of the breach, when it occurred, when it was reported, and what action 
was taken. 

9. Work breakdown structure (WBS) or cost element structure (CES), with 
dictionary. 

Rationale: The WBS and CES and associated dictionary represent a 
hierarchy of product-oriented elements that provide a detailed 
understanding of what the contractor was required to develop and 
produce. 

10. The latest approved technical baseline description, also known as 
the cost analysis requirements description (CARD) in DOD and the cost 
analysis data requirement (CADRe) at NASA. 

Rationale: The technical baseline description provides the auditor with 
the program’s technical and program baseline. Besides defining the 
system, it provides complete information on testing plans, procurement 
schedules, acquisition strategy, and logistics plans. This is the 
document on which cost analysts base their estimates and is therefore 
essential to the auditor’s understanding of the program.

11. Current acquisition program baseline. 

Rationale: The acquisition program baseline documents program goals 
before program initiation. The program manager derives the acquisition 
program baseline from the users’ performance requirements, schedule 
requirements, and best estimates of total program cost consistent with 
projected funding. The baseline should contain only the parameters 
that, if thresholds are not met, will require the milestone decision 
authority to reevaluate the program and consider alternative program 
concepts or design approaches. 

12. Selected acquisition reports (SAR), if applicable. 
 
Rationale: For major defense acquisition programs, the SAR provides the 
history and current status of total program cost, schedule, and 
performance, as well as program unit cost and unit cost breach 
information. For joint programs, SARs provide information by 
participant. Each SAR includes a full life-cycle cost analysis for the 
reporting program; an analysis of each of its evolutionary increments, 
as available; and analysis of its antecedent program, if applicable. 

13. If DOD, cost and software data reporting, or contractor critical 
design review, if NASA. 

Rationale: Contractor critical design reviews provide the auditor with 
actual contractor development or procurement costs by WBS or CES. 
Especially useful is the fact that recurring and nonrecurring costs are 
differentiated. 

14. Technology readiness assessment, if applicable. 

Rationale: A technology readiness assessment provides an evaluation of 
a system’s technological maturity by major WBS elements. It is 
extremely useful in countering technological overoptimism. For elements 
with unacceptable assessments, the auditor can then assess whether 
satisfactory mitigation plans have been developed to ensure that 
acceptable maturity will be achieved before milestone decision dates. 

15. Design review reports, preliminary and critical. 

Rationale: Design review reports provide the technical information 
needed to ensure that the system is satisfactorily meeting its 
requirements. The preliminary design review ensures that the system can 
proceed into detailed design, while meeting its stated performance 
requirements within cost (program budget), schedule (program schedule), 
risk, and other system constraints. The critical design review ensures 
that the system can proceed into system fabrication, demonstration, and 
test, while meeting its stated performance requirements within cost, 
schedule, risk, and other system constraints. It also assesses the 
system’s final design as captured in product specifications for each 
configuration item in the system (product baseline) and ensures that 
each product in the product baseline has been captured in the detailed 
design documentation. 

16. The acquisition decision memorandum. 

Rationale: This provides the documented rationale for the milestone 
decision authority’s (or investment review board’s) approving a program 
to advance to the next stage of the acquisition process. 

17. EVM contract performance reports, Formats 1–5, for the past 12 
months, year-end for all prior years, and monthly thereafter during the 
audit—preferably electronic. 

Rationale: Contract performance reports (CPR) are management reports 
essential to an auditor’s ability to develop a comprehensive analysis. 
They provide timely, reliable summary data from which to assess current 
and projected contract performance, and the auditor can use them to 
reasonably project future program performance (an illustrative 
calculation appears at the end of this list). Format 1 provides data to 
measure cost and 
schedule performance by product-oriented WBS elements—i.e., hardware, 
software, and services the government is buying. Format 2 provides the 
same data by the contractor’s organization (functional or integrated 
product team structure). Format 3 provides the budget baseline plan 
against which performance is measured. Format 4 provides staffing 
forecasts for correlation with the budget plan and cost estimates. 
Format 5 is a narrative report explaining significant cost and schedule 
variances and other identified contract problems and topics. 

18. All integrated baseline review reports. 

Rationale: An IBR’s purpose is to verify the technical content and 
realism of the interrelated performance budgets, resources, and 
schedules. It helps the auditor understand the inherent risks in 
offerors’ or contractors’ performance plans and the underlying 
management control systems, and it should contain a plan to handle 
these risks. OMB policy requires that IBRs be initiated as early as 
practicable. 

19. EVM surveillance reports for the past 12 months and a standing 
request for monthly CPRs during the audit. 

Rationale: EVM surveillance reports assure the auditor that contractors 
are using effective internal cost and schedule control systems that 
provide contractor and government managers with timely and auditable 
data to effectively monitor programs, provide timely indications of 
actual and potential problems, meet requirements, and control contract 
performance. Surveillance ensures that a supplier’s EVM implementation 
of processes and procedures is being maintained over time and on all 
applicable programs and is in compliance with the 32 EVM guidelines. 

20. The integrated master schedule (IMS). 

Rationale: The IMS contains the detailed tasks or work packages 
necessary to ensure program execution. The auditor can use the IMS to 
verify the attainability of contract objectives, evaluate progress 
toward program objectives, and integrate the program schedule 
activities with the program components. 

21. The integrated master plan. 

Rationale: The integrated master plan provides an event-based hierarchy 
of program events, with each event supported by accomplishments and 
each accomplishment associated with specific criteria to be satisfied 
for its completion. The plan is normally part of the contract and is 
therefore contractually binding. 
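
To illustrate how an auditor might use the Format 1 data described in 
item 17, the following Python sketch computes standard earned value 
metrics (cost and schedule variances, performance indices, and a cost 
performance index based estimate at completion) for a few WBS elements. 
The element names and dollar values are hypothetical, and an actual CPR 
would carry current-period, cumulative, and at-completion data for many 
more elements. 

# Hypothetical CPR Format 1 data by WBS element (values in $ thousands).
# BCWS = budgeted cost of work scheduled, BCWP = budgeted cost of work
# performed (earned value), ACWP = actual cost of work performed,
# BAC = budget at completion.
cpr_format1 = {
    "1.1 Air vehicle":           {"BCWS": 1200, "BCWP": 1100, "ACWP": 1300, "BAC": 5000},
    "1.2 SE/program management": {"BCWS":  300, "BCWP":  290, "ACWP":  310, "BAC": 1200},
    "1.3 System test and eval":  {"BCWS":  150, "BCWP":  120, "ACWP":  160, "BAC":  900},
}

for element, d in cpr_format1.items():
    cv = d["BCWP"] - d["ACWP"]                       # cost variance (negative = overrun)
    sv = d["BCWP"] - d["BCWS"]                       # schedule variance (negative = behind)
    cpi = d["BCWP"] / d["ACWP"]                      # cost performance index
    spi = d["BCWP"] / d["BCWS"]                      # schedule performance index
    eac = d["ACWP"] + (d["BAC"] - d["BCWP"]) / cpi   # CPI-based estimate at completion
    print(f"{element}: CV={cv:+} SV={sv:+} CPI={cpi:.2f} SPI={spi:.2f} EAC={eac:,.0f}")

In the output, a cost performance index below 1.0 signals that each 
dollar of actual cost is earning less than a dollar of budgeted work, 
and the estimate at completion grows accordingly. 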

[End of Appendix 7] 

Appendix 8: SEI Checklist: 

Checklists and Criteria contains a checklist for evaluating an 
organization’s software cost and schedule estimating capabilities and 
is available at: 
 
Checklists and Criteria for Evaluating the Cost and Schedule Estimating 
Capabilities of Software Organizations: 

Robert E. Park: 

Approved for public release: 
Distribution unlimited: 

Software Engineering Institute: 
Carnegie Mellon University: 
Pittsburgh, Pennsylvania 15213: 

This technical report was prepared for the: 
 
SEI Joint Program Office: 
HQ ESC/ENS: 
5 Eglin Street: 
Hanscom AFB, MA 01731-2116: 

The ideas and findings in this report should not be construed as an 
official DoD position. It is published in the interest of scientific 
and technical information exchange. 

Review and Approval: 
 
This report has been reviewed and is approved for publication. 

For The Commander: 
 
Signature On File: 

Thomas R. Miller, Lt Col, USAF: 
SEI Joint Program Office:
 
This work is sponsored by the U.S. Department of Defense. 

Copyright © 1995 by Carnegie Mellon University 

This document is available through Research Access, Inc., 800 Vinial 
Street, Pittsburgh, PA 15212. 

Phone: 1-800-685-6510. FAX: 412-321-2994. 

Copies of this document are available through the National Technical 
Information Service NTIS. For information on ordering, please contact 
NTIS directly: National Technical Information Service, U.S. Department 
of Commerce, Springfield, VA 22161. Phone: 703-487-4600. 

This document is also available through the Defense Technical 
Information Center DTIC. DTIC provides access to and transfer of 
scientific and technical information for DoD personnel, DoD contractors 
and potential contractors, and other U.S. Government agency personnel 
and their contractors. To obtain a copy, please contact DTIC directly: 
Defense Technical Information Center, Attn: FDRA, Cameron Station, 
Alexandria, VA 22304-6145. Phone: 703-274-7633. 

[End of Appendix 8] 

Appendix 9: Examples Of Work Breakdown Structures: 

DOD developed Work Breakdown Structures for Defense Materiel Items in 
1968 to provide a framework and instructions for developing a WBS. 
[Footnote 92] Although it now serves only as guidance, the handbook 
remains an excellent resource for developing a WBS for government and 
private industry. It outlines the contents and components that should 
be considered for aircraft, electronic and automated software systems, 
missiles, ordnance, ships, space systems, surface vehicle systems, and 
unmanned air vehicle systems. It gives examples and definitions, 
particularly in its appendixes A–I, which constitute the bulk of the 
document and on which tables 51, 52, and 54–59 are based. These WBS 
examples were valid at the time of publication; before using one, check 
its source to see whether it has been updated. Table 53 presents a 
common WBS for software 
development based on NASA research conducted by the Jet Propulsion 
Laboratory at the California Institute of Technology. Table 60 shows 
the Department of Energy’s Project WBS. Table 61 is an example of the 
General Services Administration’s construction WBS, and table 62 is an 
Automated Information System: Enterprise Resource Planning Program 
Level Work Breakdown Structure. Tables 63–67 are from the Project 
Management Institute’s Practice Standard for Work Breakdown Structures, 
second edition, published in October 2006. Table 68 is an example of a 
major construction renovation project. Table 69 includes IT 
infrastructure and IT services only. Automated information system 
configuration, customization, development, and maintenance are covered 
in chapter 8. Table 70 is an example from the CSI Masterformat™ 2004 
Structure. 

Table 51: Aircraft System Work Breakdown Structure: 
 
Level 2 element: 1.1 Air vehicle; 
Level 3 element: 
1.1.1 Airframe; 
1.1.2 Propulsion; 
1.1.3 Air vehicle applications software; 
1.1.4 Air vehicle system software; 
1.1.5 Communications/identification; 
1.1.6 Navigation/guidance; 
1.1.7 Central computer; 
1.1.8 Fire control; 
1.1.9 Data display and controls; 
1.1.10 Survivability; 
1.1.11 Reconnaissance; 
1.1.12 Automatic flight control; 
1.1.13 Central integrated checkout; 
1.1.14 Antisubmarine warfare; 
1.1.15 Armament; 
1.1.16 Weapons delivery; 
1.1.17 Auxiliary equipment; 
1.1.18 Crew station. 

Level 2 element: 1.2 Systems engineering/program management; 

Level 2 element: 1.3 System test and evaluation; 
Level 3 element: 
1.3.1 Development test and evaluation;
1.3.2 Operational test and evaluation;
1.3.3 Mock-ups/system integration laboratories;
1.3.4 Test and evaluation support;
1.3.5 Test facilities. 

Level 2 element: 1.4 Training; 
Level 3 element: 
1.4.1 Equipment;
1.4.2 Services;
1.4.3 Facilities.

Level 2 element: 1.5 Data; 
Level 3 element: 
1.5.1 Technical publications;
1.5.2 Engineering data;
1.5.3 Management data;
1.5.4 Support data;
1.5.5 Data depository.

Level 2 element: 1.6 Peculiar support equipment; 
Level 3 element: 
1.6.1 Test and measurement equipment;
1.6.2 Support and handling equipment. 

Level 2 element: 1.7 Common support equipment; 
Level 3 element: 
1.7.1 Test and measurement equipment;
1.7.2 Support and handling equipment.

Level 2 element: 1.8 Operational/site activation; 
Level 3 element: 
1.8.1 System assembly, installation, checkout; 
1.8.2 Contractor technical support;
1.8.3 Site construction;
1.8.4 Site/ship/vehicle conversion.

Level 2 element: 1.9 Industrial facilities; 
Level 3 element: 
1.9.1 Construction/conversion/expansion;
1.9.2 Equipment acquisition or modernization;
1.9.3 Maintenance (industrial facilities).

Level 2 element: 1.10 Initial spares and repair parts. 

Source: DOD, Department of Defense Handbook: Work Breakdown Structures 
for Defense Materiel Items, MIL-HDBK-881A (Washington, D.C.: OUSD 
(AT&L), July 3, 2005), app. A. 

[End of table] 

Table 52: Electronic/Automated Software System Work Breakdown 
Structure: 
 
Level 2 element: 1.1 Prime mission product; 
Level 3 element: 
1.1.1 Subsystem 1...n (specify names);
1.1.2 Prime mission product applications software;
1.1.3 Prime mission product system software;
1.1.4 Integration, assembly, test, checkout.

Level 2 element: 1.2 Platform integration.

Level 2 element: 1.3 Systems engineering/program management.

Level 2 element: 1.4 System test and evaluation; 
Level 3 element: 
1.4.1 Development test and evaluation; 
1.4.2 Operational test and evaluation; 
1.4.3 Mock-ups/system integration labs; 
1.4.4 Test and evaluation support; 
1.4.5 Test facilities. 

Level 2 element: 1.5 Training; 
Level 3 element: 
1.5.1 Equipment; 
1.5.2 Services; 
1.5.3 Facilities. 

Level 2 element: 1.6 Data; 
Level 3 element: 
1.6.1 Technical publications; 
1.6.2 Engineering data; 
1.6.3 Management data; 
1.6.4 Support data; 
1.6.5 Data depository. 

Level 2 element: 1.7 Peculiar support equipment; 
Level 3 element: 
1.7.1 Test and measurement equipment; 
1.7.2 Support and handling equipment. 

Level 2 element: 1.8 Common support equipment; 
Level 3 element: 
1.8.1 Test and measurement equipment; 
1.8.2 Support and handling equipment. 

Level 2 element: 1.9 Operational/site activation. 

Level 2 element: 1.10 System assembly, installation, checkout; 
Level 3 element: 
1.10.1 Contractor technical support; 
1.10.2 Site construction; 
1.10.3 Site/ship/vehicle conversion. 

Level 2 element: 1.11 Industrial facilities; 
Level 3 element: 
1.11.1 Construction/conversion/expansion; 
1.11.2 Equipment acquisition/modernization; 
1.11.3 Maintenance (industrial facilities). 

Level 2 element: 1.12 Initial spares and repair parts. 
 
Source: DOD, Department of Defense Handbook: Work Breakdown Structures 
for Defense Materiel Items, MIL-HDBK-881A (Washington, D.C.: OUSD 
(AT&L), July 3, 2005), app. B. 

[End of table] 

Table 53: Ground Software Work Breakdown Structure: 
 
Level 2 element: 1.1 Software management; 
Level 3 element: 
1.1.1 General management/control activities; 
1.1.2 Software risk management; 
1.1.3 Arrange and conduct reviews; 
1.1.4 General documentation support; 
1.1.5 Secretarial/clerical; 
1.1.6 Administrative support; 
1.1.7 Information technology/computer support; 
1.1.8 Other expenses. 

Level 2 element: 1.2 Software systems engineering; 
Level 3 element: 
1.2.1 Functional design document; 
1.2.2 Requirements specification; 
1.2.3 Software interface documents; 
1.2.4 Configuration management; 
1.2.5 Procurement; 
1.2.6 User manuals; 
1.2.7 Ops concept; 
1.2.8 Concept document; 
1.2.9 Trade-off studies; 
1.2.10 Review preparation. 

Level 2 element: 1.3 Software function i (i = 1...n); 
Level 3 element: 
1.3.1 Management and control activities; 
1.3.2 High-level design; 
1.3.3 Detailed design, code, and unit test; 
1.3.4 Data. 

Level 2 element: 1.4 Software development test bed; 
Level 3 element: 
1.4.1 Test engineering support; 
1.4.2 Test bed development; 
1.4.3 Simulators and test environment; 
1.4.4 Test bed support software; 
1.4.5 Test bed computers. 

Level 2 element: 1.5 Software integration and test; 
Level 3 element: 
1.5.1 Subsystem software integration test plan; 
1.5.2 Software test plans and procedures; 
1.5.3 Support subsystem integration and test; 
1.5.4 System integration and test. 

Level 2 element: 1.6 Software quality assurance; 
Level 3 element: 
1.6.1 Software product assurance plan; 
1.6.2 Software assurance activities. 

Level 2 element: 1.7 Delivery and transfer to operations; 
Level 3 element: 
1.7.1 End user training.

Source: NASA. 

[End of table] 

Table 54: Missile System Work Breakdown Structure: 

Level 2 element: 1.1 Air vehicle; 
Level 3 element: 
1.1.1 Propulsion (stages 1...n); 
1.1.2 Payload; 
1.1.3 Airframe; 
1.1.4 Reentry system; 
1.1.5 Post boost system; 
1.1.6 Guidance and control; 
1.1.7 Ordnance initiation set; 
1.1.8 Airborne test equipment; 
1.1.9 Airborne training equipment; 
1.1.10 Auxiliary equipment; 
1.1.11 Integration, assembly, test, checkout. 

Level 2 element: 1.2 Command and launch; 
Level 3 element: 
1.2.1 Surveillance, identification, tracking;
1.2.2 Sensors; 
1.2.3 Launch and guidance control; 
1.2.4 Communications; 
1.2.5 Command/launch applications software; 
1.2.6 Command and launch system software; 
1.2.7 Launcher equipment; 
1.2.8 Auxiliary equipment; 
1.2.9 Booster adapter. 

Level 2 element: 1.3 Systems engineering/program management; 
Level 3 element: 
1.3.1 System test and evaluation; 
1.3.2 Development test and evaluation; 
1.3.3 Operational test and evaluation; 
1.3.4 Mock-ups/system integration laboratories; 
1.3.5 Test and evaluation support; 
1.3.6 Test facilities. 

Level 2 element: 1.4 Training; 
Level 3 element: 
1.4.1 Equipment; 
1.4.2 Services; 
1.4.3 Facilities. 

Level 2 element: 1.5 Data; 
Level 3 element: 
1.5.1 Technical publications; 
1.5.2 Engineering data; 
1.5.3 Management data; 
1.5.4 Support data; 
1.5.5 Data depository. 

Level 2 element: 1.6 Peculiar support equipment; 
Level 3 element: 
1.6.1 Test and measurement equipment; 
1.6.2 Support and handling equipment. 

Level 2 element: 1.7 Common support equipment; 
Level 3 element: 
1.7.1 Test and measurement equipment; 
1.7.2 Support and handling equipment. 

Level 2 element: 1.8 Operational/site activation. 

Level 2 element: 1.9 System assembly, installation, checkout; 
Level 3 element: 
1.9.1 Contractor technical support;
1.9.2 Site construction; 
1.9.3 Site/ship/vehicle conversion. 

Level 2 element: 1.10 Industrial facilities; 
Level 3 element: 
1.10.1 Construction/conversion/expansion; 
1.10.2 Equipment acquisition/modernization; 
1.10.3 Maintenance (industrial facilities). 

Level 2 element: 1.11 Initial spares and repair parts. 

Source: DOD, Department of Defense Handbook: Work Breakdown Structures 
for Defense Materiel Items, MIL-HDBK-881A (Washington, D.C.: OUSD 
(AT&L), July 3, 2005), app. C. 

[End of table] 

Table 55: Ordnance System Work Breakdown Structure: 

Level 2 element: 1.1 Complete round; 
Level 3 element: 
1.1.1 Structure; 
1.1.2 Payload; 
1.1.3 Guidance and control; 
1.1.4 Fuze; 
1.1.5 Safety/arm; 
1.1.6 Propulsion; 
1.1.7 Integration, assembly, test, checkout. 

Level 2 element: 1.2 Launch system; 
Level 3 element: 
1.2.1 Launcher; 
1.2.2 Carriage; 
1.2.3 Fire control; 
1.2.4 Ready magazine; 
1.2.5 Adapter kits; 
1.2.6 Integration, assembly, test, checkout. 

Level 2 element: 1.3 Systems engineering/program management. 

Level 2 element: 1.4 System test and evaluation; 
Level 3 element: 
1.4.1 Development test and evaluation; 
1.4.2 Operational test and evaluation; 
1.4.3 Mock-ups/system integration laboratories; 
1.4.4 Test and evaluation support; 
1.4.5 Test facilities. 

Level 2 element: 1.5 Training; 
Level 3 element: 
1.5.1 Equipment; 
1.5.2 Services; 
1.5.3 Facilities. 

Level 2 element: 1.6 Data; 
Level 3 element: 
1.6.1 Technical publications; 
1.6.2 Engineering data; 
1.6.3 Management data; 
1.6.4 Support data; 
1.6.5 Data depository. 

Level 2 element: 1.7 Peculiar support equipment; 
Level 3 element: 
1.7.1 Test and measurement equipment; 
1.7.2 Support and handling equipment. 

Level 2 element: 1.8 Common support equipment; 
Level 3 element: 
1.8.1 Test and measurement equipment; 
1.8.2 Support and handling equipment. 

Level 2 element: 1.9 Operational/site activation. 

Level 2 element: 1.10 System assembly, installation, checkout; 
Level 3 element: 
1.10.1 Contractor technical support; 
1.10.2 Site construction; 
1.10.3 Site/ship/vehicle conversion. 

Level 2 element: 1.11 Industrial facilities; 
Level 3 element: 
1.11.1 Construction/conversion/expansion; 
1.11.2 Equipment acquisition/modernization; 
1.11.3 Maintenance (industrial facilities). 

Level 2 element: 1.12 Initial spares and repair parts. 

Source: DOD, Department of Defense Handbook: Work Breakdown Structures 
for Defense Materiel Items, MIL-HDBK-881A (Washington, D.C.: OUSD 
(AT&L), July 3, 2005), app. D. 

[End of table] 

Table 56: Sea System Work Breakdown Structure: 

Level 2 element: 1.1 Ship; 
Level 3 element: 
1.1.1 Hull structure;
1.1.2 Propulsion plant; 
1.1.3 Electric plant; 
1.1.4 Command/communication/surveillance; 
1.1.5 Auxiliary systems; 
1.1.6 Outfit and furnishings; 
1.1.7 Armament; 
1.1.8 Total ship integration/engineering; 
1.1.9 Ship assembly and support services. 

Level 2 element: 1.2 Systems engineering/program management. 

Level 2 element: 1.3 System test and evaluation; 
Level 3 element: 
1.3.1 Development test and evaluation; 
1.3.2 Operational test and evaluation; 
1.3.3 Mock-ups/system integration laboratories; 
1.3.4 Test and evaluation support; 
1.3.5 Test facilities. 

Level 2 element: 1.4 Training; 
Level 3 element: 
1.4.1 Equipment; 
1.4.2 Services; 
1.4.3 Facilities. 

Level 2 element: 1.5 Data; 
Level 3 element: 
1.5.1 Technical publications; 
1.5.2 Engineering data; 
1.5.3 Management data; 
1.5.4 Support data; 
1.5.5 Data depository. 

Level 2 element: 1.6 Peculiar support equipment; 
Level 3 element: 
1.6.1 Test and measurement equipment; 
1.6.2 Support and handling equipment. 

Level 2 element: 1.7 Common support equipment; 
Level 3 element: 
1.7.1 Test and measurement equipment; 
1.7.2 Support and handling equipment.

Level 2 element: 1.8 Operational/site activation; 
Level 3 element: 
1.8.1 System assembly, installation, checkout; 
1.8.2 Contractor technical support; 
1.8.3 Site construction; 
1.8.4 Site/ship/vehicle conversion. 

Level 2 element: 1.9 Industrial facilities; 
Level 3 element: 
1.9.1 Construction/conversion/expansion; 
1.9.2 Equipment acquisition/modernization; 
1.9.3 Maintenance (industrial facilities). 

Level 2 element: 1.10 Initial spares and repair parts. 
 
Source: DOD, Department of Defense Handbook: Work Breakdown Structures 
for Defense Materiel Items, MIL-HDBK-881A (Washington, D.C.: OUSD 
(AT&L), July 3, 2005), app. E. 

[End of table] 

Table 57: Space System Work Breakdown Structure: 
 
Level 2 element: 1.1 Systems engineering, integration, and test; 
program management; and other common elements. 

Level 2 element: 1.2 Space vehicle (1...n as required); 
Level 3 element: 
1.2.1 Systems engineering, integration, and test; program management; 
and other common elements; 
1.2.2 Spacecraft bus; 
1.2.3 Communication/payload; 
1.2.4 Booster adapter; 
1.2.5 Space vehicle storage; 
1.2.6 Launch systems integration; 
1.2.7 Launch operations & mission support. 

Level 2 element: 1.3 Ground (1...n as required); 
Level 3 element: 
1.3.1 Systems engineering, integration, and test; program management; 
and other common elements; 
1.3.2 Ground terminal subsystems; 
1.3.3 Command and control subsystem; 
1.3.4 Mission management subsystem;
1.3.5 Data archive/storage subsystem;
1.3.6 Mission data processing subsystem;
1.3.7 Mission data analysis and dissemination subsystem;
1.3.8 Mission infrastructure subsystem; 
1.3.9 Collection management subsystem. 

Level 2 element: 1.4 Launch vehicle. 

Source: DOD, Department of Defense Handbook: Work Breakdown Structures 
for Defense Materiel Items, MIL-HDBK-881A (Washington, D.C.: OUSD 
(AT&L), July 3, 2005), app. F. 

[End of table] 

Table 58: Surface Vehicle System Work Breakdown Structure: 

Level 2 element: 1.1 Primary vehicle; 
Level 3 element: 
1.1.1 Hull/frame; 
1.1.2 Suspension/steering; 
1.1.3 Power package/drive train; 
1.1.4 Auxiliary automotive; 
1.1.5 Turret assembly; 
1.1.6 Fire control; 
1.1.7 Armament; 
1.1.8 Body/cab; 
1.1.9 Automatic loading; 
1.1.10 Automatic/remote piloting; 
1.1.11 Nuclear, biological, chemical; 
1.1.12 Special equipment; 
1.1.13 Navigation; 
1.1.14 Communications; 
1.1.15 Primary vehicle application software; 
1.1.16 Primary vehicle system software; 
1.1.17 Vetronics; 
1.1.18 Integration, assembly, test, checkout. 

Level 2 element: 1.2 Secondary vehicle; 
Level 3 element: 
1.1.1–18 (Same as primary vehicle). 

Level 2 element: 1.3 Systems engineering/program management; 
Level 3 element: 
1.3.1 System test and evaluation; 
1.3.2 Development test and evaluation; 
1.3.3 Operational test and evaluation; 
1.3.4 Mock-ups/system integration lab; 
1.3.5 Test and evaluation support; 
1.3.6 Test facilities. 

Level 2 element: 1.4 Training; 
Level 3 element: 
1.4.1 Equipment; 
1.4.2 Services; 
1.4.3 Facilities. 

Level 2 element: 1.5 Data; 
Level 3 element: 
1.5.1 Technical publications; 
1.5.2 Engineering data; 
1.5.3 Management data; 
1.5.4 Support data; 
1.5.5 Data depository. 

Level 2 element: 1.6 Peculiar support equipment; 
Level 3 element: 
1.6.1 Test and measurement equipment; 
1.6.2 Support and handling equipment. 

Level 2 element: 1.7 Common support equipment; 
Level 3 element: 
1.7.1 Test and measurement equipment; 
1.7.2 Support and handling equipment. 

Level 2 element: 1.8 Operational/site activation; 
Level 3 element: 
1.8.1 System assembly, installation, checkout;
1.8.2 Contractor technical support;
1.8.3 Site construction;
1.8.4 Site/ship/vehicle conversion. 

Level 2 element: 1.9 Industrial facilities; 
Level 3 element: 
1.9.1 Construction/conversion/expansion;
1.9.2 Equipment acquisition/modernization;
1.9.3 Maintenance (industrial facilities).

Level 2 element: 1.10 Initial spares and repair parts.

Source: DOD, Department of Defense Handbook: Work Breakdown Structures 
for Defense Materiel Items, MIL-HDBK-881A (Washington, D.C.: OUSD 
(AT&L), July 3, 2005), app. G. 

[End of table] 

Table 59: Unmanned Air Vehicle System Work Breakdown Structure: 
 
Level 2 element: 1.1 Air vehicle; 
Level 3 element: 
1.1.1 Airframe;
1.1.2 Propulsion;
1.1.3 Communications/identification;
1.1.4 Navigation/guidance;
1.1.5 Central computer;
1.1.6 Auxiliary equipment;
1.1.7 Air vehicle application software;
1.1.8 Air vehicle system software;
1.1.9 Integration, assembly, test, checkout.

Level 2 element: 1.2 Payload (1...n); 
Level 3 element: 
1.2.1 Survivability;
1.2.2 Reconnaissance;
1.2.3 Electronic warfare;
1.2.4 Armament;
1.2.5 Weapons delivery;
1.2.6 Payload application software;
1.2.7 Payload system software;
1.2.8 Integration, assembly, test, checkout.

Level 2 element: 1.3 Ground segment; 
Level 3 element: 
1.3.1 Ground control systems;
1.3.2 Command and control subsystem;
1.3.3 Launch and recovery equipment;
1.3.4 Transport vehicles;
1.3.5 Ground segment application software;
1.3.6 Ground segment system software;
1.3.7 Integration, assembly, test, checkout.

Level 2 element: 1.4 System integration, assembly, test.

Level 2 element: 1.5 Systems engineering/program management.

Level 2 element: 1.6 System test and evaluation; 
Level 3 element: 
1.6.1 Development test and evaluation; 
1.6.2 Operational test and evaluation; 
1.6.3 Mock-ups/system integration laboratories; 
1.6.4 Test and evaluation support; 
1.6.5 Test facilities. 

Level 2 element: 1.7 Training; 
Level 3 element: 
1.7.1 Equipment; 
1.7.2 Services; 
1.7.3 Facilities. 

Level 2 element: 1.8 Data; 
Level 3 element: 
1.8.1 Technical publications; 
1.8.2 Engineering data; 
1.8.3 Management data; 
1.8.4 Support data; 
1.8.5 Data depository. 

Level 2 element: 1.9 Peculiar support equipment; 
Level 3 element: 
1.9.1 Test and measurement equipment; 
1.9.2 Support and handling equipment. 

Level 2 element: 1.10 Common support equipment; 
Level 3 element: 
1.10.1 Test and measurement equipment; 
1.10.2 Support and handling equipment. 

Level 2 element: 1.11 Operational/site activation; 
Level 3 element: 
1.11.1 System assembly, installation, checkout; 
1.11.2 Contractor technical support; 
1.11.3 Site construction; 
1.11.4 Site/ship/vehicle conversion. 

Level 2 element: 1.12 Industrial facilities; 
Level 3 element: 
1.12.1 Construction/conversion/expansion; 
1.12.2 Equipment acquisition/modernization; 
1.12.3 Maintenance (industrial facilities). 

Level 2 element: 1.13 Initial spares and repair parts. 

Source: DOD, Department of Defense Handbook: Work Breakdown Structures 
for Defense Materiel Items, MIL-HDBK-881A (Washington, D.C.: OUSD 
(AT&L), July 3, 2005), app. H. 

[End of table] 

Table 60: Department of Energy Project Work Breakdown Structure: 
 
Level 2 element: 1.1 Fuel processing; 

Level 3 element: 1.1.1 Conceptual design; 
Level 4 element: 
1.1.1.1 Conceptual design facility; 
1.1.1.2 Criteria development & conceptual design reviews. 

Level 3 element: 1.1.2 Design; 
Level 4 element: 
1.1.2.1 Definitive design; 
1.1.2.2 CADD consultant; 
1.1.2.3 Engineering support during construction. 

Level 3 element: 1.1.3 Government furnished equipment; 
Level 4 element: 
1.1.3.1 Construction preparation; 
1.1.3.2 Building; 
1.1.3.3 Process & service systems & equipment; 
1.1.3.4 Quality assurance. 

Level 3 element: 1.1.4 Construction; 
Level 4 element: 
1.1.4.1 Construction preparation; 
1.1.4.2 Building; 
1.1.4.3 Process & service systems & equipment; 
1.1.4.4 Construction inspection; 
1.1.4.5 Construction management; 
1.1.4.6 Construction services; 
1.1.4.7 Constructability reviews. 

Level 3 element: 1.1.5 Project administration; 
Level 4 element: 
1.1.5.1 Project control; 
1.1.5.2 Records management; 
1.1.5.3 Support services; 
1.1.5.4 Engineering; 
1.1.5.5 Independent construction cost estimate. 

Level 3 element: 1.1.6 Systems development; 
Level 4 element: 
1.1.6.1 Process development; 
1.1.6.2 Design support; 
1.1.6.3 Plant liaison; 
1.1.6.4 Computer/control system development. 

Level 3 element: 1.1.7 Startup; 
Level 4 element: 
1.1.7.1 SO test preparation; 
1.1.7.2 SO test performance; 
1.1.7.3 Manuals; 
1.1.7.4 Integrated testing; 
1.1.7.5 Cold run; 
1.1.7.6 SO test resources; 
1.1.7.7 Deleted; 
1.1.7.8 Preventative maintenance. 

Level 3 element: 1.1.8 Safety/environmental; 
Level 4 element: 
1.1.8.1 Environmental assessment; 
1.1.8.2 PSD document; 
1.1.8.3 Safety analysis report; 
1.1.8.4 Probabilistic risk assessment; 
1.1.8.5 Document coordination; 
1.1.8.6 RAM study; 
1.1.8.7 Hazardous waste. 

Level 2 element: 1.2 Liquid effluent treatment & disposal; 
Level 3 element: 
1.2.1 Construction; 
1.2.2 Government furnished material; 
1.2.3 Construction inspection; 
1.2.4 Project administration; 
1.2.5 Design.
 
Source: Department of Energy, Work Breakdown Structures for Defense 
Materiel Items (Washington, D.C.: June 2003). 

[End of table] 

Table 61: General Services Administration Construction Work Breakdown 
Structure: 
 
Level 2 element: 1.1 Substructure; 
Level 3 element: 
1.1.1 Foundations; 
1.1.2 Basement construction. 

Level 2 element: 1.2 Exterior enclosure; 
Level 3 element: 
1.2.1 Exterior walls; 
1.2.2 Exterior glazing & doors; 
1.2.3 Roofing. 

Level 2 element: 1.3 Interior construction; 
Level 3 element: 
1.3.1 Partitions, doors & specialties; 
1.3.2 Access/platform floors; 
1.3.3 Interior finishes. 

Level 2 element: 1.4 Conveyance systems; 
Level 3 element: 
1.4.1 Conveyance systems; 
1.4.2 Plumbing; 
1.4.3 HVAC; 
1.4.4 Fire protection/alarm; 
1.4.5 Electrical service distribution & emergency power; 
1.4.6 Lighting and branch wiring; 
1.4.7 Communications, security & other electrical systems. 

Level 2 element: 1.5 Equipment & furnishing; 
Level 3 element: 
1.5.1 Equipment & furnishings. 

Level 2 element: 1.6 Special construction, demolition & abatement; 
Level 3 element: 
1.6.1 Special construction; 
1.6.2 Building demolition & abatement. 
 
Level 2 element: 1.7 Site work; 
Level 3 element: 
1.7.1 Site work building related; 
1.7.2 Other site work project related. 
 
Source: General Services Administration, Project Estimating Requirement 
(Washington, D.C.: January 2007). 

[End of table] 

Table 62: Automated Information System: Enterprise Resource Planning 
Program Level Work Breakdown Structure: 
 
Level 2 element: 1.1 Configuration, customization, development;
Level 3 element: 
1.1.1 Site prototype design and development; 
1.1.2 Product application solution; 
1.1.3 System integration & test; 
1.1.4 Systems engineering/program management/change management; 
1.1.5 System test & evaluation; 
1.1.6 Training; 
1.1.7 Data; 
1.1.8 Reserved; 
1.1.9 Reserved; 
1.1.10 Reserved; 
1.1.11 Industrial facilities; 
1.1.12 Initial spares. 

Level 2 element: 1.2 Operational/site implementation; 
Level 3 element: 
1.2.1 Site type 1; 
1.2.2 Site type 2; Site type n; 
1.2.3 System integration & test; 
1.2.4 Systems engineering/program management/change management; 
1.2.5 System test & evaluation; 
1.2.6 Training; 
1.2.7 Data; 
1.2.8 Reserved; 
1.2.9 Reserved; 
1.2.10 Reserved; 
1.2.11 Industrial facilities; 
1.2.12 Initial spares. 

Level 2 element: 1.3 Sustainment; 
Level 3 element: 
1.3.1 Management; 
1.3.2 Sustaining engineering; 
1.3.3 COTS software maintenance and renewal; 
1.3.4 Custom software maintenance; 
1.3.5 Annual operations investment; 
1.3.6 Tech refresh; 
1.3.7 Recurring training; 
1.3.8 Hardware maintenance;
1.3.9 Help desk support. 

Level 2 element: 1.4 Systems engineering/program management/change 
management; 
Level 3 element: 
1.4.1 Systems engineering; 
1.4.2 Program management; 
1.4.3 Change management. 

Level 2 element: 1.5 Data. 
 
Source: U.S. Air Force. 

[End of table] 

Table 63: Environmental Management Work Breakdown Structure: 

Level 2 element: 1.1 System design; 
Level 3 element: 
1.1.1 Initial design; 
1.1.2 Client meeting;
1.1.3 Draft design;
1.1.4 Client and regulatory agency meeting;
1.1.5 Final design.

Level 2 element: 1.2 System installation; 
Level 3 element: 
1.2.1 Facility planning meeting;
1.2.2 Well installation;
1.2.3 Electrical power drop installation;
1.2.4 Blower and piping installation.

Level 2 element: 1.3 Soil permeability test; 
Level 3 element: 
1.3.1 System operation check;
1.3.2 Soil permeability test;
1.3.3 Test report.

Level 2 element: 1.4 Initial in situ respiration test; 
Level 3 element: 
1.4.1 In situ respiration test;
1.4.2 Test report.

Level 2 element: 1.5 Long-term bioventing test; 
Level 3 element: 
1.5.1 Ambient air monitoring.
1.5.2 Operation, maintenance, and monitoring;
1.5.3 Three-month in situ respiration test;
1.5.4 Test report;
1.5.5 Six-month in situ respiration test;
1.5.6 Test report.

Level 2 element: 1.6 Confirmation sampling; 
Level 3 element: 
1.6.1 Soil boring and sampling;
1.6.2 Data validation.

Level 2 element: 1.7 Report preparation; 
Level 3 element: 
1.7.1 Predraft report;
1.7.2 Client meeting;
1.7.3 Draft report;
1.7.4 Client and regulatory agency meeting;
1.7.5 Final report.

Level 2 element: 1.8 Project management. 

Source: Project Management Institute, Practice Standard for Work 
Breakdown Structures, Project Management Institute, Inc. (2006). 
Copyright and all rights reserved. Material from this publication has 
been reproduced with the permission of PMI. 

[End of table] 

Table 64: Pharmaceutical Work Breakdown Structure: 
 
1.1 Project initiation; 
Level 3 element: 
1.1.1 Decision to develop business case;
1.1.2 Business case;
1.1.3 Project initiation decision.

Level 2 element: 1.2 Marketing/sales support; 
Level 3 element: 
1.2.1 Market research program;
1.2.2 Branding program;
1.2.3 Pricing program;
1.2.4 Sales development program;
1.2.5 Other marketing/sales support.

Level 2 element: 1.3 Regulatory support; 
Level 3 element: 
1.3.1 IND submission; 
1.3.2 End of Phase 2 meeting; 
1.3.3 BLA/NDA submission; 
1.3.4 Postapproval regulatory support program. 

Level 2 element: 1.4 Lead identification program; 
Level 3 element: 
1.4.1 Hypothesis generation; 
1.4.2 Assay screening; 
1.4.3 Lead optimization; 
1.4.4 Other discovery support. 

Level 2 element: 1.5 Clinical pharmacology support; 
Level 3 element: 
1.5.1 Pharmacokinetic studies; 
1.5.2 Drug interaction studies; 
1.5.3 Renal effect studies; 
1.5.4 Hepatic effect studies; 
1.5.5 Bioequivalency studies; 
1.5.6 Other clinical pharmacology studies. 

Level 2 element: 1.6 Preclinical program; 
Level 3 element: 
1.6.1 Tox/ADME support; 
1.6.2 Clinical pharmacology support. 

Level 2 element: 1.7 Phase I clinical study program; 
Level 3 element: 
1.7.1 Pharmacokinetic/pharmacodynamic studies; 
1.7.2 Dose ranging studies; 
1.7.3 Multiple dose safety studies. 

Level 2 element: 1.8 Phase II clinical study program; 
Level 3 element: 
1.8.1 Multiple dose efficacy studies; 
1.8.2 Other clinical studies. 

Level 2 element: 1.9 Phase III clinical study program; 
Level 3 element: 
1.9.1 Pivotal registration studies; 
1.9.2 Other clinical studies. 

Level 2 element: 1.10 Submission/launch phase; 
Level 3 element: 
1.10.1 Prelaunch preparation; 
1.10.2 Launch; 
1.10.3 Post-launch support. 

Level 2 element: 1.11 Phase/commercialization clinical study program; 
Level 3 element: 
1.11.1 Investigator-sponsored studies; 
1.11.2 Registry studies. 

Level 2 element: 1.12 Legal support; 
Level 3 element: 
1.12.1 Publications; 
1.12.2 Patents/intellectual property; 
1.12.3 Trademarks; 
1.12.4 Other legal support. 

Level 2 element: 1.13 Program management support; 
Level 3 element: 
1.13.1 Program-level project management; 
1.13.2 Preclinical project management; 
1.13.3 Clinical project management; 
1.13.4 CM&C project management; 
1.13.5 Other project management support. 

Source: Project Management Institute, Practice Standard for Work 
Breakdown Structures, Project Management Institute, Inc. (2006). 
Copyright and all rights reserved. Material from this publication has 
been reproduced with the permission of PMI. 

[End of table] 

Table 65: Process Plant Construction Work Breakdown Structure: 

Level 2 element: 1.1 Plant system design; 
Level 3 element: 
1.1.1 Business requirements; 
1.1.2 Process models.

Level 2 element: 1.2 Construction; 
Level 3 element: 
1.2.1 Site development;
1.2.2 Civil structure;
1.2.3 Thermal systems;
1.2.4 Flow systems;
1.2.5 Storage systems;
1.2.6 Electrical systems;
1.2.7 Mechanical systems; 
1.2.8 Instrument and control systems;
1.2.9 Environmental systems;
1.2.10 Temporary structure;
1.2.11 Auxiliary systems;
1.2.12 Safety systems.

Level 2 element: 1.3 Legal and regulatory; 
Level 3 element: 
1.3.1 Licensing (nongovernment)/permitting (government);
1.3.2 Environmental impact;
1.3.3 Labor agreements;
1.3.4 Land acquisition.

Level 2 element: 1.4 Testing; 
Level 3 element: 
1.4.1 System test;
1.4.2 Acceptance test.

Level 2 element: 1.5 Startup.

Level 2 element: 1.6 Project management.

Source: Project Management Institute, Practice Standard for Work 
Breakdown Structures, Project Management Institute, Inc. (2006). 
Copyright and all rights reserved. Material from this publication has 
been reproduced with the permission of PMI. 

[End of table] 

Table 66: Telecom Work Breakdown Structure: 

Level 2 element: 1.1 Concept/feasibility; 
Level 3 element: 
1.1.1 Concept;
1.1.2 Marketing analysis;
1.1.3 Market plan;
1.1.4 Technical analysis;
1.1.5 Product scope definition;
1.1.6 Prototype.

Level 2 element: 1.2 Requirements; 
Level 3 element: 
1.2.1 End-user requirements;
1.2.2 Application requirements;
1.2.3 Infrastructure (systems) requirements;
1.2.4 Operations/maintenance requirements;
1.2.5 Service requirements.

Level 2 element: 1.3 Go/no go decision; 
Level 3 element: 
1.3.1 Prototype review;
1.3.2 Financial review; 
1.3.3 Schedule review;
1.3.4 Technical capabilities review;
1.3.5 Financial commitment review;
1.3.6 Go/no-go decision. 

Level 2 element: 1.4 Development; 
Level 3 element: 
1.4.1 End-user systems;
1.4.2 Application;
1.4.3 Infrastructure systems;
1.4.4 Network;
1.4.5 Operations/maintenance systems;
1.4.6 Service plan.

Level 2 element: 1.5 Testing; 
Level 3 element: 
1.5.1 Test plans;
1.5.2 Tests;
1.5.3 Results;
1.5.4 Corrective actions; 
1.5.5 Retests;
1.5.6 Retest results.

Level 2 element: 1.6 Deployment; 
Level 3 element: 
1.6.1 Trial in a nonpenalty environment;
1.6.2 First action site;
1.6.3 Deployment. 

Level 2 element: 1.7 Life-cycle support; 
Level 3 element: 
1.7.1 Customer training & education;
1.7.2 Turnover to customer;
1.7.3 Customer acceptance;
1.7.4 Support & maintenance.

Level 2 element: 1.8 Project management. 

Source: Project Management Institute, Practice Standard for Work 
Breakdown Structures, Project Management Institute, Inc. (2006). 
Copyright and all rights reserved. Material from this publication has 
been reproduced with the permission of PMI. 

[End of table] 

Table 67: Software Implementation Project Work Breakdown Structure: 

Level 2 element: 1.1 Project management.

Level 2 element: 1.2 Product requirements; 
Level 3 element: 
1.2.1 Software requirements;
1.2.2 User documentation;
1.2.3 Training program materials;
1.2.4 Hardware;
1.2.5 Implementation & future support.

Level 2 element: 1.3 Detail software design; 
Level 3 element: 
1.3.1 Initial software design;
1.3.2 Final software design;
1.3.3 Software design approval.

Level 2 element: 1.4 System construction;
Level 3 element: 
1.4.1 Configured software;
1.4.2 Customized user documentation;
1.4.3 Customized training program materials;
1.4.4 Installed hardware;
1.4.5 Implementation & future support.

Level 2 element: 1.5 Test; 
Level 3 element: 
1.5.1 System test plan;
1.5.2 System test cases;
1.5.3 System test results;
1.5.4 Acceptance test plan;
1.5.5 Acceptance test cases;
1.5.6 Acceptance test results;
1.5.7 Approved user documentation.

Level 2 element: 1.6 Go live.

Level 2 element: 1.7 Support; 
Level 3 element: 
1.7.1 Training;
1.7.2 End user support;
1.7.3 Product support. 

Source: Project Management Institute, Practice Standard for Work 
Breakdown Structures, Project Management Institute, Inc. (2006). 
Copyright and all rights reserved. Material from this publication has 
been reproduced with the permission of PMI. 

[End of table] 

Table 68: Major Renovation Project Work Breakdown Structure: 

Level 2 element: 1.1 Requirements; 
Level 3 element: 
1.1.1 Requirements; 
1.1.2 Planning;
1.1.3 Design.

Level 2 element: 1.2 Construction; 
Level 3 element: 
1.2.1 Move out;
1.2.2 Entrances;
1.2.3 Preconstruction;
1.2.4 Core and shell.

Level 2 element: 1.3 Tenant fit out; 
Level 3 element: 
1.3.1 Tenant fit out construction;
1.3.2 Security;
1.3.3 Furniture, fixture, equipment;
1.3.4 Move-in;
1.3.5 Commissioning.

Level 2 element: 1.4 Information management and telecomms;
Level 3 element: 
1.4.1 Design and engineering;
1.4.2 Temporary and transitional communication; 
1.4.3 Backbone basic;
1.4.4 Tenant specific;
1.4.5 Systems. 

Source: DOD, Pentagon Renovation Program. 

[End of table] 

Table 69: Sample IT Infrastructure and Service Work Breakdown 
Structure: 

The Table 69 sample WBS includes IT infrastructure and IT services 
only. Automated information system configuration, customization, 
development, and maintenance are covered in chapter 8 of this guide.

Level 1 element: 1.0 IT project investment (nonrecurring).

Level 2 element: 1.1 Facility Type 1 – n (e.g., buildings, flooring, 
cooling)
Level 3 element: 
1.1.1 Infrastructure site construction; 
1.1.2 Operational site construction; 
1.1.3 Integration/test facility construction. 

Level 2 element: 1.2 Purchased software licenses (e.g., application 
software, system software, database). 

Level 2 element: 1.3 Infrastructure purchased hardware (e.g., UNIX 
servers, Windows servers, WAN/LAN equipment). 

Level 2 element: 1.4 End user purchased hardware (e.g., switches, PC, 
printers, copiers). 

Level 2 element: 1.5 Operational/site implementation (aka deployment) 
(e.g., system architecture, training, hardware & software setup); 
Level 3 element: 
1.5.1 IT systems architecture/design; 
1.5.2 Software/database services (e.g., deploying & supporting 
enterprise applications, databases, data migration, 
middleware and software services); 
1.5.3 Infrastructure hardware & software installation, activation, & 
checkout (e.g., setup, deploy, test, checkout infrastructure systems 
such as servers, storage systems, & networks); 
1.5.4 End user hardware & software installation, activation, & 
checkout: (e.g., labor to setup & deploy end user systems such as PC’s, 
notebooks, PDAs, communication devices, and other mobile devices); 
1.5.5 Data (e.g., user documentation preparation labor); 
1.5.6 Training development (e.g., course development); 
1.5.7 Initial training (e.g., personnel training); 
1.5.8 Data migration; 
1.5.9 Operational test & evaluation (e.g., labor and material to test 
and certify the overall IT project). 

Level 2 element: 1.6 Management; 
Level 3 element: 
1.6.1 Government program office; 
1.6.2 Contractor program management. 

Level 1 element: 2.0 Operations & support (recurring). 

Level 2 element: 2.1 IT facility operations type 1; 
Level 3 element: 
2.1.1 IT facilities maintenance & support; 
2.1.2 Power. 

Level 2 element: 2.2 Facility operations type 2. 

Level 2 element: 2.3 Facility operations type n. 

Level 2 element: 2.4 Purchased software maintenance; 
Level 3 element: 
2.4.1 Application software (e.g., ongoing licensing); 
2.4.2 System software; 
2.4.3 Database. 

Level 2 element: 2.5 Purchased hardware maintenance; 
Level 3 element: 
2.5.1 Infrastructure; 
2.5.2 End user. 

Level 2 element: 2.6 Change architecture/design; 

Level 2 element: 2.7 Purchased software and hardware refresh (e.g., new 
hardware, new software, spares); 

Level 2 element: 2.8 IT project operations & monitoring 
Level 3 element: 
2.8.1 System administration; 
2.8.2 Database administration; 
2.8.3 Help desk support (Tier I, II, III); 
2.8.4 Security; 
2.8.5 Other IT operations & monitoring (e.g., hardware maintenance, 
computer operations); 
2.8.6 Data maintenance (e.g., documentation review & update labor) 
2.8.7 Recurring training (e.g., end users, developers, IT operations 
personnel); 
2.8.8 Data migration update; 
2.8.9 Management. 
 
Source: GAO, DOD, and industry expert collaboration. 

[End of table] 

The MasterFormat™ and OmniClass™ Work Classification System: 

Many standard project construction breakdown structures have been 
created over the years for use in construction management. The most 
common, in existence since the 1960s, are the CSI (Construction 
Specifications Institute) format in North America and the SMM7 
(Standard Method of Measurement) format in Great Britain.[Footnote 93] 
They originated as breakdowns for commercial building construction but 
have evolved to include other forms of construction. 

CSI introduced an expanded version, the MasterFormat™, in 2004 that 
includes 50 divisions of work covering civil site and infrastructure 
work as well as process equipment—a significant increase from the 
previous 16 divisions covering building construction that had been in 
use for years. This expansion reflects the growing complexity of the 
construction industry, as well as the need to incorporate facility life 
cycle and maintenance information into the building knowledge base. 
Another level of standardized numbers was added to the publication. One 
goal was eventually to enable building information models to contain 
project specifications. 

The MasterFormat™ standard serves as the organizational structure for 
construction industry publications such as the Sweets catalog, which 
covers a wide range of building products; MasterSpec and other popular 
master guide specification applications; and RS Means and other cost 
information applications. MasterFormat helps architects, engineers, 
owners, contractors, and manufacturers classify the typical use 
of various products to achieve technical solutions on the job site, 
known as “work results.” Work results are permanent or temporary 
aspects of construction projects achieved in the production stage or by 
subsequent alteration, maintenance, or demolition processes, through 
the application of a particular skill or trade to construction 
resources. 

The OmniClass™ Construction Classification System, a new North American 
classification system, is useful for many additional applications, from 
organizing library materials, product literature, and project 
information to providing a classification structure for electronic 
databases.[Footnote 94] It incorporates other systems in use as the 
basis of many of its tables, including MasterFormat for work results 
and UniFormat™ for elements. 

OmniClass follows the international framework set out in International 
Organization for Standardization (ISO) Technical Report 
14177—Classification of Information in the Construction Industry, July 
1994. This document has been established as a standard in ISO 12006-2: 
Organization of Information about Construction Works—Part 2: Framework 
for Classification of Information. 

It is also worth noting that CSI is involved in developing a 
corresponding system for terminology based on a related ISO standard, 
ISO 12006-3: Organization of Information about Construction Works—Part 
3: Framework for Object-Oriented Information. The system known as the 
International Framework for Dictionaries (IFD) Library is a standard 
for terminology libraries or ontologies. It is part of the 
international standards for building information modeling being 
developed and promoted by buildingSMART International (bSI). CSI sees 
the IFD Library being used in conjunction with OmniClass to establish a 
controlled vocabulary for the North American building industry, thereby 
improving interoperability. Both OmniClass and the IFD Library are 
included in the development work of the buildingSmart alliance (the 
North American chapter of bSI) and its National Building Information 
Modeling Standard (NBIMS). 

OmniClass consists of 15 tables, each representing a different facet of 
construction information. Each table can be used independently to 
classify a particular type of information, or entries on it can be 
combined with entries on other tables to classify more complex 
subjects. The tables are not numbered sequentially and there are gaps 
in the progression. The first is table 11, Construction Entities by 
Function, and the last of the 15 is table 49, Properties. 

The OmniClass structures start to approach the DOD WBS template model 
at the system level of construction classification. Its table 21, under 
“Utilities and Infrastructure,” includes breakdowns for roadways, 
railways, airports, space travel, utilities, and water-related 
construction. This is similar in concept to the aircraft, electronic, 
missile, and ship system templates in the DOD military handbook. 

OmniClass table 22 is based almost entirely on the CSI MasterFormat 
tables, although it is noted in OmniClass that “some content of 
MasterFormat 2004 Edition is not included in table 22.”[Footnote 95] 

None of the current construction breakdowns, including CSI, fully cover 
the complete civil infrastructure project life cycle, including 
development, engineering, construction, operations, maintenance, and 
risk mitigation. The current CSI MasterFormat 2004 edition comes 
closest to covering all the scope of work found in the construction of 
building facilities and site work. It falls short in addressing the 
unique requirements of program managers, estimators, schedulers, and 
cost engineers and in identifying all phases of work included in major 
infrastructure work such as Build-Own-Transfer programs. 

These structures, MasterFormat and OmniClass, are not program work 
breakdown structures, although some subsections have the appearance of 
a WBS. However, at all levels the elements in the structures are 
candidates for WBS element descriptors, including work packages, and 
they meet the common definitions of WBS elements, being all nouns or 
nouns and adjectives. The MasterFormat tables include the equivalent of 
a WBS dictionary for the lowest levels. 

Many listings available in MasterFormat would enable an organization to 
pick and choose ready-to-go WBS elements for virtually any 
work on any construction project, including related equipment and 
furnishings. It must be noted, however, that the summary headings are 
not truly WBS elements, since the breakdown or listings under the 
headings are further listings of categories within a heading and do not 
meet the WBS 100 percent rule. 
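
To make the 100 percent rule concrete, the short Python sketch below 
represents a small, hypothetical WBS fragment as a tree of (element, 
estimate, children) entries and checks that each parent’s estimate is 
fully accounted for by its children. The element names and point 
estimates are invented for illustration and use cost as a simple 
stand-in for scope. 

# Hypothetical WBS fragment: (element name, point estimate, children).
wbs = ("1.0 System", 100.0, [
    ("1.1 Prime mission product", 70.0, [
        ("1.1.1 Subsystem 1", 40.0, []),
        ("1.1.2 Applications software", 30.0, []),
    ]),
    ("1.2 Systems engineering/program management", 20.0, []),
    ("1.3 System test and evaluation", 10.0, []),
])

def roll_up(element):
    """Return an element's rolled-up estimate, flagging 100 percent rule gaps."""
    name, estimate, children = element
    if not children:
        return estimate
    total = sum(roll_up(child) for child in children)
    if abs(total - estimate) > 1e-6:
        print(f"{name}: children roll up to {total}, not the stated {estimate}")
    return total

print("Program roll-up:", roll_up(wbs))

A summary heading that merely groups further category listings, as 
described above, would fail this kind of check because its children do 
not exhaust the parent’s scope. 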

Figure 43 illustrates the relationship of the CSI MasterFormat 
structure to a WBS based on the CSI structure. (A true WBS would be 
based on the actual product structure.) The summary CSI elements are 
listed as the 34 divisions. Each division contains one or more sections 
that would be selected from the complete MasterFormat set to relate to 
the specific needs of the project. Also, although not shown, the 
specific physical breakdown of the building needs to be overlaid. For 
example, it would be normal for the individual floors to be identified 
and the appropriate work packages for each floor selected from the 
appropriate MasterFormat sections. Note, also, that there is no further 
breakdown of the project management element as would be the case in a 
true WBS. 

Figure 43: MasterFormat™ Work Breakdown Structure: 

[Refer to PDF for image: illustration] 

WBS Level 1: 
Construction phase; CSI 2004 structure. 

WBS Level 2: Base building construction: 
WBS Level 3 CSI subgroups: 

* Project management; 

* General requirements; 
WBS Level 4 CSI divisions: 
- 01 General requirements; 

* Facility construction; 
WBS Level 4 CSI divisions: 
- 02 Existing conditions; 
- 03 Concrete; 
- 04 Masonry; 
- 05 Metals; 
- 06 Wood, plastics, and composites; 
- 07 Thermal and moisture protection; 
- 08 Openings; 
- 09 Finishes; 
- 10 Specialties; 
- 11 Equipment; 
- 12 Furnishings; 
- 13 Special construction; 
- 14 Conveying equipment; 

* Facility services; 
WBS Level 4 CSI divisions: 
- 21 Fire suppression; 
- 22 Plumbing; 
- 23 Heating, ventilating, and air conditioning; 
- 25 Integrated automation; 
- 26 Electrical; 
- 27 Communications; 
- 28 Electronic safety and security; 

* Site and infrastructure; 
WBS Level 4 CSI divisions: 
- 31 Earthwork;
- 32 Exterior improvements;
- 33 Utilities; 
- 34 Transportation; 
- 35 Waterway and marine construction; 

* Process equipment; 
WBS Level 4 CSI divisions: 
- 40 Process integration;
- 41 Material processing and handling equipment;
- 42 Process heating, cooling, and drying equipment; 
- 43 Process gas and liquid handling, purification, security and 
storage equipment;
- 44 Pollution control equipment; 
- 45 Industry-specific manufacturing equipment; 
- 48 Electrical power generation. 

WBS Level 2: Other construction or installation. 
 
Note: The CSI numbering contains gaps or “reserved” division titles. 

Reference: Levels 3 and 4: Construction Specifications Institute, 
MasterFormat™ 2004, page Div Numbers-1. 

Source: CSI and CSC MasterFormat™ 2004 (c) 2006. 

[End of figure] 

Building Information Modeling/Management (BIM) Applications: 
 
OmniClass, MasterFormat, and UniFormat are used to index, organize, and 
retrieve a variety of different information types throughout a 
project’s life cycle. The consistent use of standard classifications 
from any of these, applied to objects, will enhance users’ ability to 
sort data or to roll up or drill down through data based on the 
hierarchy that all these classifications are built on. A standard 
implementation of any of these classifications within a BIM model will 
allow for this same information sorting and retrieval across multiple 
platforms and by all users at any stage in a facility’s life cycle. 

In conjunction with the IFD Library, the structure of the 
classification systems can be explicitly applied to the information 
used in model-based design, analysis, and management systems. A more 
consistent naming system for objects captured in a BIM has the 
potential to support the goals of the buildingSMART organization to 
improve interoperability of systems and processes. In North America, 
these systems are used by the buildingSmart alliance (the North 
American chapter of buildingSMART International) in pilot projects and 
in the development of the U.S. National Building Information Modeling 
Standard (NBIMS). 

Table 70: CSI MasterFormat™ 2004 Structure Example: Construction 
Phase: 
 
Level 2 element: 1.1 Base construction; 

Level 3 CSI subgroup: Project management; 
 
Level 3 CSI subgroup: General requirements; 
Level 4 CSI division: General requirements. 

Level 3 CSI subgroup: Facility construction; 
Level 4 CSI division: 
Existing conditions; 
Concrete; 
Masonry; 
Metals; 
Wood, plastics, composites; 
Thermal and moisture protection; 
Openings; 
Finishes; 
Specialties; 
Equipment; 
Furnishings; 
Special construction; 
Conveying equipment. 

Level 3 CSI subgroup: Facility services; 
Level 4 CSI division: 
Fire suppression; 
Plumbing; 
Heating, ventilating, air conditioning; 
Integrated automation; 
Electrical; 
Communications; 
Electronic safety and security. 

Level 3 CSI subgroup: Site and infrastructure; 
Level 4 CSI division: 
Earthwork; 
Exterior Improvements; 
Utilities; 
Transportation; 
Waterway and marine construction. 

Level 3 CSI subgroup: Process equipment; 
Level 4 CSI division: 
Process integration; 
Material processing and handling equipment; 
Process heating, cooling & drying equipment; 
Process gas and liquid handling, purification and storage equipment; 
Pollution control equipment; 
Industry specific manufacturing equipment; 
Electrical power generation. 

1.2 Other construction or installation. 

Source: CSI MasterFormat™ 2004 © 2006. 
 
Note: The numbers and titles used in this publication are from 
MasterFormat™ 2004, published by The Construction Specifications 
Institute (CSI) and Construction Specifications Canada (CSC), and are 
used with permission from CSI. For those interested in a more 
in-depth explanation of MasterFormat™ 2004 and its use in the 
construction industry, visit or contact The Construction Specifications 
Institute (CSI), 99 Canal Center Plaza, Suite 300, Alexandria, VA 
22314. 800-689-2900; 703-684-0300. 

[End of table] 

[End of Appendix 9] 

Appendix 10: Schedule Risk Analysis: 

A schedule risk analysis uses statistical techniques to predict a level 
of confidence in meeting a program’s completion date. This analysis 
focuses on critical path activities and on near-critical and other 
activities, since any activity may potentially affect the program’s 
completion date. Like a cost estimate risk and uncertainty analysis, a 
schedule risk analysis requires the collection of program risk data 
such as 
 
* risks that may jeopardize schedule success, usually found in the risk 
register prepared before the risk analysis is conducted; 

* probability distributions, usually specified by three-point estimates 
of activity durations; 

* probability of a risk register risk’s occurring and its probability 
distribution of impact if it were to occur; 

* probability that a branch of activities might occur (for example, a 
test failure could lead to several recovery tasks); and 

* correlations between activity durations. 

Schedule risk analysis relies on Monte Carlo simulation to randomly 
vary the following: 

* activity durations according to their probability distributions; 

* risks according to their probability of occurring and the 
distribution of their impact on affected activities if they were to 
occur; and 

* whether a risk or a probabilistic branch occurs in a given iteration. 

The objective of the simulation is to develop a probability 
distribution of possible completion dates that reflects the program and 
its quantified risks. From the cumulative probability distribution, the 
organization can match a date to its degree of risk tolerance. For 
instance, an organization might want to adopt a program completion date 
that provides a 70 percent probability that it will finish on or before 
that date, leaving a 30 percent probability that it will overrun, given 
the schedule and the risks. The organization can thus adopt a plan 
consistent with its desired level of confidence in the overall 
integrated schedule. This analysis can give valuable insight into what-
if drills and quantify the impact of program changes. 

In developing a schedule risk analysis, probability distributions for 
each activity’s duration have to be established. Further, risk in all 
activities must be evaluated and included in the analysis. Some people 
focus only on the critical path, but because we cannot know the 
durations of the activities with certainty, we cannot know the true 
critical path. Consequently, it would be a mistake to focus only on the 
deterministic critical path when some off-critical path activity might 
become critical if a risk were to occur. Typically, three-point 
estimates—that is, best, most likely, and worst case estimates—are used 
to develop the probability distributions for the duration of workflow 
activities. After the distributions are developed, the Monte Carlo 
simulation is run and the resulting cumulative distribution curve, the 
S curve, displays the probability associated with the range of program 
completion dates. 

If the analysis is to be credible, the program must have a good 
schedule network that clearly identifies the critical path and that is 
based on a minimum number of date constraints. The risk analysis should 
also identify the tasks that most often end up on the critical path 
during the simulation, so that near-critical activities can also be 
closely monitored. It is important to represent all work in the 
schedule, since any activity can become critical under some 
circumstances. Complete schedule logic that addresses the logical 
relationships between predecessor and successor activities is also 
important. The analyst needs to be confident that the schedule will 
automatically calculate the correct dates and critical paths when the 
activity durations change, as they do thousands of times during a 
simulation. Because detailed schedules often need debugging, and 
because collecting schedule risk data can take time and resources, it 
is often a good idea to work with a summary schedule rather than the 
most detailed schedule. 

One of the most important reasons for performing a schedule risk 
analysis is that the overall program schedule duration may well be 
greater than the sum of the path durations for lower-level activities. 
This is in part because of 

* schedule uncertainty, which can cause activities to shorten (an 
opportunity) or lengthen (a threat). For instance, if a conservative 
assumption about labor productivity was used in calculating the 
duration of an activity, and during the simulation a better labor 
productivity is chosen, then the activity will shorten. However, most 
program schedule risk phenomena exhibit more probability of overrunning 
(threats) than underrunning (opportunities), which can cause activities 
to lengthen. 
 
* schedule structure. A schedule’s structure has many parallel paths 
joined at merge or join points, which can cause the schedule to 
lengthen. Merge points include program reviews (preliminary design 
review, critical design review, etc.) or the beginning of an 
integration and test phase. The timing of these merge points is 
determined by the latest merging path. Thus, if any required element 
is delayed, the merge event will also be delayed. Since any merging 
path can be risky, any merging path can determine the timing of the 
merge event. This added risk at merge points is called the “merge 
bias.” Figure 44 gives an example of a schedule structure that 
illustrates the network, or pure-logic, diagram of a simple schedule 
with a merge point at integration and test; a brief simulation sketch 
of the merge bias follows the figure. 

Figure 44: Network Diagram of a Simple Schedule: 

[Refer to PDF for image: diagram] 
 
Start: Milestone date: Tue 4/29/08; ID: 1; 

Unit 1: 
Start: 4/29/08; 
Finish: 12/15/08; 
ID: 2; 
Dur: 165 d; 
Comp: 0%. 

Design Unit 1:
Start: 4/29/08; 
Finish: 6/23/08; 
ID: 3; 
Dur: 40 d; 
Res: 

Build Unit 1: 
Start: 6/24/08; 
Finish: 11/10/08; 
ID: 4; 
Dur: 100 d; 
Res: 

Test Unit 1: 
Start: 11/11/08; 
Finish: 12/15/08; 
ID: 5; 
Dur: 25 d; 
Res: 

Unit 2: 
Start: 4/29/08; 
Finish: 12/15/08; 
ID: 6; 
Dur: 165 d; 
Comp: 0%. 

Design Unit 2: 
Start: 4/29/08; 
Finish: 6/23/08; 
ID: 7; 
Dur: 40 d; 
Res: 

Build Unit 2: 
Start: 6/24/08; 
Finish: 11/10/08; 
ID: 8; 
Dur: 100 d; 
Res:

Test Unit 2: 
Start: 11/18/08; 
Finish: 12/15/08; 
ID: 9; 
Dur: 25 d; 
Res:

Integration and Test: 
Start: 12/16/08; 
Finish: 4/20/09; 
ID: 10; 
Dur: 90 d; 
Comp: 0%. 

Integrate Units 1 & 2: 
Start: 12/16/08; 
Finish: 2/23/09; 
ID: 11; 
Dur: 50 d; 
Res: 

Test Integrated System: 
Start: 2/24/09;
Finish: 4/20/09;
ID: 12;
Dur: 40 d;
Res: 

Finish: 
Milestone Date: Mon 4/20/09; 
ID: 13. 

Source: Copyright 2007 Hulett and Associates, LLC. 

[End of figure] 
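
The merge bias can be illustrated with a minimal simulation sketch in 
Python. It assumes two parallel 165-day paths like those in figure 44, 
each given an illustrative triangular range of 150 to 200 days around 
that most likely value; the range is an assumption for this sketch, not 
a value taken from the figure. 

import random

def one_path():
    # Illustrative triangular duration (days) for a single 165-day path.
    return random.triangular(150, 200, 165)

TRIALS = 20000
mean_single = sum(one_path() for _ in range(TRIALS)) / TRIALS
mean_merged = sum(max(one_path(), one_path()) for _ in range(TRIALS)) / TRIALS
print(f"average duration of one path: {mean_single:.1f} days")
print(f"average duration at the merge of two such paths: {mean_merged:.1f} days")

Even though each path has the same distribution, the merge point is 
driven by whichever path finishes later, so its average date is later 
than the average finish of either path alone. 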

Since each activity has an uncertain duration, it stands to reason that 
the entire duration of the overall program schedule will also be 
uncertain. Therefore, unless a statistical simulation is run, 
calculating the completion date based on schedule logic and the most 
likely durations will tend to underestimate the overall program 
critical path duration. 

Schedule underestimation is more pronounced when the schedule durations 
or logic have optimistic bias—for instance when the customer or 
management has specified an unreasonably short duration or early 
completion date. The schedule can be configured and assumptions made 
about activity durations to make a schedule match these imposed 
constraint durations. When this is the case, durations are often “best 
case” scenarios or based on unreasonable assumptions about resource 
availability or productivity. Further, to compress the time, the 
schedule may overlap activities or phases (for example, detailed 
engineering, fabrication, and testing) that would more prudently be 
scheduled in series. In addition, external contributions may be 
assumed with optimistic bias when there is little confidence that the 
suppliers will be able to comply. As a result, fitting the schedule to 
predetermined dates is dangerous. 

The preferred approach to scheduling is to build the schedule by 
starting with the WBS to define the detailed activities, using program 
objectives to guide major events. When determining the durations for 
the activities, resource availability and productivity need to be 
reasonably assumed, external factors need to be realistically 
considered, and organizational risk associated with other programs and 
the priority of this program needs to be weighed. Once all these 
aspects have been modeled in the schedule, the scheduling system 
software can calculate the completion date. Following these best 
practice approaches to developing a schedule will provide a reasonable 
first step in determining the schedule duration. If the duration is too 
long, or the dates are too late, then more resources or less scope may 
be required. Unless more resources are provided, it is inappropriate to 
shorten the schedule to fit a preconceived date, given the original 
scope of work. 

Accordingly, because activity durations are uncertain, the probability 
distribution of the program’s total duration must be determined 
statistically, by combining the individual probability distributions 
of all paths according to their risks and the logical structure of the 
schedule. Schedule activity duration uncertainty can be represented in 
several ways, as the example schedule in figure 45 illustrates. 

Figure 45: Example Project Schedule: 

[Refer to PDF for image: illustration] 

ID: 0; 
Task name: Three path project GAO; 
Duration: 750 d;
Start: 1/9/08; 
Finish: 1/27/10; 

ID: 1; 
Task name: Start; 
Duration: 0 d;
Start: 1/9/08; 
Finish: 1/9/08.

ID: 2; 
Task name: Software;
Duration: 470 d;
Start: 1/9/08; 
Finish: 4/22/09.

ID: 3; 
Task name: Software design; 
Duration: 100 d;
Start: 1/9/08; 
Finish: 4/17/08.

ID: 4; 
Task name: Software coding; 
Duration: 250 d;
Start: 4/18/08;
Finish: 12/23/08.

ID: 5; 
Task name: Software testing; 
Duration: 120 d;
Start: 12/24/08;
Finish: 4/22/09.

ID: 6; 
Task name: Hardware; 
Duration: 500 d;
Start: 1/9/08; 
Finish: 5/22/09.

ID: 7; 
Task name: Hardware design; 
Duration: 100 d;
Start: 1/9/08; 
Finish: 4/17/08.

ID: 8; 
Task name: Hardware fabrication; 
Duration: 300 d;
Start: 4/18/08;
Finish: 2/11/09.

ID: 9; 
Task name: Hardware test; 
Duration: 100 d;
Start: 2/12/09;
Finish: 5/22/09.

ID: 10; 
Task name: Integration H/W and S/W; 
Duration: 250 d;
Start: 5/23/09;
Finish: 1/27/10. 

ID: 11; 
Task name: Integration; 
Duration: 150 d;
Start: 5/23/09;
Finish: 10/19/09. 

ID: 12; 
Task name: Integration test; 
Duration: 100 d;
Start: 10/20/09; 
Finish: 1/27/10. 

ID: 13; 
Task name: Finish; 
Duration: 0 d; 
Start: 1/27/10; 
Finish: 1/27/10. 

Source: Copyright 2007 Hulett and Associates, LLC. 

[End of figure] 
In this example schedule, the project begins on January 9, 2008, and is 
expected to be completed about 2 years later, on January 27, 2010. 
Three major efforts involve software, hardware, and integration. 

According to the schedule logic and durations, hardware fabrication, 
testing, and the integration of hardware and software drive the 
critical path.

The first way to capture schedule activity duration uncertainty is to 
collect various estimates from individuals and, perhaps, from a review 
of actual past program performance. Estimates derived from interviews 
or in workshops should be formulated by a consensus of knowledgeable 
technical experts and coordinated with the same people who manage the 
program’s risk mitigation watch list. Figure 46 shows a traditional 
approach with the three-point estimate applied directly to the activity 
durations. 

Figure 46: Estimated Durations for Remaining WBS Areas in the Schedule: 
 
ID: 0; 
Task name: Three path GAO project;
Report ID: 2; 
Mn Rdur: 0 d; 
ML Rdur: 0 d; 
Max Rdur: 0 d; 
Rept ID: 0. 

ID: 1; 
Task name: Start; 
Report ID: 0; 
Mn Rdur: 0 d; 
ML Rdur: 0 d; 
Max Rdur: 0 d; 
Rept ID: 0. 

ID: 2; 
Task name: Software;
Report ID: 0; 
Mn Rdur: 0 d; 
ML Rdur: 0 d; 
Max Rdur: 0 d; 
Rept ID: 0. 

ID: 3; 
Task name: Software design;
Report ID: 0; 
Mn Rdur: 85 d; 
ML Rdur: 100 d; 
Max Rdur: 150 d; 
Rept ID: 2. 

ID: 4; 
Task name: Software coding;
Report ID: 0; 
Mn Rdur: 212.5 d; 
ML Rdur: 250 d; 
Max Rdur: 375 d; 
Rept ID: 2. 

ID: 5; 
Task name: Software testing; 
Report ID: 0; 
Mn Rdur: 90 d; 
ML Rdur: 120 d; 
Max Rdur: 240 d; 
Rept ID: 2. 

ID: 6; 
Task name: Hardware;
Report ID: 0; 
Mn Rdur: 0 d; 
ML Rdur: 0 d; 
Max Rdur: 0 d; 
Rept ID: 0. 

ID: 7; 
Task name: Hardware design;
Report ID: 0; 
Mn Rdur: 85 d; 
ML Rdur: 100 d; 
Max Rdur: 130 d; 
Rept ID: 2. 

ID: 8; 
Task name: Hardware fabrication;
Report ID: 0; 
Mn Rdur: 255 d;
ML Rdur: 300 d; 
Max Rdur: 390 d; 
Rept ID: 2. 

ID: 9; 
Task name: Hardware test; 
Report ID: 0; 
Mn Rdur: 75 d; 
ML Rdur: 100 d; 
Max Rdur: 200 d; 
Rept ID: 2. 

ID: 10; 
Task name: Integration H/W and S/W;
Report ID: 0; 
Mn Rdur: 0 d; 
ML Rdur: 0 d; 
Max Rdur: 0 d; 
Rept ID: 0. 

ID: 11; 
Task name: Integration;
Report ID: 0; 
Mn Rdur: 120 d; 
ML Rdur: 150 d; 
Max Rdur: 210 d; 
Rept ID: 2. 

ID: 12; 
Task name: Integrated test;
Report ID: 0; 
Mn Rdur: 75 d; 
ML Rdur: 100 d; 
Max Rdur: 200 d; 
Rept ID: 2. 

ID: 13; 
Task name: Finish; 
Report ID: 0; 
Mn Rdur: 0 d; 
ML Rdur: 0 d; 
Max Rdur: 0 d;
Rept ID: 0. 

Source: Copyright 2007 Hulett and Associates, LLC. 

Note: Rept ID = Report Identification; Rdur = Remaining Duration, Mn = 
Minimum, ML = Most Likely, Max = Maximum; H/W = hardware; S/W = 
software. 

[End of figure] 

The example shows three-point estimates of remaining durations. In a 
real program schedule risk analysis, these would be developed from in-
depth interviews of people who are expert in each of the WBS areas of 
the program. To model the risks in the simulation, the risks are 
represented as triangular distributions specified by the three-point 
estimates of the activity durations. These probability distributions 
combine the effects of all risks that affect the activities. 

Once the distributions have been established, the Monte Carlo 
simulation uses random numbers to select specific durations from each 
activity’s probability distribution and calculates a new critical path 
and new dates, including major milestone and program completion dates. The Monte 
Carlo simulation continues this random selection thousands of times, 
creating a new program duration estimate and critical path each time. 
The resulting frequency distribution displays the range of program 
completion dates along with the probabilities that these dates will 
occur. 
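
As an illustration only, the following Python sketch shows how such a 
simulation might be set up for the three-path example in figure 46. It 
assumes triangular distributions built from the figure's three-point 
estimates, purely finish-to-start logic, and independence between 
activities; the percentile printout is a simplified stand-in for the S 
curve in figure 47. 

import random

# Three-point estimates (minimum, most likely, maximum) of remaining
# durations in days, taken from figure 46.
SOFTWARE = [(85, 100, 150), (212.5, 250, 375), (90, 120, 240)]   # design, coding, testing
HARDWARE = [(85, 100, 130), (255, 300, 390), (75, 100, 200)]     # design, fabrication, test
INTEGRATION = [(120, 150, 210), (75, 100, 200)]                  # integration, integration test

def path_duration(path):
    # Sum of independent triangular draws along a serial path.
    return sum(random.triangular(low, high, likely) for low, likely, high in path)

def simulate(iterations=5000):
    totals = []
    for _ in range(iterations):
        # Software and hardware proceed in parallel and merge before integration.
        merge = max(path_duration(SOFTWARE), path_duration(HARDWARE))
        totals.append(merge + path_duration(INTEGRATION))
    return sorted(totals)

results = simulate()
for p in (0.05, 0.50, 0.80, 0.95):
    print(f"{int(p * 100)}th percentile: {results[int(p * len(results)) - 1]:.0f} days")

Adding the percentile durations to the January 9, 2008, start date 
yields the kind of cumulative distribution shown in figure 47. 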

Figure 47 shows that the most likely completion date is about May 11, 
2010, not January 27, 2010, which is the date the deterministic 
schedule computed. The cumulative distribution also shows that a 
January 27, 2010, completion is less than 5 percent likely, given the 
schedule and the risk ranges used for the durations. An organization 
that wants to cover 80 percent of its known unknowns would have to add 
a time reserve of about 5 months to June 24, 2010. While it would be 
prudent to establish a 5-month reserve for this project, each 
organization should determine its tolerance level for schedule risk. 

Figure 47: Cumulative Distribution of Project Schedule, Including Risk: 

[Refer to PDF for image: combined line and vertical bar graph] 

Date: 12/7/2007 5:35:45 PM; 
Samples: 5000; 
Unique ID: 0; 
Name: Three Path Project GAO; 
Completion Std Deviation: 51.79 d; 
95% Confidence Interval: 1.44 d; 
Each bar represents 20 d; 

Graph plots frequency vs. completion date, mapping cumulative 
probability. 

Completion Probability Table: 

Prob: 0.05; 
Date: 2/19/10. 

Prob: 0.10; 
Date: 3/6/10. 

Prob: 0.15; 
Date: 3/18/10. 

Prob: 0.20; 
Date: 3/29/10. 

Prob: 0.25; 
Date: 4/5/10. 

Prob: 0.30; 
Date: 4/12/10. 

Prob: 0.35; 
Date: 4/19/10. 

Prob: 0.40; 
Date: 4/25/10. 

Prob: 0.45; 
Date: 5/2/10. 

Prob: 0.50; 
Date: 5/9/10. 

Prob: 0.55; 
Date: 5/15/10. 

Prob: 0.60; 
Date: 5/22/10. 

Prob: 0.65; 
Date: 5/29/10. 

Prob: 0.70; 
Date: 6/6/10. 

Prob: 0.75; 
Date: 6/15/10. 

Prob: 0.80; 
Date: 6/24/10. 

Prob: 0.85; 
Date: 7/5/10. 

Prob: 0.90; 
Date: 7/20/10. 

Prob: 0.95; 
Date: 8/8/10. 

Prob: 1.00; 
Date: 11/12/10. 

Source: Copyright 2007 Hulett and Associates, LLC. 

The second way to capture schedule activity duration uncertainty is to 
analyze the probability and impact of risks from the risk register. 
Risks are assigned to specific activities, each with a probability of 
occurring and a probability distribution of its impact on duration. 
This approach, known as the risk driver method, focuses on how discrete 
risks affect time. Figure 48 shows how this approach can be used. 

Figure 48: Identified Risks on a Spacecraft Schedule: An Example: 

[Refer to PDF for image: illustration] 
 
ID: SUMMA; 
Description: Project summary; 
Rem duration: 1900; 
Start: 03/Mar/08; 
Finish: 12/Jun/15. 

ID: 00001; 
Description: Spacecraft project milestones; 
Rem duration: 1900; 
Start: 03/Mar/08; 
Finish: 12/Jun/15. 

ID: 00002; 
Description: Requirements definition spacecraft; 
Rem duration: 100; 
Start: 03/Mar/08; 
Finish: 18/Jul/08. 

ID: 00003; 
Description: PDR spacecraft; 
Rem duration: 0; 
Start: [Empty]; 
Finish: 11/Sep/09. 

ID: 00004; 
Description: CDR spacecraft; 
Rem duration: 0; 
Start: [Empty]; 
Finish: 03/Jun/11. 

ID: 00005; 
Description: Ship to launch site; 
Rem duration: 0; 
Start: [Empty]; 
Finish: 12/Jun/15. 

ID: 00006; 
Description: First stage; 
Rem duration: 1450; 
Start: 21/Jul/08; 
Finish: 07/Feb/14. 

ID: 00007; 
Description: FS preliminary design; 
Rem duration: 300; 
Start: 21/Jul/08; 
Finish: 11/Sep/09. 

ID: 00008; 
Description: FS PDR; 
Rem duration: 0; 
Start: [Empty]; 
Finish: 11/Sep/09. 

ID: 00009; 
Description: FS final design; 
Rem duration: 450; 
Start: 14/Sep/09; 
Finish: 03/Jun/11. 

ID: 00010; 
Description: FS CDR; 
Rem duration: 0; 
Start: [Empty]; 
Finish: 03/Jun/11. 

ID: 00011; 
Description: FS fabrication; 
Rem duration: 600; 
Start: 06/Jun/11; 
Finish: 20/Sep/13. 

ID: 00012; 
Description: Test FS engine; 
Rem duration: 100; 
Start: 23/Sep/13; 
Finish: 07/Feb/14. 

ID: 00020; 
Description: Upper stage; 
Rem duration: 1450; 
Start: 21/Jul/08; 
Finish: 07/Feb/14. 

ID: 00021; 
Description: US preliminary design; 
Rem duration: 300; 
Start: 21/Jul/08; 
Finish: 11/Sep/09. 

ID: 00022; 
Description: US PDR; 
Rem duration: 0; 
Start: [Empty]; 
Finish: 11/Sep/09. 

ID: 00023; 
Description: US Final design; 
Rem duration: 450; 
Start: 14/Sep/09; 
Finish: 03/Jun/11. 

ID: 00024; 
Description: US CDR; 
Rem duration: 0; 
Start: [Empty]; 
Finish: 03/Jun/11. 

ID: 00025; 
Description: US fabrication; 
Rem duration: 600; 
Start: 06/Jun/11; 
Finish: 20/Sep/13. 

ID: 00026; 
Description: US test; 
Rem duration: 100; 
Start: 23/Sep/13; 
Finish: 07/Feb/14. 

ID: 00027; 
Description: Integration; 
Rem duration: 350; 
Start: 10/Feb/14; 
Finish: 12/Jun/15. 

ID: 00028; 
Description: Integration; 
Rem duration: 250; 
Start: 10/Feb/14; 
Finish: 23/Jan/15. 

ID: 00029; 
Description: Integration testing; 
Rem duration: 100; 
Start: 26/Jan/15; 
Finish: 12/Jun/15. 

Source: Copyright 2007 Hulett and Associates, LLC. 

[End of figure] 

In this example of a spacecraft schedule, the work begins on March 3, 
2008, and is expected to finish more than 7 years later, on June 12, 
2015. Because of the long time and the risk associated with developing
the spacecraft technology, the risk driver method can be used to 
examine how various risks from the risk register may affect this 
schedule (figure 49).

Figure 49: A Risk Register for a Spacecraft Schedule: 

Description: 1. Requirements have not been decided; 
Optimistic: 95.00% 
Most likely: 105.00% 
Pessimistic: 120.00% 
Likelihood: 30.00% 

Description: 2. Several alternative designs considered; 
Optimistic: 95.00% 
Most likely: 100.00% 
Pessimistic: 115.00% 
Likelihood: 60.00% 

Description: 3. New designs not yet proven; 
Optimistic: 96.00% 
Most likely: 103.00% 
Pessimistic: 112.00% 
Likelihood: 40.00% 

Description: 4. Fabrication requires new materials; 
Optimistic: 96.00% 
Most likely: 105.00% 
Pessimistic: 115.00% 
Likelihood: 50.00% 

Description: 5. Lost know-how since last full spacecraft; 
Optimistic: 95.00% 
Most likely: 100.00% 
Pessimistic: 105.00% 
Likelihood: 30.00% 

Description: 6. Funding from Congress is problematic; 
Optimistic: 90.00% 
Most likely: 105.00% 
Pessimistic: 115.00% 
Likelihood: 70.00% 

Description: 7. Schedule for testing is aggressive; 
Optimistic: 100.00% 
Most likely: 120.00% 
Pessimistic: 130.00% 
Likelihood: 100.00% 

Source: Copyright 2007 Hulett and Associates, LLC. 

[End of figure] 

In figure 49, one can quickly determine that the biggest risk affecting 
the spacecraft schedule has to do with testing, because the schedule is 
very aggressive. Moreover, lack of requirements, funding delays, 
alternative designs, and the fact that some of the designs are unproven 
are also highly likely to affect the schedule. With the risk driver 
method, these risks are shown as factors that will be used to multiply 
the durations of the activities they are assigned to, if they occur in 
the iteration. Once the risks are assigned to the activities, a 
simulation is run. The results may be similar to those in figure 50. 
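
As an illustration, the Python sketch below applies the risk driver 
approach to a single 100-day testing activity. The 100-day baseline and 
the assignment of two figure 49 risks to that activity are assumptions 
made only for this example. 

import random

# Risk drivers from figure 49 assumed to be assigned to the testing activity:
# (likelihood of occurring, optimistic factor, most likely factor, pessimistic factor)
RISKS = [
    (1.00, 1.00, 1.20, 1.30),   # 7. Schedule for testing is aggressive
    (0.40, 0.96, 1.03, 1.12),   # 3. New designs not yet proven
]

BASELINE_DURATION = 100.0       # assumed baseline testing duration, in days

def one_iteration():
    duration = BASELINE_DURATION
    for likelihood, low, likely, high in RISKS:
        if random.random() < likelihood:                      # does the risk occur?
            duration *= random.triangular(low, high, likely)  # multiply by a sampled factor
    return duration

samples = sorted(one_iteration() for _ in range(5000))
print(f"80th percentile testing duration: {samples[int(0.8 * len(samples)) - 1]:.0f} days")

In a full analysis, every risk would be assigned to all the activities 
it affects, and the multiplied durations would feed the schedule logic 
before the completion date is recalculated in each iteration. 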

Figure 50: Spacecraft Schedule Results from a Monte Carlo Simulation: 

[Refer to PDF for image: combined line and vertical bar graph] 

Hits plotted vs. distribution (start of interval) in terms of 
cumulative frequency. 

Source: Copyright 2007 Hulett and Associates, LLC. 

[End of figure] 

In this example, the schedule date of June 12, 2015, is estimated to be 
9 percent likely, based on the current plan. If the organization 
chooses the 80th percentile, the date would be March 3, 2016, 
representing a 9-month time contingency. Notice that the risks have 
caused a 14-month spread, a respectable range of uncertainty, between 
the 5 percent and 95 percent confidence dates. 

Regardless of which method is used to examine schedule activity 
duration uncertainty, it is important to identify the risks that 
contribute most to the program schedule risk. Figure 51 shows one 
approach to identifying activities that need close examination for 
effective risk mitigation. It compares a schedule with a well-managed 
critical path (unit 2) and two other paths that have risk but positive 
total float. The noncritical path efforts (units 1 and 3) therefore did 
not attract the program manager’s risk management attention. 

Figure 51: A Schedule Showing Critical Path through Unit 2: 
 
Task name: Risk criticality project; 
Rept ID: 2;
Mn Rdur: 0 d; 
Ml Rdur: 0 d; 
Max Rdur: 0 d; 
Curve: 0.

Task name: Start; 
Rept ID: 0;
Mn Rdur: 0 d; 
Ml Rdur: 0 d;
Max Rdur: 0 d; 
Curve: 0.

Task name: Unit 1; 
Rept ID: 0;
Mn Rdur: 0 d; 
Ml Rdur: 0 d; 
Max Rdur: 0 d; 
Curve: 0. 

Task name: Design unit 1; 
Rept ID: 0;
Mn Rdur: 18 d;
Ml Rdur: 28 d;
Max Rdur: 43 d;
Curve: 2.

Task name: Build unit 1; 
Rept ID: 0;
Mn Rdur: 35 d;
Ml Rdur: 40 d;
Max Rdur: 50 d;
Curve: 2.

Task name: Test unit 1; 
Rept ID: 0;
Mn Rdur: 20 d;
Ml Rdur: 25 d;
Max Rdur: 50 d;
Curve: 2.

Task name: Unit 2; 
Rept ID: 0;
Mn Rdur: 0 d; 
Ml Rdur: 0 d;
Max Rdur: 0 d;
Curve: 0.

Task name: Design unit 2; 
Rept ID: 0;
Mn Rdur: 25 d; 
Ml Rdur: 30 d;
Max Rdur: 38 d;
Curve: 2.

Task name: Build unit 2; 
Rept ID: 0;
Mn Rdur: 37 d;
Ml Rdur: 40 d;
Max Rdur: 45 d;
Curve: 2.

Task name: Test unit 2; 
Rept ID: 0;
Mn Rdur: 22 d;
Ml Rdur: 25 d; 
Max Rdur: 35 d;
Curve: 2.

Task name: Unit 3; 
Rept ID: 0;
Mn Rdur: 0 d; 
Ml Rdur: 0 d;
Max Rdur: 0 d;
Curve: 0.

Task name: Design unit 3; 
Rept ID: 0;
Mn Rdur: 20 d;
Ml Rdur: 30 d;
Max Rdur: 45 d;
Curve: 2.

Task name: Build unit 3; 
Rept ID: 0;
Mn Rdur: 32 d;
Ml Rdur: 37 d; 
Max Rdur: 47 d;
Curve: 2. 

Task name: Test unit 3; 
Rept ID: 0;
Mn Rdur: 20 d;
Ml Rdur: 25 d;
Max Rdur: 50 d;
Curve: 2. 

Task name: Finish; 
Rept ID: 0;
Mn Rdur: 0 d; 
Ml Rdur: 0 d;
Max Rdur: 0 d;
Curve: 0.

Source: Copyright 2007 Hulett and Associates, LLC. 

Note: Rept ID = Report Identification; Rdur = Remaining Duration, Mn = 
Minimum, ML = Most Likely, Max = Maximum; H/W = hardware; S/W = 
software. 

[End of figure] 

The measure of merit, the risk criticality, shows that the risky 
noncritical paths are more likely to delay the project than the so-
called critical path. After running the simulation, which takes into 
account the minimum, most likely, and maximum durations, one can see 
that although unit 2 is on the schedule’s deterministic critical path, 
unit 1 is 44 percent likely to ultimately delay the project and unit 3 
is 39 percent likely to do the same. In other words, in this simple 
case, the path that the critical path method labels “critical” is the 
least likely path to delay the project. 

Figure 52 shows the results of each unit’s probability of landing on 
the critical path, based on the Monte Carlo simulation. 

Figure 52: Results of a Monte Carlo Simulation for a Schedule Showing 
Critical Path through Unit 2: 

[Refer to PDF for image: illustration] 

ID: 0; 
Task name: Risk criticality project;
Total slack: 0 d; 
Critical: Yes; 
% Critical: 0; 
Risk Critical: No. 

ID: 1; 
Task name: Start; 
Total slack: 0 d; 
Critical: Yes; 
% Critical: 100; 
Risk Critical: No. 

ID: 2; 
Task name: Unit 1; 
Total slack: 12 d; 
Critical: No; 
% Critical: 44; 
Risk Critical: Yes. 

ID: 3; 
Task name: Design unit 1; 
Total slack: 12 d; 
Critical: No; 
% Critical: 44; 
Risk Critical: Yes. 

ID: 4; 
Task name: Build unit 1; 
Total slack: 2 d; 
Critical: No; 
% Critical: 44; 
Risk Critical: Yes. 

ID: 5; 
Task name: Test unit 1; 
Total slack: 2 d; 
Critical: No; 
% Critical: 44; 
Risk Critical: Yes. 

ID: 6; 
Task name: Unit 2;
Total slack: 0 d; 
Critical: Yes; 
% Critical: 17; 
Risk Critical: No. 

ID: 7; 
Task name: Design unit 2;
Total slack: 0 d; 
Critical: Yes; 
% Critical: 17; 
Risk Critical: No. 

ID: 8; 
Task name: Build unit 2; 
Total slack: 0 d; 
Critical: Yes; 
% Critical: 17; 
Risk Critical: No. 

ID: 9; 
Task name: Test unit 2; 
Total slack: 0 d; 
Critical: Yes; 
% Critical: 17; 
Risk Critical: No. 

ID: 10; 
Task name: Unit 3;
Total slack: 3 d; 
Critical: No; 
% Critical: 39; 
Risk Critical: Yes. 

ID: 11; 
Task name: Design unit 3;
Total slack: 3 d; 
Critical: No; 
% Critical: 39; 
Risk Critical: Yes. 

ID: 12; 
Task name: Build unit 3; 
Total slack: 3 d; 
Critical: No; 
% Critical: 39; 
Risk Critical: Yes. 

ID: 13; 
Task name: Test unit 3;
Total slack: 3 d; 
Critical: No; 
% Critical: 39; 
Risk Critical: Yes. 

ID: 14; 
Task name: Finish; 
Total slack: 0 d; 
Critical: Yes; 
% Critical: 100; 
Risk Critical: No. 

Source: Copyright 2007 Hulett and Associates, LLC. 

[End of figure] 
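
The “% Critical” values in figure 52 can be approximated with a simple 
simulation that counts how often each unit’s path turns out to be the 
longest. The Python sketch below uses the three-point estimates from 
figure 51 and, for simplicity, treats the three paths as strictly 
parallel, ignoring the small total float differences. 

import random

# Three-point estimates (minimum, most likely, maximum) for each unit's
# design-build-test path, from figure 51.
PATHS = {
    "Unit 1": [(18, 28, 43), (35, 40, 50), (20, 25, 50)],
    "Unit 2": [(25, 30, 38), (37, 40, 45), (22, 25, 35)],
    "Unit 3": [(20, 30, 45), (32, 37, 47), (20, 25, 50)],
}

ITERATIONS = 5000
critical_counts = {name: 0 for name in PATHS}

for _ in range(ITERATIONS):
    durations = {name: sum(random.triangular(low, high, likely)
                           for low, likely, high in path)
                 for name, path in PATHS.items()}
    # The longest path in this iteration drives the finish milestone.
    critical_counts[max(durations, key=durations.get)] += 1

for name, hits in critical_counts.items():
    print(f"{name}: on the critical path in {100 * hits / ITERATIONS:.0f}% of iterations")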

Other measures of risk importance can be reviewed. For instance, 
sensitivity measures reflecting the correlation of the activities or 
the risks with the final schedule duration can be produced by most 
schedule risk software. Figure 53 is a standard schedule sensitivity 
index, shown here for the risk criticality project in figure 51. 

Figure 53: Sensitivity Index for Spacecraft Schedule: 

[Refer to PDF for image: illustration] 
 
Risk Criticality Project: 
 
Duration Sensitivity: 
00005 - Test Unit 1: 47%; 
00013 - Test Unit 3: 42%; 
00003 - Design Unit 1: 34%; 
00011 - Design Unit 3: 29%; 
00004 - Build Unit 1: 21%; 
00012 - Build Unit 3: 18%; 
00007 - Design Unit 2: 10%; 
00009 - Test Unit 2: 8%; 
00008 - Build Unit 2: 7%. 

Source: Copyright 2007 Hulett and Associates, LLC. 

[End of figure] 

In this example, the testing and design of units 1 and 3 affect the 
schedule duration more than the design, testing, and building of unit 
2, even though unit 2 represents the critical path in the deterministic 
schedule. Therefore, without taking into account the risk associated 
with each unit’s duration, the program manager would not know that 
keeping a close eye on units 1 and 3 is imperative for keeping the 
program on schedule. 
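
A duration sensitivity index like the one in figure 53 can be sketched 
by correlating each activity’s sampled duration with the total project 
duration across iterations. The Python sketch below reuses the figure 
51 three-point estimates and is only a simplified stand-in for what 
commercial schedule risk software computes. 

import random

ACTIVITIES = {   # (minimum, most likely, maximum) remaining durations from figure 51
    "Design unit 1": (18, 28, 43), "Build unit 1": (35, 40, 50), "Test unit 1": (20, 25, 50),
    "Design unit 2": (25, 30, 38), "Build unit 2": (37, 40, 45), "Test unit 2": (22, 25, 35),
    "Design unit 3": (20, 30, 45), "Build unit 3": (32, 37, 47), "Test unit 3": (20, 25, 50),
}
UNITS = [["Design unit 1", "Build unit 1", "Test unit 1"],
         ["Design unit 2", "Build unit 2", "Test unit 2"],
         ["Design unit 3", "Build unit 3", "Test unit 3"]]

samples = {name: [] for name in ACTIVITIES}
totals = []
for _ in range(5000):
    draws = {name: random.triangular(low, high, likely)
             for name, (low, likely, high) in ACTIVITIES.items()}
    for name, value in draws.items():
        samples[name].append(value)
    totals.append(max(sum(draws[a] for a in unit) for unit in UNITS))

def correlation(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

for name in ACTIVITIES:
    print(f"{name}: correlation with total duration = "
          f"{correlation(samples[name], totals):+.2f}")

Activities on risky but nominally noncritical paths, such as the unit 1 
and unit 3 tasks, show the highest correlations, mirroring figure 53. 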

Figure 54 is a different view of final duration sensitivity resulting 
from the risk register risks themselves, using the risk driver approach 
discussed earlier. In this case, when a risk is assigned to several 
activities, its sensitivity measure reflects the entire correlation, 
not just the correlation of one activity to the project duration. 

Figure 54: Evaluation of Correlation in Spacecraft Schedule: 
 
[Refer to PDF for image: illustration] 

Driving Schedule Risk Factors: 

6 - Funding from Congress is problematic; 
4 - Fabrication requires new materials; 
7 - Schedule for testing is aggressive; 
2 - Several alternative designs considered; 
3 - New designs not yet proven; 
5 - Lost know-how since last full spacecraft; 
1 - Requirements have not been decided; 

Factors are plotted with correlation. 

Source: Copyright 2007 Hulett and Associates, LLC. 

[End of figure] 

In the example in figure 54, funding from the Congress is the biggest 
risk driver in the program schedule, followed by new materials that may 
be needed for fabrication. While not much can be done about the 
congressional funding issue since this is an external risk, contingency 
plans can be made for several scenarios in which funding may not come 
through as planned. 

In addition to standard schedule risk and sensitivity analysis, some 
events that typically occur in government programs require adding 
activities to the schedule that happen only with some probability. 
This is called “probabilistic branching.” One such event that commonly 
occurs is the completion of a test of an integrated product (software 
program, satellite, etc.). The schedule often assumes that tests are 
successful, whereas experience indicates that tests may fail, and only 
a failure requires the activities of root cause analysis, planning for 
recovery, executing the recovery, and retesting. This branch occurs 
only with some probability. An example is shown in figure 55. 

Figure 55: An Example of Probabilistic Branching Contained in the 
Schedule: 

[Refer to PDF for image: illustration] 
 
ID: 1; 
Task name: Project; 
Duration: 95 d; 
Start: 6/1; 
Finish: 9/3; 
Predecessor: [Empty]. 

ID: 2; 
Task name: Start; 
Duration: 0 d; 
Start: 6/1; 
Finish: 6/1; 
Predecessor: [Empty]. 

ID: 3; 
Task name: Design unit; 
Duration: 30 d; 
Start: 6/1; 
Finish: 6/30; 
Predecessor: 2. 

ID: 4; 
Task name: Build unit; 
Duration: 40 d; 
Start: 7/1; 
Finish: 8/9; 
Predecessor: 3. 

ID: 5; 
Task name: Test unit; 
Duration: 25 d; 
Start: 8/10; 
Finish: 9/3; 
Predecessor: 4. 

ID: 6; 
Task name: Fixit; 
Duration: 0 d; 
Start: 9/3; 
Finish: 9/3; 
Predecessor: 5. 

ID: 7; 
Task name: Retest; 
Duration: 0 d; 
Start: 9/3; 
Finish: 9/3; 
Predecessor: 6. 

ID: 8; 
Task name: Finish; 
Duration: 0 d; 
Start: 9/3; 
Finish: 9/3; 
Predecessor: 7, 5. 

Source: Copyright 2007 Hulett and Associates, LLC. 

[End of figure] 

If the test unit activity fails, FIXIT and retest occur; otherwise, 
their duration is 0 days. This is a discontinuous event that leads to 
the two new activities. If the test is estimated to fail with some 
probability such as 30 percent, the resulting probability distribution 
of dates for the entire project can be depicted as in figure 56. 
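
A minimal Python sketch of this probabilistic branch follows. The 
triangular ranges around the figure 55 durations and the durations 
assigned to the fix-it and retest activities are assumptions for 
illustration; in the baseline schedule those branch activities carry 
0 days. 

import random

FAIL_PROBABILITY = 0.30   # assumed 30 percent chance that the unit test fails

def one_iteration():
    duration = random.triangular(25, 40, 30)       # design unit (assumed range around 30 d)
    duration += random.triangular(35, 55, 40)      # build unit (assumed range around 40 d)
    duration += random.triangular(20, 35, 25)      # test unit (assumed range around 25 d)
    if random.random() < FAIL_PROBABILITY:         # probabilistic branch: the test fails
        duration += random.triangular(10, 30, 20)  # root cause analysis and fix (assumed)
        duration += random.triangular(10, 25, 15)  # retest (assumed)
    return duration

samples = sorted(one_iteration() for _ in range(3000))
print(f"80th percentile finish: about {samples[int(0.8 * len(samples)) - 1]:.0f} days after start")

Plotting the samples would show the same kind of bimodal shape as 
figure 56: a cluster of successful-test iterations followed by a later 
cluster of failed-test iterations. 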

Figure 56: Probability Distribution Results for Probabilistic Branching 
in Test Unit: 

[Refer to PDF for image: combined vertical bar and line graph] 

Frequency plotted vs. completion date. 
 
Date: 1/9/2006 8:03:50 PM; 
Samples: 3000;
Unique ID: 0;
Name: Probabilistic Branch; 

Completion Std Deviation: 27.45 d; 
95% Confidence Interval: 0.98 d; 
Each bar represents 10 d. 

Completion Probability Table 

Prob: 0.05; 
Date: 9/1. 

Prob: 0.10; 
Date: 9/4. 

Prob: 0.15; 
Date: 9/6. 

Prob: 0.20; 
Date: 9/8. 

Prob: 0.25; 
Date: 9/10. 

Prob: 0.30; 
Date: 9/11. 

Prob: 0.35; 
Date: 9/13. 

Prob: 0.40; 
Date: 9/14. 

Prob: 0.45; 
Date: 9/16. 

Prob: 0.50; 
Date: 9/18. 

Prob: 0.55; 
Date: 9/20. 

Prob: 0.60; 
Date: 9/23. 

Prob: 0.65; 
Date: 9/26. 

Prob: 0.70; 
Date: 10/4. 

Prob: 0.75; 
Date: 10/28. 

Prob: 0.80; 
Date: 11/3. 

Prob: 0.85; 
Date: 11/9. 

Prob: 0.90; 
Date: 11/13. 

Prob: 0.95; 
Date: 11/20. 

Prob: 1.00; 
Date: 12/16. 

Source: Copyright 2007 Hulett and Associates, LLC. 

[End of figure] 

Notice the bimodal distribution with the test success iterations on the 
left of figure 56 and the test failure iterations on the right. If the 
organization demands an 80th percentile schedule, it would be November 
3, although if it is satisfied by anything under the 70th percentile, 
the possibility of failure would not be important. 

Other capabilities are possible once the schedule is viewed as a 
probabilistic statement of how the program might unfold. One that is 
notable is the correlation between activity durations. Correlation 
arises when two activity durations are both influenced by the same 
external 
force and can be expected to vary in the same direction within their 
own probability distributions in any consistent scenario. While 
durations might vary in opposite directions if they are negatively 
correlated, this is less common than positive correlation in program 
management. Correlation might be positive and fairly strong if, for 
instance, the same assumption about the maturity of a technology is 
made to estimate the duration of design, fabrication, and testing 
activities. If the technology maturity is not known with certainty, it 
would be consistent to assume that design, fabrication, and testing 
activities would all be longer, or shorter, than scheduled together. It 
is the “together” part of the consistent scenario that represents 
correlation. 

Without specifying correlation between these activity durations in the 
simulation, some iterations or scenarios would have some activities 
long and others short in their respective ranges. This would be 
inconsistent with the idea that they all react to the maturity of the 
same technology. Specifying correlations between design, fabrication, 
and testing ensures that each iteration represents a scenario in which 
their durations are consistently long or short in their ranges. Because 
schedules tend to add durations (given their logical structure), if the 
durations are long together or short together there is a chance for 
very long or very short projects. How much longer or shorter depends on 
the specific case, but without correlation, the risk analysis may 
underestimate the final effect. Figure 57 demonstrates this issue with 
a simple single-path hardware development, fabrication, and test 
program. 

Figure 57: A Project Schedule Highlighting Correlation Effects: 

[Refer to PDF for image: illustration] 

ID: 0; 
Task name: Correlation Project GAO; 
Duration: 500 d; 
Start: 1/9/08; 
Finish: 5/22/09. 

ID: 1; 
Task name: Start; 
Duration: 0 d; 
Start: 1/9/08; 
Finish: 1/9/08. 

ID: 2; 
Task name: Hardware Design; 
Duration: 100 d; 
Start: 1/9/08; 
Finish: 4/17/08. 

ID: 3; 
Task name: Hardware Fabrication; 
Duration: 300 d; 
Start: 4/18/08; 
Finish: 2/11/09. 

ID: 4; 
Task name: Hardware Test; 
Duration: 100 d; 
Start: 2/12/09; 
Finish: 5/22/09. 

ID: 5; 
Task name: Finish; 
Duration: 0 d; 
Start: 5/22/09; 
Finish: 5/22/09. 
 
Source: Copyright 2007 Hulett and Associates, LLC. 

[End of figure] 

Assuming no correlation between the activities’ durations, the result 
would be as shown in figure 58. In this uncorrelated case, the 80 
percent probability date is August 10, 2009, and the standard deviation 
of the completion date, a measure of dispersion, is 40.47 days. 

Figure 58: Risk Results Assuming No Correlation between Activity 
Durations: 

[Refer to PDF for image: combined vertical bar and line graph] 

Frequency plotted vs. completion date. 
 
Date: 12/27/2007 8:59:10 PM; 
Samples: 5000;
Unique ID: 0;
Name: Correlation Project GAO; 

Completion Std Deviation: 40.47 d; 
95% Confidence Interval: 1.12 d; 
Each bar represents 15 d. 

Completion Probability Table: 

Prob: 0.05; 
Date: 5/6/09. 

Prob: 0.10; 
Date: 5/15/09. 

Prob: 0.15; 
Date: 5/25/09. 

Prob: 0.20; 
Date: 6/1/09. 

Prob: 0.25; 
Date: 6/7/09. 

Prob: 0.30; 
Date: 6/12/09. 

Prob: 0.35; 
Date: 6/18/09. 

Prob: 0.40; 
Date: 6/23/09. 

Prob: 0.45; 
Date: 6/29/09. 

Prob: 0.50; 
Date: 7/4/09. 

Prob: 0.55; 
Date: 7/10/09. 

Prob: 0.60; 
Date: 7/16/09. 

Prob: 0.65; 
Date: 7/22/09. 

Prob: 0.70; 
Date: 7/27/09. 

Prob: 0.75; 
Date: 8/3/09. 

Prob: 0.80; 
Date: 8/10/09. 

Prob: 0.85; 
Date: 8/18/09. 

Prob: 0.90; 
Date: 8/29/09. 

Prob: 0.95; 
Date: 9/14/09. 

Prob: 1.00; 
Date: 11/23/09. 

GAO: No Correlations. 

Source: Copyright 2007 Hulett and Associates, LLC. 

[End of figure] 

However, if the influence of the technology maturity is strong and the 
program team believes that there is a 90 percent correlation between 
design, fabrication, and test of the hardware system, the simulation 
results will be affected dramatically. While the 90 percent correlation 
is high (correlation is measured between –1.0 and +1.0), there are 
often no actual data on correlation, so expert judgment is typically 
used to set the correlation coefficients. Assuming this degree 
of correlation, we get the result in figure 59. 

Figure 59: Risk Results Assuming 90 Percent Correlation between 
Activity Durations: 

[Refer to PDF for image: combined vertical bar and line graph] 

Frequency plotted vs. completion date. 
 
Date: 12/27/2007 9:05:20 PM; 
Samples: 5000;
Unique ID: 0;
Name: Correlation Project GAO; 

Completion Std Deviation: 62.59 d; 
95% Confidence Interval: 1.73 d; 
Each bar represents 15 d. 

Completion Probability Table: 

Prob: 0.05; 
Date: 4/6/09. 

Prob: 0.10; 
Date: 4/21/09. 

Prob: 0.15; 
Date: 5/2/09. 

Prob: 0.20; 
Date: 5/11/09. 

Prob: 0.25; 
Date: 5/20/09. 

Prob: 0.30; 
Date: 5/28/09. 

Prob: 0.35; 
Date: 6/6/09. 

Prob: 0.40; 
Date: 6/14/09. 

Prob: 0.45; 
Date: 6/23/09. 

Prob: 0.50; 
Date: 7/1/09. 

Prob: 0.55; 
Date: 7/10/09. 

Prob: 0.60; 
Date: 7/19/09. 

Prob: 0.65; 
Date: 7/29/09. 

Prob: 0.70; 
Date: 8/9/09. 

Prob: 0.75; 
Date: 8/21/09. 

Prob: 0.80; 
Date: 9/4/09. 

Prob: 0.85; 
Date: 9/19/09. 

Prob: 0.90; 
Date: 10/7/09. 

Prob: 0.95; 
Date: 10/28/09. 

Prob: 1.00; 
Date: 12/25/09. 

GAO Correlation = 0.9. 

Source: Copyright 2007 Hulett and Associates, LLC. 

[End of figure] 

In this case the 80 percent probability date is September 4, 2009, 
nearly a month longer, and the standard deviation is 55 percent larger 
than when the activities were assumed independent. While the expected 
July 8, 2009, date varied little from the uncorrelated risk analysis, 
the deterministic May 22, 2009, date increased in probability to more 
than 25 percent by correlating the risks. The two results are compared 
in figure 60. 

Figure 60: Schedule Analysis Results with and without Correlation: 

[Refer to PDF for image: line graph] 

Cumulative probability plotted vs. dates. 

Lines depict the effect of correlation: 
No correlation; 
High correlation. 

Source: Copyright 2007 Hulett and Associates, LLC. 

[End of figure] 
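
The effect of correlation on schedule dispersion can be sketched with a 
simple common-factor simulation in Python. The normal distributions, 
means, and standard deviations below are illustrative assumptions (the 
example in figures 58 and 59 uses triangular distributions); the point 
is only that a shared factor widens the spread of the total duration. 

import random

# Assumed mean and standard deviation (days) of design, fabrication, and test durations.
ACTIVITIES = [(100, 10), (300, 30), (100, 20)]

def correlated_normals(n, rho):
    # Draw n standard normals sharing a common factor, giving pairwise correlation rho.
    common = random.gauss(0, 1)
    return [rho ** 0.5 * common + (1 - rho) ** 0.5 * random.gauss(0, 1) for _ in range(n)]

def total_duration(rho):
    draws = correlated_normals(len(ACTIVITIES), rho)
    return sum(mean + sd * z for (mean, sd), z in zip(ACTIVITIES, draws))

def std_dev(values):
    m = sum(values) / len(values)
    return (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5

for rho in (0.0, 0.9):
    results = [total_duration(rho) for _ in range(5000)]
    print(f"correlation {rho:.1f}: standard deviation of total duration = "
          f"{std_dev(results):.1f} days")

With no correlation, the standard deviation of the total is roughly the 
square root of the sum of the activity variances; at 0.9 correlation it 
is substantially larger, mirroring the widening between figures 58 
and 59. 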

Other rules of thumb that can mitigate schedule risk include: 

* break down longer activities to show critical handoffs—for example, 
if a task is 4 months long but a critical handoff is expected halfway 
through, the task should be broken down into separate 2-month tasks 
that logically link the handoff between tasks. Otherwise, long lags 
must be used, which are rigid and tend to skew the risk results. 

* detailed program schedules should contain a predominance of finish-to-
start logical relationships but summary schedules, typically used in 
risk analysis, may have more start-to-start and finish-to-finish 
relationships between phases. This practice requires care in completing 
the logic with the following rule: 

Each activity needs a finish-to-start or start-to-start predecessor 
that drives it as well as a finish-to-start or finish-to-finish 
successor that it drives—in other words, dangling activities must be 
avoided. In this way, risks in predecessors and successors will be 
transmitted correctly down the paths to the later program milestones. 

* work packages in detailed program schedules should be no longer than 
2 months so that work can be planned within two reporting periods, but 
for schedule risk a more summary schedule with basic activities 
representing phases is often used. 

* lags should represent only the passing of time on detailed program 
schedules and should never be used to replace a task, whereas in 
summary schedules used for schedule risk, the lags may have to be 
longer. 

* resources should be scheduled to reflect their scarcity, such as 
availability of staff or equipment. 

* constraints should be minimized, because they impose a movement 
restriction on tasks and can cause false dates in a schedule. 

* total float that is more than 5 percent of the total program schedule 
may indicate that the network schedule is not yet mature. 

The schedule risk analysis aims to answer 11 fundamental questions: 

1. Does the schedule reflect all work to be completed? 

2. Are the program critical dates used to plan the schedule? 

3. Are the activities sequenced logically? 

4. Are activity interdependencies identified and logical? 

5. If there are constraints, lags, and lead times, are they required, 
and is documentation available to justify the amounts? 

* Constraints and lags should be used sparingly. There may be 
legitimate reasons for using constraints, but each constraint should be 
investigated. For instance, start-not-earlier-than constraints might 
reflect the availability of funding or a weather limitation and may be 
logical. 

* Finish-not-later-than constraints are usually artificial and reflect 
some policy rather than a program reality. If the program does not meet 
these dates, imposing this kind of constraint in a computer model of 
the program schedule might make the schedule look good in the computer
while the program is in trouble in the field. 
 
* Constraints that push the program activities beyond the dates that 
predecessors require in order to add float or flexibility are arbitrary 
and not recommended. The schedule risk analysis should determine the 
amount of time contingency needed. 

6. How realistic are the schedule activity duration estimates? 

7. How were resource estimates developed for each activity and will the 
resources be available when needed? 

8. How accurate is the critical path and was it developed with 
scheduling software? 

9. How reasonable are float estimates? Activities’ floats should be 
realistic. High total float values often indicate that logic is 
incorrect or missing and that there are dangling activities. 

10. Can the schedule determine current status and provide reasonable 
completion date forecasts? 

11. What level of confidence is associated with the program schedule 
completion date? Does it reflect a schedule risk analysis and the 
organization’s or stakeholders’ risk tolerance? 

[End of Appendix 10] 

Appendix 11: Learning Curve Analysis: 

In this appendix, we describe the two ways to develop learning 
curves—unit formulation and cumulative average formulation—and discuss 
associated issues. 

Unit Formulation: 

Unit formulation (or unit theory) states that as the quantity of units 
doubles, unit cost decreases by a constant percentage. It is 
represented by the formula: 
 
Y = AX^b, where: 
Y = the cost of the Xth unit, 
A = the first unit (T1) cost, 
X = the unit number, and 
b = the slope coefficient (defined as Ln (slope) / Ln (2)). 

What causes the cost to decrease as the quantity doubles is the rate of 
learning, depicted by b in the equation. Stated more simply, if the 
slope were 80 percent, then the value of unit 2 would be 80 percent of 
the value of the 1st unit, the 4th unit would be 80 percent of the 
value of the 2nd unit, and so on. Each time the quantity doubles, the 
unit cost falls to the learning curve slope percentage of its previous 
value. 
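
As a minimal illustration of the unit formulation, the Python sketch 
below assumes a $1,000 first unit cost and an 80 percent slope; both 
values are chosen only for the example. 

import math

T1 = 1000.0          # assumed first unit cost, in dollars
SLOPE = 0.80         # 80 percent learning curve slope
b = math.log(SLOPE) / math.log(2)   # slope coefficient

def unit_cost(x):
    # Unit theory: cost of the Xth unit, Y = A * X^b.
    return T1 * x ** b

for x in (1, 2, 4, 8):
    print(f"unit {x}: ${unit_cost(x):,.2f}")
# Each doubling of the quantity cuts the unit cost to 80 percent of its
# previous value: $1,000.00, $800.00, $640.00, $512.00.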

Cumulative Average Formulation: 

Cumulative average formulation is commonly associated with T. P. 
Wright, who initiated an important discussion of this method in 1936. 
[Footnote 96] The theory is that as the total quantity of units 
produced doubles, the cumulative average cost decreases by a constant 
percentage. This approach uses the same functional form as unit 
formulation, but it is interpreted differently: 

Y = AX^b, where: 
Y = the average cost of X units, 
A = the first unit (T1) cost, 
X = the cumulative number of units, and 
b = the slope coefficient (defined as above). 

In cumulative average theory, if the average cost of the first 10 units 
were $100 and the slope were 90 percent, the average cost of the first 
20 units would be $90, the average cost of the first 40 units would be 
$81, and so on. 

The difference between unit formulation and cumulative average theory 
lies in where on the curve costs are most affected. For the first few 
units, using cumulative average theory will yield greater cost savings 
than using a unit curve with the same slope. As the number of units 
increases, the difference between the results decreases. 
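
The cumulative average computation in the worked example above can be 
sketched as follows; the implied first unit cost is backed out from the 
$100 average cost of the first 10 units at a 90 percent slope. 

import math

SLOPE = 0.90
b = math.log(SLOPE) / math.log(2)

# Back out the first unit (T1) cost that makes the average cost of the
# first 10 units equal $100, per the worked example above.
A = 100.0 / 10 ** b

def cumulative_average_cost(x):
    # Cumulative average theory: average cost of the first X units, Y = A * X^b.
    return A * x ** b

for x in (10, 20, 40):
    average = cumulative_average_cost(x)
    total = average * x            # total cost of the first X units
    print(f"first {x} units: average ${average:.2f}, total ${total:,.2f}")

The printout reproduces the $100, $90, and $81 averages from the 
example; multiplying each average by its quantity gives the 
corresponding total cost. 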

Choosing between Unit Formulation and Cumulative Average: 

Choosing a formulation is not so much a science as an art. No firm 
rules would cause a cost estimator to select one approach over the 
other, but analyzing some factors can help decide which might best 
model the actual production environment. Some factors to consider when 
determining which approach to use are: 
 
1. analogous systems, 

2. industry standards, 

3. historic experience, and 

4. expected production environment. 

Analogous Systems: 

Systems that are similar in form, function, development, or production 
process may help justify choosing one method over another. For example, 
if an agency is looking to buy a modified version of a commercial 
aircraft and a unit curve was used to model the production cost of a 
previous version of a modified commercial jet, the estimator should 
choose unit formulation. 

Industry Standards: 

Certain industries tend to prefer one method over the other. 
For example, some space systems have a better fit using cumulative 
average formulation. If an analyst were estimating one of these space 
systems, cumulative average formulation should be used, since it is an 
industry standard. 

Historic Experience: 

Some contractors have a history of using one method over another 
because it models their production process better. The cost estimator 
should use the same method as the contractor, if the contractor’s 
method is known. 

Expected Production Environment: 

Certain production environments favor one method over another. For 
example, cumulative average formulation best models production 
environments in which the contractor is just starting production with 
prototype tooling, has an inadequate supplier base, expects early 
design changes, or is subject to short lead times. In such situations, 
there is a risk of concurrency between the development and production 
phases. Cumulative averaging helps smooth out the initial cost 
variations and provides an overall better fit to the data. In contrast, 
unit formulation is a better fit for production environments where the 
contractor is well prepared to begin production in terms of tooling, 
suppliers, lead times, and so on. As a result, there is less need for 
the data to be smoothed out by averaging the results. 

There are no firm rules for choosing one method over the other. 
Choosing between unit formulation and cumulative average formulation 
should be based on the cost estimator’s ability to determine which one 
best models the system’s costs. 

Production Rate Effects and Breaks in Production: 

Not only do costs decrease as more units are produced but also costs 
usually decrease as the production rate increases. This effect can be 
modeled by adding a rate variable to the unit learning formulation. The 
equation then becomes: 
 
Y = AX^b Q^r, where: 
Y, A, X, and b are as defined earlier, 
Q = the production rate (quantity per time period or lot), and 
r = the rate coefficient (defined as Ln (rate slope) / Ln (2)). 

This rate equation directly models cost reductions achieved by 
economies of scale. The rate at which items can be produced can also be 
affected by the continuity of production. Production breaks may occur 
because of program delays (budget or technical), time lapses between 
initial and follow-on orders, or labor disputes. Examining a production 
break involves answering two questions: 

* How much learning has been lost (or forgotten) because of the break 
in production? 
 
* How will the learning loss affect the costs of future production 
items? 

An analyst can answer the first question by using the Anderlohr method 
for estimating the loss of learning. The analyst can then determine the 
effect of the loss by using the retrograde method. 

Anderlohr Method: 

When assessing the effect of a production break on costs, it is 
necessary first to quantify how much learning was achieved before the 
break and then to quantify how much of it was lost by the break. 
The Anderlohr method divides learning into five categories: personnel 
learning, supervisory learning, continuity of production, methods, and 
special tooling. Personnel learning loss occurs because of layoffs 
or removal of staff from the production line. Supervisory learning loss 
occurs when the number of supervisors is reduced along with the 
workforce, so that supervisors who may no longer be familiar with the 
job cannot provide optimal guidance. 

Learning can also be lost when production continuity is broken because 
the physical configuration of the production line has changed or the 
line has not been optimized for new workers. Methods are usually affected 
least by production breaks, as long as they are documented. However, 
revisions to the methods may be required if the tooling has to change 
once the production line restarts. Finally, tools may break during the 
production halt or may not be replaced when they are worn, causing 
productivity loss. 

Each category must have a weight assigned to capture its effect on 
learning. The weights can vary by production situation but must always 
total 100 percent. To find the percentage of lost learning—known as the 
learning lost factor—the estimator must determine the learning lost 
factor in each category and then calculate the weighted average (see 
table 71). 

Table 71: The Anderlohr Method for the Learning Lost Factor: 

Category: Personnel learning; 
Weight: 30%; 
Learning lost: 51%; 
Weighted loss: 0.1530. 

Category: Supervisory learning; 
Weight: 20%; 
Learning lost: 19%; 
Weighted loss: 0.0380. 

Category: Production continuity;
Weight: 20%; 
Learning lost: 50%; 
Weighted loss: 0.1000. 

Category: Tooling; 
Weight: 15%; 
Learning lost: 5%; 
Weighted loss: 0.0075. 

Category: Methods; 
Weight: 15%; 
Learning lost: 7%; 
Weighted loss: 0.0105. 

Category: Total learning lost;
Weight: 100%; 
Weighted loss: 0.3090 or 30.9%. 

Source: DOD. 

[End of table] 

In the table, if the production break were 6 months, the effect would 
be almost a 31 percent loss of learning resulting from the shutdown of 
the production line. 
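
The weighted-average computation in table 71 can be sketched in a few 
lines of Python; the weights and learning lost percentages are taken 
directly from the table. 

# (category, weight, learning lost) from table 71
CATEGORIES = [
    ("Personnel learning",    0.30, 0.51),
    ("Supervisory learning",  0.20, 0.19),
    ("Production continuity", 0.20, 0.50),
    ("Tooling",               0.15, 0.05),
    ("Methods",               0.15, 0.07),
]

learning_lost_factor = sum(weight * lost for _, weight, lost in CATEGORIES)
print(f"Learning lost factor: {learning_lost_factor:.4f} ({learning_lost_factor:.1%})")
# Prints 0.3090, that is, about a 31 percent loss of learning from the break.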

Retrograde Method: 

Assume that 10 units had been produced before the production break. The 
true cost of the first unit produced after the production break would 
then equal the cost of the 11th unit—assuming no production break—plus 
the 30.9 percent penalty from the lost learning. The retrograde method 
simply goes back up the learning curve to the unit (X) where that cost 
occurred. The number of units back up the curve is then the number of 
retrograde or lost units of learning. Production restarts at unit X 
rather than at unit 11. 
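
A small Python sketch of the retrograde method follows. It assumes a 
$1,000 first unit cost and an 80 percent unit slope (both illustrative) 
and applies the 30.9 percent learning lost factor from table 71. 

import math

T1 = 1000.0            # assumed first unit cost
SLOPE = 0.80           # assumed 80 percent unit learning curve
LEARNING_LOST = 0.309  # learning lost factor from table 71
b = math.log(SLOPE) / math.log(2)

def unit_cost(x):
    return T1 * x ** b   # unit theory cost of unit x

# Cost of the first post-break unit: the undisturbed unit 11 cost plus the penalty.
penalized_cost = unit_cost(11) * (1 + LEARNING_LOST)

# Walk back up the curve to the earlier unit whose cost the restart effectively matches.
restart_unit = 11
while restart_unit > 1 and unit_cost(restart_unit - 1) <= penalized_cost:
    restart_unit -= 1

print(f"first post-break unit costs about ${penalized_cost:,.0f}")
print(f"production restarts as if at unit {restart_unit} rather than unit 11")

In this illustration roughly six units’ worth of learning are lost; the 
result would differ with other slopes or learning lost factors. 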

As illustrated by the Anderlohr and retrograde methods, costs increase 
as a result of production breaks. Cost estimators and auditors should 
question how the costs were estimated to account for learning that is 
lost, taking into account all factors that can be affected by learning. 

Step-Down Functions: 

A step-down function is a method of estimating first unit production 
costs from prototype (or development) cost data. The first step is to 
account for the number of equivalent prototype units, based on both 
partial and complete units. This allows the cost estimator to capture, 
on the improvement curve, the effects of units that are not entirely 
whole. For example, if the development program includes a static 
article that represents 85 percent of a full aircraft, a fatigue 
article that represents 50 percent of a full aircraft, and three full 
aircraft, the development program would have 4.35 equivalent units. If 
the program is being credited with learning in development, the first 
production unit would then be unit 5.35. 

After equivalent units have been calculated, the analyst must determine 
if the cost improvement achieved during development on these prototype 
units applies to the production phase. The following factors should be 
considered when analyzing the amount of credit to take in production 
for cost improvement incurred in development: 

* the break between the last prototype unit and the start of production 
units, 

* how similar the prototype units are to the production units, 

* the production rate, and 

* the extent to which the same facilities, processes, and people are 
being used in production as in development. 

By addressing these factors, the analyst can determine proper placement 
on the curve for the first production unit. For example, analysis might 
indicate that cost improvement is continuous and, therefore, the first 
production unit is really the number of equivalent development units 
plus one. If it is further determined that the development slope should 
be the same as the production slope, the production estimate can be 
calculated by continuing down the curve for the desired quantity. This 
is referred to as the continuous approach. 
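
A brief Python sketch of the continuous approach uses the 4.35 
equivalent development units from the example above; the $2 million 
first unit cost and the 85 percent slope are assumptions for 
illustration. 

import math

T1 = 2_000_000.0              # assumed cost of the first development unit
SLOPE = 0.85                  # assumed common development and production slope
EQUIVALENT_DEV_UNITS = 4.35   # static article + fatigue article + three full aircraft
b = math.log(SLOPE) / math.log(2)

def unit_cost(x):
    return T1 * x ** b

# Continuous approach: the first production unit simply continues down the curve.
first_production_unit = EQUIVALENT_DEV_UNITS + 1
lot_cost = sum(unit_cost(first_production_unit + i) for i in range(10))

print(f"first production unit (unit {first_production_unit:.2f}): "
      f"${unit_cost(first_production_unit):,.0f}")
print(f"estimated cost of the first 10 production units: ${lot_cost:,.0f}")

If the analysis instead supports a sequential or disjoint adjustment, a 
displacement factor would be applied before continuing down the curve. 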

Analysis of the four factors often leads the analyst to conclude that 
totally continuous improvement is not appropriate and that some 
adjustment is required. This could be because prototype manufacture 
was accomplished in a development laboratory rather than in a normal 
production environment or because engineering personnel were used 
rather than production personnel. Numerous reasons are possible for less 
than totally continuous cost improvement. Since all programs are 
unique, the analyst must thoroughly evaluate their particularities. 

Two Theories Associated with Less Than Continuous Improvement: 

Two theories, sequential and disjoint, address the issue of less than 
continuous improvement. Both theories maintain that the improvement 
slope is the same in production and development but that a step down in 
value occurs between the cost of the first prototype unit and the cost 
of the first production unit. 

In sequential theory, cost improvement continues where the first 
production unit equals the last prototype unit plus one, but a 
displacement on the curve appears at that point. In disjoint theory, 
the curve is displaced, but improvement starts over at unit one rather 
than at the last prototype unit plus one. These displacements are 
typically quantified as factors. Because disjoint theory restarts 
learning, it usually results in significantly lower production 
estimates. 

The continuous cost improvement concept and sequential and disjoint 
displacement theories assume the same improvement slope in production 
as in development. Plots of actual cost data, however, sometimes 
indicate that production slopes are either steeper or flatter than 
development slopes. In cases in which the historic data strongly 
support a change in slope, the analyst should consider both a step down 
and a shift. For example, changing from an engineering environment to a 
heavily automated production line might both displace the improvement 
curve downward and flatten it. 

End-of-Production Adjustments: 

As production ends, programs typically incur greater costs for both 
recurring and nonrecurring efforts. The recurring cost of end-of-
production units is often higher than would have been projected from a 
program’s historic cost improvement curve. This is referred to as toe-
up. The main reasons for toe-up are: 
 
* the transfer of more experienced and productive employees to other 
programs, resulting in a loss of learning on the production line; 

* reduced size of the final lot, resulting in rate adjustment 
penalties; 

* a decrease in worker productivity from the psychological effect of 
the imminent shutdown of the production line; 

* a shift of management attention to more important or financially 
viable programs, resulting in delayed identification and resolution of 
production problems; 

* tooling inefficiency, resulting from tear-down of the tooling 
facility while the last production lot is still in process; 
 
* production process modifications, resulting from management’s 
attempts to accommodate such factors as reductions in personnel and 
production floor space; and 
 
* similar problems with subcontractors. 

No techniques for projecting recurring toe-up costs are generally 
accepted. In truth, such costs are often ignored. If, however, the 
analyst has access to relevant historic cost data, especially 
contractor-specific data, it is recommended that a factor be developed 
and applied. 

Typically far more extensive than recurring toe-up costs are the 
nonrecurring close-out costs that account for the numerous nonrecurring 
activities at the end of a program. Examples of close-out costs are: 

* the completion of all design or “as built” drawings and files to 
match the actual “as built” system; often during a production run, 
change orders that modify a system need to be reflected in the final 
data package that is produced; 

* the completion of all testing instructions to match “as built” 
production; and 

* dismantling the production tooling or facility at the end of the 
production run and, sometimes, the storage of that production tooling. 

[End of Appendix 11] 

Appendix 12: Technology Readiness Levels: 

Readiness level: 1. Basic principles observed and reported; 
Definition: Lowest level of technology readiness. Translation of 
scientific research into applied research and development 
begins—examples might include paper studies of a technology’s basic 
properties. 

Readiness level: 2. Technology concept or application formulated; 
Definition: Invention begins, application is speculative, and no proof 
or detailed analysis supports assumptions. Examples are limited to 
paper studies. 

Readiness level: 3. Analytical and experimental critical function or 
characteristic proof of concept; 
Definition: Active research and development begins, including 
analytical and laboratory studies to physically validate analytical 
predictions of technology’s separate elements. Examples include 
components not yet integrated or representative. 

Readiness level: 4. Component or breadboard validation in a laboratory; 
Definition: Basic technological components are integrated to establish 
that the pieces will work together. This is relatively low fidelity 
compared to the eventual system. Example is integration of ad hoc 
hardware in a laboratory. 

Readiness level: 5. Component or breadboard validation in relevant 
environment; 
Definition: Fidelity of breadboard technology increases significantly. 
The basic technological components are integrated with reasonably 
realistic supporting elements so that the technology can be tested in a 
simulated environment. Example is high-fidelity laboratory integration 
of components. 

Readiness level: 6. System or subsystem model or prototype 
demonstration in a relevant environment; 
Definition: Representative model or prototype system, well beyond level 
5, is tested in a relevant environment, representing a major step up in 
a technology’s demonstrated readiness. Examples include testing a 
prototype in a high-fidelity laboratory environment or in simulated 
operational environment. 

Readiness level: 7. System prototype demonstration in an operational 
environment; 
Definition: Prototype near or at planned operational system, 
representing a major step up from level 6, requiring the demonstration 
of an actual system prototype in an operational environment, such as in 
an aircraft, in a vehicle, or in space. Example is testing the 
prototype in a test bed aircraft. 

Readiness level: 8. System completed and flight qualified through test 
and demonstration; 
Definition: Technology has been proven to work in its final form and 
under expected conditions; in almost all cases, represents the end of 
true system development. Example is developmental test and evaluation 
of the system in its intended weapon system to determine if it meets 
design specifications. 

Readiness level: 9. System flight proven through successful mission 
operations; 
Definition: Actual application of the technology in its final form and 
under mission conditions, such as those encountered in operational test 
and evaluation. In almost all cases, this is the end of the last bug 
fixing aspects of true system development. Example is using the system 
under operational mission conditions. 
 
Source: GAO. 

[End of table] 

[End of Appendix 12] 

Appendix 13: EVM-Related Award Fee Criteria: 
 
Criterion: EVM is integrated and used for program management; 
Rating: Unsatisfactory; 
Rationale: Contractor fails to meet criteria for satisfactory 
performance[A]. 

Criterion: EVM is integrated and used for program management; 
Rating: Satisfactory; 
Rationale: Contractor team uses earned value performance data to make 
program decisions, as appropriate. 

Criterion: EVM is integrated and used for program management; 
Rating: Good; 
Rationale: Meets all satisfactory criteria, and earned value 
performance is effectively integrated into program management reviews 
and is a primary tool for program control and decisionmaking. 
 
Criterion: EVM is integrated and used for program management; 
Rating: Very good; 
Rationale: Meets all good criteria and the contractor team develops and 
sustains continual and effective communication of performance status 
with the government. 

Criterion: EVM is integrated and used for program management; 
Rating: Excellent; 
Rationale: Meets all very good criteria, and the entire contractor team 
proactively and innovatively uses EVM and plans and implements 
continual EVM process improvement. 

Criterion: Contractor manages major subcontractors; 
Rating: Unsatisfactory; 
Rationale: Fails to meet criteria for satisfactory performance. 

Criterion: Contractor manages major subcontractors; 
Rating: Satisfactory; 
Rationale: Routinely reviews the subcontractors’ performance 
measurement baseline. 

Criterion: Contractor manages major subcontractors; 
Rating: Good; 
Rationale: Meets all satisfactory criteria and the management system is 
structured for oversight of subcontractor performance. 

Criterion: Contractor manages major subcontractors; 
Rating: Very good; 
Rationale: Meets all good criteria and actively reviews and manages 
subcontractor progress so that it provides clear and accurate status 
reporting to government. 

Criterion: Contractor manages major subcontractors; 
Rating: Excellent; 
Rationale: Meets all very good criteria, subcontractor cost and 
schedule status is communicated to the government effectively and in a 
timely manner, and issues are proactively managed. 

Criterion: Cost, expenditure, and schedule forecasts are realistic and 
current; 
Rating: Unsatisfactory; 
Rationale: Contractor fails to meet criteria for satisfactory 
performance. 
 
Criterion: Cost, expenditure, and schedule forecasts are realistic and 
current; 
Rating: Satisfactory; 
Rationale: Contractor provides procedures for delivering realistic and 
up-to-date cost and schedule forecasts as presented in the CPR, EACs, 
contract funds status report, IMS, etc. Forecasts are complete and 
consistent with program requirements and reasonably documented. 

Criterion: Cost, expenditure, and schedule forecasts are realistic and 
current; 
Rating: Good; 
Rationale: Meets all satisfactory criteria, and all requirements for 
additional funding and schedule changes are thoroughly documented and 
justified. Expenditure forecasts are consistent, logical, and based 
on program requirements. The contractor acknowledges any cost growth in 
the current reporting period and provides well-documented forecasts. 

Criterion: Cost, expenditure, and schedule forecasts are realistic and 
current; 
Rating: Very good; 
Rationale: Meets all good criteria, and expenditure forecasts reflect 
constant scrutiny to ensure accuracy and currency. The contractor 
prepares and develops program cost and schedule data that allow 
government a clear view into current and forecast program costs and 
schedule. Schedule milestone tracking and projections are very accurate 
and reflect true program status. The contractor keeps close and timely 
communications with the government. 

Criterion: Cost, expenditure, and schedule forecasts are realistic and 
current; 
Rating: Excellent; 
Rationale: Meets all very good criteria, and the contractor 
consistently submits a realistic, high-quality EAC; reported 
expenditure profiles are accurate. 

Criterion: Contractor’s cost proposals are adequate during award fee 
evaluation period; 
Rating: Unsatisfactory; 
Rationale: Fails to meet criteria for satisfactory performance.

Criterion: Contractor’s cost proposals are adequate during award fee 
evaluation period; 
Rating: Satisfactory; 
Rationale: Proposal data, including subcontractor data, are logically 
organized and give government a view adequate to support cost analysis 
and technical review. A basis of estimate is documented for each 
element, and when it is insufficiently detailed, the contractor 
provides it to the government on request. The proposal is submitted on 
time. 

Criterion: Contractor’s cost proposals are adequate during award fee 
evaluation period; 
Rating: Good; 
Rationale: Meets all satisfactory criteria and provides detailed 
analysis for subcontractor and material costs. 

Criterion: Contractor’s cost proposals are adequate during award fee 
evaluation period; 
Rating: Very good; 
Rationale: Meets all good criteria. Proposal data are traceable and 
give the government a view for supporting a detailed technical review 
and thorough cost analysis; only minor clarification is required by 
government. Potential cost savings are considered in the proposal. 

Criterion: Contractor’s cost proposals are adequate during award fee 
evaluation period; 
Rating: Excellent; 
Rationale: Meets all very good criteria; change proposals stand alone 
and require no iteration for government understanding. The contractor 
stays in communication during proposal preparation and resolves issues 
effectively before submission. 

Criterion: Costs are controlled; 
Rating: Unsatisfactory; 
Rationale: Contractor fails to meet criteria for satisfactory 
performance. 

Criterion: Costs are controlled; 
Rating: Satisfactory; 
Rationale: Contractor and subcontractor control cost to meet program 
objectives. 

Criterion: Costs are controlled; 
Rating: Good; 
Rationale: Meets all satisfactory criteria; contractor stays within 
target cost and provides good control during contract performance. 

Criterion: Costs are controlled; 
Rating: Very good; 
Rationale: Meets all good criteria, and the contractor stays within 
cost and continues to provide good control during contract performance. 

Criterion: Costs are controlled; 
Rating: Excellent; 
Rationale: Meets all very good requirements; contractor provides 
suggestions and, when appropriate, proposals to the program office for 
initiatives that can reduce costs. The contractor implements cost 
reduction ideas across the program and at the subcontract level and 
identifies (and when appropriate implements) new technologies, 
commercial components, and manufacturing processes that can reduce 
costs. 

Criterion: Contractor conducts variance analysis; 
Rating: Unsatisfactory; 
Rationale: Fails to meet criteria for satisfactory performance. 

Criterion: Contractor conducts variance analysis; 
Rating: Satisfactory; 
Rationale: Variance analysis is sufficient and usually keeps the 
government informed of problem areas and their causes and corrective 
action. When detail is insufficient, the contractor provides it to the 
government promptly on request. 

Criterion: Contractor conducts variance analysis; 
Rating: Good; 
Rationale: Meets all satisfactory criteria and routinely keeps 
government informed of problem areas and their causes and corrective 
action. Updates explanations monthly and analyzes potential risks for 
cost and schedule impacts. 

Criterion: Contractor conducts variance analysis; 
Rating: Very good; 
Rationale: Meets all good criteria and always keeps government informed 
of problem areas and their causes and corrective action. Variance 
analysis is thorough and used for internal management to control cost 
and schedule. Detailed explanations and insight are provided for 
schedule slips or technical performance that could result in cost 
growth. The government rarely requires further clarification. 

Criterion: Contractor conducts variance analysis; 
Rating: Excellent; 
Rationale: Meets all very good criteria; variance analysis is extremely 
thorough. Contractor proactively keeps the government informed of all 
problem areas and their causes, emerging variances, impacts, and 
corrective actions. Keeps government informed of progress implementing 
the corrective action plans and fully integrates analysis with risk 
management plans and processes. 

Criterion: Billing and cumulative performance data are accurate, 
timely, and consistent and subcontractor data are integrated; 
Rating: Unsatisfactory; 
Rationale: Contractor fails to meet criteria for satisfactory 
performance. 

Criterion: Billing and cumulative performance data are accurate, 
timely, and consistent and subcontractor data are integrated; 
Rating: Satisfactory; 
Rationale: Billings to the government may have slight delays or minor 
errors and the CPR, contract funds status report, and IMS reports are 
complete and consistent, with only minor errors. Data can be traced to 
the WBS with minimum effort, and subcontractor cost and schedule data 
are integrated into the appropriate reports with some clarification 
required. Reports may be submitted late, but electronic data are 
correct. 

Criterion: Billing and cumulative performance data are accurate, 
timely, and consistent and subcontractor data are integrated; 
Rating: Good; 
Rationale: Meets all satisfactory criteria, and billing to government 
is accurate, although with slight delays. Data are complete, accurate, 
and consistent and can be traced to the WBS, with some clarification 
required. Subcontractor performance data are fully integrated into the 
appropriate on-time reports, with no clarification required. 

Criterion: Billing and cumulative performance data are accurate, 
timely, and consistent and subcontractor data are integrated; 
Rating: Very good; 
Rationale: Meets all good criteria, and data are complete, accurate, 
and consistent. 

Criterion: Billing and cumulative performance data are accurate, 
timely, and consistent and subcontractor data are integrated; 
Rating: Excellent; 
Rationale: Meets all very good criteria, and billing is submitted to 
government on time. Data are complete, accurate, and consistent and can 
be traced clearly to the WBS. CPR and contract funds status report data 
elements are fully reconcilable. Subcontractor schedule performance is 
vertically and horizontally integrated with the contractor schedule. 

Criterion: Baseline is disciplined and system is in compliance;
Rating: Unsatisfactory; 
Rationale: Contractor fails to meet criteria for satisfactory 
performance. 

Criterion: Baseline is disciplined and system is in compliance;
Rating: Satisfactory; 
Rationale: Contractor develops a reliable performance measurement 
baseline that includes work scope, schedule, and cost. The contractor 
or government may discover system deficiencies or baseline planning 
errors through either routine surveillance or data inaccuracies in the 
CPRs. Contract changes and undistributed budget are normally 
incorporated into the baseline in a timely manner. Management reserve 
is properly tracked, and eliminating performance variances is limited 
to correcting errors. 

Criterion: Baseline is disciplined and system is in compliance;
Rating: Good; 
Rationale: Meets all satisfactory criteria. Contractor develops a 
reliable performance measurement baseline that includes work scope, 
schedule, and cost. The contractor or government may discover 
system deficiencies or baseline planning errors through either routine 
surveillance or data inaccuracies in the CPRs. Contract changes and 
undistributed budget are normally incorporated into the baseline in a 
timely manner. Management reserve is tracked and used properly, and 
elimination of performance variances is limited to correction of 
errors. 

Criterion: Baseline is disciplined and system is in compliance;
Rating: Very good; 
Rationale: Meets all good criteria and the contractor builds a proper 
and realistic baseline in a timely way. The contractor ensures that 
work packages are detailed and consistent with scope of contract and 
planned consistent with schedule. The contractor conducts routine 
surveillance that reveals minor system deficiencies or minor baseline 
planning errors that are quickly assessed and corrected, resulting in 
little or no impact to data accuracy. Contractor’s EVM system is 
effectively integrated. 

Criterion: Baseline is disciplined and system is in compliance;
Rating: Excellent; 
Rationale: Meets all very good criteria and the contractor proactively 
manages the baseline and maintains timely detailed planning as far in 
advance as practical and implements proper baseline controls. The 
contractor controls and minimizes changes to the baseline, particularly 
in the near term, and system deficiencies or planning errors are few 
and infrequent. The contractor streamlines internal processes and 
maintains a high level of EVM system competency and training. 

Source: GAO and DCMA. 

[A] Program managers need to determine what satisfactory performance 
criteria will be used since each program is unique. 

[End of table] 

[End of Appendix 13] 

Appendix 14: Integrated Baseline Review Case Study And Other
Supplemental Tools: 

As described in the Cost Guide, the objectives of the integrated 
baseline review (IBR) are to gain insight into cost and schedule risk 
areas associated with the subject program (or contract) and to develop 
confidence in the program’s operating plans. The focus of this review 
should be primarily on assessing the adequacy of the baseline plan to 
execute the approved program (or contract). In chapter 19, we discuss 
the key practices for planning and executing an effective IBR. In this 
appendix, we provide supplemental information on the IBR to help 
organizations implement or improve their IBR capabilities, as well as 
to give our auditors further guidance on the effort needed to perform a 
quality IBR. This information is based on the process the Naval Air 
Systems Command (NAVAIR) uses; NAVAIR is considered a leader in the IBR 
process and in maximizing the value gained from these 
reviews.[Footnote 97] 

NAVAIR IBR Preparation: 
 
IBR Team Roles and Responsibilities: 

The typical IBR team is made up of the government program manager, 
technical experts, an EVM analyst, and DCMA, as well as other personnel 
who may help during the review. The duties of all team members include 
attending IBR training before the start of the IBR, reviewing contract 
documentation before baseline discussions with the control account 
manager (CAM), conducting CAM and senior manager discussions, helping 
to complete applicable documentation, providing a risk assessment based 
on the prescribed risk evaluation criteria, and helping to prepare the 
IBR out-brief. Table 72 describes the specific responsibilities of the 
key team leaders. 

Table 72: IBR Leadership Roles and Responsibilities: 
 
Key leader: Program manager; 
Role: Acts as or assigns the team leader for the IBR; jointly 
responsible for the IBR process; 
Responsibility: Plan and perform the IBR; Monitor progress on required 
actions until issues are resolved; Provide an adequate number of 
qualified personnel as IBR team members; Specify evaluation criteria 
for risk areas; 
Document risk issues identified during an IBR; Present the IBR out-
brief. 
 
Key leader: Performance measurement deputy team leader; 
Role: This lead role is filled by the NAVAIR [AIR-4.2.3] EVM analyst; 
Responsibility: Provide overall facilitation for the IBR; Provide IBR 
training before the start of the IBR; Review contract documentation 
before baseline discussions with the control account manager; Conduct 
control account manager and senior manager discussions; Provide policy 
and interpretation of EVM system guidance; Provide technical direction 
and leadership emphasizing the importance of thorough cost, schedule, 
and technical integration of contract work; Ensure that all action item 
reports are tracked in the EVM risk database; Provide an assessment of 
risk based on the prescribed risk evaluation criteria and all program 
risk based on the defined risk evaluation criteria; Help complete all 
IBR documentation; Help prepare the IBR out-brief. 

Source: NAVAIR. 

[End of table] 

Based on experience, NAVAIR officials have told us that the IBR team’s 
review of the data before the event is the key to a successful IBR. 
Thorough preparation enables the team to dig deeply into the data to 
determine whether the plan contains issues and risks. 

IBR Team Training: 

In the weeks leading up to the IBR event, the IBR team typically 
participates in a day of training tailored to the subject program that 
includes the: 
 
* basic IBR fundamentals and a review of the methodology to be followed 
on the subject program; 
 
* detailed roles and responsibilities of team members; 

* guidance on baseline discussions with control account managers and 
the key documents that should be referenced (and sample data traces 
across these documents) to see how work is defined, baselined, 
measured, and scheduled; 

* results from recent schedule risk assessments, management system 
assessments, and major subcontractor IBRs (elements of the IBR NAVAIR 
performed before the IBR event to better understand the current risks 
in the baseline and focus on the program areas that align with these 
risks during the IBR); 

* IBR out-brief contents; and; 

* evaluation criteria, tools, and forms expected to be used during 
execution (see exhibits B–D at the end of this appendix for select 
NAVAIR discussion forms and sample questions). 

IBR Execution: 
 
The duration of the IBR is based on program and contract scope, 
complexity, and risk; typically, it lasts several days. Exhibit A at 
the end of this appendix, for example, is the agenda of activities for 
an actual 4-day IBR for Program X, which had just implemented an 
overtarget baseline on its prime contract. (Specific references to the 
actual program and contractor have been removed.) 

This IBR was kicked off with the contractor’s overview briefing on its 
internal management process—risk management, baseline establishment and 
maintenance, scheduling, EVM methods and tools, work authorization, and 
standing management meetings with control account managers and 
subcontractors. The process overview briefing was followed by the 
team’s discussion with the contractor program manager. At the 
conclusion of this discussion, team members wrote up their observations 
and findings in their individual assessments. Once the individual 
assessments were completed (see exhibit B), the team came together to 
complete a consensus assessment (exhibit B). It is during these 
consensus meetings that action item reports are typically assigned to 
individual members for drafting, where applicable (see exhibit C). This 
same methodology is applied to the control account manager discussions, 
as well. 

Formal Out-brief of IBR Results: 

Figure 61 shows the team’s summary assessment of the risks in Program 
X, based on the amount of remaining work, the level of severity of the 
risks (many of which affected tasks found on the integrated master 
schedule’s critical path), and the government risk evaluation criteria 
that were applied. The most critical risks identified were related to 
the prime contractor’s management of its major subcontractors. During 
the out-brief presentation, the government program manager noted 
concerns about the prime contractor’s current practices in overseeing 
selected subcontractors because of ongoing poor quality and late 
receipt of key deliverables—some of which affected the program’s 
critical path. The other critical risks were associated with the 
specific earned value metrics applied to measure progress in software 
development. 

Figure 61: IBR Team’s Program Summary Assessment Results for Program X: 

[Refer to PDF for image: illustration] 

Program level rollup of risk: 
1. Based on remaining work; 
2. Level of severity (risk level and schedule critical path taken into 
account); 
3. Also considered government risk evaluation criteria. 
 
Management processes: High; 
Subcontractor management processes and lack of valuable software earned 
value performance measures. 

Resources: Medium; 
Have key vacancies in staffing at this time; Didn’t staff to plan in 
software area; Low confidence in personnel turnover plans and risk of 
losing expertise. 
 
Cost: Medium; 
Cost impact of subcontractor (prime using management reserve to 
mitigate poor performing contractor); software continues to be a risk, 
but expect to be less than 10% of program at this time. 

Schedule: Medium; 
Activities are pushing out from original baseline dates; however, the 
program appears to be overcoming difficulties with the critical path 
driver subcontractor; Will continue to stress importance of near-term 
schedule. 

Technical: Medium; 
Hardware and software integration plan was not in place; Few identified
opportunities (budget and schedule) available to mitigate potential risk
areas. 

Source: NAVAIR. 

[End of figure] 

Figure 62 shows a summary of the detailed assessment results for the 
major system development areas under evaluation during the IBR. 

Figure 62: Program X IBR Team’s Assessment Results by Program Area: 

[Refer to PDF for image: illustration] 

CAM/Area: Subcontractor; 
BCWS remaining June 2007: $49K; 
Critical path June 2007: SIL TRR; 
Management process: High; 
Resources: High; 
Cost: High; 
Schedule: High; 
Technical: High. 

CAM/Area: Subcontractor; 
BCWS remaining June 2007: $14,262K; 
Critical path June 2007: SDD; 
Management process: High; 
Resources: Low; 
Cost: Medium; 
Schedule: High; 
Technical: Low. 

CAM/Area: Prime contract/Control account manager; 
BCWS remaining June 2007: [Empty]; 
Critical path June 2007: SIL; 
Management process: Low; 
Resources: Medium; 
Cost: Low; 
Schedule: Medium; 
Technical: Low. 

CAM/Area: Prime contract/Control account manager; 
BCWS remaining June 2007: [Empty]; 
Critical path June 2007: &; 
Management process: Low; 
Resources: Medium; 
Cost: Low; 
Schedule: Low; 
Technical: Low. 

CAM/Area: Prime contract/Control account manager; 
BCWS remaining June 2007: $7,299K; 
Critical path June 2007: SDD; 
Management process: Low; 
Resources: High; 
Cost: Medium; 
Schedule: Medium; 
Technical: Medium. 

CAM/Area: Subcontractor/software; 
BCWS remaining June 2007: $1,344K; 
Critical path June 2007: NO; 
Management process: Medium; 
Resources: Medium; 
Cost: High; 
Schedule: High; 
Technical: High. 

CAM/Area: Subcontractor; 
BCWS remaining June 2007: $3,629K; 
Critical path June 2007: NO; 
Management process: High; 
Resources: Low; 
Cost: Low; 
Schedule: Low; 
Technical: Low. 

CAM/Area: Prime contractor/logistics; 
BCWS remaining June 2007: $2,658; 
Critical path June 2007: NO; 
Management process: Low; 
Resources: Low; 
Cost: Low; 
Schedule: Low; 
Technical: Low. 

CAM/Area: Prime contractor/PM; 
BCWS remaining June 2007: $2,906K; 
Critical path June 2007: SIL TRR & SDD
Management process: Low; 
Resources: Medium; 
Cost: Low; 
Schedule: Low; 
Technical: Low. 

Source: NAVAIR. 

[End of figure] 

Figure 63 is an example from the detailed findings of a particular 
program area. This final assessment represents the team’s consensus of 
overall risk ratings by IBR risk category, based on the agreed-on 
observations and findings from the control account manager discussions. 
Each corrective action corresponds to a specific action item report. 

Figure 63: Program X IBR Team’s Detailed Assessment Results for an 
Individual Program Area: 

[Refer to PDF for image: illustration] 

Assessment area: Management process; 
Score: High. 

Assessment area: Resources; 
Score: Low. 

Assessment area: Cost; 
Score: Medium. 

Assessment area: Schedule; 
Score: High. 

Assessment area: Technical; 
Score: Low. 

Overall assessment: 

* Lack of integration of the subcontractor schedule with the program 
IMS is a big concern; 

* Communication processes between subcontractor and prime contractor 
must improve; 

* Timeliness of EV and technical information transfer is inadequate. 

Dialogue highlights: Strengths: 

* Work allocated to subcontractor is well defined and understood. 

Risks and issues: 
 
* Few identified opportunities remain to mitigate potential technical 
or cost risk, no management reserve; 
 
* Special management attention/monitoring of subcontractor critical 
path items. Touch points between prime contractor and subcontractor/gov 
need to be coordinated to reduce any potential schedule risk. 

* Unable to assess total cost risk due to lack of subcontractor 
schedule with the program IMS. 

* Management process issues:

- Subcontractor PM management reserve decisions; 
- Prime subcontractor mgmt processes have yet to be validated; 
- Timing of subcontractor IMS submissions; 
- Lack of a management process to use management reserve; 
- Lack of cost and schedule integration. 

* Communications between team members and the data sharing across 
the contract and subcontract especially during A/C installation and 
test. 
 
Corrective actions: 
 
* Develop and integrate an IMS with subcontractor (CAR#4); 
 
* Although resources are assessed as low risk, identification of 
subcontractor scheduler is a high priority; 

* Prime contractor formally documents emerging processes (EAC, 
schedule inputs, program management, finalize restructure, and prime 
contractor evaluation of EVM data) 15 Oct goal to revisit (CAR#6); 

* Request a subcontractor org chart with contact information (data 
request #5); 
 
* Continue A/C induction discussion…to be continued as an action out 
of IBR (CAR#8); 
 
* Concern that prime contractor is not using subcontractor EVM data to 
manage subcontractor effort, recommend more detailed analysis of 
subcontractor CPR prior to submitting to gov (CAR#6). 

Source: NAVAIR. 

[End of figure] 

Post IBR Activities: 

After the IBR event, the IBR team is responsible for developing the 
final action item reports, which are then formally submitted to the 
contractor, who is given about a month to respond to the team. The 
team reviews the contractor’s responses; sometimes these require 
further negotiation on the closure of the action item reports. NAVAIR 
determines whether the contractor has responded sufficiently and the 
original risk has been addressed. NAVAIR closes the action item, 
requests further information or clarification, or decides to introduce 
the risk into the risk management plan. A decision to include the 
program’s risk in the NAVAIR risk database is based on several factors, 
including the contractor’s inability to address the item fully, the 
lack of a clear action to take at the time, or the realization that it 
is a risk that the program has to manage. In some cases, IBR reports 
can remain open for a significant amount of time. 

NAVAIR officials have told us that, in their experience, the most 
difficult parts of the IBR are often closing outstanding action item 
reports and keeping the program team focused on the issues long after 
the review itself has occurred. Overcoming the perception that the IBR 
is just a one-time event is a continuing challenge. 

Based on lessons learned, our experts noted that the critical factor 
in closing IBR action items is ensuring that these items receive 
ongoing attention from both the government and the contractor program 
managers. The most effective way to do this is to incorporate the 
action items into the program’s business rhythm, usually monthly, 
including the monthly program management review. Any and all IBR action 
items captured in the out-brief and supporting documentation should go 
directly into the contractor’s internal action item database for 
disposition and closure, with the appropriate government approvals. The 
monthly program management review should be used to track the status of 
the IBR actions. Our experts noted that items kept on a separate list 
outside the contractor’s business rhythm, particularly issues that are 
hard to tackle, simply will not get done. Also, action items that wait 
long enough may even be overtaken by events. 

In summary, the experts highlighted several key points from this 
lesson learned: 
 
* use the contractor’s action item tracking database; 

* load the actions properly into it, assign appropriate 
responsibilities, receive status updates monthly, and obtain 
appropriate government reviews and approval to close them; and; 

* hold the contractor accountable with the award fee process, as 
applicable. 

Exhibit A: 
Program X: Subject Contractor: 
Integrated Baseline Review: 
August 20–23, 2007:
 
Monday, August 20, 2007: 
Conference Room A: 
1400–1430 Opening Comments; 
1430–1600 Program X Overview Management Systems; 
1600–1700 Contractor Program Management, Program X, Program Management 
Discussion; 
1700–1800 Contractor Program Management Process Write-Up. 

Tuesday, August 21, 2007: 
Conference Room B: 
0800–0830 Government IBR Team Only; 
0830–1000 Test Team, Control Account Managers 1 and 2; 
1000–1030 Test Team Write-Up; 
1030–1045 Break; 
1045–1200 Major Subcontractor A, Control Account Managers 3 and 4; 
1200–1300 Lunch; 
1300–1330 Major Subcontractor A Write-Up; 
1330–1500 Logistics and Technical Data, Control Account Manager 5; 
1500–1530 Logistics and Technical Publication Write-Up; 
1600–1630 Informal Out-brief (to Contractor) 
1630–1730 Complete Formal Out-brief for Days 1 and 2, Control Account 
Managers. 
 
Wednesday, August 22, 2007: 
Conference Room B: 
0800–0830 Government IBR Team Only; 
0830–1030 Systems Engineering and SIL Support, Control Account Managers 
6 and 7; 
1030–1130 Systems Engineering and SIL Support Write-Up; 
1130–1230 Lunch; 
1230–1400 Major Subcontractor B, Control Account Managers 3 and 8; 
1400–1430 Major Subcontractor B Write-Up; 
1430–1500 Break; 
1500–1630 Major Subcontractor C, Control Account Managers 3 and 8; 
1630–1700 Major Subcontractor C Write-Up; 
1700–1730 Informal Out-brief (to Contractor). 

Thursday, August 23, 2007: 
Conference Room B: 
0800–1130 Government IBR Team: Documentation Wrap-Up & Preparation of 
Formal Out-brief; 
1130–1230 Lunch; 
1230–1330 IBR Schedule Reserve; 
1400 Deliver Formal Out-brief to Contractor. 
 
[End Exhibit A] 

Exhibit B: 
IBR Discussion Assessment Forms: 

In using assessment forms to frame discussions with control account 
managers, evaluators should keep in mind three fundamental objectives: 
(1) to achieve the technical plan, (2) to complete the schedule, and 
(3) to ensure the sufficiency and adequacy of resources and their time-
phasing. These objectives make up the core of the IBR. 

Individuals with experience conducting IBRs should be present for each 
discussion. Without a complete understanding of the baseline plan, the 
results of an IBR are of negligible value. Our experts discouraged 
overreliance on checklists or questionnaires, since these may result in 
“failure to see the forest for the trees.” With these three objectives 
in mind, the forms presented here can be useful for focusing 
discussions. 

IBR Discussion Assessment Form from NAVAIR: 

Log No. 
Team: 
Date: 

1. Manager: 
Area of responsibility: 

2. Technical scope (statement of work): 
Complete identification, definition, and flow down; 
Consistency with contract requirements; 
Assignment of responsibility, authority, and accountability. 

3. Schedules: 
Period of performance; 
Realistic planned durations; 
Logical sequence of work planned; 
Consistency with intermediate/master schedule and contract milestones; 
Significant interdependencies, interfaces, and constraints. 

4. Cost and resource risk: 
Basis of estimate[Footnote 98]; 
Budget adequacy and reasonableness (time phasing, levels, mix, type); 
Resource availability; 
Provisions for scrap, rework, retest, or repair. 

5. Management process risk: 
Integrated cost, schedule, and technical planning; 
Status of EVM system acceptance. If not accepted, EVM specialists 
should assess the adequacy of key EVM concepts: 
- Baseline change control; 
- Reliability and timeliness of management and performance data; 
- EAC determination and maintenance process; 
- Subcontract management; 
- Objectively planned earned value methods correlated with technical 
progress; 
Objective determination of progress; 
Methods correlate with technical achievement. 

6. Brief summary of discussion:

7. Action item report prepared? 

An Alternative Discussion Form: 

Date: 
Time: 
Program: 
Control account Managers: 
WBS (or CLIN): 
Attendees: 

Documents reviewed: 
Statement of work: 
Organizational breakdown structure: 
Work authorization document: 
Integrated master schedule: 
Entrance and exit criteria: 
Resource planning: 
WBS and WBS dictionary: 
Responsibility assignment matrix: 
Control account plans: 
Critical path analyses: 
Assumptions: 
Other: 

Brief summary of subjects discussed: 

Identification of risks: 

Were any action item forms prepared? 
Yes: 
No: 
If yes, brief description of actions: 

Brief statement of strengths, weaknesses, conclusions: 
5 = Achievable - risks adequately identified; 
4 = Probably achievable - risk mitigation effort required in one or 
more minor areas; 
3 = Potentially achievable — additional risk mitigation effort required 
in a number of areas; 
2 = Risk mitigation borderline — significant risk mitigation effort 
required; may not be achievable; 
1 = Not achievable — not achievable as currently planned; risks 
significantly affect achieving objectives. 
 
Plan’s achievability: 
Technical: 
Schedule: 
Cost: 

Planned follow-up: 

Signatures: 
Government discussion lead: 
Contractor discussion participant: 

[End Exhibit B] 

Exhibit C: 
IBR Action Item Forms: 

NAVAIR IBR Action Item Report: 
 
WBS/Control account: 
Log no.: 
Date: 
Submitted by:

Subject of issue or observation: 

Discussion of root problem and cause. (Provide impact assessment. 
Quantify problem and impacts where possible. Provide recommended 
actions and exit criteria for resolution. Attach exhibits if 
applicable. Provide reference to control account or work package 
number). 

Contractor’s response. (Address root cause of the problem, impact, 
corrective and preventive action plan; identify dates and POC. Identify 
exit criteria for corrective action). 

Subteam leader signature: 

Team leader signature:

An Alternative Action Item Form: 

Date: 
Time: 
Program: 
Control account Manager: 
WBS (or CLIN): 
Issue:
Actions required:
Criteria for success:
Estimated completion date: 
Point of contact: 
Signatures: 

Government program manager: 
Control account manager or functional lead: 

[End Exhibit C] 

Exhibit D: 
 
Sample IBR Discussion Questions: 

The following questions were used in the NAVAIR IBR training. They are 
intended only as a reference guide. NAVAIR expects its IBR teams to 
select and tailor questions to a program’s condition (that is, new 
program versus overtarget baseline), issues, and risks. 

Organization: 

To introduce the IBR team, identify (graphically if possible) the 
location of the integrated process team (IPT) in the program (that is, 
its organizational breakdown structure) relative to other IPTs. 
Similarly, identify the control and schedule accounts assigned to the 
IPT and which ones the IPT will discuss in answering the remaining 
questions. Include your areas of responsibility in the program, whom 
you report to, your responsibilities toward this person, and how you 
keep this person informed of status and progress. 

What is the manager’s scope of effort? 

The manager should be able to refer to a statement of work paragraph, a 
contract WBS narrative, or a work authorization document. 

* How many people work for you and what do they do? 

* How do they report to you? How do you know the performance status of 
their work? 

Is all the work planned into control accounts? 
 
The statement of work defines the effort. The contract WBS provides 
specifics, such as work definition. The work authorization and change 
documentation should show information such as dollars and hours, period 
of performance, and the scope of work and any changes. 

Are all elements of the scope planned? 

The manager should be able to show the scope of work broken down into 
work packages and the budgets and estimates to complete (ETC) 
associated with each work package and planning package. The sum of the 
work packages and planning packages should equal the control account 
budget. The actual costs plus the estimates to complete should equal 
the estimate at completion. 
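
A minimal Python sketch of the two arithmetic checks implied by this 
paragraph follows, using hypothetical figures: the work package and 
planning package budgets should sum to the control account budget, and 
the actual costs to date plus the estimate to complete should equal the 
estimate at completion. 

import math

# Hypothetical control account figures, for illustration only.
work_package_budgets = [120.0, 250.0, 75.0]
planning_package_budgets = [155.0]
control_account_budget = 600.0      # budget at completion (BAC)

actual_cost_to_date = 310.0         # ACWP
estimate_to_complete = 340.0        # ETC
estimate_at_completion = 650.0      # EAC reported by the manager

# Check 1: work packages plus planning packages equal the control account budget.
budgets_reconcile = math.isclose(
    sum(work_package_budgets) + sum(planning_package_budgets),
    control_account_budget)

# Check 2: actual costs plus estimate to complete equal the estimate at completion.
eac_reconciles = math.isclose(
    actual_cost_to_date + estimate_to_complete, estimate_at_completion)

print(budgets_reconcile, eac_reconciles)   # True, True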

What are the manager’s resources for assigned work? 

Baseline resources should be identified in the work authorization 
document, and changes in scope, cost, or schedule requirements should 
be reflected in change request documentation. 

Are the resources required to accomplish the current plan consistent 
with the original plan? 

Review the basis of estimate for reasonableness. Does the manager 
believe that the budget (or the ETC, if different from the BAC) is 
sufficient to perform the work? 

Elicit a range of possibilities, low and high, that represents as 
clearly as possible the complete judgment of the control account 
manager, as follows: 

* The adequacy of the planned and approved baseline to achieve the 
approved scope. 

* Risks and opportunities included or not included in the baseline. 
What are the major risks or challenges remaining to accomplish the 
control account manager’s or subcontractor’s responsibilities?

- Ask the control account manager to describe why it is a risk or 
opportunity. 
- Exchange ideas about risks and opportunities. 
- Establish the likelihood of the risk or opportunity event. 

* Ask the control account manager to explain the risk mitigation plan 
emphasizing risk mitigation milestones and associated risk performance 
measurement. 

* Determine the impact (cost and schedule) for medium and high risks. 

* Ask the control account manager to consider extreme values for his 
effort (optimistic or pessimistic). 

* Document results on the risk assessment form. 

Authorization: 
What is the status of work authorization? 

Give an example of work authorization documentation. 

Ask the control account manager to show his work authorization 
documents, which define the work to be accomplished. Ask the control 
account manager to relate these requirements to the work remaining 
within his team or WBS element when the cost to complete was analyzed 
or developed. 

Budget: 

Discuss how the control account manager’s budget was derived. 

How did you arrive at your budget figures? Do you have the backup or 
worksheets you derived your estimates from? 

Was there a negotiation process for your budgets after contract award? 
Is your budget adequate? 

How were you advised of budget? Of tasks? Of schedule? Of changes? 

Control Account: 

How many control accounts are you responsible for and what is their 
total dollar value? May we see a control account plan? 

How are your budgets time-phased, and is this reflected in your control 
account plan? 

How do you status your accounts? How does the performance status of 
your accounts get into the system? 

Do you have any LOE accounts? Please describe their tasks. 

Do you have any control accounts that contain a mixture of LOE and 
discrete effort? What is the highest percentage of LOE within an 
account that also contains discrete effort? 

How do you open and close a control account? 

What does your computer run show when a control account is opened or 
closed? 

What reports do you receive that give you cost and schedule progress of 
your control accounts? 

Work Package: 

Assess whether work is measured objectively and whether LOE is 
appropriate for the nature of the work. 

How do your work package activities relate to the master program 
schedule or underlying intermediate supporting schedules? Support your 
answer with examples. 

How was the budget time-phased for each work package—i.e., what was the 
basis for the spread? Is the time-phased budget related to planned 
activities of the work package? 

For the example control account, what is your total (IPT) budget 
amount? Of this total budget amount, how much is distributed to work 
packages and how much is retained in planning packages? Do you have an 
undistributed budget and management reserve account? 

Do you use interim milestones on any of your work packages to measure 
BCWP? 

How do you define a work package? How many work packages do you have 
responsibility for? 

What options does your EVM system provide for taking BCWP? 

Do your control account plans indicate the method used in taking BCWP? 
How do you open and close work packages? 

Who prepares the budgets for your work packages? 

Demonstrate how you earn BCWP in the same way that BCWS was planned. 

Can you provide examples of how you measure BCWP or earned value for 
work-in-process? 

Planning Package: 

What is the procedure and time period for discretely developing work 
packages from the planning packages? 

Are your planning packages time-phased? 

Schedule: 

What are your schedule responsibilities? 

What schedule milestones did the manager use in planning the cost 
accounts? Ask the manager to show the team the schedule milestones used 
in planning the cost accounts. How does the current schedule compare 
with the baseline schedule? 

The manager should discuss: 
 
* relationships of work packages to milestones, 
* schedule interfaces and constraints, 
* staffing levels to support schedule milestones, 
* relationships to other organizations or IPTs, 
* schedule impacts related to other work or organizations, and, 
* level-of-effort tasks that support the schedule. 

How did the manager time-phase the work to achieve the schedule? All 
work should be logically planned in compliance with the SOW and 
schedule. 

Has the manager considered risks in developing the plan? 

Has the manager adequately planned and time-phased resources to meet 
the plan? 

Do you directly support any major master or intermediate schedule 
milestones? 

Do you have detailed schedules below the work package? How do detailed 
schedules below the work package support the work package schedules? 

How are you informed by other organizations or IPTs of changes in their 
output that may affect your control account schedules (horizontal 
trace)? 

Demonstrate that the progress reflected on the master program schedule 
or underlying intermediate schedules correlates to the relative 
progress reflected in the EVM system. 

Change Control: 

Has the budget baseline had changes or replanning efforts? 

Have you had any changes to your accounts? (Give example of how these 
are handled.) 

Have you had any management reserve or undistributed budget activity? 

Do you have any work originally planned for in-house that was off-
loaded? How was this accomplished? 

Earned Value: 

What methods and tools does the manager use in administering the plan? 

Examples are weekly or monthly earned value reports; master, 
intermediate, and detail schedules; periodic meetings; and independent 
assessments of technical progress. Determine how changes are 
incorporated. Evaluate the effect of changes on performance measurement 
information. Assess whether changes accord with the EVM system 
description. 

What formal training have you had in EVM? 

Estimate at Completion and Cost-to-Complete Subcontractor: 

Are you responsible for any subcontracts? How do you monitor their 
performance? How do you take BCWP? 

How are subcontracts managed? Ask the subcontracts manager to describe 
the process for managing subcontractor earned value. 

What subcontracts are your responsibility? What types of subcontracts 
exist or are planned for negotiation (e.g., fixed price vs. cost 
reimbursement)? 

* What are the major challenges or risks to the subcontractor in 
accomplishing program responsibilities?

* Are these items tracked by the program management office or 
functional manager in a risk register or plan? 

* What subcontractor technical, schedule, and cost reports must be 
submitted to you or your team? 

* What is your total budget (for each subcontract and the corresponding 
control accounts)? How is profit or fee included in your budget? 

* How was the budget established? Does it reflect an achievable value 
for the resources to fully accomplish the control account scope of 
effort? 

* What rationale was used to time phase the budget resources into 
monthly or weekly planning packages, tasks, work packages, or summary 
activities? 

* Are the time-phased budget resources consistent with your program 
master schedule? Show the trace from your control account to 
intermediate or master schedules. 

* When are you required to plan planning packages or summary activities 
in detail? What schedule document or system is used to develop detail 
planning for your control account? 

* How do you know that the work within your control accounts to be 
performed by subcontractor has been properly planned? 

* How do you check the status and performance of work on your control 
account by a subcontractor? 

* How are actual costs recorded against your cost account? 

* What techniques are available for determining earned value? Explain 
the techniques you are using for this control account. 

* How and when is the risk assessment or risk management plan updated 
for technical, schedule, and cost risk items affecting your control 
account? 

* How and when is the actual and forecast schedule update provided for 
your control account effort? 

* Are variance analysis thresholds or requirements established for 
reporting technical, schedule, or cost variances to planned goals 
established for your control accounts? Do you informally or formally 
report the cause of variance, impact, or corrective action for these 
variances? 

* What document authorizes you to begin work on a subcontract? 

* For these selected work packages, what specific outputs, products, or 
objectives are to be accomplished? 

* What specifically do you need from other control account managers to 
generate subcontractor outputs or products? How do you monitor progress?

* Who specifically needs the subcontractor outputs or products to 
perform their program functions? How do you status others on the 
progress of your outputs to them? 

* Specifically, what technical items produce the greatest risk to 
achieving technical, schedule, or cost goals? Are these items reviewed 
as part of a risk assessment, management plan, or other reporting tool 
to your boss or the program management office? 

* How do you determine whether the reported cost variance stems from 
subcontractor effort or company overhead rate? 

Have material budgets been planned? 

Is material tracked before delivery? 

How do you track material when deliveries are late? 

When is BCWP or earned value taken on material? 

Analysis: 

Do you have any variance thresholds for your control accounts? 

What are the variance thresholds for your control accounts? 

How do you know when you have exceeded a threshold? 

Do you have samples of any variance analysis reports? Do they show a 
statement of the problem, the variance, the cause and impact, and the 
proposed corrective action? 

Who receives your variance reports? What action is taken on the reports?
Which reports do you use most frequently? Why? 
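
The following minimal Python sketch applies the standard earned value 
variance definitions (cost variance equals BCWP minus ACWP; schedule 
variance equals BCWP minus BCWS) to show how a control account’s 
variances might be screened against reporting thresholds. The threshold 
values, account figures, and percentage bases are illustrative 
assumptions, not requirements from this guide. 

# Hypothetical cumulative control account data, in thousands of dollars.
bcws, bcwp, acwp = 500.0, 450.0, 520.0

dollar_threshold = 25.0    # flag variances larger than $25K
percent_threshold = 0.05   # or larger than 5 percent of the base value

cost_variance = bcwp - acwp        # negative value indicates a cost overrun
schedule_variance = bcwp - bcws    # negative value indicates work behind schedule

def exceeds_threshold(variance, base):
    # Flag the variance if it breaches either the dollar or the percentage threshold.
    return (abs(variance) > dollar_threshold
            or abs(variance) / base > percent_threshold)

if exceeds_threshold(cost_variance, bcwp):
    print("Cost variance of", cost_variance, "requires a variance analysis report")
if exceeds_threshold(schedule_variance, bcws):
    print("Schedule variance of", schedule_variance, "requires a variance analysis report")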

Not Categorized: 

How are you reporting labor, material, and other direct costs? 

Has your IPT effort been affected by any directed contractual change? 
When did you receive authorization to proceed with the change and how 
did your IPT incorporate the change in its planning (schedule and 
budget time phasing)? 

Demonstrate that the current planning for your IPT's product delivery 
and services supports program IPTs and contract delivery commitments. 

What changes have been made to the control account planning (technical 
definition of scope, schedule, budget resources, ETCs)? 

* What documents are involved in a change to the control accounts’ 
scope of work, schedule, budget, or ETC? 

* Did the control account manager rephase or replan work? In-process 
work? Completed work? Unopened work packages? Make current period or 
retroactive changes? 

* Did the control account manager transfer budget between control 
accounts? 

* How have contract changes or other changes been incorporated into the 
control account? 

[End of Appendix 14] 

Appendix 15: Common Risks To Consider In Software Cost Estimating: 

This appendix lists common risks in software acquisition and offers 
some possible risk containment actions to mitigate certain effects. It 
is organized by key area of software acquisition: (1) requirements, 
(2) design, (3) test and evaluation, (4) technology, (5) developer, (6) 
cost or funding, (7) monitoring, (8) schedule, (9) personnel resources, 
(10) security and privacy, (11) project implementation strategy and 
plans, (12) specific commercial off-the-shelf risks, (13) business 
risk, and (14) management. 
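
Before the detailed tables, the following minimal Python sketch (with 
field names chosen for illustration, not taken from this guide) shows 
how one entry from these tables might be captured as a record in a 
program risk register; the example values are drawn from the 
requirements table below. 

from dataclasses import dataclass
from typing import List

@dataclass
class SoftwareRisk:
    area: str                    # key area of software acquisition, e.g., "Requirements"
    risk: str                    # the risk named in the table
    potential_effects: List[str] # consequences if the risk is realized
    phases_affected: List[str]   # phases checked in the table for this risk
    containment_action: str      # possible risk containment action

example = SoftwareRisk(
    area="Requirements",
    risk="Uncertain requirements",
    potential_effects=["Rework", "Unsuitable product", "Cost and schedule increases"],
    phases_affected=["Design", "Code", "Test", "Acceptance testing"],
    containment_action="Define requirements before proceeding to the next stage; "
                       "prototype; hedge cost and schedule for risk.")

print(example.area, "-", example.risk)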

1. Requirements: 

Area and risk: Does not reflect user needs; 
Potential effect: System rejection; program operations adversely 
affected; cost approval increases; rework; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Empty]; 
Potential effect on Documentation: [Empty]; 
Possible risk containment action: Those affected manage requirements, 
with review and approval. 

Area and risk: Too many or too restrictive design and implementation 
constraints; 
Potential effect: Infeasibility; increased cost to meet requirements, 
poor design; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Empty]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Empty]; 
Possible risk containment action: Rewrite and review requirements to 
address functions embedded in the constraints; perform cost-benefit 
analysis of constraints and remove unnecessary or costly constraints. 

Area and risk: Uncertain requirements; 
Potential effect: Rework; unsuitable product, cost, and schedule 
increases; inaccurate cost estimates; possibly infeasible end product; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Define requirements before proceeding 
to next stage; prototype; hedge cost and schedule for risk; divide end-
product in segments and prioritize for implementation. 

Area and risk: Unstable; 
Potential effect: Rework; unsuitable product; cost and schedule 
increases; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Limit size of implementation segments;
prototype. 

Area and risk: Untraceable; 
Potential effect: Design does not meet requirements; rework; cost and 
schedule increases; unreliable testing; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Establish and maintain traceability 
to products and tests. 

2. Design: 

Area and risk: Does not achieve performance objectives; 
Potential effect: Increased program operating costs; 
Potential effect on Design: [Empty]; 
Potential effect on Code: [Empty]; 
Potential effect on Test: [Empty]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Empty]; 
Potential effect on Upgrades: [Empty]; 
Potential effect on Documentation: [Empty]; 
Possible risk containment action: Review design for alternatives; 
establish performance objectives for acceptance; simulations. 

Area and risk: Does not meet requirements; 
Potential effect: System rejection; rework; 
Potential effect on Design: [Empty]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Empty]; 
Potential effect on Documentation: [Empty]; 
Possible risk containment action: Establish traceability to 
requirements. 

Area and risk: Infeasibility; 
Potential effect: Product does not work;
Potential effect on Design: [Empty]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Empty]; 
Possible risk containment action: Review design, including feasibility 
analysis. 

Area and risk: Not cost effective; 
Potential effect: Increased maintenance costs; 
Potential effect on Design: [Empty]; 
Potential effect on Code: [Empty]; 
Potential effect on Test: [Empty]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Empty]; 
Potential effect on Training: [Empty]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Empty]; 
Possible risk containment action: Analyze design for effectiveness and 
other design alternatives before coding. 

Area and risk: More training needed; 
Potential effect: Increased program operating cost; 
Potential effect on Design: [Empty]; 
Potential effect on Code: [Empty]; 
Potential effect on Test: [Empty]; 
Potential effect on Acceptance testing: [Empty]; 
Potential effect on Operations and maintenance: [Empty]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Empty]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Review design for alternatives. 
 
3. Test and evaluation: 
 
Area and risk: Does not address operating environment; 
Potential effect: Poor system performance; increased operating and 
maintenance costs; 
Potential effect on Design: [Empty]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Empty]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Empty]; 
Potential effect on Upgrades: [Empty]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Perform operational capability 
testing. 

Area and risk: Inadequate acceptance testing; 
Potential effect: Premature acceptance; increased operating and 
maintenance costs; 
Potential effect on Design: [Empty]; 
Potential effect on Code: [Empty]; 
Potential effect on Test: [Empty]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Plan and allow time for acceptance 
testing; establish traceability to design and requirements. 

Area and risk: Insufficient time to fix; 
Potential effect: More acceptance testing needed; 
Potential effect on Design: [Empty]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Increase schedule. 

Area and risk: Insufficient time to test thoroughly; 
Potential effect: More acceptance testing needed; 
Potential effect on Design: [Empty]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Empty]; 
Potential effect on Training: [Empty]; 
Potential effect on Upgrades: [Empty]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Require test plans, reports, and 
compliance matrixes as deliverables; increase schedule to allow for 
adequate testing. 

Area and risk: Test planning not begun during initial development; 
Potential effect: Increased costs; inadequate testing; rework; 
Potential effect on Design: [Empty]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Empty]; 
Potential effect on Upgrades: [Empty]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Begin acceptance test planning 
immediately after requirements are baselined; establish traceability to 
requirements. 

Area and risk: Test procedures do not address all major performance and 
reliability requirements; 
Potential effect: Poor system performance; poor product quality; more 
acceptance testing needed; negative effect on program operations; 
rework; 
Potential effect on Design: [Empty]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Empty]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Establish traceability to all 
requirements; include performance and reliability requirements as 
acceptance criteria; require test plans, reports, and compliance 
matrixes as deliverables. 

Area and risk: Various levels of testing are not performed (system,
integration, unit); 
Potential effect: Poor system performance; more acceptance testing 
needed; poor product quality; 
Potential effect on Design: [Empty]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Empty]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Empty]; 
Potential effect on Upgrades: [Empty]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Establish traceability to all 
requirements; include testing requirements in contract; require test 
plans, reports, and compliance matrixes as deliverables. 

4. Technology: 

Area and risk: Availability; 
Potential effect: Needed functionality delayed; increased program 
operation costs; business disruption; 
Potential effect on Design: [Empty]; 
Potential effect on Code: [Empty]; 
Potential effect on Test: [Empty]; 
Potential effect on Acceptance testing: [Empty]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Empty]; 
Possible risk containment action: Devise alternate business processes; 
consider other technology. 

Area and risk: Potential advances result in a less than optimally cost-
effective system; 
Potential effect: Increased program operating and operations and 
maintenance costs; 
Potential effect on Design: [Empty]; 
Potential effect on Code: [Empty]; 
Potential effect on Test: [Empty]; 
Potential effect on Acceptance testing: [Empty]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Empty]; 
Potential effect on Upgrades: [Empty]; 
Potential effect on Documentation: [Empty]; 
Possible risk containment action: Consider replacing; change business 
process. 

Area and risk: Potential changes make other components obsolete; 
Potential effect: Increased operations and maintenance costs; program 
disruption; 
Potential effect on Design: [A]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Empty]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Periodic review of architecture and 
of changes in technology field; regular cycle upgrade. 

Area and risk: Relies on complex design; 
Potential effect: Increased program operating costs (additional 
training); reduced cost-benefit; increased operating and maintenance 
costs; 
Potential effect on Design: [Empty]; 
Potential effect on Code: [Empty]; 
Potential effect on Test: [Empty]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Empty]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Simplify and review design; conduct 
and compare parallel design activities; prototyping. 

Area and risk: Technology becomes obsolete or is abandoned; 
Potential effect: Program disruption; needed functionality unavailable; 
new acquisition required; 
Potential effect on Design: [A]; 
Potential effect on Code: [Empty]; 
Potential effect on Test: [Empty]; 
Potential effect on Acceptance testing: [Empty]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Empty]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Scheduled upgrades; periodic review 
of changes in technology field. 

Area and risk: Unproven or unreliable; 
Potential effect: Program disruption; increased operating and 
maintenance costs; 
Potential effect on Design: [Empty]; 
Potential effect on Code: [Empty]; 
Potential effect on Test: [Empty]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Operational capability demonstrations
and testing; delay acquisition; use other technology. 

5. Developer: 

Area and risk: Ability to produce item; poor track record for costs and 
schedule; key personnel turnover; 
Potential effect: Needed functionality unavailable or delayed; increased
financial risk to acquirer; increased cost, reduced quality, delays; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Terminate contract; performance-based 
contract; limited task orders; EVM or similar system. 

6. Cost or funding: 

Area and risk: Funding type does not match acquisition strategy; 
Potential effect: Uncertain financing; changes in project direction; 
Potential effect on Design: [B]; 
Potential effect on Code: [B]; 
Potential effect on Test: [B]; 
Potential effect on Acceptance testing: [B]; 
Potential effect on Operations and maintenance: [B]; 
Potential effect on Program operations: [B]; 
Potential effect on User acceptance: [B]; 
Potential effect on Training: [B]; 
Potential effect on Upgrades: [B]; 
Potential effect on Documentation: [B]; 
Possible risk containment action: Plan new acquisition strategy to map 
to funding type. 

Area and risk: Marginal performance capabilities incorporated at 
excessive costs (cost-benefit tradeoffs not performed); 
Potential effect: Increased program operating costs and operating and 
maintenance costs for little benefit; 
Potential effect on Design: [Empty]; 
Potential effect on Code: [Empty]; 
Potential effect on Test: [Empty]; 
Potential effect on Acceptance testing: [Empty]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Empty]; 
Potential effect on Upgrades: [Empty]; 
Potential effect on Documentation: [Empty]; 
Possible risk containment action: Maintain updated business case 
analysis and return on investment; re-scope; risk-adjusted cost-
benefit analysis of alternatives. 

Area and risk: Realistic cost objectives not established early; 
Potential effect: Greater financial risk; potential delays; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Scheduled estimate updates and 
reviews. 

Area and risk: Schedule not considered in choosing alternative 
implementation strategies; 
Potential effect: Unrealistic schedule; increased costs; reduced 
quality; infeasibility; changes in project direction; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Re-examine business case and update 
for better information. 

Area and risk: Unstable funding; 
Potential effect: Increased costs and schedule and reduced quality due 
to stop and start; loss of project momentum; 
Potential effect on Design: [B]; 
Potential effect on Code: [B]; 
Potential effect on Test: [B]; 
Potential effect on Acceptance testing: [B]; 
Potential effect on Operations and maintenance: [B]; 
Potential effect on Program operations: [B]; 
Potential effect on User acceptance: [B]; 
Potential effect on Training: [B]; 
Potential effect on Upgrades: [B]; 
Potential effect on Documentation: [B]; 
Possible risk containment action: Incremental, modular implementation
segments that can be funded. 

7. Monitoring: 

Area and risk: Insufficient monitoring; 
Potential effect: Transfer of risk to acquirer; inability to recover
from schedule slippage for lack of timely information; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Increase monitoring in riskiest 
areas. 

8. Schedule: 

Area and risk: Dependence on other projects; 
Potential effect: Delays; increased costs; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Reduce dependencies; coordinate with 
another project. 

Area and risk: Insufficient resources to meet schedule; 
Potential effect: Delays; poor quality; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Increase resources or schedule; reduce 
scope. 

Area and risk: Schedule may be delayed waiting for external 
approvals; 
Potential effect: Inability to meet due date; increased cost; 
reduced scope; product failure; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Reduce scope; increase resources 
where reasonable, allowing for greater coordination complexity; extend 
due date. 

Area and risk: Tasks allocated poorly; 
Potential effect: Poor quality; inefficiency; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Reallocate or restructure tasks. 

Area and risk: Unrealistic; 
Potential effect: Poor quality from shortening vital tasks; cost 
increases; infeasibility; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Increase schedule; reduce scope. 

9. Personnel resources: 

Area and risk: Inadequate skills or staffing; inadequate mix of staff 
and skills; 
Potential effect: Poor quality; delays; increased costs; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Training; hire more staff; reassign 
tasks or staff outside the project. 

10. Security and privacy: 

Area and risk: Inadequate; 
Potential effect: Failure to certify system; system breaches; data 
corruption; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Address security in requirements, 
design, and testing; test periodically or as changes are made to ensure 
continued security and privacy. 

11. Project implementation strategy and plans: 

Area and risk: Architectural dependencies; 
Potential effect: Architectural components; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Empty]; 
Potential effect on User acceptance: [Empty]; 
Potential effect on Training: [Empty]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Empty]; 
Possible risk containment action: Adhere to enterprise architecture. 

Area and risk: Dependence on other projects or systems; 
Potential effect: Schedule delay; cost increases; uncontrolled changes; 
service disruption; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Memorandum of understanding with 
other project or system owner; regular coordination meetings; reduce 
dependencies and predecessors. 

Area and risk: Inadequate link to mission need; 
Potential effect: Unnecessary system components; increased costs or 
schedule; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Empty]; 
Potential effect on Test: [Empty]; 
Potential effect on Acceptance testing: [Empty]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Empty]; 
Potential effect on Upgrades: [Empty]; 
Potential effect on Documentation: [Empty]; 
Possible risk containment action: Check back to system goals in each 
system phase. 

Area and risk: Nonmodular development and implementation approach; 
Potential effect: Funding risks; schedule; cost; quality; requirements 
instability; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Incremental or modular implementation 
segments. 

Area and risk: Overall implementation strategy does not properly 
address key development phases and technology considerations; 
Potential effect: Troubled project; product defects;
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Empty]; 
Potential effect on Program operations: [Empty]; 
Potential effect on User acceptance: [Empty]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Follow industry standards in 
development and tailor to nature of technology being implemented. 

Area and risk: Risks not acted on; 
Potential effect: Troubled project; product defects; cost and schedule 
increases; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Manage risks. 

Area and risk: Significant custom development required, with additional 
requirements; 
Potential effect: Product problems; increased operations and 
maintenance costs; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Empty]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Consider other technology or 
alternatives; reengineer or simplify business process. 

Area and risk: Subordinate strategies and plans not developed in a 
timely manner; 
Potential effect: Delays; inefficiency; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Empty]; 
Potential effect on Program operations: [Empty]; 
Potential effect on User acceptance: [Empty]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Empty]; 
Potential effect on Documentation: [Empty]; 
Possible risk containment action: Create strategies and plans in a 
timely manner. 

Area and risk: Subordinate strategies and plans not linked to overall
strategy; 
Potential effect: Delays; inefficiency; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Empty]; 
Potential effect on Program operations: [Empty]; 
Potential effect on User acceptance: [Empty]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Empty]; 
Potential effect on Documentation: [Empty]; 
Possible risk containment action: Create links to higher-level plans. 

Area and risk: Wide scope of work; 
Potential effect: Increased complexity, cost, or schedule; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Prioritize requirements. 

12. Commercial off-the-shelf: 

Area and risk: Long-term maintenance and support not considered; 
Potential effect: Inability to perform maintenance; lack of skills or 
resources; 
Potential effect on Design: [Empty]; 
Potential effect on Code: [Empty]; 
Potential effect on Test: [Empty]; 
Potential effect on Acceptance testing: [Empty]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Empty]; 
Potential effect on Training: [Empty]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Transition plan; training; link to 
business and human resources goals. 

Area and risk: Planned life of more than 5 years per version or vendor; 
Potential effect: Obsolescence; lack of vendor support if the business 
is sold or declines; 
Potential effect on Design: [Empty]; 
Potential effect on Code: [Empty]; 
Potential effect on Test: [Empty]; 
Potential effect on Acceptance testing: [Empty]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Empty]; 
Potential effect on Training: [Empty]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Empty]; 
Possible risk containment action: Planned upgrades or replacement. 

Area and risk: Significant number of complex interfaces to other 
systems; 
Potential effect: Data quality and timeliness; increased maintenance; 
additional requirements; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Empty]; 
Potential effect on Training: [Empty]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Simplify or reduce interfaces. 

Area and risk: Total life cycle costs not considered; 
Potential effect: System goals not reached; uncertain operating and 
maintenance costs; 
Potential effect on Design: [Empty]; 
Potential effect on Code: [Empty]; 
Potential effect on Test: [Empty]; 
Potential effect on Acceptance testing: [Empty]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Empty]; 
Potential effect on Upgrades: [Empty]; 
Potential effect on Documentation: [Empty]; 
Possible risk containment action: Initial business case analysis with 
risk-adjusted cost analysis of alternatives; return on investment. 

13. Business: 

Area and risk: Inappropriate development approach or methodology; 
Potential effect: Cost and schedule overruns; failure; lost 
opportunity; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Empty]; 
Potential effect on Upgrades: [Empty]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Use appropriate development model 
(spiral, incremental, waterfall). 

Area and risk: Downtime affects service or business; 
Potential effect: Loss of opportunity; customer dissatisfaction and 
complaints; inability to perform business function; 
Potential effect on Design: [Empty]; 
Potential effect on Code: [Empty]; 
Potential effect on Test: [Empty]; 
Potential effect on Acceptance testing: [Empty]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Empty]; 
Potential effect on Training: [Empty]; 
Potential effect on Upgrades: [Empty]; 
Potential effect on Documentation: [Empty]; 
Possible risk containment action: Monitor system performance;
periodic upgrades or enhancements; backup and redundancy. 

14. Management: 

Area and risk: Demand for service is higher than resources can handle; 
Potential effect: Staff burnout; reduced service quality; loss of 
customer satisfaction; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Prioritize service to ensure quality
remains high. 

Area and risk: Inability to manage the complexity or scope of system, 
project, or operations; 
Potential effect: Failure; system or implemented processes degrade from 
original implementation; project goes over schedule or over budget; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Training; use project sponsor to 
ensure ongoing coordination across groups; decrease scope; break effort 
into segments. 

Area and risk: Outside pressures to produce something shorten the 
requirements period; 
Potential effect: Cost and schedule increase; disputes with customer 
about end-product; contract disputes; substandard product; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Resist pressure to bypass 
requirements; use prototypes or storyboards to produce something and 
refine requirements; select a different development approach. 

Area and risk: Progress is affected by unstable or unstated business 
processes; 
Potential effect: Failure or rejection; increased schedule and cost to
implement; additional changes to system; user confusion; business 
process inefficiency or ineffectiveness; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Design processes before committing to 
system design. 

Area and risk: Users or organizations resist change; 
Potential effect: Failure or rejection; increased schedule and cost to 
implement; additional changes to system; 
Potential effect on Design: [Check]; 
Potential effect on Code: [Check]; 
Potential effect on Test: [Check]; 
Potential effect on Acceptance testing: [Check]; 
Potential effect on Operations and maintenance: [Check]; 
Potential effect on Program operations: [Check]; 
Potential effect on User acceptance: [Check]; 
Potential effect on Training: [Check]; 
Potential effect on Upgrades: [Check]; 
Potential effect on Documentation: [Check]; 
Possible risk containment action: Involve parties in requirements and 
design; communication; organizational change management plan; bring 
project sponsor to upper management; training and support. 

Source: GAO. 

[A] Architecture. 

[B] Impact area depends on financing and timing. 

[End of table] 
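The preceding table functions as a risk register: each entry pairs a 
risk area with the life-cycle phases it can affect and with candidate 
containment actions. As one illustration of how such a register might 
be represented and queried when tailoring containment actions to a 
particular phase, the short Python sketch below encodes two entries 
paraphrased from the table. The sketch is illustrative only; the class, 
field, and function names (RiskEntry, affected_phases, risks_affecting) 
are assumptions introduced here rather than part of this guide. 

# Illustrative sketch: one possible in-memory form of the risk table above.
# Names and structure are assumptions for illustration, not GAO guidance.
from dataclasses import dataclass

PHASES = (
    "Design", "Code", "Test", "Acceptance testing",
    "Operations and maintenance", "Program operations",
    "User acceptance", "Training", "Upgrades", "Documentation",
)

@dataclass(frozen=True)
class RiskEntry:
    category: str               # table section, e.g., "Requirements"
    risk: str                   # "Area and risk" field
    potential_effect: str       # "Potential effect" field
    affected_phases: frozenset  # phases marked [Check] in the table
    containment: str            # "Possible risk containment action" field

REGISTER = [
    RiskEntry(
        category="Requirements",
        risk="Uncertain requirements",
        potential_effect="Rework; unsuitable product; cost and schedule "
                         "increases; inaccurate cost estimates",
        affected_phases=frozenset(PHASES),  # checked for every phase
        containment="Define requirements before proceeding; prototype; "
                    "hedge cost and schedule for risk; segment and prioritize",
    ),
    RiskEntry(
        category="Test and evaluation",
        risk="Insufficient time to test thoroughly",
        potential_effect="More acceptance testing needed",
        affected_phases=frozenset({
            "Code", "Test", "Acceptance testing",
            "Operations and maintenance", "Program operations",
            "Documentation",
        }),
        containment="Require test plans, reports, and compliance matrixes "
                    "as deliverables; increase schedule",
    ),
]

def risks_affecting(phase):
    # Return register entries whose checked phases include the given phase.
    return [entry for entry in REGISTER if phase in entry.affected_phases]

for entry in risks_affecting("Acceptance testing"):
    print(f"{entry.category}: {entry.risk} -> {entry.containment}")

A register kept in this form could be extended with probability and 
cost-impact fields so that, for example, the risks affecting a given 
phase feed directly into a risk-adjusted cost estimate. 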

[End of Appendix 15] 

Appendix 16: GAO Contacts: Contacts And Acknowledgments: 
 
Karen Richey, at 202-512-4784, richeyk@gao.gov;
Jennifer Echard, at 202-512-3875, echardj@gao.gov; and
Carol Cha, at 202-512-4456, chac@gao.gov. 

Acknowledgments: 
 
Other key contributors to this guide include Gregory Borecki, Cristina 
Chaplain, Richard Hung, Jason Kelly, Nancy Kingsbury, Colleen Phillips, 
Penny Pickett, David Powner, Keith Rhodes, and Adam Vodraska. 

[End of Appendix 16] 

References: 

Abba, Wayne. Contracting for Earned Value Management. Falls Church, 
Va.: Abba Consulting, Jan. 13, 2007. 

———. Earned Value Management: The Good, the Bad, and the Really Ugly. 
Falls Church, Va.: Abba Consulting, Dec. 21, 2004. 

———. “Earned Value Management from Government Contracts to the Project 
Management Universe.” Presented at the 19th IPMA World Congress, New 
Delhi, India, November 13–16, 2005. 

———. How Earned Value Got to Primetime: A Short Look Back and a Glance 
Ahead. Reston, Va.: Dekker, 2000. 

———. Understanding Program Resource Management through Earned Value 
Analysis. Falls Church, Va.: Abba Consulting, June 2006. 

Agrawal, Raj. Overcoming Software Estimation Challenges. McLean, Va.: 
MITRE, May 22, 2007. 

Albert, Neil F. Cost Estimating: The Starting Point of Earned Value 
Management. McLean, Va.: MCR, June 2005. 

———. Developing a Work Breakdown Structure. McLean, Va.: MCR, June 16, 
2005. 

Anderson, Mark, and David Nelson. Developing an Averaged Estimate at 
Completion (EAC) Utilizing Program Performance Factors and Maturity. 
Arlington, Va.: Tecolote Research, June 14–17, 2005. 

ANSI (American National Standards Institute). Earned Value Management 
Systems Electronic Industries Alliance, (EIA)-748-B. Arlington, Va.: 
Government Electronics and Information Technology Association, 2007. 

Barrett, Bruce E., Jack E. Matson, and Joseph M. Mellichamp. Software 
Development Cost Estimation Using Function Points. Tuscaloosa: 
University of Alabama, April 1994. 

Black, Hollis M. Impact of Cost Risk Analysis on Business Decisions. 
Huntsville, Ala.: Boeing, June 14–17, 2005. 

Boatwright, Tim. Earned Value Management System Acceptance and 
Surveillance. Booz Allen Hamilton: March 30, 2006. 

Boden, Dave, and others. Interfacing Risk and Earned Value Management. 
Buckinghamshire, Eng.: Association for Project Management, 2008. 

Bolinger, Paul. Schedule Analysis of the Integrated Master Schedule. 
[Orange, Calif.]: Humphreys & Associates, May 2008. 

Bone, Lauren, and Val Jonas. “Interfacing Earned Value and Risk 
Management.” PMI-CPM 24th Annual International Conference Proceedings, 
Clearwater Beach, Florida, May 14–16, 2008. 

Book, Stephen A. Do Not Sum Most Likely Costs. Presentation to the 
American Society of Military Comptrollers, Los Angeles, California, 
April 30, 2002. 

———. Issues Associated with Basing Decisions on Schedule Variance in an 
Earned Value Management System. McLean, Va.: MCR, 2003. 

———. The Program Schedule Network and Its Critical Path. McLean, Va.: 
MCR, Aug. 26, 2005. 

———. Risks in Costing Software. McLean, Va.: MCR, Aug. 26, 2005. 

———. Schedule Risk Analysis: Why It Is Important and How to Do It. 
McLean, Va.: MCR, June 14–17, 2005. 

———. The Schedule Risk Imperative and How to Deal with It. McLean, Va.: 
MCR, Aug. 26, 2005. 

———. What Is Cost Risk Analysis? McLean, Va.: MCR, Aug. 26, 2005. 

———. What Earned Value Data Tells Us about Schedule Progress. McLean, 
Va.: MCR, Aug. 26, 2005. 

Brophy, Brian A., and Greg Hogan. How to Get the Data and Ready It for 
Analysis. Chantilly, Va.: Northrop Grumman, The Analytical Sciences 
Corporation, 2005. 

Buchholz, Mark, Shaw Cohe, and Robert Tomasetti. Earned Value 
Management: Moving toward Governmentwide Implementation. Arlington, 
Va.: Acquisition Directions Advisory, August 2005. 

Cargill, John. Decision Support for the Program Manager. Eglin Air 
Force Base, Fla.: Air Force Cost Analysis Agency, n.d. 

———. Forecasting the Future with EVMS. Eglin Air Force Base, Fla.: Air 
Force Cost Analysis Agency, n.d. 

Carroll, Ed. Software Estimating Based on Use Case Points. [Beaverton, 
Ore.]: Agilis Solutions, June 2005. 

Christensen, David S. The Costs and Benefits of the Earned Value 
Management Process. Pensacola: University of West Florida, June 1998. 

———. Determining an Accurate Estimate at Completion. Cedar City: 
Southern Utah University, 1993. 

———. Using the Earned Value Cost Management Report to Evaluate the 
Contractor’s Estimate at Completion. Cedar City: Southern Utah 
University, 1999. 

———, and Carl Templin. An Analysis of Management Reserve Budget on 
Defense Acquisition Contracts. Cedar City: Southern Utah University, 
2000. 

———, and others. A Review of the Estimate at Completion Research. Cedar 
City: Southern Utah University, 1995. 

Chvotkin, Alan, and Harris N. Miller. Response to FAR Case 2004-019: 
Earned Value Management Systems, 70 Fed. Reg. 17945 (April 8, 2005). 
Washington, D.C.: Information Technology Association of America and 
Professional Services Council, June 7, 2005. 

Coleman, Richard L., and Shishu S. Gupta. Two Timely Short Topics: 
Independence and Cost Realism. Chantilly, Va.: Northrop Grumman, 
Analytical Sciences Corp., and Intelligence Community Cost Analysis 
Improvement Group, June 16, 2005. 

———, and Jessica R. Summerville. Advanced Cost Risk. Chantilly, Va.: 
Northrop Grumman, The Analytical Sciences Corp., June 16, 2005. 

———. Basic Cost Risk. Chantilly, Va.: Northrop Grumman, The Analytical 
Sciences Corp., June 15, 2005. 

———, and others. Monte Carlo Techniques for Risk Management. Chantilly, 
Va.: Northrop Grumman, The Analytical Sciences Corp., June 2008. 

Committee ANSI/PMI 00-001-2004. A Guide to the Project Management Body 
of Knowledge, 3rd ed. Arlington, Va.: 2004. 

Comptroller General of the United States. Government Auditing 
Standards: January 2007 Revision, GAO-07-162G. Washington, D.C.: U.S. 
Government Accountability Office, January 2007. 

———. Theory and Practice of Cost Estimating for Major Acquisitions, B-
163058. Washington, D.C.: U.S. Government Accountability Office, July 
24, 1972. 

Cooper, L. Sue. Basic Schedule Development and Analysis. Seal Beach, 
Calif.: Boeing, June 14–17, 2005. 

Corrigan, Patricia. Capital Programming, Earned Value, and the Federal 
Acquisition Community. Washington, D.C.: Office of Federal Procurement 
Policy, Office of Management and Budget, April 2007. 

DAU (Defense Acquisition University). Business, Cost Estimating, and 
Financial Management Training Requirements. Fort Belvoir, Va.: n.d. 

———. Cost Estimating Methodologies. Fort Belvoir, Va.: April 2004. 

———. Defense Acquisition Acronyms and Terms. Fort Belvoir, Va.: July 
2005. 

———. Defense Acquisition Guidebook. Fort Belvoir, Va.: July 24, 2006. 

———. Introduction to Cost Analysis. Fort Belvoir, Va.: April 2004. 

———. Unit and Cumulative Average Formulations. Fort Belvoir, Va.: Feb. 
20, 2004. 

DCAA (Defense Contract Audit Agency). DCAA Contract Audit Manual. Fort 
Belvoir, Va.: January 12, 2008. 

DCARC (Defense Cost and Resource Center). Automated Information System 
Enterprise Resource Planning Work Breakdown Structure. Arlington, Va.: 
n.d. 

———. Automated Information System Enterprise Resource Planning Work 
Breakdown Structure Dictionary. Arlington, Va.: n.d. 

DCMA (Defense Contract Management Agency). About the Defense Contract 
Management Agency. Alexandria, Va.: n.d. 

———. Earned Value Management System: System Surveillance. Washington, 
D.C.: 2005. 

———. EVMS Standard Surveillance Operating Manual. Washington, D.C.: 
January 2008. 

———. EVMS Standard System Surveillance. Washington, D.C.: May 14, 2008. 

———. Software Acquisition Management Services. Washington, D.C.: 2005. 

Dean, Joe. Practical Software and Systems Measurement: Objective 
Information for Decision Makers. Boston, Mass.: Tecolote Research, June 
2005. 

Dechoretz, Jason. Technical Baselines. McLean, Va.: MCR, June 2005. 

Defense Economic Analysis Council. Economic Analysis Handbook. 
Monterey, Calif.: September 1997. 

Dello Russo, Francis M., Paul R. Garvey, and Neal D. Hulkower. Cost 
Analysis. Bedford, Mass.: MITRE, 1998. 

DHS (Department of Homeland Security). Capital Planning and Investment 
Control Cost Benefit Analysis Guidebook. Washington, D.C.: 2006. 

———. Investment and Acquisition Management Re-Engineering: Improving 
Life Cycle Cost Estimating. Washington, D.C.: July 10, 2008. 

DOD (Department of Defense). Automated Information System (AIS) 
Economic Analysis Guide. Washington, D.C.: May 1, 1995. 

———. Contract Performance Report Data Item Description. Washington, 
D.C.: Mar. 30, 2005. 

———. Contract Pricing Reference Guide. Washington, D.C.: 2002. 

———. Contractor Cost and Software Data Reporting. Washington, D.C.: 
June 8, 2007. 

———. Cost Analysis Guidance and Procedures, DOD 5000.4-M. Washington, 
D.C.: December 1992. 

———. Cost Analysis Improvement Group, DOD 5000.4. Washington, D.C.: 
Nov. 24, 1992. 

———. The Defense Acquisition System, DOD Directive 5000.01. Washington, 
D.C.: Nov. 20, 2007. 

———. Defense Contract Management Agency, DOD Directive 5105.64. 
Washington, D.C.: Nov. 21, 2003. 

———. Department of Defense Handbook: Work Breakdown Structures for 
Defense Materiel Items, MIL-HDBK-881A. Washington, D.C.: Office of the 
Under Secretary of Defense (AT&L), July 30, 2005. 

———. Earned Value Management Glossary of Terms. Washington, D.C.: n.d. 

———. Earned Value Management Implementation Guide. Washington, D.C.: 
October 2006. 

———. Earned Value Management Policy and Initiatives. Washington, D.C.: 
May 2008. 

———. Earned Value Management Roles and Responsibilities. Washington, 
D.C.: July 3, 2007. 

———. Economic Analysis for Decisionmaking, DOD 7041.3. Washington, 
D.C.: Nov. 7, 1995. 

———. Enterprise Resource Planning Information. Washington, D.C.: April 
20, 2006. 

———. A Guide to the Project Management Body of Knowledge. Washington, 
D.C.: June 2003. 

———. Implementation of Central Repository System. Washington, D.C.: 
July 11, 2007. 

———. Instructional Systems Development/Systems Approach to Training and 
Education, MIL-HDBK-29612-2A. Washington, D.C.: Aug. 31, 2001. 

———. Integrated Master Plan and Integrated Master Schedule Preparation 
and Use Guide. Washington, D.C.: Oct. 21, 2005. 

———. Integrated Master Schedule Data Item Description. Washington, 
D.C.: Mar. 30, 2005. 

———. National Security Space Acquisition Policy. Washington, D.C.: Dec. 
27, 2004. 

———. Operating and Support Cost-Estimating Guide. Washington, D.C.: May 
1992. 

———. Operation of the Defense Acquisition System, DOD Instruction 
5000.02. Washington, D.C.: Dec. 8, 2008. 

———. Over Target Baseline and Over Target Schedule. Washington, D.C.: 
May 7, 2003. 

———. Primer on Cost Analysis Requirement Descriptions. Washington, 
D.C.: Jan. 28, 2004. 

———. The Program Manager’s Guide to the Integrated Baseline Review 
Process. Washington, D.C.: June 2003. 

———. Revised Department of Defense Earned Value Management Policy. 
Washington, D.C.: May 12, 2005. 

———. Revision to Department of Defense Earned Value Management Policy. 
Washington, D.C.: Mar. 7, 2005. 

———. Risk Management Guide for DOD Acquisition, 6th ed., version 1.0. 
Washington, D.C.: August 2006. 

———. “Single Point Adjustments.” Memorandum from D. M. Altwegg, 
Business Management, Missile Defense Agency, to Missile Defense Agency 
Program Divisions, Washington, D.C., December 8, 2005. 

———. Software Acquisition Gold Practice: Track Earned Value. 
Washington, D.C.: Data Analysis Center for Software, Dec. 12, 2002. 

———. Software Resources Data Report Manual, DOD 5000.4-M-2. Washington, 
D.C.: Feb. 2, 2004. 

———. Use of Earned Value Management in DOD. Washington, D.C.: July 3, 
2007. 
 
———, General Services Administration, and National Aeronautics and 
Space Administration. Federal Acquisition Regulation (FAR) Major System 
Acquisition, 48 C.F.R. part 34, Earned Value Management System, subpart 
34.2, added by Federal Acquisition Circular 2005-11, July 5, 2006, as 
Item I—Earned Value Management System (EVMS) (FAR Case 2004-019). 

DOE (Department of Energy). Cost Estimating Guide, DOE G 430.1.1. 
Washington, D.C.: Mar. 28, 1997. 

———. Project Management and the Project Management Manual. Washington, 
D.C.: March 21, 2003. 

———. Work Breakdown Structure. Washington, D.C.: June 2003. 

Driessnack, John D. Using Earned Value Data. Arlington, Va.: MCR, June 
2005. 

———, and Neal D. Hulkower. Linking Cost and Earned Value Analysis. 
Arlington, Va.: MCR, June 2005. 

Druker, Eric, Christina Kanick, and Richard Coleman. The Challenges of 
(and Solutions for) Estimating Percentiles of Cost. Arlington, Va.: 
Northrop Grumman, Aug. 11, 2008. 

FAA (Federal Aviation Administration). Investment Analysis Standards 
and Guidelines: FAA Standard Cost Estimation Guidelines, version 1.0. 
Washington, D.C.: April 2003. 

Federal Register. Rules and Regulations, vol. 73, no. 79. Washington, 
D.C.: Apr. 23, 2008. 

Ferens, Dan. Commercial Software Models. Vienna, Va.: International 
Society of Parametric Analysis, June 2005. 

Fleming, Quentin W. Earned Value Management (EVM) Light...but Adequate 
for All Projects. Tustin, Calif.: Primavera Systems, November 2006. 

———, and Joel M. Koppelman. “The Earned Value Body of Knowledge.” 
Presented at the 30th Annual Project Management Institute Symposium, 
Philadelphia, Pennsylvania, October 10–16, 1999. 

———. “Performance Based Payments.” Tustin, Calif.: Primavera Systems, 
November 2007. 

Flett, Frank. Organizing and Planning the Estimate. McLean, Va.: MCR, 
June 12–14, 2005. 

Galorath, Daniel D. Overcoming Cultural Obstacles to Managing Risk. El 
Segundo, Calif.: Galorath Inc., 2007. 

———. Software Estimation Handbook. El Segundo, Calif.: Galorath Inc., 
n.d. 

———. Software Projects on Time and within Budget—Galorath: The Power of 
Parametrics. El Segundo, Calif.: Galorath Inc., n.d. 

———. Software Total Ownership Cost Development Is Only Job 1. El 
Segundo, Calif.: Galorath Inc., 2008. 

GAO. 21st Century Challenges: Reexamining the Base of the Federal 
Government, GAO-05-325SP. Washington, D.C.: February 2005. 

GAO. Air Traffic Control: FAA Uses Earned Value Techniques to Help 
Manage Information Technology Acquisitions, but Needs to Clarify Policy 
and Strengthen Oversight, GAO-08-756. Washington, D.C.: July 18, 2008. 

GAO. Aviation Security: Systematic Planning Needed to Optimize the 
Deployment of Checked Baggage Screening Systems, GAO-05-365. 
Washington, D.C.: Mar. 15, 2005. 

GAO. Best Practices: Better Acquisition Outcomes Are Possible If DOD 
Can Apply Lessons from the F/A-22 Program, GAO-03-645T. Washington, 
D.C.: Apr. 11, 2003. 

GAO. Best Practices: Better Management of Technology Development Can 
Improve Weapon System Outcomes, GAO/NSIAD-99-162. Washington, D.C.: 
Aug. 16, 1999. 

GAO. Best Practices: Successful Application to Weapon Acquisitions 
Requires Changes in DOD’s Environment, GAO/NSIAD-98-56. Washington, 
D.C.: Feb. 24, 1998. 

GAO. Chemical Demilitarization: Actions Needed to Improve the 
Reliability of the Army’s Cost Comparison Analysis for Treatment and 
Disposal Options for Newport’s VX Hydrolysate, GAO-07-240R. Washington, 
D.C.: Jan. 26, 2007. 

GAO. Combating Nuclear Smuggling: DHS Has Made Progress Deploying 
Radiation Detection Equipment at U.S. Ports of Entry, but Concerns 
Remain, GAO-06-389. Washington, D.C.: Mar. 22, 2006. 

GAO. Combating Nuclear Smuggling: DHS’s Cost-Benefit Analysis to Support 
the Purchase of New Radiation Detection Portal Monitors Was Not Based 
on Available Performance Data and Did Not Fully Evaluate All the 
Monitors’ Costs and Benefits, GAO-07-133R. Washington, D.C.: Oct. 17, 
2006. 

GAO. Cooperative Threat Reduction: DOD Needs More Reliable Data to 
Better Estimate the Cost and Schedule of the Shchuch’ye Facility, GAO-
06-692. Washington, D.C.: May 31, 2006. 

GAO. Customs Services Modernization: Serious Management and Technical 
Weaknesses Must Be Corrected, GAO/AIMD-99-41. Washington, D.C.: Feb. 
26, 1999. 

GAO. Defense Acquisitions: Assessments of Selected Major Weapon 
Programs, GAO-06-391. Washington, D.C.: Mar. 31, 2006. 

GAO. Defense Acquisitions: Improved Management Practices Could Help 
Minimize Cost Growth in Navy Shipbuilding Programs, GAO-05-183. 
Washington, D.C.: Feb. 28, 2005. 

GAO. Defense Acquisitions: Information for Congress on Performance of 
Major Programs Can Be More Complete, Timely, and Accessible, GAO-05-
182. Washington, D.C.: Mar. 28, 2005. 

GAO. Defense Acquisition: A Knowledge-Based Funding Approach Could 
Improve Weapon System Program Outcomes, GAO-08-619. Washington, D.C.: 
June 2008. 

GAO. Defense Acquisitions: Missile Defense Agency Fields Initial 
Capability but Falls Short of Original Goals, GAO-06-327. Washington, 
D.C.: Mar. 15, 2006. 

GAO. DOD Systems Modernization: Planned Investment in the Navy Tactical 
Command Support System Needs to Be Addressed, GAO-06-215. Washington, 
D.C.: Dec. 5, 2005. 

GAO. GAO’s High Risk Program, GAO-06-497T. Washington, D.C.: Mar. 15, 
2006. 

GAO. Global Hawk Unit Cost Increases, GAO-06-222R. Washington, D.C.: 
Dec. 15, 2005. 

GAO. A Glossary of Terms Used in the Federal Budget Process, GAO-05-
734SP. Washington, D.C.: September 2005. 

GAO. Government Auditing Standards 2007 Revision, GAO-07-162G. 
Washington, D.C.: January 2007. 

GAO. Homeland Security: Recommendations to Improve Management of Key 
Border Security Program Need to Be Implemented, GAO-06-296. Washington, 
D.C.: Feb. 14, 2006. 

GAO. Information Technology: Agencies Need to Improve the Accuracy and 
Reliability of Investment Information, GAO-06-250. Washington, D.C.: 
Jan. 12, 2006. 

GAO. Major Acquisitions: Significant Changes Underway in DOD’s Earned 
Value Management Process, GAO/NSIAD-97-108. Washington, D.C.: May 5, 
1997. 

GAO. Maximizing the Success of Chief Information Officers, GAO-01-376G. 
Washington, D.C.: Feb. 1, 2001. 

GAO. NASA: Lack of Disciplined Cost Estimating Processes Hinders 
Effective Program Management, GAO-04-642. Washington, D.C.: May 28, 
2004. 

GAO. National Airspace System: Better Cost Data Could Improve FAA’s 
Management of the Standard Terminal Automation System, GAO-03-343. 
Washington, D.C.: Jan. 31, 2003. 

GAO. Polar-Orbiting Operational Environmental Satellites: Cost 
Increases Trigger Review and Place Program’s Direction on Hold, GAO-06-
573T. Washington, D.C.: Mar. 30, 2006. 

GAO. Polar-Orbiting Operational Environmental Satellites: Information 
on Program Cost and Schedule Changes, GAO-04-1054. Washington, D.C.: 
Sept. 30, 2004. 

GAO. Space Acquisitions: DOD Needs to Take More Action to Address 
Unrealistic Initial Cost Estimates of Space Systems, GAO-07-96. 
Washington, D.C.: Nov. 17, 2006. 

GAO. Standards for Internal Control in the Federal Government: Exposure 
Draft, GAO/AIMD-98-21.3.1. Washington, D.C.: Dec. 1, 1997. 

GAO. Telecommunications: GSA Has Accumulated Adequate Funding for 
Transition to New Contracts but Needs Cost Estimation Policy, GAO-07-
268. Washington, D.C.: Feb. 23, 2007. 

GAO. Theory and Practice of Cost Estimating for Major Acquisitions, B-
163058. Washington, D.C.: July 24, 1972. 

GAO. Uncertainties Remain Concerning the Airborne Laser’s Cost and 
Military Utility, GAO-04-643R. Washington, D.C.: May 17, 2004. 

GAO. United States Coast Guard: Improvements Needed in Management and 
Oversight of Rescue System Acquisition, GAO-06-623. Washington, D.C.: 
May 31, 2006. 

Garvey, Paul R. Cost Risk Analysis without Statistics!! McLean, Va.: 
MITRE, February 2005. 

Geiser, Todd A., and David Schaar. “Tackling Cost Challenges of Net 
Centric Warfare,” pp. 4083–92. In 2004 IEEE Aerospace Conference 
Proceedings, vol. 6. Proceedings of the 2004 IEEE Aerospace Conference, 
Institute of Electrical and Electronics Engineers. Big Sky, Mont.: March 
6–13, 2004. 

Gillam, Dale E. “Evaluating Earned Value Management and Earned Value 
Management Systems Using OMB’s Program Assessment Rating Tool (PART).” 
The Measurable News, Spring 2005, pp. 1–7. 

Gordon, Creaghe. “The Importance of Reserves and Budget Distribution in 
Achieving Budget Requirements.” Flying High, 1st Quarter, 2006, pp. 
6–8. 

———, and George Rosenthal. Proposal for Combining Risk and Earned Value 
Management. Los Gatos, Calif.: n.p., n.d. 

Groves, Angela, and Karen McRitchie. Better Software Estimates Using 
SEER-SEM. El Segundo, Calif.: Galorath Associates, n.d. 

GSA (General Services Administration). Project Estimating Requirements. 
Washington, D.C.: January 2007. 

Harris, Michael D. S., David Herron, and Stasia Iwanicki. The Business 
Value of IT: Managing Risks, Optimizing Performance, and Measuring 
Results. Boca Raton, Fla.: Auerbach Publications, 2008. 

Haugan, Gregory T. The Work Breakdown Structure in Government 
Contracting. Vienna, Va.: Management Concepts, 2008. 

———. Work Breakdown Structures for Projects, Programs, and Enterprises. 
Vienna, Va.: Management Concepts, 2008. 

HHS (Department of Health and Human Services). DHHS Project Officers’ 
Contracting Handbook. Washington, D.C.: Jan. 23, 2003. 

Higdon, Greg, Lew Fichter, and Alfred Smith. Cost Uncertainty Analysis: 
Observations from the Field. Santa Barbara, Calif.: Tecolote Research, 
April 2008. 

Hohmann, Timothy J. Estimating Software Size: Impact and Methodologies. 
El Segundo, Calif.: Galorath Associates, 1997. 

Hulett, David T. Advanced Quantitative Schedule Risk Analysis. 
Washington, D.C.: Hulett and Associates, 2007. 

———. Considering Corporate Culture in Project Risk Management Maturity 
Models. Washington, D.C.: Hulett and Associates, 2007. 

———. Corporate Culture in Project Risk Management Maturity. Washington, 
D.C.: Hulett and Associates, 2008. 

———. Integrated Cost and Schedule Risk Analysis. Washington, D.C.: 
Hulett and Associates, 2007. 

———. Integrated Cost and Schedule Risk Analysis Using Risk Drivers, 
Washington, D.C.: Hulett and Associates, 2007. 

———. Integrating Cost and Schedule Risk Analysis. Washington, D.C.: 
Hulett and Associates, n.d. 

———. The Problem with Dangling Activities. Washington, D.C.: Hulett and 
Associates, 2007. 

———, and Waylon Whitehead. Using the Risk Register in Schedule Risk 
Analysis with Monte Carlo Simulation. Washington, D.C.: Hulett and 
Associates, 2007. 

Humphreys, Gary, and Margo Visitacion. Mastering Program Management 
Fundamentals Critical for Successful EVM. Bala Cynwyd, Pa.: Primavera, 
n.d. 

Ignemi, Joseph. Jumpstart Your Val IT™ Process. Mount Laurel, N.J.: 
PRICE Systems, Aug. 6, 2008. 

Intelligence Community Cost Analysis Improvement Group. Independent 
Cost Analysis Checklist, version 1.0. Washington, D.C.: Dec. 16, 1999. 

International Society of Parametric Analysts. Parametric Estimating 
Handbook, 4th ed. Vienna, Va.: 2008. 

Johnson, Jim, and others. “Collaboration: Development and 
Management—Collaborating on Project Success.” Software Magazine, 
Sponsored Supplement, February–March 2001. 

Jones, Capers. What Are Function Points? Hendersonville, N.C.: Software 
Productivity Research, March 1995. 

Kahneman, Daniel, and Dan Lovallo. “Delusions of Success: How Optimism 
Undermines Executives’ Decisions.” Harvard Business Review, July 2003. 

Kratzert, Keith. Earned Value Management (EVM): The Federal Aviation 
Administration (FAA) Program Manager’s Flight Plan. Washington, D.C.: 
Federal Aviation Administration, January 2006. 

Kumley, Alissa C., and others. Integrating Risk Management and Earned 
Value Management: A Statistical Analysis of Survey Results. [Arlington, 
Va.]: National Defense Industrial Association, Program Management 
Systems Committee, n.d. 

Lavdas, Evaggelos. Identifying the Characteristics of a Good Cost 
Estimate: A Survey. Cranfield, Bedfordshire, Eng.: Cranfield 
University, June 2006. 

Lipke, Walter H. Independent Estimate at Completion: Another Method. 
Tinker Air Force Base, Okla.: October 2004. 

Lorell, Mark A., Julia F. Lowell, and Obaid Younossi. Evolutionary 
Acquisition Implementation Challenges for Defense Space Programs. 
Arlington, Va.: RAND, 2006. 

McRitchie, K., and S. Acelar. “A Structured Framework for Estimating IT 
Projects and IT Support.” Industry Hills, Calif.: June 2008. 

Majerowicz, Walt. “Schedule Analysis Techniques.” Integrated 
Performance Management Conference, n.p., November 2005. 

Manzer, Frederick. Integrated Baseline Review: The Link from the 
Estimate to Project Execution. Vienna, Va.: Center for Systems 
Management, June 2005. 

Martin, Kevin. Integrating Schedules and EV Metrics. KM Systems Group, 
Arlington, Va.: May 2008. 

Minkiewicz, Arlene F. Estimating Software from Requirements. Mount 
Laurel, N.J.: PRICE Systems, n.d. 

———. Measuring Object-Oriented Software with Predictive Object Points. 
Mount Laurel, N.J.: PRICE Systems, n.d. 

NASA (National Aeronautics and Space Administration). CADRe (Cost 
Analysis Data Requirement). Washington, D.C.: Mar. 22, 2005. 

———. Cost Estimating Handbook. Washington, D.C.: 2004. 

———. NASA Procedural Requirements, NPR 7120.5C. Washington, D.C.: Mar. 
22, 2005. 

———, Jet Propulsion Lab. Handbook for Software Cost Estimation. 
Pasadena, Calif.: May 30, 2003. 

NAVAIR (Naval Air Systems Command). AIR 4.2 Life Cycle Cost Estimating 
Process. Washington, D.C.: Oct. 1, 1999. 

———. Integrated Project Management Training. Washington, D.C.: January 
2004. 

———. Methods and Models for Life-Cycle Cost Analysis in the U.S. 
Department of Defense. Washington, D.C.: May 26, 2004. 

———. NAVAIR Acquisition Guide. Washington, D.C.: April 2005. 

———. Using Software Metrics and Measurements for Earned Value Toolkit. 
Washington, D.C.: October 2004. 

NAVSEA (Naval Sea Systems Command). Cost Estimating Handbook. 
Washington, D.C.: 2005. 

NCAD (Navy Cost Analysis Division). Cost Analysis 101. Arlington, Va.: 
n.d. 

———. Documentation Guide. Washington, D.C.: Feb. 3, 2004. 

———. Methods and Models for Life Cycle Cost Analysis in the U.S. 
Department of Defense. Arlington, Va.: May 25–26, 2004. 

NCCA (United States Naval Center for Cost Analysis). Software 
Development Estimating Handbook. Arlington, Va.: February 1998. 

NDIA (National Defense Industrial Association). Integrating Risk 
Management with Earned Value Management. Arlington, Va.: June 2004. 

———. Program Management Systems Committee ANSI/EIA-748A Standard for 
Earned Value Management Systems Intent Guide. Arlington, Va.: January 
2005. 

———. Program Management Systems Committee Earned Value Management 
System Acceptance Guide. Arlington, Va.: November 2006. 

———. Program Management Systems Committee Earned Value Management 
System Application Guide. Arlington, Va.: March 2007. 

———. Program Management Systems Committee Surveillance Guide. 
Arlington, Va.: October 2004. 

NIH (National Institutes of Health). Cost Benefit Analysis Guide for 
NIH Information Technology (IT) Projects. Bethesda, Md.: April 1999. 

NIST (National Institute of Standards and Technology). Engineering 
Statistics Handbook. Washington, D.C.: Department of Commerce, 
Technology Administration, July 18, 2006. 

OMB (Office of Management and Budget). Capital Programming Guide: 
Supplement to Circular A-11, Part 1, Overview of the Budget Process. 
Washington, D.C.: Executive Office of the President, June 2006. 

———. Capital Programming Guide: Supplement to Circular A-11, Part 7, 
Preparation, Submission, and Execution of the Budget, Version 1.0. 
Washington, D.C.: Executive Office of the President, June 1997. 

———. Capital Programming Guide: Supplement to Circular A-11, Part 7, 
Preparation, Submission, and Execution of the Budget, Version 2.0. 
Washington, D.C.: Executive Office of the President, June 2006. 

———. “Developing and Managing the Acquisition Workforce,” Office of 
Federal Procurement Policy Letter 05-01, Washington, D.C., April 15, 
2005. 

———. FY2007 Exhibit 300 Preparation Checklist. Washington, D.C.: Mar. 
8, 2005. 

———. Guidelines and Discount Rates for Benefit-Cost Analysis of Federal 
Programs, Circular No. A-94 Revised. Washington, D.C.: Oct. 29, 1992. 

———. Guidelines for Ensuring and Maximizing the Quality, Objectivity, 
Utility, and Integrity of Information Disseminated by Federal Agencies; 
Notice; Republication. Part IX. Washington, D.C.: Feb. 22, 2002. 

———. “Improving Information Technology Project Planning and Execution,” 
memorandum for Chief Information Officers M 05-23, Washington, D.C., 
August 4, 2005. 

———. Major Systems Acquisitions, Circular A-109. Washington, D.C.: Apr. 
5, 1976. 

———. Management of Federal Information Resources, Circular A-130. 
Washington, D.C.: 2000. 

———. The President’s Management Agenda. Washington, D.C.: 2002. 

Park, Robert E. Checklists and Criteria for Evaluating the Cost and 
Schedule Estimating Capabilities of Software Organizations. Pittsburgh, 
Pa.: Software Engineering Institute, Carnegie Mellon University, 1995. 

Phelan, Pat. Estimating Time and Cost of Enterprise Resource Planning 
(ERP) Implementation Projects Is a 10-Step Process. Stamford, Conn.: 
Gartner Inc., Apr. 27, 2006. 

Performance Management–Civilian Agency/Industry Working Group. Civilian 
Agency Reciprocity for Contractors EVMS Acceptance. n.p.: May 29, 2008. 

Price, Rick. The Missing Link: Schedule Margin Management. n.p.: n.p., 
May 2008. 

PMI (Project Management Institute). Practice Standard for Work 
Breakdown Structures, 2nd ed. Newtown Square, Pa.: 2006. 

PRICE Systems. The Promise of Enterprise Resource Planning. Mount 
Laurel, N.J.: 2004. 

RAND. Impossible Certainty: Cost Risk Analysis for Air Force Systems. 
Arlington, Va.: 2006. 

———. Toward a Cost Risk Estimating Policy. Williamsburg, Va.: Feb. 17, 
2005. 

Remez, Shereen G., and Daryl W. White. Return on Investment (ROI) and 
the Value Puzzle. Washington, D.C.: Capital Planning and Information 
Technology Investment Committee, Federal Chief Information Officer 
Council, April 1999. 

Reifer, Donald J. Poor Man’s Guide to Estimating Software Costs. 
Torrance, Calif.: Reifer Consultants, June 2005. 

Scaparro, John. NAVAIR Integration Project Management Brief to U.S. 
Government Accountability Office. Patuxent River, Md.: Naval Air Systems 
Command, Apr. 19, 2007. 

SCEA (Society of Cost Estimating and Analysis). Cost Programmed Review 
of Fundamentals (CostPROF): Economic Analysis—How to Choose Between 
Investment Options. Vienna, Va.: 2003. 

———. Cost Programmed Review of Fundamentals (CostPROF): Basic Data 
Analysis Principles—What to Do Once You Get the Data. Vienna, Va.: 
2003. 

———. Cost Programmed Review of Fundamentals (CostPROF): Cost Estimating 
Basics—Why Cost Estimating and an Overview of How. Vienna, Va.: 2003. 

———. Cost Programmed Review of Fundamentals (CostPROF): Cost Risk 
Analysis—How to Adjust Your Estimate to Reflect Historical Cost Growth. 
Vienna, Va.: 2003. 

———. Cost Programmed Review of Fundamentals (CostPROF): Costing 
Techniques—The Basic Types of Cost Estimates. Vienna, Va.: 2003. 

———. Cost Programmed Review of Fundamentals (CostPROF): Data Collection 
and Normalization—How to Get the Data and Ready It for Analysis. 
Vienna, Va.: 2003. 

———. Cost Programmed Review of Fundamentals (CostPROF): Earned Value 
Management Systems (EVMS)—Tracking Cost and Schedule Performance on 
Projects. Vienna, Va.: 2003. 

———. Cost Programmed Review of Fundamentals (CostPROF): Index Numbers 
and Inflation—How to Adjust for the General Rise in Prices over Time. 
Vienna, Va.: 2003. 

———. Cost Programmed Review of Fundamentals (CostPROF): Manufacturing 
Cost Estimating—Techniques for Estimating in a Manufacturing 
Environment. Vienna, Va.: 2003. 

———. Cost Programmed Review of Fundamentals (CostPROF): Parametric 
Estimating—From Data to Cost Estimating Relationships (CERs). Vienna, 
Va.: 2003. 

———. Cost Programmed Review of Fundamentals (CostPROF): Probability 
and Statistics—Mathematical Underpinnings of Cost Estimating. Vienna, 
Va.: 2003. 

———. Cost Programmed Review of Fundamentals (CostPROF): Regression 
Analysis—How to Develop and Assess a Cost Estimating Relationship 
(CER). Vienna, Va.: 2003. 

———. Cost Programmed Review of Fundamentals (CostPROF): Software Cost 
Estimating—Techniques for Estimating in a Software Environment. Vienna, 
Va.: 2003. 

———. Glossary of Terms. Vienna, Va.: n.d. 

Seaver, David P. Estimates for Enterprise Resource Planning Systems, 
Mount Laurel, N.J.: PRICE Systems, n.d. 

Slater, Derek. “Enterprise Resource Planning (ERP) Software Packages 
Promise Great Benefits, but Exactly How Much Will You Have to Pay to 
Get Them?” CIO Enterprise Magazine, January 15, 1998, p. 22. 

Smith, Alfred. Examination of Functional Correlation. Santa Barbara, 
Calif.: Tecolote Research, June 2007. 

———, and Shu-Ping Hu. Common Errors When Using Risk Simulation Tools. 
Santa Barbara, Calif.: Tecolote Research, June 2005. 

———. Cost Risk Analysis Made Simple. Santa Barbara, Calif.: Tecolote 
Research, September 2004. 

———. Impact of Correlating CER Risk Distributions on a Realistic Cost 
Model. Santa Barbara, Calif.: Tecolote Research, June 2003. 

Summerville, Jessica R. Improvement Curves: How to Account for Cost 
Improvement. Chantilly, Va.: Northrop Grumman, The Analytical Sciences 
Corporation, 2005. 

United States Army. Cost Guide Manual. Washington, D.C.: May 2002. 

———, Environmental Center. Methodology for Developing Environmental 
Quality Requirements for a Cost Analysis Requirements Description. 
Aberdeen, Md.: November 2001. 

USAF (United States Air Force). Cost Risk and Uncertainty Analysis 
Handbook. Goleta, Calif.: Tecolote Research, April 2007. 

———. Air Force Systems Command (AFSC) Cost Estimating Handbook. 
Reading, Mass.: 1987. 

———. Condensed GSAM Handbook Checklists. Hill Air Force Base, Utah: 
February 2003. 

———. Cost Analysis Guidance and Procedures, Air Force Instruction 65-
508. Washington, D.C.: Oct. 1, 1997. 

———. Cost Risk Analysis: Need-to-Know for Evaluations and Budgetary 
Decisions. Arlington, Va.: Government-Industry Space Council, June 
2008. 

———. Costing ERP in the Department of Defense. Washington, D.C.: June 
2008. 

———. CrossTalk. Hill Air Force Base, Utah: June–December 2002. 

———. Economic Analysis, Air Force Instruction 65-506. Washington, D.C.: 
Nov. 10, 2004. 

———. Forecasting the Future with EVMS. Arlington, Va.: n.d. 

———. Guidelines for Successful Acquisition and Management of Software 
Intensive Systems. Hill Air Force Base, Utah: February 2003. 

———. Myths of EVM. Arlington, Va.: May 11, 2005. 

———, Air Force Cost Analysis Agency. Cost Risk Analysis Handbook. 
Arlington, Va.: October 2006. 

———, Air Force Cost Analysis Agency. Decision Support for the Program 
Manager. Arlington, Va.: n.d. 

———, Air Force Cost Analysis Agency. Pre-Award IBRs. Arlington, Va.: 
Sept. 7, 2006. 

———, Air Force Cost Analysis Agency. Raytheon Missile System EVMS 
Validation Review. Arlington, Va.: Apr. 18, 2008. 

———, Air Force Cost Analysis Agency. Risk and Estimate Quality Issues. 
Arlington, Va.: n.d. 

VA (Department of Veterans Affairs). Individual VAMC Pricing Guides by VISN. 
Washington, D.C.: July 2007. 

Visitacion, Margo. Debunking Commonly Held EVM Myths. n.p.: n.p., 
September 2007. 

Wideman, Max R. “A Pragmatic Approach to Using Resource Loading, 
Production, and Learning Curves on Construction Projects.” Canadian 
Journal of Civil Engineering 21 (1994): 939–53. 

Wiley, David. Software Cost Estimating: Techniques for Estimating in a 
Software Development Environment. Chantilly, Va.: Northrop Grumman and 
Analytical Sciences Corp., 2005. 

Wilkens, Tammo T. Earned Value Clear and Simple. Bala Cynwyd, Pa.: 
Primavera Systems, Apr. 1, 1999. 

Wright, R., J. Comer, and Justin Morris. Enhancing Remediation Project 
Credibility with Defensible and Documented Cost Estimates. n.p.: n.d. 

Zubrow, Dave. Earned Value Management (EVM): Basic Concepts. 
Pittsburgh, Pa.: Carnegie Mellon Software Engineering Institute, 2002. 

———. Implementing Earned Value Management (EVM) to Manage Program Risk. 
Pittsburgh, Pa.: Carnegie Mellon Software Engineering Institute, 2002. 

[End of References] 

Image Sources: 

This section contains credit and copyright information for images and 
graphics in this product, as appropriate, when that information was not 
listed adjacent to the image or graphic. 

Front cover clockwise: 
PhotoDisc (Treasury); 
NASA (Space Shuttle); 
DOD (U.S. Navy ship); 
Corbis (Glen Canyon Dam); 
Digital Vision (Expressway); 
Eyewire (White House); 
GAO (Capitol). 

[End of section] 

Footnotes: 

[1] Federal Accounting Standards Advisory Board, Statement of Federal 
Financial Accounting Standards No. 4: Managerial Cost Accounting 
Standards and Concepts (Washington, D.C.: July 1995). 

[2] In the context of the Cost Guide, a program refers to all phases in 
a capital asset’s life cycle—that is, concept analysis, technology 
definition, requirements planning, acquisition, and operations and 
maintenance. 

[3] EVM is a project management tool that integrates the technical 
scope of work with schedule and cost elements for investment planning 
and control. It compares the value of work accomplished in a given 
period with the value of the work expected in that period. Differences 
in expectations are measured in both cost and schedule variances. The 
Office of Management and Budget (OMB) requires agencies to use EVM in 
their performance-based management systems for the parts of an 
investment in which development effort is required or system 
improvements are under way. 

[4] Office of Management and Budget, Preparation, Submission, and 
Execution of the Budget, Circular No. A-11 (Washington, D.C.: Executive 
Office of the President, June 2006); Management of Federal Information 
Resources, Circular No. A-130 Revised (Washington, D.C.: Executive 
Office of the President, Nov. 28, 2000); and Capital Programming Guide: 
Supplement to Circular A-11, Part 7, Preparation, Submission, and 
Execution of the Budget (Washington, D.C.: Executive Office of the 
President, June 2006). [hyperlink, 
http://www.whitehouse.gov/omb/circulars/index.html]. 

[5] GAO, 21st Century Challenges: Reexamining the Base of the Federal 
Government, [hyperlink, http://www.gao.gov/products/GAO-05-325SP] 
(Washington, D.C.: February 2005), p. 1. 

[6] Experienced and well-trained staff are crucial to developing high-
quality cost estimates. 

[7] There is at this time no standard work breakdown structure for 
major automated information systems; there is only a generic cost 
element structure that DOD requires for major automated information 
system acquisition decisions. 

[8] Major acquisition and investment means that a system or project 
requires special management attention because (1) of its importance to 
the mission or function of the agency, a component of the agency, or 
another organization; (2) it supports financial management and 
obligates more than $500,000 annually; (3) it has significant program 
or policy implications; (4) it has high executive visibility; (5) it 
has high development, operating, or maintenance costs; or (6) it is 
defined as major by the agency’s capital planning and investment 
control process. 

[9] For more information on these studies, see GAO, Best Practices: 
Successful Application to Weapon Acquisitions Requires Changes in DOD’s 
Environment, [hyperlink, http://www.gao.gov/products/GAO/NSIAD-98-56] 
(Washington, D.C.: Feb. 24, 1998), pp. 8 and 62. 

[10] See Comptroller General of the United States, Government Auditing 
Standards: January 2007 Revision, [hyperlink, 
http://www.gao.gov/products/GAO-07-162G] (Washington, D.C.: January 
2007), and GAO, Standards for Internal Control in the Federal 
Government: Exposure Draft, [hyperlink, 
http://www.gao.gov/products/GAO/AIMD-98-21.3.1] (Washington, D.C.: 
December 1997). 

[11] Further information on SCEA and PMI is at [hyperlink, 
http://www.sceaonline.org] and [hyperlink, http://www.pmi.org]. 

[12] Comptroller General of the United States, Theory and Practice of 
Cost Estimating for Major Acquisitions, B-163058 (Washington, D.C.: 
July 24, 1972), p. 1. 

[13] Comptroller General of the United States, Theory and Practice of 
Cost Estimating for Major Acquisitions, pp. 26–27. 

[14] Comptroller General of the United States, Theory and Practice of 
Cost Estimating for Major Acquisitions, pp. 28–32. 

[15] Comptroller General of the United States, Theory and Practice of 
Cost Estimating for Major Acquisitions, pp. 31–32. 

[16] Comptroller General of the United States, Theory and Practice of 
Cost Estimating for Major Acquisitions, p. 32. 

[17] The 12 steps outlined in table 2 are appropriate for estimating 
the costs of large, complex programs. We note, however, that 
planning trade-offs, initial rough-order estimations, and other less 
visible analyses can be accomplished in less time than with the process 
outlined in the table. 

[18] NASA, Cost Analysis Division, 2004 NASA Cost Estimating Handbook 
(Washington, D.C.: 2004), p. i. [hyperlink, 
http://www.nasa.gov/offices/pae/organization/cost_analysis_division.html]. 

[19] President George W. Bush, The President’s Management Agenda: 
Fiscal Year 2002 (Washington, D.C.: Executive Office of the President, 
OMB, 2002), p. 27. 

[20] OMB first issued the Capital Programming Guide as a supplement to 
the 1997 version of Circular A-11, Part 3. We refer to the 2006 
version. See under Circulars at OMB’s Web site, [hyperlink, 
http://www.whitehouse.gov/omb]. 

[21] For our purposes in this Cost Guide, contingency reserve 
represents funds held at or above the government program office for 
“unknown unknowns” that are outside a contractor’s control. In this 
context, contingency funding is added to an estimate to allow 
for items, conditions, or events for which the state, occurrence, or 
effect is uncertain and experience shows are likely to result in 
additional costs. Management reserve funds, in contrast, are for 
“known unknowns” that are tied to the contract’s scope and managed at 
the contractor level. Unlike contingency reserve, which is funding 
related, management reserve is budget related. The value of the 
contract includes these known unknowns in the budget base, and the 
contractor decides how much money to set aside. We recognize that other 
organizations may use the terms differently. 

[22] The auditor must ask the cost estimator if the technical 
assumptions for a new program have been tested for reasonableness. A 
program whose technical assumptions are not supported by historical 
data may be a high-risk program or its data may not be valid. Closing 
the gap between what a program wants to achieve and what has been 
achieved in the past is imperative for proper data validation. 

[23] For DOD programs, the Defense Contract Management Agency (DCMA) 
should have a copy of the EVM validation letter. 

[24] GAO, Best Practices: Better Acquisition Outcomes Are Possible If 
DOD Can Apply Lessons from F/A-22 Program, [hyperlink, 
http://www.gao.gov/products/GAO-03-645T] (Washington, D.C.: Apr. 11, 
2003), pp. 2–3. 

[25] An estimate that supports an independent estimate for a DOD 
program presumably entails no requirement that the independent cost 
estimating team keep program management informed. Instead, the program 
office and independent cost estimators would be expected to maintain 
communication and brief one another on their results, so as to 
understand any differences between the two estimates. 

[26] Since schedules are the foundation of the performance plan, having 
a scheduling staff member integrated on the team is critical for 
validating the plan’s reasonableness. A scheduler can determine the 
feasibility of the network schedule by analyzing its durations. 

[27] An independent cost estimate for a major defense acquisition 
program under 10 U.S.C. § 2434 must be prepared by an office or other 
entity (such as the Office of the Secretary of Defense Cost Analysis 
Improvement Group) that is not under the supervision, direction, or 
control of the military department, defense agency, or other component 
directly responsible for carrying out the program’s development or 
acquisition. If the decision authority has been delegated to an 
official of the military department, defense agency, or other DOD 
component, then the estimate must be prepared by an office or other 
entity not directly responsible for carrying out the development or 
acquisition. 

[28] Defense Acquisition Workforce Improvement Act, codified at 10 
U.S.C. ch. 87.

[29] DAU’s Web site is at [hyperlink, https://acc.dau.mil/evm]. 

[30] As used in this Cost Guide, the technical baseline is similar to 
DOD’s Cost Analysis Requirements Description (CARD) and NASA’s Cost 
Analysis Data Requirement (CADRE). 

[31] Gregory T. Haugan, Work Breakdown Structures for Projects, 
Programs, and Enterprises (Vienna, Va.: Management Concepts, 2008), p. 
38. 

[32] When following the product-oriented best practice, there should 
not be WBS elements for various functional activities like design 
engineering, logistics, risk, or quality, because these efforts should 
be embedded in each activity. 

[33] DOD, Department of Defense Handbook: Work Breakdown Structures for 
Defense Materiel Items, MIL-HDBK-881A (Washington, D.C.: July 30, 
2005). 

[34] Ronald C. Wilson, Department of Defense Automated Information 
Systems Economic Analysis Guide (Washington, D.C.: Department of 
Defense, May 1, 1995), att. B, pp. 39–75, Cost Element Structure 
Definitions. 

[35] Government furnished equipment can also be an assumption and is 
not always a ground rule. 

[36] The examples and paragraph are © 2003, Society of Cost Estimating 
and Analysis, “Data Collection and Normalization: How to Get the Data 
and Ready It for Analysis.” 

[37] We discuss these terms in chapters 18 and 19. 

[38] The coefficient of variation is a useful descriptive statistic for 
comparing the degree of variation from various data sets, even if the 
means are very different. 
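
To illustrate the coefficient of variation, the minimal Python sketch 
below compares the relative spread of two hypothetical cost data sets 
whose means differ widely; all values are assumed for illustration. 

# Illustrative only: compares the spread of two assumed historical cost
# data sets whose means differ, using the coefficient of variation
# (CV = standard deviation / mean).
import statistics

aircraft_costs = [110, 125, 98, 140, 132]      # assumed unit costs, $M
radar_costs = [11.2, 10.8, 12.5, 9.9, 11.6]    # assumed unit costs, $M

def coefficient_of_variation(data):
    return statistics.stdev(data) / statistics.mean(data)

print(f"Aircraft CV: {coefficient_of_variation(aircraft_costs):.2%}")
print(f"Radar CV:    {coefficient_of_variation(radar_costs):.2%}")
# Although the means differ by an order of magnitude, the CVs are
# directly comparable measures of relative variation.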

[39] Expert opinion, also known as engineering judgment, is commonly 
applied to fill gaps in a relatively detailed WBS when one or more 
experts are the only qualified source of information, particularly in 
matters of specific scientific technology. 

[40] See International Society of Parametric Analysts, Parametric 
Estimating Handbook©, 4th ed. (Vienna, Va.: ISPA/SCEA Joint Office, 
2008). [hyperlink, http://www.ispa-cost.org/newbook.htm]. The handbook 
and its appendixes detail, and give examples of, how to develop, test, 
and evaluate CERs. 

[41] b = log(slope)/log(2). 
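
The short Python sketch below applies this exponent to a hypothetical 
unit learning curve; the first-unit cost and the 90 percent slope are 
assumed values, not data from the guide. 

# Illustrative only: unit learning-curve exponent b = log(slope)/log(2),
# applied to an assumed first-unit cost and a 90 percent learning-curve slope.
import math

first_unit_cost = 100.0   # assumed cost of unit 1, $K
slope = 0.90              # assumed 90 percent learning curve

b = math.log(slope) / math.log(2)   # about -0.152 for a 90 percent slope

def unit_cost(x):
    # Unit-theory learning curve: cost of the x-th unit produced.
    return first_unit_cost * x ** b

for unit in (1, 2, 4, 8):
    print(f"Unit {unit}: {unit_cost(unit):.1f}")
# Each doubling of quantity (1 -> 2 -> 4 -> 8) reduces unit cost to
# 90 percent of its prior value.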

[42] Appendix XI has more detail on learning and learning curves. 

[43] Daniel D. Galorath, Software Projects on Time and within 
Budget—Galorath: The Power of Parametrics, PowerPoint presentation, El 
Segundo, California, n.d., p. 3. [hyperlink, 
http://www.galorath.com/wp/software-project-failure-costs-billions-
better-estimation-planning-can-help.php]. 

[44] Jim Johnson and others, “Collaboration: Development and 
Management—Collaborating on Project Success,” Software Magazine, 
Sponsored Supplement, February–March 2001, p. 2. 

[45] A source for more information on hardware cost estimating is the 
International Society of Parametric Analysts, Parametric Estimating 
Handbook, 4th ed. 

[46] Pat Phelan, Estimating the Time and Cost of ERP Implementation 
Projects Is a 10-Step Process (Stamford, Conn.: Gartner Inc., Apr. 
10, 2006), p. 3. 

[47] Cloud computing refers to information that resides in servers on 
the Internet and is downloaded temporarily onto various hardware 
devices such as desktop and notebook computers, entertainment centers, 
and handheld telephones. 

[48] Appendix IX contains a sample IT infrastructure and IT services 
WBS; it is a supplement to the automated information system 
configuration, customization, development, and maintenance WBS 
discussed in chapter 8. 

[49] The ranges should be documented during data collection and cost 
estimating (steps 6 and 7). 

[50] DOD has a tool that is intended to do cost sensitivity analyses, 
in addition to other tools, that can be downloaded for free at 
[hyperlink, http://www.hq.usace.army.mil/cemp/e/ec/econ/econ.htm]. 

[51] Many good references outline the cost risk and uncertainty 
modeling process. The Air Force Cost Analysis Agency’s recent Cost Risk 
and Uncertainty Analysis Handbook is one example (see Alfred Smith and 
others, Air Force Cost Analysis Agency (AFCAA) Cost Risk and 
Uncertainty Analysis Handbook (CRH), prepared for Stephen Tracy, Air 
Force Cost Analysis Agency (Goleta, Calif.: Tecolote Research, Inc., 
October 2006)). 

[52] See Stephen A. Book, “Do Not Sum ‘Most Likely’ Costs,” 
presentation to American Society of Military Comptrollers, Los Angeles, 
California, April 30, 2002. 

[53] 40 U.S.C. § 11312 (Supp. IV 2004). 

[54] © 2000 From Probability Methods for Cost Uncertainty Analysis by 
Paul Garvey. Reproduced by permission of Taylor and Francis Group, LLC, 
a division of Informa PLC. 

[55] The simulation quantifies the imperfectly understood risks in the 
program after any agreed-on mitigation has been incorporated. Unknown 
unknowns, risks that are not known when the analysis is done, may 
require periodic risk analysis leading to improvement of the estimate 
of uncertainty. 

[56] The original approach to this impact-only assessment was Floyd 
Maxwell’s of the Aerospace Corporation. Since he used it for many years 
at Aerospace, it was originally called the “Maxwell Matrix.” 

[57] Risks can be entered directly or they can be assigned as 
multiplication factors to specific cost elements or schedule 
activities. If this “risk driver” approach is used, the data collected, 
including probability of occurrence and impact (typically a 3-point 
estimate), will be on the risks themselves. Hence, the focus is on the 
risks, not on their impact on activities or cost line items. This focus 
on the risks makes it easy to understand the results and to focus on 
mitigating risks directly. 
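
A minimal Python sketch of this risk driver approach follows; the cost 
elements, probabilities of occurrence, and three-point impact factors 
are assumed solely for illustration. 

# Illustrative sketch of a risk driver simulation: each risk carries a
# probability of occurrence and a three-point (triangular) impact factor
# and is applied as a multiplier to the cost elements it affects.
# All values are assumed.
import random

cost_elements = {"airframe": 250.0, "software": 120.0}   # point estimates, $M

# (probability of occurrence, (low, most likely, high) factor, affected elements)
risk_drivers = [
    (0.40, (1.00, 1.10, 1.30), ["airframe"]),
    (0.25, (1.05, 1.20, 1.50), ["software"]),
]

def one_iteration():
    totals = dict(cost_elements)
    for prob, (low, mode, high), affected in risk_drivers:
        if random.random() < prob:                       # does the risk occur?
            factor = random.triangular(low, high, mode)  # sample its impact factor
            for element in affected:
                totals[element] *= factor
    return sum(totals.values())

results = sorted(one_iteration() for _ in range(10_000))
print(f"Point estimate: {sum(cost_elements.values()):.0f}")
print(f"80th percentile cost: {results[int(0.8 * len(results))]:.0f}")
# The data collected describe the risks themselves, not individual
# line-item distributions, which keeps the focus on mitigating the risks.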

[58] Latin hypercube sampling can also be used. This method partitions 
the “simulation draw area” into segments of equal probability, draws 
once from each segment, and converges to the “correct” answer with 
fewer iterations. 
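
The following minimal Python sketch illustrates the idea, assuming a 
single normally distributed cost element; the mean, standard deviation, 
and number of segments are illustrative assumptions. 

# Illustrative sketch of Latin hypercube sampling: the unit probability
# range is split into equal segments and one draw is taken from each,
# so the sample covers the whole distribution with fewer iterations
# than purely random draws. Values are assumed.
import random
from statistics import NormalDist

def latin_hypercube_draws(n, dist):
    # One stratified draw per equal-probability segment, in random order.
    probs = [(i + random.random()) / n for i in range(n)]
    random.shuffle(probs)  # break any ordering between variables
    return [dist.inv_cdf(min(max(p, 1e-9), 1 - 1e-9)) for p in probs]

cost_dist = NormalDist(mu=100.0, sigma=15.0)   # assumed cost-element distribution, $M
samples = latin_hypercube_draws(500, cost_dist)
print(f"Sample mean: {sum(samples) / len(samples):.1f}  (true mean 100.0)")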

[59] Cost and schedule Monte Carlo simulations tend to be performed 
separately by different specialists. Cost uncertainty models seldom 
address schedule risk issues. Performing a schedule risk analysis can 
more adequately address schedule risk issues. (More detail is in 
appendix X.) 

[60] For the OMB guidelines, see Guidelines and Discount Rates for 
Benefit-Cost Analysis of Federal Programs, Circular No. A-94 
(Washington, D.C.: Oct. 29, 1992), and Director, OMB, “2009 Discount 
Rates for OMB Circular No. A-94,” memorandum for the heads of 
departments and agencies, Executive Office of the President, OMB, 
Washington, D.C., Dec. 12, 2008. 

[61] The system acquisition phase includes both contract and in-house 
organization efforts. If in-house staffing is selected, the effort 
should be managed in the same way as contract work. This means that in-
house efforts are expected to meet the same cost, schedule, 
and technical performance goals that would be required for contract 
work to ensure the greatest probability of program success. 

[62] Federal Acquisition Regulation (FAR), 48 C.F.R. § 34.202 (added by 
Federal Acquisition Circular 2005-11, July 5, 2006). 

[63] DOD interprets the guidelines to require a network schedule. 

[64] We consulted the expert community on the issue of reallocation of 
budget for completed activities that underrun. The experts explained 
that while the term budget in EVM represents the plan, it is not the 
same thing as funding. Therefore, in EVM, a control account’s budget is 
fully earned once the effort is 100 percent complete, even if the 
actual cost of the effort was more or less than the budget. As a 
result, budget for past work, earned value, and actual costs need to 
stay together in an EVM system in order to maintain reporting 
integrity. However, if a control account’s or work package’s actual 
cost is underrunning its planned budget, this may suggest that the 
budgets for future work packages are overstated as well. If 
that is the case, then budget for future work could be recalled into 
management reserve to be available for critical path activities. 
According to the EVM guidelines, a contractor’s EVM system should allow 
for that. 
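
A minimal Python sketch of the point about earned budget follows; the 
control accounts, budgets, and actual costs are assumed values. 

# Illustrative sketch: a completed control account earns its full budget
# (earned value equals budgeted cost) even when actual cost differs, so
# the cost variance is visible while budget, earned value, and actual
# cost stay together in the EVM data. Values are assumed.

control_accounts = [
    # (name, budget $K, fraction complete, actual cost $K)
    ("completed design task", 500.0, 1.00, 430.0),   # underran its budget
    ("fabrication",           800.0, 0.60, 560.0),
]

for name, budget, fraction_complete, actual in control_accounts:
    earned = budget * fraction_complete   # 100 percent complete earns the full budget
    cost_variance = earned - actual       # positive = underrun, negative = overrun
    print(f"{name}: earned {earned:.0f}, actual {actual:.0f}, "
          f"cost variance {cost_variance:+.0f}")
# An underrun on completed work is not re-earned elsewhere; only budget for
# future work could be recalled into management reserve if it appears overstated.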

[65] 41 U.S.C. § 263. A similar requirement in 10 U.S.C. § 2220 applied 
to the Department of Defense but was later amended to remove the 90 
percent measure. DOD has its own major program performance oversight 
requirements in chapters 144 (Major Defense Acquisition Programs) and 
144A (Major Automated Information System Programs) of title 10 of the 
U.S. Code, including the Nunn-McCurdy cost reporting process at 10 
U.S.C. § 2433. Regarding information technology programs, 40 U.S.C. § 
11317 (formerly 40 U.S.C. § 1427) requires agencies to identify in 
their strategic information resources management plans any major 
information technology acquisition program, or phase or increment of 
that program, that has significantly deviated from cost, performance, 
or schedule goals established for the program. 

[66] OMB, Preparation, Submission, and Execution of the Budget, 
Circular A-11 (Washington, D.C.: Executive Office of the President, 
June 2006), part 7, Planning, Budgeting, Acquisition, and Management of 
Capital Assets, sec. 300. [hyperlink, 
http://www.whitehouse.gov/omb/circulars/index.html]. 

[67] See, for example, the 32 industry guidelines in ANSI/EIA-748 
(American National Standards Institute (ANSI)/Electronic Industries 
Alliance (EIA) Standard, Earned Value Management Systems, ANSI/EIA-748-
B-2007, approved July 9, 2007, at [hyperlink, 
http://www.acq.osd.mil/pm/historical/Timeline/EV%20Timeline.htm]), and 
NDIA, National Defense Industrial Association (NDIA) Program Management 
Systems Committee (PMSC) ANSI/EIA-748-A Standard for Earned Value 
Management Systems Intent Guide (Arlington, Va.: January 2005). 

[68] See OMB, Capital Programming Guide, II.2.4, “Establishing an 
Earned Value Management System.” The OMB requirements are also 
reflected in the FAR at 48 C.F.R. subpart 34.2. 

[69] See, for example, DAU’s fundamental courses at [hyperlink, 
http://www.dau.mil/schedules/schedule.asp] and PMI’s literature at 
[hyperlink, 
http://www.pmibookstore.org/PMIBookStore/productDetails.aspx?itemID=372&
varID=1]. 

[70] This step demonstrates the integration of EVM and risk management 
processes. 

[71] Since the activity durations in the schedule are estimates, actual 
durations may differ from them, and so the actual critical path may 
differ from the one computed by the scheduling software. This is one 
reason that a schedule 
risk analysis provides information on the schedule “criticality,” the 
probability that schedule activities will be on the final critical 
path. 
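
The minimal Python sketch below estimates criticality for a 
hypothetical two-path network; the paths and their triangular duration 
ranges are assumed for illustration. 

# Illustrative sketch: with uncertain durations, simulation can estimate
# each path's "criticality," the probability that it ends up on the final
# critical path. The two-path network and duration ranges are assumed.
import random

ITERATIONS = 10_000
path_a_critical = 0

for _ in range(ITERATIONS):
    # each duration drawn from triangular(low, high, most likely), in days
    path_a = random.triangular(20, 45, 30)                              # design path
    path_b = random.triangular(25, 40, 32) + random.triangular(2, 8, 4)  # build + test
    if path_a >= path_b:
        path_a_critical += 1

print(f"Path A criticality: {path_a_critical / ITERATIONS:.0%}")
print(f"Path B criticality: {1 - path_a_critical / ITERATIONS:.0%}")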

[72] According to OMB, if a preaward IBR is required, it must be 
included in the proposal evaluation process during the best value 
trade-off analysis. If a preaward IBR was not contemplated at the time 
of the solicitation, but the source selection team determines that the 
proposals received do not clearly demonstrate that the cost, schedule, 
and performance goals have a high probability of being met, an IBR may 
be conducted before the award is made. 

[73] Federal Acquisition Regulation section 32.102 and subparts 32.5 
and 32.10. 

[74] More information on EVM system acceptance is in NDIA, Program 
Management Systems Committee, “NDIA PMSC ANSI/EIA 748 Earned Value 
Management System Acceptance Guide,” draft, working release for user 
comment (Arlington, Va.: November 2006). 

[75] This criterion does not apply when the EVM system owner conducts a 
self-evaluation review. 

[76] The source of this statement is © 2003, Society of Cost Estimating 
and Analysis, “Earned Value Management Systems (EVMS) Tracking Cost and 
Schedule Performance on Projects,” p. 7. 

[77] See DOD, The Program Manager’s Guide to the Integrated Baseline 
Review Process (Washington, D.C.: Office of the Secretary of Defense 
(AT&L), April 2003). 

[78] A waterfall chart is made up of floating columns that typically 
show how an initial value increases and decreases by a series 
of intermediate values leading to a final value; an invisible column 
keeps the increases and decreases linked to the heights of the 
previous columns. Waterfall charts can be created by applying widely 
available add-in tools to Microsoft Excel. 
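
The minimal Python sketch below computes the height of the invisible 
supporting column for each bar of a hypothetical waterfall chart; the 
cost changes are assumed values. 

# Illustrative sketch: a waterfall chart floats each column on an
# "invisible" base equal to the running total before (or after) the
# change, so increases and decreases stay linked to the height of the
# previous column. The changes below are assumed.

steps = [("initial estimate", 100.0), ("scope growth", 25.0),
         ("efficiency savings", -10.0), ("inflation adjustment", 8.0)]

running_total = 0.0
for label, change in steps:
    # for a decrease, the bar floats on the lower of the two totals
    base = running_total if change >= 0 else running_total + change
    running_total += change
    print(f"{label:22s} base={base:6.1f} change={change:+6.1f} "
          f"total={running_total:6.1f}")
# The printed "base" values are the heights of the invisible supporting columns.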

[79] David S. Christensen, Determining an Accurate Estimate at 
Completion (Cedar City: Southern Utah University, 1993), p. 7. 

[80] NDIA, National Defense Industrial Association (NDIA) Program 
Management Systems Committee (PMSC) Surveillance Guide (Arlington, Va.: 
October 2004). 

[81] This action is not to be confused with reprogramming agency 
appropriations. In that context, reprogramming is a shifting of funds 
within an appropriation or fund account to use them for purposes other 
than those contemplated at the time of the appropriation. (See GAO, A 
Glossary of Terms Used in the Federal Budget Process, GAO-05-734SP 
(Washington, D.C.: Sept. 1, 2005), p. 85.) The overtarget baseline 
action should also not be confused with replanning—that is, the 
replanning of actions for remaining work scope, a normal program 
control process accomplished within the scope, schedule, and cost 
objectives of the program. 

[82] OMB, Capital Programming Guide: Supplement to Circular A-11, Part 
7, Preparation, Submission, and Execution of the Budget. 

[83] See Comptroller General of the United States, How to Improve the 
Selected Acquisition Reporting System: Department of Defense, PSAD-75-
63 (Washington, D.C.: GAO, Mar. 27, 1975), p. 2.

[84] See 10 U.S.C.S. § 2433 (2002 & Supp. 2007).

[85] OMB, Capital Programming Guide: Supplement to Circular A-11, Part 
7, Preparation, Submission, and Execution of the Budget (Washington, 
D.C.: Executive Office of the President, June 2006). [hyperlink, 
http://www.whitehouse.gov/omb/circulars/index.html]. 

[86] See Federal Acquisition Circular 2005-11, July 5, 2006, Item 
I—Earned Value Management System (EVMS) (FAR Case 2004-019). 

[87] OMB, Capital Programming Guide: Supplement to Circular A-11, Part 
7. 

[88] EVM systems guidelines in American National Standards Institute 
(ANSI)/Electronic Industries Alliance (EIA) Standard 748 were developed 
and promulgated through ANSI by the National Defense Industrial 
Association’s (NDIA) Program Management Systems Committee. 

[89] NDIA PMSC EVM Systems Intent Guide, © 2004–2005 National Defense 
Industrial Association (NDIA) Program Management Systems Committee 
(PMSC), ANSI/EIA-748-A Standard for Earned Value Management Systems 
Intent Guide (January 2005 edition). 

[90] NDIA System Acceptance Guide, © 2004–2005 National Defense 
Industrial Association (NDIA) Program Management Systems Committee 
(PMSC), NDIA PMSC Earned Value Management System Acceptance Guide 
(November 2006 released working draft). 

[91] NDIA System Application Guide, © 2007 National Defense Industrial 
Association (NDIA) Program Management Systems Committee (PMSC), Earned 
Value Management Systems Application Guide (March 2007 edition). 

[92] DOD, Department of Defense Handbook: Work Breakdown Structures for 
Defense Materiel Items, MIL-HDBK-881A (Washington, D.C.: OUSD (AT&L), 
July 30, 2005). 

[93] More information is available on CSI’s Web site, [hyperlink, 
http://www.csinet.org/s_csi/index.asp]. The Construction Specifications 
Institute “maintains and advances the standardization of construction 
language as it pertains to building specifications.” It also provides 
structured guidelines for specification writing in its Project Resource 
Manual: CSI Manual of Practice, 5th ed. (New York: McGraw-Hill, 2004). 

[94] More information is available from the OmniClass Web site at 
[hyperlink, http://www.omniclass.org]. The OmniClass™ material included 
here is used with permission. Copyright © 2006 the Secretariat for the 
OCCS Development Committee. All Rights Reserved. www.omniclass.org, 
Edition 1.0, 2006-03-28 Release. 

[95] OmniClass™, Edition 1.0, May 2, 2006, p. 22-ii. 

[96] T. P. Wright, “Factors Affecting the Cost of Airplanes,” Journal 
of Aeronautical Science 3:4 (1936): 122–28; reprinted in International 
Library of Critical Writings in Economics 128:3 (2001): 75–81. 

[97] The IBR approach is scalable—e.g., the NAVAIR case study is of a 
major system acquisition, but IBRs on small programs that do not 
involve a major acquisition may not require the time and number of 
people cited in this case study to achieve IBR objectives. 

[98] “Sound basis of estimate” should be understood in the context of 
the estimate (or contract target value) that resulted after the 
proposal and negotiation process was completed—the contract value is 
the new “basis of estimate.” 

[End of section]