This is the accessible text file for GAO report number GAO-05-796SP 
entitled 'Economic Performance: Highlights of a Workshop on Economic 
Performance Measures' which was released on July 18, 2005. 

This text file was formatted by the U.S. Government Accountability 
Office (GAO) to be accessible to users with visual impairments, as part 
of a longer term project to improve GAO products' accessibility. Every 
attempt has been made to maintain the structural and data integrity of 
the original printed product. Accessibility features, such as text 
descriptions of tables, consecutively numbered footnotes placed at the 
end of the file, and the text of agency comment letters, are provided 
but may not exactly duplicate the presentation or format of the printed 
version. The portable document format (PDF) file is an exact electronic 
replica of the printed version. We welcome your feedback. Please E-mail 
your comments regarding the contents or accessibility features of this 
document to Webmaster@gao.gov. 

This is a work of the U.S. government and is not subject to copyright 
protection in the United States. It may be reproduced and distributed 
in its entirety without further permission from GAO. Because this work 
may contain copyrighted images or other material, permission from the 
copyright holder may be necessary if you wish to reproduce this 
material separately. 

GAO Workshop: 

July 2005: 

Economic Performance: 

Highlights of a Workshop on Economic Performance Measures: 

GAO-05-796SP: 

GAO Highlights: 

Highlights of GAO-05-796SP: 

Why GAO Convened This Workshop: 

Improving the economy and efficiency of federal programs has long been 
a key objective of the Government Accountability Office (GAO). To this 
end, GAO held a workshop on December 17, 2004, to discuss the use of 
economic analysis, such as benefit-cost or cost-effectiveness analysis, 
for helping to measure the performance of federal programs. The 
workshop's
purpose was to: 

* discuss the present state of economic performance measures and 
identify gaps in their application and the barriers and analytical 
issues that limit their use in helping assess the performance of 
federal programs and

* identify opportunities for the federal government and professional 
and academic institutions to improve (1) the use of economic 
performance measures for evaluating federal programs and (2) the 
general economic principles and guidance on which economic performance 
analysis is based. 

What Participants Said: 

Workshop participants identified a number of issues regarding the use 
of economic performance analysis--benefit-cost or cost-effectiveness 
analysis--in evaluating federal program performance. They generally said
the following: 

* The quality of the economic performance assessment of federal 
programs has improved but is still highly variable and not sufficient 
to adequately inform decision makers. 

* The gaps in applying economic performance measures are that they are 
not widely used, mechanisms for revisiting a regulation or program are 
lacking, retrospective analyses are often not done, and homeland 
security regulations present additional challenges and typically do not 
include economic analysis. 

* Barriers include agencies’ lack of resources and only limited demand 
from decision makers for benefit-cost analysis. In addition, some 
participants stated that organizational barriers called stovepipes or 
silos hinder communication. 

* Some analytical issues that affect the application of economic 
performance measures are limited guidance on assessing unquantifiable 
benefits, equity, and distributional effects of federal actions; lack 
of agreement on some values for key assumptions; and lack of guidance 
on tools that do not monetize outcomes, such as multiobjective 
analysis. 

* Opportunities to expand the use of measures include evaluation of 
existing programs retrospectively and application to homeland security 
issues. 

* Ways to improve the general economic principles and guidance that 
economic performance analysis is based upon include developing a 
minimum set of principles and abbreviated guidelines for economic 
performance analysis, developing one-page summaries and scorecards of 
analysis results, standardizing some key values for assumptions, and 
creating an independent and flexible organization to provide guidance 
and develop standards. 

www.gao.gov/cgi-bin/getrpt?GAO-05-796SP. 

To view the full product, including the scope and methodology, click on 
the link above. For more information, contact Nancy R. Kingsbury at 
(202) 512-2700 or kingsburyn@gao.gov. 

[End of section] 

Contents: 

Letter: 

The Workshop's Objectives: 

Summary of Workshop Discussion: 

Participants' Comments: 

Workshop Discussion: 

Background: 

The State of Economic Performance Evaluation, Including Gaps, Barriers, 
and Analytical Issues: 

The Extension of Economic Performance Measures for Evaluating Federal 
Programs: 

Improving General Economic Principles and Guidance: 

Appendixes: 

Appendix I: Economic Performance Workshop Participants: December 17, 
2004: 

Appendix II: Economic Performance Assessment: Uses, Principles, and 
Opportunities: 

Tables: 

Table 1: The Use of Economic Performance Measures for Prospective 
Assessment of Federal Programs: 

Table 2: The Use of Economic Performance Measures for Retrospective 
Assessment of Federal Programs: 

Table 3: Summary of Three Programs' Net Benefits: 

Table 4: Evaluating Economic Performance Assessments with a Scorecard: 

Table 5: The Hierarchy of Generally Accepted Accounting Principles: 

Table 6: Consistent Reporting Format: GAO's WIC Assessment: 

Table 7: Consistent Reporting Format: GAO's USDA Cotton Program 
Assessment: 

Table 8: Consistent Reporting Format: GAO's OSHA Scaffold Assessment: 

Table 9: A Scorecard for Evaluating Economic Performance Assessments: 

Table 10: Prospective and Retrospective Assessments of OSHA's Scaffold 
Rule Compared: 

Abbreviations: 

AICPA: American Institute of Certified Public Accountants: 

APB: Accounting Principles Board: 

CDC: Centers for Disease Control and Prevention: 

DOE: Department of Energy: 

DOL: Department of Labor: 

DOT: Department of Transportation: 

EPA: Environmental Protection Agency: 

FASAB: Federal Accounting Standards Advisory Board: 

FASB: Financial Accounting Standards Board: 

GASB: Governmental Accounting Standards Board: 

OMB: Office of Management and Budget: 

OSHA: Occupational Safety and Health Administration: 

PART: Program Assessment Rating Tool: 

USDA: U.S. Department of Agriculture: 

WIC: Special Supplemental Nutrition Program for Women, Infants, and 
Children: 

Letter July 2005: 

Improving the economy and efficiency of federal programs has long been 
a key objective of the U.S. Government Accountability Office (GAO). A 
focus on auditing the performance of government programs has 
complemented the agency's focus on accounting for decades. In a recent 
report, GAO highlighted the importance of a fundamental review of 
federal programs and policies in addressing the nation's long-term 
fiscal imbalance and in ensuring that the federal government's programs 
and priorities meet current and future challenges.[Footnote 1] In this 
regard, measuring the economic performance of federal programs, such as 
the extent to which program benefits exceed costs (net benefits) or are 
achieved at least cost (cost effectiveness), could be a useful way to 
assess, in conjunction with other measures, the extent to which federal 
programs are meeting the nation's priorities. 

The economic performance of some federal actions is presently assessed 
prospectively, through an Office of Management and Budget (OMB) review 
of proposed capital investments and regulations. However, few federal 
actions are monitored for their economic performance retrospectively. 
In addition, reviews by GAO have found that economic assessments that 
analyze regulations prospectively are often incomplete and inconsistent 
with general economic principles.[Footnote 2] Moreover, the assessments 
are often not useful for comparisons across the government, because 
they are often based on different assumptions for the same key economic 
variables. Furthermore, new areas of federal action, such as homeland 
security, present additional challenges because of the difficulty of 
assessing uncertainty and risk, such as those associated with terrorist 
activities. 

The Government Performance and Results Act of 1993 (Results Act) 
requires agencies, among other things, to establish budgetary 
performance goals and to identify measures to determine whether their 
programs are meeting those goals. Although economic performance 
measures are consistent with the act, they are generally not used. For 
example, GAO found that few measures under the act clearly linked 
program costs to the achievement of program goals or 
objectives.[Footnote 3] In addition, although federal agencies use 
OMB's Program Assessment Rating Tool (PART) every year to assess the 
performance of their programs, almost 50 percent of the 234 programs 
assessed for fiscal year 2004 received a rating of "results not 
demonstrated." OMB had determined that program performance information, 
performance measures, or both were insufficient or inadequate.[Footnote 
4] In particular, OMB has indicated a preference for the use of more 
economic performance measures, including net benefits, in the PART 
process. 

Accepted methods for estimating economic performance measures are based 
on general economic principles and guidelines derived from academic 
textbooks and research results presented in journal articles. Several 
federal agencies, such as the U.S. Department of Transportation and 
Environmental Protection Agency, have incorporated the principles and 
guidelines into guidance for agency economists to use in assessing 
economic performance. Unlike in some other professions, such as 
accounting, these principles and guidelines were not identified or 
created by a standard-setting authority representing the entire 
profession. 

The Workshop's Objectives: 

GAO convened a workshop on December 17, 2004, to discuss the use of 
economic analysis, such as cost benefit or cost effectiveness, for 
helping to measure the performance of federal programs. The workshop's 
objectives were to: 

* discuss the present state of economic performance measures and 
identify the gaps in their application and the barriers and analytical 
issues that limit their use in helping assess the performance of 
federal programs and 

* identify opportunities for the federal government and professional 
and academic institutions to improve (1) the use of economic 
performance measures for evaluating federal programs and (2) the 
general economic principles and guidance on which economic performance 
analysis is based. 

A summary of the workshop discussion is presented in the next section. 
The participants are listed in appendix I. A discussion paper prepared 
for the workshop by a number of GAO staff appears in appendix II. 

We selected workshop participants from government and academia based on 
their professional publications about economic performance measures, 
their role in developing economic guidance, and the extent to which 
they have used economic performance measures in their agencies. In 
addition, four participants were asked to make presentations to the 
group on areas relating to the workshop objectives, including the use 
of economic performance measures for oversight in the executive branch, 
limitations of economic performance measures, the quality of agencies' 
economic regulatory assessments, and the use of standard-setting 
authorities to develop principles and standards of guidance for the 
accounting profession. GAO provided the participants with a discussion 
paper for background information before the workshop began. 

After the workshop was conducted, we used content analysis to 
systematically analyze a transcript of the workshop discussion and to 
identify participants' views on key questions, as well as the key 
themes that developed from the discussion. As agreed by the 
participants, the purpose of the discussion was to engage in an open, 
not-for-attribution dialogue. As a result, this report is a synthesis 
of the key themes from the workshop, not a verbatim presentation of the 
participants' statements. In addition, it does not necessarily 
represent the views of any individual participant. We did not verify 
the participants' statements, and the views expressed do not 
necessarily represent the views of GAO. 

We would like to thank the workshop's participants for taking the time 
to share their knowledge and provide their insight and perspective in 
an effort to improve government oversight, accountability, and 
performance. 

Summary of Workshop Discussion: 

Although the workshop participants said that they recognized that the 
quality of federal agencies' economic assessments of regulations and 
programs has generally improved over the years, they said that they 
believed that the assessments' quality is still highly variable. 
Assessments vary in how they are performed and in the measures they 
use. The participants also said that many economic assessments 
conducted to support agency decisions are insufficient to inform 
decision makers whether proposed regulations and programs are achieving 
goals cost effectively or generating net benefits for the nation. 

Sidebar: 

* The quality of economic performance assessments has improved but is 
still generally not sufficient. 

[End of sidebar] 

Participants identified gaps in the application of economic performance 
measures. First, economic performance measures are often not widely 
used for programs in the federal government. Second, while some 
agencies have done retrospective economic performance assessments, 
participants said that in general federal agencies often do not assess 
the performance of regulations or existing programs retrospectively, 
even though this information could be useful in managing programs. 
Third, once a program has been enacted, mechanisms often do not exist 
for determining whether actual performance is similar to predicted 
effectiveness. Fourth, regulations related to homeland security present 
additional challenges because of the difficulties associated with 
quantifying the probability of a terrorist attack and the benefits that 
might be generated as a result of proposals related to them. In 
addition, proposed regulations involving these issues generally do not 
measure their expected economic performance. 

Sidebar: 

* Economic performance measures are not widely used. 

* Performance of regulations or programs is often not assessed 
retrospectively. 

* Mechanisms for revisiting regulations or programs are lacking. 

* Homeland security regulations present challenges and typically do not 
include economic analysis. 

[End of sidebar] 

Some participants stated that economic performance measures are not 
widely used because of several barriers. They cited as an example a 
lack of demand from many decision makers to know the full costs of 
federal programs. In addition, participants pointed out that agencies 
often lack resources in terms of both funds and time for assessing the 
economic performance of programs already in place. Organizational 
stovepipes or silos that limit communication--between federal agencies 
and between the agencies and the economics profession--about how to 
conduct comprehensive and useful economic assessments were identified 
as another barrier. 

Sidebar: 

* Limited demand for benefit-cost analysis from decision makers. 

* Little provision of resources to agencies to assess existing 
programs. 

* Existence of organizational "stovepipes."

[End of sidebar] 

The participants generally agreed that several analytical issues should 
be resolved to improve the consistency and credibility of economic 
performance measures. For example, they cited insufficient guidance for 
agencies to appropriately include benefits or costs of federal actions 
that cannot be quantified or monetized or the effects of actions on 
different income, racial, or other population groups. In addition, lack 
of agreement and guidance regarding the most appropriate set of values 
to use for key economic assumptions, such as the benefit associated 
with a reduced risk of mortality, hinders the consistent application of 
economic performance measures across government agencies. Participants 
also cited lack of guidance for tools such as those used for 
multiobjective analysis of such things as the benefits of agency 
outcomes without putting the benefits into monetary terms. 

Sidebar: 

* Limited guidance on assessing unquantifiable benefits, equity, and
distributional effects. 

* Lack of agreement on some key values. 

* Lack of guidance on tools that do not monetize outcomes, such as 
multiobjective analysis. 

[End of sidebar] 

There was general agreement that the use of economic performance 
measures should be expanded, especially for retrospective analysis of 
existing programs. Besides providing information on the performance of 
existing programs, retrospective analysis could provide lessons on how 
to improve prospective analysis of proposed programs. Along these 
lines, analyzing economic performance could be one way to evaluate 
agencies' performance through budget processes. Some participants also 
indicated that economic performance measures could be used to evaluate 
the risk and uncertainty associated with homeland security programs and 
regulations. 

Sidebar: 

* Expand use of analysis, particularly for retrospective evaluation of 
existing programs. 

* Use economic performance measures to inform federal budgets and to 
assess the risks and benefits of homeland security programs. 

[End of sidebar] 

The participants identified opportunities for the federal government 
and professional and academic institutions to improve economic 
principles and guidance that could ultimately enhance the use of 
economic performance measures for evaluating federal regulations and 
programs. For example, it was suggested that a minimum set of general 
economic principles and abbreviated guidelines might help agencies 
overcome barriers in assessing the economic performance of their 
regulations and programs. In addition, the analytical challenges of 
quantifying the risk and uncertainties associated with homeland 
security issues require more extensive guidance to support the 
development of regulations. Scorecards that rate the quality of 
economic assessments and one-page summaries of key results, as well as 
expert review of the agencies' economic assessments, were cited, by 
some, as tools for improving quality and credibility. Some participants 
indicated that standardizing some key values for economic assumptions 
could help improve quality throughout the government. 

Sidebar: 

* Develop a minimum set of principles and abbreviated guidelines. 

* Develop guidance for dealing with homeland security issues. 

* Develop one-page summaries and scorecards of economic performance 
analysis; use expert review to provide procedures and strategies. 

* Standardize some key values. 

* Develop an independent and flexible organization to provide guidance 
and develop standards. 

[End of sidebar] 

The participants identified a number of existing organizations that 
might more formally develop and improve principles and guidance for 
economic performance analysis. For example, several participants 
expressed interest in the accounting profession's use of standard-setting 
authorities to develop comprehensive principles, standards, and 
guidance to ensure the quality, consistency, and credibility of 
accounting and financial reporting. Some participants indicated, 
however, that professional economics institutions are not designed to 
govern or monitor the application of economics. 

The participants identified some other organizational formats that 
could be used, such as those that the National Academies and National 
Bureau of Economic Research use. For example, the National Academies 
convene expert panels, workshops, and roundtables to examine science 
and technology issues. These formats might help resolve analytical 
issues and improve principles and guidance. Alternatively, it was 
generally agreed that creating a new organization, if it were 
organizationally independent and flexible enough, might help address a 
variety of significant issues. 

Participants' Comments: 

We provided a draft of this report to the workshop participants for 
their review and comment. Seven of fourteen participants external to 
GAO chose to provide comments. They generally agreed with the summary 
of the workshop discussion and stated that it was fair and complete. In 
addition, they provided clarifying points and technical comments, which 
we incorporated as appropriate. 

If you would like additional information on the workshop or this 
document, please call (202) 512-2700. The workshop was planned and this 
report was prepared under the direction of Scott Farrow, Chief 
Economist. Other major contributors were Carol Bray, Alice Feldesman, 
Tim Guinane, Luanne Moy, and Penny Pickett. 

Signed by: 

Nancy R. Kingsbury, Managing Director: 
Applied Research and Methods: 

Signed by: 

Scott Farrow, Chief Economist: 
Applied Research and Methods: 

[End of section]

Workshop Discussion: 

Background: 

Economists typically use economic assessments to measure the 
performance of federal regulations and programs. The assessments 
estimate the net benefits or cost effectiveness of federal actions on a 
nationwide basis.[Footnote 5] Economic performance assessment differs 
from a straightforward financial appraisal in that all gains (benefits) 
and losses (costs) that accrue to society in general (not just to the 
government) as a result of the program are to be counted. 
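
In the standard textbook formulation (offered here for reference; it 
was not itself developed at the workshop), net benefits are the 
discounted sum of a program's benefits less its costs over its time 
horizon, 

\[ \text{Net benefits} = \sum_{t=0}^{T} \frac{B_t - C_t}{(1 + r)^t}, \]

where B_t and C_t are the benefits and costs accruing to society in 
year t and r is the real discount rate. Cost effectiveness, by 
contrast, asks whether a given level of benefits is achieved at the 
lowest discounted cost. 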

Although other professions, such as accounting, rely on standard-setting 
authorities to develop principles and guidance, economic 
performance measures are based on principles and guidance that have 
been developed in the economic literature over more than 75 years. This 
literature includes academic textbooks and research presented in 
journal articles as well as federal agency guidance. The agency 
guidance includes, among other things, Office of Management and Budget 
(OMB) Circulars A-4 and A-11, Part 7, Section 300, and A-94.[Footnote 
6] 

Circular A-4 is designed to assist analysts in the regulatory agencies 
in estimating the benefits and costs of proposed regulatory actions. 
Circular A-11, Part 7, Section 300, establishes policy for the 
planning, budgeting, acquisition, and management of federal capital 
assets. It provides guidance for budgetary analysis of alternatives for 
making federal capital investments. Circular A-94 provides additional 
guidelines and appropriate discount rates for benefit-cost and 
cost-effectiveness analysis. While OMB's guidance for economic performance 
is useful both for producing economic assessments and auditing 
performance, it is distinctly less standardized than accounting 
guidance provided to accountants and auditors. 
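
To illustrate why the discount rate guidance matters, the following 
minimal sketch--in Python, with purely hypothetical program figures; 
the 3 and 7 percent rates are the real rates commonly cited in OMB 
guidance--shows that the choice of rate can change, and even reverse, 
the sign of estimated net benefits: 

# Minimal sketch: sensitivity of present-value net benefits to the
# discount rate. All program figures below are hypothetical.

def pv_net_benefits(benefits, costs, rate):
    """Discounted sum of (benefit - cost) for years t = 0, 1, 2, ..."""
    return sum((b - c) / (1.0 + rate) ** t
               for t, (b, c) in enumerate(zip(benefits, costs)))

benefits = [0] + [50] * 10   # millions: nothing in year 0, then $50M/year
costs = [150] + [30] * 10    # millions: $150M up front, then $30M/year

for rate in (0.03, 0.07):    # real rates commonly cited in OMB guidance
    npv = pv_net_benefits(benefits, costs, rate)
    print(f"Net benefits at {rate:.0%}: {npv:+,.1f} million")

In this contrived example the program shows positive net benefits at 3 
percent and negative net benefits at 7 percent, which is one reason 
agreement on the rate matters for comparability across assessments. 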

In addition, in some instances agencies use a multiobjective method of 
analysis to assess programs. In this type of analysis, program impacts 
are not put into monetary terms. Instead, identified impacts are given 
a weighted ranking that allows decision makers to evaluate federal 
actions on the basis of their place in the ranked scale. The role of 
this analysis is somewhat uncertain in the context of economic 
performance measurement. 

The State of Economic Performance Evaluation, Including Gaps, Barriers, 
and Analytical Issues: 

The workshop participants generally agreed that while economic 
performance analyses that assess government programs have improved 
somewhat, their quality is highly variable. In addition, the analyses 
often miss key information needed to inform decision makers about 
whether the government actions that are proposed can be expected to be 
cost effective or generate positive net benefits. 

One participant said that a comparison of present economic assessments, 
using estimates of the program's cost per life saved, with assessments 
completed in the early 1980s had found a discernible improvement in 
analysis. Another participant agreed on the signs of increased 
sophistication in the types of measures used, such as the discount rate 
or the approach to discounting. In addition, there has been some 
diffusion of knowledge within the agencies from the economics 
literature about using a statistical value of life. 

Despite these improvements, however, participants said that the quality 
of analysis is still highly variable--some analyses are quite good, 
others not. One participant said that there is incredible variability 
across agencies in how economic performance assessments are 
performed--whether an agency follows a fairly standard cost-benefit 
analytic framework or something else. 

Another participant pointed out that many economic performance analyses 
are still not sufficient because they miss key information. For 
example, one participant said that the majority of economic performance 
evaluations reviewed did not discuss net benefits or analysis of 
alternatives to proposed regulatory options. In addition, only a fairly 
small number of analyses dealt well with the uncertainty associated 
with estimated benefits and costs; only a few provided both point 
estimates and a range of total costs or benefits. 

These gaps limit the evaluations' usefulness to decision makers. 
Without more information about the uncertainty associated with the 
estimates, the assessments may not be sufficient to inform decision 
makers about whether proposed regulations and programs would be likely 
to achieve their goals cost effectively or generate positive net 
benefits for the nation. 

Gaps in the Application of Economic Performance Assessments: 

The participants identified some gaps in the application of economic 
performance analysis: (1) economic performance measures are generally 
not widely used for programs in the federal government, (2) 
retrospective analyses of programs are often not being done, (3) 
mechanisms for revisiting a program or regulation are often lacking, 
and (4) regulations involving homeland security issues present 
additional challenges and often do not include an economic assessment 
of the benefits and costs of proposed regulations and 
programs.[Footnote 7]

Sidebar: 

Few agencies appear to use measures of economic performance, even 
though they are consistent with the Government Performance and Results 
Act. For example, in a survey of federal managers across the 
government, the percentage who reported having measures related to 
economic performance for their programs to a "great" or "very great" 
extent was 12 percentage points lower than any other type of measure 
GAO asked about. In addition, we found that of approximately 730 
performance measures six federal agencies used to assess their 
programs, none involved net benefits, and only 19 linked some kind of 
cost to outcome. Of these 19, one agency used 16. 

[End of sidebar] 

The participants said that while some agencies have used economic 
performance measures, in general they were not widely used in the 
federal government. For example, one participant pointed out that while 
there has been progress on the quality of economic assessments being 
produced, there is still the issue of whether assessments are being 
done at all. 

Participants observed that in some cases programs have been assessed 
retrospectively but that, generally, little retrospective analysis is 
being done. They believed that retrospective analysis is necessary to 
inform the Congress and other decision makers of the cost and 
effectiveness of legislative and regulatory decisions. One participant 
stated that about 100,000 new federal regulations have been adopted 
since 1981, when OMB began to keep records of them. About a thousand of 
these were judged to be economically significant--that is, imposing 
costs greater than $100 million per year. However, the participant 
said, few of these regulations have ever been looked at to 
determine whether they have achieved their objectives, what they 
actually cost, and what their real benefits are. In fact, the 
participant added, little is known about the impact of regulations once 
they are adopted. Another participant pointed out that there is no 
consistent mechanism for reviewing a regulation once it has been 
enacted. 

Some participants observed that a retrospective analysis might reveal 
that a regulation's or program's costs or benefits after enactment vary 
significantly from those estimated in the prospective analysis. Because 
prospective analyses are based on projections of likely future impact, 
the estimates can differ substantially from actual effects. 
Variation can occur on either the cost or benefit side of analysis. For 
example, one participant pointed out that it has been shown that some 
prospective analyses have overstated costs and understated benefits, 
while others have done the reverse. 

Sidebar: 

A retrospective review of the Occupational Safety and Health 
Administration (OSHA) scaffold standards indicated that while still 
positive, the program's actual benefits were significantly less than 
the agency estimated when the rules were proposed. The program's annual 
net benefit was projected at $204 million before implementation; 
retrospectively, the annual net benefit was estimated at $63 million. 
Retrospective analysis can be useful to decision makers by providing 
information on whether a program has the potential to produce 
additional benefits or whether the benefits produced justify the costs. 

[End of sidebar] 

Some participants also mentioned the use of economic performance 
measures for new areas of federal action, such as homeland security. 
They indicated that regulations the Department of Homeland Security is 
developing present additional challenges for analyzing risk and 
uncertainties associated with terrorist activities. In addition, a 
number of regulations are being proposed for homeland security without 
an assessment of whether the proposals' estimated benefits justify 
their estimated costs. One participant suggested that requiring an 
economic assessment for proposed homeland security regulations would be 
useful. Another responded that such a requirement raises the question 
of how to estimate the probability of future terrorist attacks or how 
to determine if any particular measure, such as airport screening or an 
extra border patrol, would reduce the probability of damages from a 
terrorist attack. Another participant said that the focus should be not 
only on reducing the consequences of an attack but also on the 
probability of an attack. Developing a process that reduces the impact 
of attack--or "public mitigation"--would reduce the expected value of 
disruption. Participants noted that the United States has little 
experience with quantifying terrorism issues and that it will take time 
to build a body of knowledge on how to quantify effects. 
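
The logic here can be put in standard expected-value terms (a textbook 
formulation, not one adopted at the workshop): if p is the annual 
probability of an attack and D is the damage should one occur, the 
expected annual loss is 

\[ E[\text{loss}] = p \cdot D, \]

so screening measures that lower p and mitigation measures that lower D 
both reduce the expected value of disruption, and a proposal's benefit 
can be framed as the reduction in p x D that it achieves. The difficulty 
the participants identified is that p, and any change in p, are 
extremely hard to estimate for terrorist activities. 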

Major Barriers to the Use of Economic Performance Measures: 

The participants identified several major barriers that impede the use 
of economic performance measures. For example, they said that there is 
(1) frequently only limited demand from decision makers for assessments 
of program costs; (2) a lack of both time and funds for conducting 
economic assessments and, in some instances, a lack of incentive for 
agencies to use resources for implementing a program for conducting 
economic performance assessment, particularly when an agency has 
already decided to act; and (3) a number of organizational barriers, 
called stovepipes, that hinder communication within agencies, between 
agencies, and between economists and decision makers about how to 
conduct comprehensive and useful assessments. 

One participant said that a main impediment to this kind of analysis is 
that some decision makers may not be interested in knowing how a 
favored program is performing. In addition, even though some agency 
decision makers may require an economic performance assessment of 
proposed regulations, they might not provide sufficient resources to 
the staff to conduct a thorough analysis. Another participant suggested 
that there seems to be little demand from decision makers to know the 
total economic cost of a federal action. Participants did agree, 
however, that decision makers should be aware that their actions have 
certain consequences and costs. For example, the effects of a 
regulation or program cannot be known if an analysis is not done. 

Some participants mentioned the limited resources that agencies have 
for conducting economic performance analysis and the agencies' apparent 
reluctance to spend their resources, particularly when a decision to 
act may have already been made. One participant pointed out that funds 
are generally not authorized or appropriated for economic assessment. 
Consequently, in order to do these studies, agencies must use funds 
that are authorized and appropriated for program purposes. Using funds 
that would reduce resources for the program itself works as a 
disincentive for economic assessment. One participant observed that 
agency departments often seem to believe that analysis is done only 
after the decision to regulate is made. Under these conditions, 
analysis may or may not provide input into the final decision-making 
process. For example, instead of considering all the relevant policy 
alternatives, some analyses focus on just the preferred alternative. 
Another participant observed that one difficulty is the revisions an 
agency makes to a regulation after it has been proposed but before the 
rule is final. For example, an analysis completed to support a proposed 
rule may not represent the alternatives and other economic factors that 
make up the final rule. 

Another participant said that despite what appears to be a long lead 
time between developing a regulation and issuing it, the economic 
assessment is often done in a very compressed period, leaving the 
agency's analysts little time to conduct a thorough analysis. In addition, 
one participant said that regulations are often mandated by legislation 
and the legislation is generally not subject to economic performance 
assessment. 

The participants also identified organizational "silos" as barriers to 
communication between agencies, within agencies, and between agencies 
and the economics profession on how to properly conduct assessments. 
One participant stated that it was surprising that interaction among 
analysts conducting the assessments was not more seamless. Another 
participant stated that the many silos within government departments 
limit interaction within the agencies. The participant noted having 
tried to obtain information from other departments but running into 
brick walls all the time--walls whose masonry is very firm. 

Analytical Issues That Affect Consistency and Credibility: 

The participants generally agreed that the consistency and credibility 
of economic performance analysis of federal regulations and programs 
could be improved by resolving several analytical issues. These 
include, but are not limited to, how to appropriately consider the 
benefits that cannot be put into monetary terms and the effect of 
federal actions on different income, racial, or other population 
groups. The participants said that guidance is insufficient on how to 
appropriately include these issues in economic performance analyses. 

In addition, some participants noted a general lack of agreement about 
the values to be used for key economic assumptions. One is the value of 
a statistical life, which is used to estimate the effect of safety, 
health, or environmental improvements in reducing the risk of 
mortality. The participants also indicated a lack of guidance on how to 
use alternative analytical tools, such as multiobjective analysis, 
which can be used to evaluate program benefits by ranking them with a 
weighted scale rather than in monetary terms. 

Participants expressed concern, however, that leaving nonmonetized 
benefits or costs out might inappropriately bias estimates of the net 
benefits of the federal action being analyzed. For example, one 
participant pointed out that economic assessments typically conducted 
to support proposed health, safety, and environmental regulations 
quantify the benefits but do not express them in monetary terms. As a 
result, the net benefits estimates exclude the benefits that cannot be 
monetized. Another participant mentioned that the difficulty associated 
with quantifying and monetizing benefits is one of the underlying 
challenges related to homeland security. These benefits include such 
things as the gain from averting terrorist attacks, something that is 
very difficult to estimate. 

In addition, one participant pointed out that a strict economic 
efficiency analysis--that is, one based on maximizing net 
benefits--might leave out important policy alternatives bearing on 
fairness or equity 
that cannot be put in money terms. Equity issues include how a program 
might affect people in different income or racial groups. Another 
participant indicated the need for some rigorous analytical way to look 
at federal regulations that, by their very nature, cannot possibly be 
justified on economic grounds. For example, a program that might not 
provide the largest net benefits might be justified for other reasons, 
such as that it provides assistance to groups such as the nation's 
disabled or poor. 

Sidebar: 

Different agencies often use significantly different values for the 
same key measures. For example, the U.S. Army Corps of Engineers tends 
not to value "statistical lives saved," while, at the time of the 
reports reviewed, the Centers for Disease Control and Prevention valued 
a statistical life saved (of a 35-year-old man, for example) at $0.94 
million; the Department of Transportation, at $2.7 million; and the 
Environmental Protection Agency, at $6.1 million. Such differences 
make it difficult to compare economic performance measures across 
agencies. 

[End of sidebar] 

One participant said that when economists talk about distribution, they 
usually mean the distribution of income. In a regulatory setting, 
distribution often refers to the regulatory costs across other types of 
groups. For example, one participant indicated that expenditures on 
health could be reallocated from healthier people to people who are 
sick. 

The participants also generally agreed that the consistency and 
credibility of economic performance measures could be improved if there 
was agreement on the most appropriate values to use for key assumptions 
in an analysis. For example, federal agencies use different estimates 
of the value of a statistical life to estimate the benefits associated 
with a reduction in the risk of mortality. 
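
The practical effect of such disagreement is easy to illustrate with 
the values cited in the sidebar above. For a hypothetical rule expected 
to prevent 10 statistical deaths per year, the monetized annual benefit 
would be 10 x $0.94 million = $9.4 million using the CDC value, 10 x 
$2.7 million = $27 million using the DOT value, and 10 x $6.1 million = 
$61 million using the EPA value--more than a sixfold spread arising 
solely from the choice of assumption. 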

One participant indicated that some agencies use cost-effectiveness 
measures, such as cost per health outcome or quality-adjusted 
life-years, instead of net benefits. In any case, variability in the values 
of key assumptions and measures makes for a lack of consistency and for 
difficulty in comparing measures across agencies. 

Participants also pointed out that agencies are dealing with the 
difficulty associated with monetizing benefits and assessing equity 
issues by using multiobjective evaluation measures. Although this type 
of analysis does not put impacts into monetary terms, it derives an 
estimate of impacts from a weighted ranking of the objectives of the 
federal action. One participant explained that in simplistic terms, 
this is done by identifying the multiple objectives of the proposed 
federal action and eliciting a weight by which to rank each objective 
on a scale. The weights come from an assessment of the variation and 
importance of the action. Another participant pointed to a link between 
these kinds of methods and economic performance analysis. Nonetheless, 
while agencies are using this type of analysis more frequently to 
evaluate federal actions, it is generally not mentioned in federal 
guidance, such as OMB's. 
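
As a minimal sketch of the mechanics just described--with hypothetical 
objectives, weights, and ratings rather than any agency's actual 
method--a weighted-ranking evaluation might look like this in Python: 

# Minimal sketch of a multiobjective (weighted-ranking) evaluation.
# Objectives, weights, and ratings are hypothetical.

weights = {                  # elicited importance weights, summing to 1.0
    "risk reduction": 0.5,
    "equity": 0.3,
    "administrative burden": 0.2,
}

# Each alternative is rated on a common 0-10 scale for each objective
# (no monetization); higher is better.
alternatives = {
    "Option A": {"risk reduction": 8, "equity": 4, "administrative burden": 6},
    "Option B": {"risk reduction": 5, "equity": 9, "administrative burden": 7},
}

for name, ratings in alternatives.items():
    score = sum(weights[obj] * ratings[obj] for obj in weights)
    print(f"{name}: weighted score {score:.1f}")

Decision makers would then compare the alternatives by their ranked 
scores rather than by monetized net benefits. 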

The Extension of Economic Performance Measures for Evaluating Federal 
Programs: 

The workshop's participants generally agreed that there were 
opportunities to expand the use of economic performance measures, 
especially in retrospective evaluations of existing programs. Along 
these lines, analyzing economic performance could be one way to 
evaluate agencies' performance through budget processes. In addition, 
participants indicated that economic performance measures could be used 
to assess the risk and uncertainty associated with homeland security 
programs and regulations. 

Several participants said that retrospective evaluations of existing 
programs or regulations would not only inform decision makers about 
their performance but could also help to identify ways to improve 
prospective analyses. For example, comparing the actual benefits and 
costs achieved by a regulation with the prospective estimates developed 
for the proposed rule might be useful in identifying errors in the 
methods and assumptions that the economists used to develop the 
estimates. One participant said that we could identify the mistakes 
made in these analyses and transfer that knowledge to the next 
prospective analysis. 

Sidebar: 

Economic performance measures have several potential uses. For example, 
EPA has received a report advising that its profit-based approach to 
estimating fines for noncompliance should consider incorporating 
probabilistic external costs, an economic performance concept. The 
movement of government budgeting practice toward performance budgeting 
may create opportunities for incorporating economic performance 
information, including in OMB's PART reviews. OMB has indicated 
a preference for using more economic performance measures in the PART 
process. 

[End of sidebar] 

Participants also pointed out that net benefits and cost effectiveness 
are important for assessing budgets as well as regulations. One 
participant indicated that OMB circulars A-11 and A-94 could be linked 
together to use economic performance analysis in examining the 
budgetary process. For example, Circular A-11 specifies that agencies 
provide at least three viable alternatives to proposed capital 
investments and that the economic performance criteria used to develop 
those alternatives be based on guidance from Circular A-94. The 
participant also said that while Circular A-94 guidance may not be as 
extensive as the more recent Circular A-4, it includes the same basic 
principles for assessing benefits and costs. Another participant asked 
whether we know how much the federal government spends on permanent 
laws, tax benefits, and entitlements. Economic performance measures 
could be used to evaluate them. 

The participants generally agreed that economic performance measures 
could be used to evaluate the performance of homeland security programs 
and regulations. One participant suggested that a substantial fraction 
of the federal budget involves homeland security issues. However, other 
participants indicated that federal agencies would have to build on the 
analytical foundation for assessing whether the benefits of these 
investments exceed their costs. For example, developing ways to assess 
the probability of a terrorist attack, and the extent to which a 
program or regulation might reduce that probability, could help. 

Improving General Economic Principles and Guidance: 

The participants generally agreed that opportunities exist for 
improving the principles and guidance agencies use for conducting 
benefit-cost analyses and assessing economic performance. For example, 
it might be useful to have abbreviated guidance on the minimum key 
principles for conducting an economic analysis. One participant said 
that when an agency has to do an evaluation and it is confronted with 
OMB's Circular A-4, it might throw up its hands, saying the resources 
are not available. 

Another participant pointed out that we have to be concerned about 
"ossification" of the process agencies might have to go through to 
assess economic performance. For example, too many analytical 
requirements in too many different guidance documents might lead 
agencies to move away from doing any analysis. One participant 
suggested that Circular A-4 could represent the comprehensive end of a 
continuum of guidance documents, while more abbreviated guidance would 
facilitate performance analysis when fewer resources are available. 

Another participant said that some progress in getting the agencies to 
do more analysis could be made if the guidance at least stipulated a 
minimally accepted set of principles that they could use. Minimum 
standards could include such things as whether an analysis used a 
discount rate. 

The participants also generally agreed that the uncertainty and risk 
associated with investments in homeland security present additional 
challenges. Additional techniques are needed to help evaluate the 
uncertainty of terrorist activities, for example. One participant said 
that we need a serious effort to build an analytical capability to look 
hard at proposals that come under the homeland security banner, such as 
a framework for looking at proposals on the risk of terrorist 
activities. 

One participant said that in time, if guidance such as Circular A-4 
remains in place, agencies will develop technical expertise and will 
begin to conduct fuller and more complete economic analyses of homeland 
security issues. Another participant indicated that evaluating federal 
actions related to homeland security, particularly budgetary processes, 
requires clearly defining the objectives of an action. For example, the 
participants thought that it is probably not realistic to expect 
security in the United States to be restored to some level that existed 
before September 2001. It might be more realistic to engage in a mix of 
public and private sector activities designed to minimize the 
consequences of another attack. This may require developing additional 
principles and guidance. 

One-Page Summaries: 

Requiring that economic performance assessments include a one-page 
summary of the key results of the analysis could help improve 
consistency. The summary would present the analysis results concisely 
and understandably. The summary might include a statement of the 
program's objectives, a description of the baseline, and some 
discussion of at least the quantities, if not the actual monetization, 
of the direct inputs and outputs for the program activity. One 
participant expressed the strong feeling that a standard summary at the 
front of an economic performance analysis would be extremely useful if 
it presented both point estimates and ranges for the estimated benefits 
and costs, to account for the uncertainty of the estimates. A good 
summary would allow reviewers to 
compare the results from different analyses. 

Sidebar: 

Economic performance measures are sometimes reported in a format 
similar to a statement of income. Published literature and government 
guidance are not clear about the format for such statements, and we did 
not find consistent reporting formats in the economics textbooks we 
reviewed. 

OMB has asked agencies to report on a standard form in their annual 
regulatory accounting reports, but this form is not required for any 
other use. 

[End of sidebar] 

Scorecards: 

Some participants stated that better consistency and coverage of 
economic performance measures could be achieved with tools like 
scorecards for rating the overall quality of assessments. For example, 
scorecards could be used, like checklists, to evaluate assessments for 
the extent to which they address a minimum set of economic criteria. 
The criteria might include whether the analysis estimated costs and 
benefits, used a discount rate to estimate present values, and 
considered a reasonable set of alternatives. One participant said that 
there should be a set of criteria for economic performance measures in 
the public domain that would allow us to monitor performance. 
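
A scorecard of this kind need not be elaborate. The sketch below is 
hypothetical--the criteria are drawn from the examples mentioned above, 
not from any accepted standard--and simply counts how many of a minimum 
set of criteria an assessment meets: 

# Illustrative checklist-style scorecard; the criteria are examples
# from the discussion above, not an accepted standard.

CRITERIA = [
    "estimates both benefits and costs",
    "discounts future effects to present value",
    "considers a reasonable set of alternatives",
    "reports ranges as well as point estimates",
]

def score_assessment(review):
    """review maps each criterion to True/False; returns (met, total)."""
    met = sum(1 for criterion in CRITERIA if review.get(criterion, False))
    return met, len(CRITERIA)

# Hypothetical review of a single assessment:
review = {
    "estimates both benefits and costs": True,
    "discounts future effects to present value": True,
    "considers a reasonable set of alternatives": False,
    "reports ranges as well as point estimates": False,
}
print("Criteria met: %d of %d" % score_assessment(review))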

Sidebar: 

Auditors use generally accepted auditing standards in rendering their 
professional opinion; this opinion can be thought of as a scorecard 
summary of a financial statement's consistency with generally accepted 
accounting principles. No federal guidance is available to link 
principles and guidelines to a formal quality evaluation of economic 
performance assessments of federal programs: There is no generally 
accepted scorecard. 

[End of sidebar] 

Expert Review: 

The participants also suggested that external experts could review 
economic performance analyses and suggest procedures and strategies on 
how to develop and use such measures as the value of statistical life. 
For example, one participant recommended peer review of the procedures 
agencies use to conduct the analyses and particular decisions about 
assumptions and measures in the analysis. The strategies developed 
through this review by experts could be either general or very 
specific. 

Standardizing Key Values: 

Several participants indicated that standardizing some values for key 
assumptions would improve the quality and consistency of federal 
agencies' economic performance assessments. The use of common values 
for such things as the value of a statistical life would make it 
possible to compare the results of analyses across agencies. One 
participant said that instead of recommending that agencies develop 
their own best practices for assessments, they should be encouraged to 
collaborate on methods and key assumptions. 

Sidebar: 

Economics textbook authors and academics we consulted pointed out that 
the quality of economic performance analysis could be improved by 
better standardization of, among other things, value of days lost from 
work and values for cases of various diseases and mortality. 

[End of sidebar] 

New Organizations and Processes: 

The participants identified a number of organizations that could serve 
as examples in developing and improving economic principles for 
measuring the economic performance of federal programs. 

Sidebar: 

General principles and guidelines that economists use in assessing 
economic performance are based on textbook presentations, research in 
journal articles, and federal agency guidance but are not identified or 
created by standard-setting authorities. In contrast, accountants and 
auditors have several standard-setting authorities, as well as academic 
literature and agency guidance, to improve quality, consistency, and 
comparability. 

Generally accepted accounting principles provide layers of guidance to 
those who produce financial statements and to auditors. At the top of 
the hierarchy are pronouncements by professional standards groups: the 
Financial Accounting Standards Board for the private sector and 
nonprofit organizations, the Governmental Accounting Standards Board 
for state and local governments, and the Federal Accounting Standards 
Advisory Board for the federal government. Below these in acceptance are 
textbooks, published articles, and guidance from agencies. 

[End of sidebar] 

In response to a presentation by GAO's Chief Accountant, participants 
discussed accounting and auditing standards and how those standards are 
established. The Chief Accountant defined the difference between 
accounting and auditing standards. In general, boards of professionals 
and highly qualified subject matter experts develop the standards. 
Through deliberation, public exposure and comments, and other 
processes, the boards develop a hierarchy of standards broadly 
applicable to accounting and auditing. In both accounting and auditing, 
consistency and quality are important aspects of financial reporting. 

While several of the workshop participants expressed interest in the 
accounting model for setting standards, they also expressed concern 
about adopting such a model for economic performance evaluation. One 
participant pointed out that the types of issues assessed in the 
federal government are more diverse than in accounting. Although there 
is certainly virtue in standardization, it is not clear what would 
constitute a set of standards for all benefit-cost analyses. Other 
participants, however, acknowledged that economics institutions such as 
the American Economic Association are not designed to govern or monitor 
the application of economics. 

The participants identified other types of standard-setting 
organizations that could be turned to for improving economics 
principles and guidance. For example, the National Academies convene 
expert consensus committees, workshops, and roundtables. Because of the 
National Academies' strict conflict-of-interest standards, their expert 
consensus panels include academicians but not representatives of 
sponsoring organizations. Workshops are often one-day gatherings that 
bring together experts who present and review papers. A roundtable is 
an ongoing series of meetings that brings together representatives from 
industry, government, and academia to discuss recent research. 

Other types of organizations the participants mentioned included 
Brookings Institution type panels and working groups convened by the 
National Bureau of Economic Research. The panels and working groups 
typically consist of distinguished economists given a mandate to assess 
government programs. Research conferences were also suggested as a way 
to convene experts to discuss benefit-cost analysis issues and then 
produce a book of conference papers. The participants also mentioned, 
in general terms, the possibility of creating a new organization, such 
as a government management or performance advisory board, to assess 
government performance. One participant mentioned that funding a new 
organization could prove to be a major issue. Some participants agreed 
that if such an institution were established, it should be 
organizationally independent and flexible enough to address a variety 
of issues and settings. 

[End of section]

Appendixes: 

Appendix I: Economic Performance Workshop Participants: December 17, 
2004: 

External to GAO: 

Name: Neil R. Eisner; 
Title: Assistant General Counsel, Office of Regulation and Enforcement; 
Organization: U.S. Department of Transportation. 

Name: John Graham; 
Title: Administrator; 
Organization: Office of Management and Budget, Office of Information 
and Regulatory Affairs. 

Name: Robert Hahn; 
Title: Executive Director; 
Organization: AEI-Brookings Joint Center for Regulatory Studies. 

Name: Robert Haveman; 
Title: Professor Emeritus; 
Organization: University of Wisconsin, La Follette School of Public 
Affairs. 

Name: Arlene Holen; 
Title: Associate Director for Research and Reports; 
Organization: Congressional Budget Office. 

Name: Sally Katzen; 
Title: Professor; 
Organization: University of Michigan, Law School. 

Name: Thomas McGarity; 
Title: Professor; 
Organization: University of Texas, School of Law. 

Name: Albert M. McGartland; 
Title: Director, National Center for Environmental Economics; 
Organization: U.S. Environmental Protection Agency. 

Name: Wilhelmine Miller; 
Title: Senior Program Officer; 
Organization: Institute of Medicine. 

Name: John F. Morrall III; 
Title: Branch Chief, Health, Transportation and General Government 
Branch; 
Organization: Office of Management and Budget, Office of Information 
and Regulatory Affairs. 

Name: Daniel H. Newlon; 
Title: Economics Program Director; 
Organization: National Science Foundation. 

Name: Greg Parnell; 
Title: Professor, and President, Decision Analysis Society; 
Organization: U.S. Military Academy, West Point. 

Name: V. Kerry Smith; 
Title: University Distinguished Professor; 
Organization: North Carolina State University. 

Name: Richard Zerbe; 
Title: Professor; 
Organization: University of Washington. 

GAO staff: 

Name: Robert F. Dacey; 
Title: Chief Accountant; 
Organization: U.S. Government Accountability Office. 

Name: Scott Farrow; 
Title: Chief Economist; 
Organization: U.S. Government Accountability Office. 

Source: GAO. 

[End of table]

[End of section]

Appendix II: Economic Performance Assessment: Uses, Principles, and 
Opportunities: 

Introduction: 

The impact of federal programs and tax preferences on the U.S. economy, 
including their costs and benefits, is substantial.[Footnote 8] The 
cost to implement all federal programs was about $2.2 trillion in 2003, 
or roughly 20 percent of the U.S. gross domestic product. Similarly, 
federal tax preferences were estimated to be approximately $700 billion 
in 2003. The overall economic benefits of these programs have not been 
estimated, but they are believed to be substantial. 

Because federal agencies generally do not monitor the economic 
performance of their programs, the extent to which each program 
generates positive net benefits (benefits minus costs) or whether it 
achieves its goals cost effectively (for the lowest possible cost) is 
uncertain. We have reported that federal agencies are generally 
required to assess the potential economic performance of proposed major 
regulatory actions and some investments but that their assessments are 
often inconsistent with general economic principles and 
guidelines.[Footnote 9] Without assessments that include elements of 
quality such as consistency and comparability, federal decision makers 
may be missing information that would aid in oversight and 
accountability. 

Economic performance measures such as net benefits and cost 
effectiveness are based, to the extent feasible, on quantifying and 
valuing all material impacts on a nation's citizens. Such measures 
create a structure in which to report costs and benefits, evaluate cost 
savings, and, with a number of assumptions, evaluate whether the 
nation's well-being is improved. The appeal of the measures is 
demonstrated by the requirement in several statutes and executive 
orders that economic performance be assessed and factored into federal 
agency decision making.[Footnote 10] Nonetheless, critics of economic 
performance measures question their usefulness because of imprecision 
in valuation and difficulties in determining the effect of federal 
programs on the nation's well-being. We assume in this study that 
economic performance measures are used in conjunction with other 
measures to evaluate federal programs and policies. 

The objectives of our work were to assess the potential for improving 
the quality and expanding the application of economic measures. 
Specifically, we reviewed the extent to which: 

1. federal agencies are required or have chosen to measure the economic 
performance of their programs,

2. general economic principles and guidelines are available for 
creating and evaluating economic performance assessments of federal 
programs, and: 

3. the federal government can improve its oversight and accountability 
of the economic performance of federal programs as part of its overall 
performance objectives. 

To meet these objectives, we formed a GAO team with expertise in 
assessing the economic, accounting, budgetary, and performance effects 
of federal programs. We also solicited input from several external 
experts from the economics and accounting professions. 

For objective 1, we identified commonly known applications of economic 
measures and reviewed six federal agencies' performance reports on the 
status of their programs under the Government Performance and Results 
Act of 1993 (Results Act) as of 2002.[Footnote 11] We chose the 
agencies judgmentally, as agencies with programs for which economic 
performance assessments were more rather than less likely to be 
conducted. In addition, we reviewed GAO reports on the extent to which 
agencies have used economic assessments of the potential impact of 
major regulatory actions and infrastructure investments. 

For objective 2, we reviewed OMB guidance on conducting economic 
assessments, and we reviewed elements of accounting standards and 
economic principles and guidelines for conducting economic assessments. 

For objective 3, we used GAO economic evaluations of the Special 
Supplemental Nutrition Program for Women, Infants, and Children (WIC) 
and the USDA Cotton Program to demonstrate two of many ways in which 
consistency could be improved.[Footnote 12] We also reviewed and 
supplemented the Department of Labor's (DOL) Occupational Safety and 
Health Administration (OSHA) economic analysis of construction industry 
safety standards for scaffolds. To do this, we used information from 
the Federal Register notice and other published sources. The 
assessments generally reflect the programs and conditions as they were 
at the time of original publication. We chose the OSHA analysis, USDA 
Cotton Program, and WIC because economic assessments were readily 
available, the programs were relevant but not highly controversial, and 
they illustrated several measures of net benefit. 

Summary: 

Even though federal agencies are required to assess the prospective 
economic performance of proposed major regulatory actions, and some 
other activities, agencies are not required, and generally do not 
choose, to evaluate programs retrospectively. And when agencies are 
encouraged to use economic performance measures retrospectively, such 
as under the Results Act, they use few such measures. In a recent 
survey, for example, GAO found fewer federal managers reporting having 
measures that linked program costs to program results to a "great" or 
"very great" extent, compared to all other types of Results Act 
measures.[Footnote 13] In addition, at the time of the analysis, GAO 
found that of approximately 730 performance measures six federal 
agencies used to assess their programs, none involved net benefits, and 
only about 19 linked some measure of cost to outcome. DOE used 16 of 
these 19. 

General principles and guidelines are available for assessing the 
economic performance of federal programs, but certain aspects of them 
may be too general to ensure that the assessments address some elements 
of quality, such as consistency and comparability. In addition, while 
economists generally accept the principles and guidelines, some 
agencies and noneconomists are less accepting. For example, in 
conducting economic assessments, some agencies do not account for 
benefits like the value of reduced risk of mortality, because they 
disagree that these benefits can be appropriately valued. However, 
assessments that do not account for these benefits are inconsistent 
with general economic principles and guidelines. Moreover, when 
agencies do account for these benefits, different agencies often use 
significantly different values, generating results that are not 
comparable. 

In general, economic principles and guidelines are based on the 
economics literature and federal guidance. In our opinion, these 
principles and guidelines are too general in certain areas, because no 
standard-setting authority in the economics profession identifies or 
creates more specific practices for assessing economic performance. The 
accounting profession, in contrast, has standard-setting authorities 
that identify or create generally accepted accounting principles for 
financial reporting and generally accepted auditing standards for 
auditing financial statements. This guidance helps ensure the quality 
of financial reporting by, among other things, improving consistency 
and comparability. 

The federal government could improve its oversight and accountability 
of federal programs, regulations, and taxes by expanding its use of 
economic performance measures and improving the quality of its economic 
performance assessments, both prospective and retrospective. 
Specifically, oversight, accountability, and quality could be improved 
by: 

1. expanding the use of economic performance measures, especially for 
retrospective analysis of existing programs, and: 

2. using a consistent reporting format and developing a scorecard, 
based on existing economic principles and guidelines and best 
practices, for evaluating the quality of economic assessments. 

In illustrating the use of economic performance measures for new 
applications, our retrospective review of OSHA's construction industry 
safety standards for scaffolds demonstrated that the program's benefits 
have been significantly less than the agency estimated when the 
standards were proposed, so that additional improvements may be 
possible. Our use of a consistent reporting format for the existing GAO 
economic assessments of the USDA Cotton Program and WIC demonstrated 
how such a format supports comparability in presentation, as does a 
scorecard in the evaluation of the quality of an assessment. 

Background: 

The economic performance of government programs is typically assessed 
by economists using estimates of the nationwide net benefit or cost 
effectiveness of the programs.[Footnote 14] More than 75 years of 
economics literature supports these methods. Economic performance 
assessment differs from a straightforward financial appraisal in that 
all gains (benefits) and losses (costs) that accrue to society in 
general (not just to government) as a result of a program are to be 
counted. In general, if the discounted value of the benefits exceeds 
the costs, the net benefits are positive. If these positive net 
benefits exceed the net benefits of alternatives, the program is 
economically worthwhile, although decision makers may consider other 
performance criteria as well, such as geographic or socioeconomic 
impact. Cost effectiveness, or the cost to achieve a particular 
objective expressed in nonmonetary terms (for example, reductions in 
tons of pollutants), is a special case of net-benefits analysis, in 
which the benefits of a program are quantified but not valued in dollar 
terms. 
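
To make this arithmetic concrete, the following short Python sketch 
discounts hypothetical benefit and cost streams; the streams, the 
five-year horizon, and the 7 percent discount rate are illustrative 
assumptions, not figures from any actual assessment. 

# Minimal sketch of net-benefit and cost-effectiveness arithmetic;
# all figures are hypothetical.

def present_value(stream, rate=0.07):
    """Discount a stream of annual values (year 0 first) to present value."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(stream))

benefits = [0, 50, 50, 50, 50]   # annual benefits, millions of dollars
costs = [120, 5, 5, 5, 5]        # annual costs, millions of dollars

net_benefit = present_value(benefits) - present_value(costs)
print(f"Discounted net benefit: ${net_benefit:.1f} million")

# Cost effectiveness: cost per unit of a nonmonetized outcome, such as
# dollars per ton of pollutant reduced.
tons_reduced = 10_000
cost_per_ton = present_value(costs) * 1e6 / tons_reduced
print(f"Cost effectiveness: ${cost_per_ton:,.0f} per ton")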

While some programs may result in benefits that are greater or less 
than costs, other programs may have benefits that are just equal to 
costs because of an equal transfer from one party to another. These 
benefits merely redistribute income or transfer resources between 
social groups but do not affect production or productivity. Such 
programs are called transfer programs and are not counted as having net 
benefits. Transfer programs typically include Social Security, interest 
on federal debt held by the public, and some types of welfare programs. 
In some cases, however, it can be difficult to determine whether a 
program has an impact on an economic performance measure or just 
transfers resources between social groups. 

By including monetary-based measures (monetization), economic 
performance assessment allows the aggregation of program impacts. Costs 
are usually measured in terms of a program's actual money costs. In 
general, benefits are more difficult to measure, because many benefits 
may have no observable market providing prices. In these cases, it is 
necessary to construct representational, or "surrogate," markets--that 
is, models--in ways that are generally accepted by economists, in order 
to estimate the monetary value of the benefit. 

Modeling can present substantial problems and areas of ambiguity that 
can lead to imprecise measurement and fundamental disagreements among 
economists and noneconomists. One such ambiguity is in determining a 
monetary amount to estimate the value of a reduction in the risk of 
mortality. This value generally represents a statistical assessment of 
the amount of money individuals would be willing to pay to reduce the 
risk of one death in a population. Other instances are benefits that 
cannot be expressed in monetary terms and noneconomic factors that are 
part of a program's performance. For example, a welfare program may 
represent a transfer in economic terms, but decision makers may consider 
the resulting income redistribution worthwhile. In this case, economic 
assessments expressing benefits in money terms would best be used in 
conjunction with other performance measures. 
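
The statistical assessment described above can be illustrated with a 
brief Python sketch using hypothetical figures: if each person in a 
population of 100,000 would pay $50 for a program expected to avoid 
one death in that population, the implied value of a statistical life 
is the total willingness to pay. 

# Hypothetical value-of-a-statistical-life (VSL) arithmetic.
population = 100_000
willingness_to_pay_each = 50   # dollars per person (hypothetical)
deaths_avoided = 1             # expected deaths avoided in the population

vsl = population * willingness_to_pay_each / deaths_avoided
print(f"Implied value of a statistical life: ${vsl:,.0f}")  # $5,000,000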

Agencies assess the economic performance of federal programs in several 
circumstances. Although with many exceptions, the Unfunded Mandates 
Reform Act of 1995 requires agencies to prepare a qualitative and 
quantitative assessment of anticipated costs and benefits before 
issuing a regulation that may result in annual expenditures by state, 
local, and tribal governments of $100 million annually, in the 
aggregate or by the private sector. In addition, under Executive Order 
12866 and Circular A-11, Part 7, Section 300, certain federal agencies 
are required to consider the benefits and costs of proposed regulatory 
actions and infrastructure investments before selecting a regulatory 
alternative or a capital investment.[Footnote 15] In this context, OMB 
and some other federal agencies have developed guidance that functions 
as general economic principles and best practices for assessing 
economic performance. 

In addition, under the Results Act, federal agencies are required to 
establish performance goals and to choose measures to determine whether 
their programs are meeting these goals. In response to congressional 
requests, GAO has sometimes used economic principles and best practices 
to assess the economic performance of government programs. Examples 
include reports on the progress of the USDA Cotton Program and 
WIC.[Footnote 16]

Agency Economic Assessments: 

Although federal agencies are generally required to assess the 
potential economic performance of proposed major regulatory actions, 
they generally do not monitor how these and other federal programs have 
actually performed.[Footnote 17] In addition, although measures of 
economic performance are consistent with the Results Act, few agencies 
appear to use them. For example, in our survey of federal managers at 
grades GS-13 and above, the percentage of managers who reported having 
measures related to economic performance for their programs to a 
"great" or "very great" extent was 12 percentage points lower than any 
other type of Results Act measure we asked about.[Footnote 18] In 
addition, in our staff study, we found that of approximately 730 
performance measures six federal agencies used to assess their 
programs, none involved net benefits and only 19 linked some kind of 
cost to outcome; one agency used 16 of these 19.[Footnote 19] Examples 
of partial measures that linked cost to outcome are the average cost 
per case receiving medical services and the administrative cost per 
grant application. 

Table 1 gives a preliminary summary of examples for which prospective 
economic assessments are required. The broad-based uses are for 
regulatory and investment purposes. 

Table 1: The Use of Economic Performance Measures for Prospective 
Assessment of Federal Programs: 

Economic performance measure: Budget planning: investment (general); 
Authority or guidance: OMB Circular A-11; OMB Circular A-94; 
congressional mandates; 
Reporting form: Benefit-cost statement; guidance; not much detail on 
form; 
Required? Yes; 
Timing: Before implementation. 

Economic performance measure: Regulatory evaluation; 
Authority or guidance: Executive order and regulatory accounting 
statement; 
Reporting form: Varies widely; guidance is for a benefit-cost analysis; 
Required? Yes, for major regulations; 
Timing: Before implementation. 

Economic performance measure: Agency-specific statutes; 
Authority or guidance: Specific statutes; 
Reporting form: Usually specifies a benefit-cost analysis; 
Required? Yes, if such a statute exists; 
Timing: Some before and some after implementation; Note: U.S. Army 
Corps of Engineers, offshore oil and gas leasing and pipeline safety, 
and some EPA programs. 

Source: GAO analysis. 

[End of table]

In addition, as table 2 shows, retrospective economic assessments-- 
after program implementation--are generally not required. 

Table 2: The Use of Economic Performance Measures for Retrospective 
Assessment of Federal Programs: 

Economic performance measure: Government Performance and Results Act; 
Authority or guidance: Cost effectiveness named in committee report; 
net benefits not so named; 
Reporting form: Varies but generally cost per unit outcome; 
Required? No; 
Timing: After implementation. 

Economic performance measure: Program Assessment Rating Tool (PART) 
review; 
Authority or guidance: OMB's suggestion to agencies to include such 
measures; 
Reporting form: Cost effectiveness and net benefit; 
Required? No; 
Timing: After implementation. 

Economic performance measure: Program evaluation; 
Authority or guidance: Used on an ad hoc basis; 
Reporting form: Varies; cost effectiveness or net benefit; 
Required? No; 
Timing: After implementation. 

Economic performance measure: Economic analysis; 
for example, GAO self-initiated or congressional request; 
Authority or guidance: GAO statutory authority; congressional request; 
Reporting form: Varies; 
Required? No; 
Timing: After implementation. 

Source: GAO analysis. 

[End of table]

Under the Results Act, federal agencies are required to establish 
performance goals for the results of their programs and to track 
progress. These assessments are retrospective--occurring after a 
program's implementation. In these studies, cost-effectiveness (cost 
efficiency) measures are encouraged, along with quantity impacts and 
other measures. Net-benefit measures are not specifically cited but are 
consistent with the act in that such measures provide objective 
information on the relative effectiveness of federal programs and 
spending.[Footnote 20] Although economic performance measures are 
encouraged, they are often not used, as we discussed above. 

As table 2 shows, agencies may conduct economic assessments in 
instances other than to follow the Results Act. These include Program 
Assessment Rating Tool (PART) reviews and ad hoc assessments to monitor 
program progress. OMB has indicated a preference for using more 
economic performance measures in the PART process. In addition, GAO 
conducted several retrospective reviews in response to congressional 
requests to monitor the progress of the USDA Cotton Program and WIC. 

Other potential uses for economic performance measures exist. EPA has 
received a report advising that its approach to estimating fines for 
noncompliance, which is based on profits, should also consider 
incorporating probabilistic external costs--an economic performance 
concept. Changes in government budgeting practice toward performance 
budgeting may also create opportunities for incorporating economic 
performance information in budget material. 

Economic Principles and Guidelines: 

Certain aspects of the general economic principles and guidelines 
available for assessing economic performance may be too general to 
ensure some aspects of their quality, such as their consistency and 
comparability. For example, in conducting economic assessments not 
associated with the Results Act, some agencies do not account for 
benefits like the value of reduced risk of mortality, because they 
disagree with economists that these benefits can be appropriately 
valued or expressed in a cost-effectiveness measure.[Footnote 21] 
Nonetheless, assessments that do not account for these benefits are 
inconsistent with general economic principles and guidelines. And when 
different agencies do account for these benefits, they often use 
significantly different values, generating results that are not 
comparable. 

The accounting profession has authorities that identify or create 
generally accepted accounting principles for financial reporting and 
generally accepted auditing standards for auditing financial 
statements. This guidance helps ensure the quality of financial 
reporting by, among other things, improving consistency and 
comparability. No standard-setting authority in the economics 
profession identifies or creates credible practices for agencies when 
they need to work through specific difficulties in assessing economic 
performance. 

General principles and guidelines economists use for assessing economic 
performance are based on textbook presentations, research reported in 
journal articles, and federal agency guidance. In contrast, accountants 
and auditors have, in addition to academic literature and agency 
guidance, several standard-setting authorities that identify or create 
standards and principles specific enough to improve consistency and 
comparability. Generally accepted accounting principles provide 
layers of guidance to those who produce financial statements. 

Pronouncements by professional standards groups, such as the Financial 
Accounting Standards Board (FASB) for the private sector and nonprofit 
groups, Governmental Accounting Standards Board (GASB) for state and 
local governments, and Federal Accounting Standards Advisory Board 
(FASAB) for the federal government, are at the top of the hierarchy. 
(The hierarchy is described briefly in enclosure I.) Below these in 
acceptability are materials such as textbooks, published articles, and 
guidance from agencies. Guidance for economic performance measurement 
and reporting is at a comparably low level in terms of acceptable 
standards. 

While OMB guidance is useful in producing economic assessments and 
auditing performance evaluations, it is distinctly less standardized 
than guidance for accountants and auditors. Existing general guidance 
on conducting a program's economic assessment appears to leave many 
practical gaps for federal applications that reduce consistency and 
comparability. Issues for which guidance is general include, but are 
not limited to, the value of days lost from work, values for 
various diseases and mortality, efficiency losses from taxation, the 
incorporation of multiple sources of estimates, changes in risk, 
benefits from improvements in information, and estimates of the 
efficiency effects of incentives implicit in transfers. 

Guidance is general in that it recommends assigning monetary values to 
benefits, but it does not specify which values to use. For example, for 
programs that might reduce the risk of fatalities, OMB's guidance 
encourages agencies to include the value of the risk reduction (based 
on the value of a "statistical" life) as a benefit of a federal 
program. But OMB does not require this assessment or provide guidance 
on the generally accepted value of a statistical life to use in 
estimating the benefit. As a result, agencies' economic assessments 
often do not include these benefits or, when they do, estimates of the 
benefit are based on different values. For example, the U.S. Army Corps 
of Engineers tends not to value statistical lives saved, while the 
Centers for Disease Control and Prevention (CDC) values statistical 
lives saved (based on the life of a 35-year-old man, for example) at 
$0.94 million, DOT at $2.7 million, and EPA at $6.1 million.[Footnote 
22] Such differences create difficulty in comparing economic 
performance measures across agencies. 
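
The comparability problem can be shown with a short Python sketch. The 
agency values are those cited above; the 10 fatalities avoided is a 
hypothetical program outcome. 

# How different agency values of a statistical life (VSL) make
# otherwise identical assessments noncomparable.
vsl_millions = {"CDC": 0.94, "DOT": 2.7, "EPA": 6.1}  # values cited above
fatalities_avoided = 10  # hypothetical outcome

for agency, vsl in vsl_millions.items():
    print(f"{agency}: ${fatalities_avoided * vsl:.1f} million in mortality-risk benefits")
# The same outcome is valued at $9.4, $27.0, or $61.0 million,
# depending on which agency's VSL is applied.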

Improving Measures' Use and Improving Their Quality: 

The federal government could strengthen program oversight and 
accountability by expanding the retrospective analysis of existing 
programs, by adopting a consistent format for reporting results, and by 
using a scorecard to evaluate the quality of assessments. 

A retrospective review of program performance could provide benefits 
through the expanded application of economic performance measures. For 
example, our review of OSHA's construction industry safety standards 
for scaffolds demonstrated that retrospective analysis can be 
informative. The actual benefits of the program are now estimated to be 
significantly less than the agency estimated when the standards were 
proposed. 

Our use of a trial reporting format for our economic assessments of the 
USDA Cotton Program and WIC demonstrated how a consistent format 
enhances the synthesis of information, for both individual assessments 
and several assessments compared across applications, as does using a 
scorecard to evaluate an assessment's quality. (Enclosure II describes 
the programs; details of the scorecard are in enclosure III.) 

Before OSHA's program was implemented, the annual net benefit was 
projected at $204 million, taking into account costs to the private 
sector and government and benefits resulting from reduced injury and 
death in the private sector. Retrospectively, the annual net benefit 
was estimated at $63 million (see table 3 and enc. IV).[Footnote 23] This 
kind of finding could assist congressional oversight by better 
informing federal decision makers about whether a program has the 
potential to produce additional benefits or whether the benefits 
produced justify the costs. 
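
The arithmetic behind this comparison, drawn from table 3 (annual 
values, in millions of dollars), can be restated in a few lines of 
Python: 

# OSHA scaffold rule: prospective versus retrospective net benefit,
# using the annual figures reported in table 3.
prospective_benefit = 217
retrospective_benefit = 76
annual_cost = 13  # the same prospectively and retrospectively

print(f"Prospective net benefit:   ${prospective_benefit - annual_cost} million")    # $204
print(f"Retrospective net benefit: ${retrospective_benefit - annual_cost} million")  # $63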

Table 3: Summary of Three Programs' Net Benefits: 

Dollars in millions. 

Benefit: Total annual in money terms; 
Prospective: OSHA scaffold rule: $217; 
Retrospective: OSHA scaffold rule: $76; 
Retrospective: USDA Cotton Program: $770; 
Retrospective: WIC: $1,036. 

Cost: Total annual in money terms; 
Prospective: OSHA scaffold rule: $13; 
Retrospective: OSHA scaffold rule: $13; 
Retrospective: USDA Cotton Program: $1,509; 
Retrospective: WIC: $296. 

Net benefits: Total annual in money terms; 
Prospective: OSHA scaffold rule: $204; 
Retrospective: OSHA scaffold rule: $63; 
Retrospective: USDA Cotton Program: -$739; 
Retrospective: WIC: $740. 

Nonmoney or noneconomic benefits[A]; 
Prospective: OSHA scaffold rule: Not identified; 
Retrospective: OSHA scaffold rule: Not identified; 
Retrospective: USDA Cotton Program: Ensuring producer income; 
Retrospective: WIC: Lower anemia rates[B]. 

Source: GAO analysis. 

[A] Includes, for example, benefits that accrue from encouraging small 
businesses, helping minorities, or redistributing income to society's 
less fortunate persons. 

[B] Nonmonetized benefits also include better maternal health, improved 
nutritional status, and improved health of children born subsequently. 

[End of table]

Quality, including aspects of consistency and comparability, can also 
be improved by using a consistent format for reporting the results of 
economic assessments. As various economics textbook authors and 
academics we consulted pointed out, quality could be improved by better 
standardization of such things as the value of days lost from work, values 
for cases of various diseases and mortality, efficiency losses from 
taxation, incorporating multiple sources of estimates, changes in risk, 
benefits of improvements in information, and estimating the efficiency 
effects of incentives implicit in transfers. 

Accounting has a set of standard financial statements, including 
balance sheets and statements of income. Economic performance measures 
are sometimes reported in a format similar to that of a statement of 
income, although the time covered may be long and value may be reported 
as present value. Such statements can also summarize outcomes that 
cannot be put in monetary terms, such as distributional and qualitative 
outcomes and uncertainty or sensitivity results. Published literature 
and government guidance are not clear about the format for such 
statements. We did not find consistent reporting formats in the 
economics textbooks we reviewed. OMB has asked agencies to report their 
annual regulatory accounting reports on a standard form, but the form 
is not required for any other use.[Footnote 24]

A consistent format for reporting the results of an economic assessment 
would make it easier to (1) integrate major program impacts, (2) 
understand the bottom line of the economic performance analysis, and 
(3) compare results between assessments. For example, in our review of 
the three economic assessments shown in table 3, we found that the 
results of each one were distributed throughout its report. This is 
not unusual for such assessments. The lack of a common form comparable 
to a financial statement also hindered the synthesis of information. 
The results of the case studies are presented in table 3 in a 
consistent, but highly abbreviated, format. The more detailed example 
we provide in enclosure II would assist in identifying major impacts 
that cannot be valued and would account for uncertainty. 

The type of consistency shown in table 3 (and in enclosure II) would 
enable a noneconomist to note key components of the benefits and their 
magnitude and whether they were positive or negative. Trained readers 
might be sensitive to complexities or assumptions of the analysis 
without further explanation. For example, in addition to clearly 
showing the benefits retrospectively attributable to the programs, the 
summary in table 3 can facilitate synthesis of information.[Footnote 
25] Two of the programs have positive net benefits, and one has 
negative net benefits. These results are somewhat unexpected. For 
example, as a type of welfare 
program, WIC might be considered a transfer program with zero net 
benefits, since income is merely transferred from one social group to 
another. 

In fact, the economic assessment of the program illustrates that WIC is 
estimated to have an impact through increasing birth weights, as well 
as reducing neonatal mortality and the incidence of iron deficiencies. 
All these factors are linked to behavioral and development problems in 
children, which, if avoided, could reduce medical, education, and other 
costs. In addition, OMB has classified many farm programs, such as the 
USDA Cotton Program, as transfer programs with no economic effect. This 
assessment, however, shows that the program has significant negative 
effects on the economy. This demonstrates the type of confusion 
that often surrounds transfers. A common format for reporting would 
better inform decision makers about programs' economic performance. 

Developing a scorecard, based on existing principles and guidelines, 
for evaluating the quality of economic assessments would also improve 
comparability. For example, auditors use generally accepted auditing 
standards in rendering their professional opinion. This opinion can be 
thought of as a scorecard summary of the consistency of financial 
statements with generally accepted accounting principles. The opinion 
may be: 

1. "unqualified," indicating that the audited financial statements are 
in conformity with generally accepted accounting principles;

2. "qualified," indicating that except for the effects of the matter to 
which the qualification relates, the financial statements are in 
conformity with generally accepted accounting principles;

3. "adverse," indicating that the financial statements are not in 
conformity with generally accepted accounting principles; or: 

4. "disclaimer of opinion," indicating that the auditor is unable to 
form an opinion as to the financial statements' conformity with 
generally accepted accounting principles. 

In economics, no professional or federal guidance is available to link 
principles and guidelines to a formal, quality evaluation of economic 
performance assessments of federal programs. Therefore, there is no 
generally accepted scorecard for evaluating them. 

A scorecard would clearly and concisely illustrate the extent to which 
an assessment complies with general principles and guidelines for 
assessing economic performance. For example, it could show whether a 
discount rate was used in an assessment and whether it was used 
correctly. Table 4 gives examples of general principles and 
illustrations of economic opinions from a scorecard applied to the 
OSHA, USDA Cotton Program, and WIC studies.[Footnote 26]

Table 4: Evaluating Economic Performance Assessments with a Scorecard: 

General principle[A]: Accounting entity; 
Primary principle: The responsible unit--the source initiating the 
impact (i.e., the federal program); 
How assessment meets principle[B]: OSHA scaffold rule: Fully meets 
requirement; 
How assessment meets principle[B]: USDA Cotton program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: Accounting entity; 
Primary principle: Measures nationwide impact; 
How assessment meets principle[B]: OSHA scaffold rule: Fully meets 
requirement; 
How assessment meets principle[B]: USDA Cotton program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: Accounting entity; 
Primary principle: Accounts for net impact and not transfers; 
How assessment meets principle[B]: OSHA scaffold rule: Fully meets 
requirement; 
How assessment meets principle[B]: USDA Cotton program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: Discount rate; 
Primary principle: Discount rate is based on OMB guidance or another 
rate developed by appropriate techniques; 
How assessment meets principle[B]: OSHA scaffold rule: Meets not at 
all; 
How assessment meets principle[B]: USDA Cotton program: Not applicable; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: Consistent format; 
Primary principle: Presentation summarizes the key results, using a 
consistent format; 
How assessment meets principle[B]: OSHA scaffold rule: Meets not at 
all; 
How assessment meets principle[B]: USDA Cotton program: Meets not at 
all; 
How assessment meets principle[B]: WIC: Meets not at all. 

General principle[A]: Transparent; 
Primary principle: Presentation explicitly identifies and evaluates 
data, models, inferences, and assumptions; 
How assessment meets principle[B]: OSHA scaffold rule: Meets not at 
all; 
How assessment meets principle[B]: USDA Cotton program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: Transparent; 
Primary principle: Presentation and documentation are sufficient to 
permit readers to replicate and quantify the effects of key 
assumptions; 
How assessment meets principle[B]: OSHA scaffold rule: Not applicable; 
How assessment meets principle[B]: USDA Cotton program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Not applicable. 

General principle[A]: Comprehensive monetization; 
How assessment meets principle[B]: OSHA scaffold rule: Partially meets 
requirement; 
How assessment meets principle[B]: USDA Cotton program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Partially meets requirement. 

General principle[A]: Economic performance; 
Primary principle: Net benefits or cost effectiveness reported; 
How assessment meets principle[B]: OSHA scaffold rule: Fully meets 
requirement; 
How assessment meets principle[B]: USDA Cotton program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: Internal quality control; 
How assessment meets principle[B]: OSHA scaffold rule: Not applicable; 
How assessment meets principle[B]: USDA Cotton program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: External quality control; 
Primary principle: Peer review was done; 
How assessment meets principle[B]: OSHA scaffold rule: Not applicable; 
How assessment meets principle[B]: USDA Cotton program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: Opinion of economic analysis; 
How assessment meets principle[B]: OSHA scaffold rule: [C]; 
How assessment meets principle[B]: USDA Cotton program: Unqualified; 
How assessment meets principle[B]: WIC: Unqualified. 

Source: GAO analysis. 

[A] Based on OMB guidelines and GAO analysis. 

[B] A = fully meets requirement; P = partially meets requirement; N = 
meets not at all; NA = not applicable. 

[C] No opinion, since the OSHA example was not a complete economic 
assessment. 

[End of table]

Enclosure III details the complete scorecard that we developed from OMB 
guidance, supplemented by comparisons with accounting standards. 
As can be seen in the partial scorecard in table 4, we evaluated the 
three assessments relative to the augmented OMB guidelines. By 
summarizing what the assessments did and how they ranked in quality, 
this type of scorecard could inform federal decision makers about the 
quality of economic performance measures. The USDA Cotton Program and 
WIC assessments in table 4 were rated with an economic "unqualified" 
opinion, because the principles were generally followed; the OSHA 
scaffold rule was not rated, because the assessment was incomplete. 
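
A scorecard of this kind lends itself to a simple mechanical summary. 
The Python sketch below maps per-principle ratings to the four 
audit-style opinions described above; the rating codes follow table 4, 
but the decision rule is our illustrative assumption, not an 
established standard. 

# Illustrative roll-up of scorecard ratings into an audit-style opinion.
# Codes: A = fully meets, P = partially meets, N = meets not at all,
# NA = not applicable. The thresholds below are assumptions.

def opinion(ratings):
    applicable = [r for r in ratings.values() if r != "NA"]
    if not applicable:
        return "disclaimer of opinion"   # nothing could be evaluated
    share_met = sum(r in ("A", "P") for r in applicable) / len(applicable)
    if share_met == 1.0:
        return "unqualified"
    if share_met >= 0.5:
        return "qualified"
    return "adverse"

# Hypothetical assessment, not one of the three case studies:
ratings = {"accounting entity": "A", "discount rate": "N",
           "transparency": "A", "comprehensive monetization": "P"}
print(opinion(ratings))  # qualified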

[End of section]

Enclosure I: The Hierarchy of Generally Accepted Accounting Principles: 

Generally accepted accounting principles are presented as a hierarchy 
that accountants use to determine appropriate accounting principles for 
transactions and that auditors use in forming opinions on financial 
statements. For 
nongovernment and federal government entities, principles in category 
"a" are ranked highest. Those in category "e"--guidance from regulatory 
agencies and sources such as textbooks, handbooks, and articles--have 
the lowest rank. Sources from category "e"--in the absence of 
literature comparable to "a" to "d"--provide guidance for economic 
assessments. 

Table 5: The Hierarchy of Generally Accepted Accounting Principles: 

Category A: 
Principles of accounting for nongovernment: 
* Financial Accounting Standards Board (FASB), Statements and 
Interpretations;
* Accounting Principles Board (APB), Opinions; and
* American Institute of Certified Public Accountants (AICPA), 
Accounting Research Bulletins. 

Principles of accounting for federal government: 
* FASAB Statements and Interpretations and
* AICPA and FASB pronouncements if made applicable to federal 
government entities by FASAB Statements and Interpretations. 

Category B: 
Principles of accounting for nongovernment: 
* FASB Technical Bulletins and
* AICPA Industry Guides and Statements of Position if they have been 
cleared. 

Principles of accounting for federal government: 
* FASAB Technical Bulletins and
* Cleared AICPA Industry Guides and Statements of Position if 
specifically applicable to federal government entities. 

Category C: 
Principles of accounting for nongovernment: 
* Consensus positions of the FASB Emerging Issues Task Force and
* Cleared AICPA Practice Bulletins. 

Principles of accounting for federal government: 
* AICPA Practice Bulletins if specifically applicable to federal 
government and cleared by FASAB and
* FASAB Accounting and Auditing Policy Committee technical releases. 

Category D: 
Principles of accounting for nongovernment: 
* AICPA accounting interpretations;
* FASB “Q and As”;
* Industry practices if widely recognized and prevalent; and
* FASB, AICPA audit guides, SOPs, and practice bulletins that have not 
been cleared. 

Principles of accounting for federal government: 

* Implementation guides FASAB staff publishes and
* Practices widely recognized and prevalent in the federal government. 

Category E: 
Principles of accounting for nongovernment: 
* Other accounting literature, including FASB concept statements, AICPA 
Issues Papers, International Accounting Standards Committee statements, 
Governmental Accounting Standards Board (GASB) statements, 
interpretations, and Technical Bulletins;
* Pronouncements of other professional associations or AICPA technical 
practice aids and regulatory agencies; and
* Accounting textbooks, handbooks, and articles. 

Principles of accounting for federal government: 

* Pronouncements in hierarchy categories “a” through “d” for 
nongovernment entities when not specifically applicable to federal 
government entities;
* Other accounting literature, including FASB concept statements, AICPA 
Issues Papers, International Accounting Standards Committee statements, 
GASB statements, interpretations, Technical Bulletins, and concept 
statements;
* Pronouncements of other professional associations or AICPA technical 
practice aids and regulatory agencies; and
* Accounting textbooks, handbooks, and articles. 

Source: D. M. Pallais, M. L. Reed, and C. A. Hartfield, PPC's Guide to 
GAAS: Standards for Audits, Compilations, Reviews, Attestations, 
Consulting, Quality Control and Ethics: Audit Reports (Fort Worth, 
Texas: Practitioners Publications Co., Oct. 2002), ch. 18, exhibit 
18-1. 

[End of table]

[End of section]

Enclosure II: A Consistent Reporting Format for Economic Assessments: 

To demonstrate how a consistent format could be used to synthesize 
information in a comparable way, we used GAO economic assessments of 
USDA's Cotton Program and WIC and our retrospective review of OSHA's 
scaffold regulation for the construction industry.[Footnote 27] We 
selected these assessments because they were readily available, the 
programs were relevant but not highly controversial, and the programs 
illustrated income transfers and other measures of net benefit. 

WIC: 

USDA's Food and Nutrition Service administers WIC. The program is 
designed for eligible pregnant, breastfeeding, and postpartum women 
and for infants and children up to age 5. Participants must have family 
incomes at or below 185 percent of the federal poverty level and must 
be at nutritional risk, as judged by a competent professional. WIC 
provides supplementary food, nutrition, and health education and 
referral to health and social services. In particular, participants are 
given coupons for purchasing specified kinds of food. 

GAO conducted an economic assessment of WIC to estimate the extent to 
which the program can reduce the cost of other federally funded 
programs, such as Medicaid. WIC might be viewed as a transfer program, 
merely transferring income from one group in society to another, with 
no economic impact. Our assessment, however, indicated that the program 
does have an impact through such benefits as increasing birth weight 
and reducing neonatal mortality and the incidence of iron deficiency. 
Low birth weight and iron deficiency are linked to behavioral and 
developmental problems in children. 

Some of the program's benefits cannot be monetized, and distributional 
considerations, such as equity, may be significant in determining 
benefits. Nevertheless, we concluded, at that time, that given what can 
be valued, the program's benefits exceed the costs. The monetized 
benefits are in health care cost savings and special education, which 
are resource savings to the economy. 

We summarize these in table 6 in a format we used for the two other 
programs. The results for all three are reported in a format similar to 
a statement of income. We designed this format to include information 
on key quantitative measures, benefits, and costs. It allows the net 
benefits (or cost-effectiveness results) to be seen and their major 
components understood. 
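
One way to see the format is as a simple data structure. The Python 
sketch below mirrors table 6's rows, using figures from that table; the 
field names and the structure itself are our illustration, not a 
prescribed form. 

from dataclasses import dataclass, field

@dataclass
class AssessmentStatement:
    """Consistent reporting format, patterned on a statement of income."""
    program: str
    key_quantities: dict      # e.g., births averted, in thousands
    monetized_benefits: dict  # category -> millions of dollars
    monetized_costs: dict     # category -> millions of dollars
    nonmonetized_benefits: list = field(default_factory=list)
    nonmonetized_costs: list = field(default_factory=list)

    def net_benefit(self):
        return (sum(self.monetized_benefits.values())
                - sum(self.monetized_costs.values()))

wic = AssessmentStatement(
    program="WIC",
    key_quantities={"low birth weight births averted (thousands)": 36.5},
    monetized_benefits={"averted expenditures": 1_036},
    monetized_costs={"government cost": 296},
    nonmonetized_benefits=["better maternal health", "lower anemia rates"],
)
print(wic.net_benefit())  # 740, in millions of dollars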

Table 6: Consistent Reporting Format: GAO's WIC Assessment: 

Key quantitative measure: Total low birth weight births averted; 
Expected value: Quantity: Number of births (in thousands): 36.5; 
Expected value: Unit value: Averted cost per birth: $28.4. 

Key quantitative measure: Total low birth weight births (first-year 
survivors); 
Expected value: Quantity: Number of births (in thousands): 30.8; 
Expected value: Unit value: Averted cost per birth: $33.7. 

Benefit of WIC (for WIC dollars spent): 

Category: Federal savings; 
Range of dollar values: Medium: $1.14; 
Range of dollar values: Low: $1.12; 
Range of dollar values: High: $1.51. 

Category: State and local government savings; 
Range of dollar values: Medium: $1.04. 

Category: Private sector savings; 
Range of dollar values: Medium: $1.32. 

Category: Total annual monetized benefit; 
Range of dollar values: Medium: $3.50; 
Range of dollar values: Low: $3.46; 
Range of dollar values: High: $3.50. 

Category: Total benefit from averted expenditures; 
Range of dollar values: Medium: $1,036. 

Cost of WIC: Government cost; 
Range of dollar values: Medium: $296. 

Total annual monetized cost; 
Range of dollar values: Medium: $296. 

Performance measure: Net monetized benefits; 
Range of dollar values: Medium: $740. 

Nonmonetizable impact: Benefits; 
Range of dollar values: Medium: [A]. 

Nonmonetizable impact: Costs; 
Range of dollar values: Medium: [B]. 

Category: Size of nonmonetized benefits needed to change sign; 
Range of dollar values: Medium: -$740. 

Source: GAO analysis. 

Note: Values are average annual values. Averted cost per birth is in 
thousands of dollars; benefit of WIC for WIC dollars spent is in 
dollars; all other dollars are in millions. 

[A] Nonmonetized benefits include better maternal health, lower anemia 
rates, improved nutritional status, and improved health of children 
born subsequently. 

[B] Nonmonetized costs include medical costs for nondisabled low birth- 
weight children. 

[End of table]

As shown in table 6, WIC services were estimated to save $1.036 billion 
annually, because an estimated 36.5 thousand low birth weight births 
were averted and 30.8 thousand low birth weight babies survived the 
first year. Providing WIC services to 
pregnant women who delivered their babies in 1990 cost the federal 
government $296 million. The program resulted in a net benefit of $740 
million ($1.036 billion minus $296 million). 
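
The per-dollar returns in table 6 follow the same arithmetic, as this 
short Python check of the medium estimates shows: 

# Savings per WIC dollar spent, by sector (table 6, medium estimates).
federal, state_local, private = 1.14, 1.04, 1.32
per_dollar = federal + state_local + private
print(f"Benefit per WIC dollar: ${per_dollar:.2f}")                # $3.50
print(f"Implied total benefit: ${per_dollar * 296:,.0f} million")  # about $1,036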

The expected return on the investment in prenatal WIC services is 
large, because low birth weight is a socially expensive outcome. Low 
birth weight infants, especially those with very low birth weights 
(under 3.3 pounds), have higher initial hospitalization costs. In 
addition, a smaller proportion of these infants survive their initial 
hospitalization. Finally, they typically require more care, such as for 
disability or special education, which is expensive. Additional 
information, such as the unit value of savings distributed to various 
segments of society (in this case, various levels of government), is 
also provided in table 6. 

USDA Cotton Program: 

USDA's Cotton Program is designed to support cotton farmers' income and 
cotton exports. GAO conducted an evaluation of the program to estimate 
its costs. From 1986 through 1993 (the period GAO evaluated), about 90 
percent of all acreage devoted to cotton was enrolled in the program. 
The program has since been changed. 

As a program that shifts money from taxpayers to farmers, it initially 
appears to be a transfer program. In fact, OMB typically classifies 
agricultural programs like this one as transfer programs with no 
economic impact.[Footnote 28] However, the reasonably predictable 
results of the program design affect cotton production and prices and, 
therefore, have an impact on the economy. This impact occurs because, 
in addition to the program's basic component--supporting producers' 
income--the program required producers to idle acreage. Through program 
benefits, the government pays producers not to produce on the idled 
acres. With land taken out of production, society is prevented from 
benefiting economically from potential crops or using the land for 
other purposes. We concluded that, based on what could be valued, the 
program's benefits were less than its costs, resulting in a negative net 
benefit. 

The results of our economic assessment are summarized in table 7, the 
same type of table as table 6. 

Table 7: Consistent Reporting Format: GAO's USDA Cotton Program 
Assessment: 

Dollars in millions. 

Key quantitative measure: Average in the absence of program; 
Expected value: Quantity: million pounds: 7,524; 
Expected value: Unit value: per pound: $0.66. 

Key quantitative measure: Program average; 
Expected value: Quantity: million pounds: 6,865; 
Expected value: Unit value: per pound: $0.66. 

Benefit of USDA Cotton Program: Net gain to buyers; 
Range of dollar values: Medium: $16; 
Range of dollar values: Low: -$38; 
Range of dollar values: High: $63. 

Benefit of USDA Cotton Program: Net gain to producers; 
Range of dollar values: Medium: $754; 
Range of dollar values: Low: $659; 
Range of dollar values: High: $866. 

Total annual monetized benefit; 
Range of dollar values: Medium: $770; 
Range of dollar values: Low: $621; 
Range of dollar values: High: $929. 

Total benefit from USDA Cotton Program; 
Range of dollar values: Medium: $770; 
Range of dollar values: Low: $621; 
Range of dollar values: High: $929. 

Cost of USDA Cotton Program: Government cost; 
Range of dollar values: Medium: $1,509. 

Total annual monetized cost; 
Range of dollar values: Medium: $1,509; 
Range of dollar values: Low: $1,509; 
Range of dollar values: High: $1,509. 

Performance measure: Net monetized benefits; 
Range of dollar values: Medium: -$739; 
Range of dollar values: Low: -$888; 
Range of dollar values: High: -$580. 

Nonmonetizable impact: Benefits; 
Range of dollar values: Medium: [A]. 

Nonmonetizable impact: Costs; 
Range of dollar values: Medium: [B]. 

Nonmonetizable impact: Size of nonmonetized benefits needed to change 
sign; 
Range of dollar values: Medium: $739; 
Range of dollar values: Low: $888; 
Range of dollar values: High: $580. 

Source: GAO analysis. 

Note: Values are average annual values. 

[A] Nonmonetized benefits include ensuring producer income. 

[B] Not identified. 

[End of table]

As the table shows, the program cost taxpayers, through the federal 
government, an average of $1.5 billion annually in program payments. 
Because of provisions of the program that required farmers to idle 
acreage, however, benefits to farmers were estimated to be only $770 
million. This is because, among other things, the idled acreage was 
economically inefficient. As a result, the program net benefits were 
negative--an annual loss of $739 million, on average, for crop years 
1986-93.[Footnote 29]
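
A short Python check of the table 7 arithmetic (average annual values, 
in millions of dollars): 

# USDA Cotton Program net benefit, from table 7's medium estimates.
gain_to_buyers = 16
gain_to_producers = 754
government_cost = 1_509

total_benefit = gain_to_buyers + gain_to_producers   # $770 million
net_benefit = total_benefit - government_cost        # -$739 million
print(f"Total benefit: ${total_benefit} million; "
      f"net benefit: ${net_benefit} million")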

This assessment illustrates that while OMB typically identifies farm 
programs as transfers, standard economic analysis suggests that these 
programs have real net national impact.[Footnote 30] In addition, this 
impact may be negative--farmers gaining and consumers incurring larger 
costs than they would have without the program. 

OSHA's Safety Standards for Scaffolds: 

OSHA administers the safety standards for scaffolds used in the 
construction industry; the standards are designed to protect employees 
in that industry from falls, structural instability, electrocution, and 
overloading. The standards can be viewed as an element of a regulatory 
program--that is, a rule on occupational safety. When the rule was 
written, OSHA determined that approximately 9 percent of all fatalities 
in the construction industry were attributable to accidents related to 
scaffolding. Although OSHA's final rule on scaffolds did not require an 
economic analysis under Executive Order 12866, OSHA did a prospective 
economic analysis to help inform federal decision making. 

The rule's key benefits were forecast as coming from reduced injuries 
and deaths.[Footnote 31] OSHA did not originally value the risk of 
mortality, so from an economic performance perspective, the net 
benefits are undervalued.[Footnote 32] The prospective rule, however, 
was cost-beneficial, even without valuing fatalities avoided. In OSHA's 
prospective analysis, the agency reported a positive annual net benefit 
for the rule, based only on monetizing the value of workdays lost from 
injuries, the estimated cost of compliance, and government costs. 

We monetized the value of fatalities avoided by the scaffold rule by 
applying EPA's value of a statistical life ($6.1 million), DOT's ($2.7 
million), and CDC's ($0.94 million) as estimated at the time of the 
rule.[Footnote 33] When fatalities avoided are monetized, the estimated 
net benefits increase by tens to hundreds of millions of dollars per 
year. Table 8 summarizes the results of our assessment, in the same 
format as we applied to the USDA Cotton Program and WIC. 

Table 8: Consistent Reporting Format: GAO's OSHA Scaffold Assessment: 

Key quantitative measure: Injuries avoided; 
Expected value: Quantity: Number: 4,455; 
Expected value: Unit value: Cost per injury (in thousands of dollars): $20.2[C]. 

Key quantitative measure: Fatalities avoided; 
Expected value: Quantity: Number: 47; 
Expected value: Unit value: Cost per statistical life: $2.7[D]. 

Benefit of scaffold rule: Gain from injuries avoided; 
Range of dollar values: Medium: $90; 
Range of dollar values: Low[A]: $90; 
Range of dollar values: High[B]: $90. 

Benefit of scaffold rule: Gain from fatalities avoided; 
Range of dollar values: Medium: $127; 
Range of dollar values: Low[A]: $44; 
Range of dollar values: High[B]: $287. 

Benefit of scaffold rule: Total annual monetized benefit; 
Range of dollar values: Medium: $217; 
Range of dollar values: Low[A]: $134; 
Range of dollar values: High[B]: $377. 

Category: Total benefit from scaffold rule; 
Range of dollar values: Medium: $217; 
Range of dollar values: Low[A]: $134; 
Range of dollar values: High[B]: $377. 

Cost of scaffold rule: Inspections; 
Range of dollar values: Medium: $5; 
Range of dollar values: Low[A]: $5; 
Range of dollar values: High[B]: $5. 

Cost of scaffold rule: Training; 
Range of dollar values: Medium: $2; 
Range of dollar values: Low[A]: $2; 
Range of dollar values: High[B]: $2. 

Cost of scaffold rule: Protection against falls; 
Range of dollar values: Medium: $6; 
Range of dollar values: Low[A]: $6; 
Range of dollar values: High[B]: $6. 

Category: Total annual monetized costs; 
Range of dollar values: Medium: $13; 
Range of dollar values: Low[A]: $13; 
Range of dollar values: High[B]: $13. 

Performance measure[E]: Net monetized benefits; 
Range of dollar values: Medium: $204; 
Range of dollar values: Low[A]: $122; 
Range of dollar values: High[B]: $364. 

Performance measure[E]: Cost effectiveness (cost per fatality avoided); 
Range of dollar values: Medium: 0 cost per life saved; 
Range of dollar values: Low[A]: 0 cost per life saved; 
Range of dollar values: High[B]: 0 cost per life saved. 

Performance measure[E]: Present value of net benefits at 7%; 
Range of dollar values: Medium: $2,918; 
Range of dollar values: Low[A]: $1,737; 
Range of dollar values: High[B]: $5,201. 

Nonmonetizable impact: Benefits; 
Range of dollar values: Medium: [F]. 

Nonmonetizable impact: Costs; 
Range of dollar values: Medium: [F]. 

Nonmonetizable impact: Size of nonmonetized benefits needed to change 
sign; 
Range of dollar values: Medium: - $204; 
Range of dollar values: Low[A]: -$122; 
Range of dollar values: High[B]: -$364. 

Source: GAO assessment using OSHA, Centers for Disease Control and 
Prevention, Environmental Protection Agency, and Department of 
Transportation data or methods. 

Note: Values are average annual values. Dollars are in millions. 

[A] Based on Centers for Disease Control and Prevention's methodology, 
yielding $0.94 million for the value of a statistical life of a 35- 
year-old man. 

[B] Based on Environmental Protection Agency's value of $6.1 million 
per value of statistical life. 

[C] Here, a constant cost per injury is assumed, based on the total 
value provided and the number of injuries. 

[D] Based on Department of Transportation's value of $2.7 million per 
value of statistical life. 

[E] OSHA omitted the value of life, with a net benefit of $77 million. 

[F] Not identified. 

[End of table]

As shown in table 8, if fatalities avoided are included, the rule is 
estimated to generate $204 million in annual national net benefits. 
That value can be as low as $122 million and as high as $364 million, 
depending on the value of a statistical life used. As the benefits of the 
rule exceed the costs, even if fatalities are omitted, the cost per 
life saved (a cost-effectiveness measure) is zero. 
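
The table 8 figures can be reconstructed with a few lines of Python. 
The injury benefit, fatality count, and costs are from the table; the 
perpetuity approximation at the end is our illustration, since the 
report's $2,918 million present value may reflect a different horizon 
or stream. 

# Reconstruction of the table 8 net-benefit range (annual values,
# millions of dollars).
injury_benefit = 90   # 4,455 injuries avoided
fatalities = 47
costs = 13            # inspections, training, and fall protection

for label, vsl in [("low (CDC)", 0.94), ("medium (DOT)", 2.7), ("high (EPA)", 6.1)]:
    net = injury_benefit + fatalities * vsl - costs
    print(f"{label}: about ${net:.0f} million per year")
# Prints roughly $121, $204, and $364 million, in line with table 8's
# $122, $204, and $364 million (differences reflect rounding).

print(f"Perpetuity present value at 7%: ${204 / 0.07:,.0f} million")  # about $2,914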

[End of section]

Enclosure III: A Scorecard for Evaluating Economic Program Assessments: 

We developed a scorecard from OMB guidance and other relevant 
criteria in order to illustrate links between accounting and economics 
criteria. To demonstrate how the scorecard could be used, we applied it 
to the OSHA, USDA Cotton Program, and WIC programs previously 
discussed. The scorecard includes reference to an opinion, similar to 
opinions rendered in financial statement audits, that indicates the 
extent to which the economic assessments met the criteria.[Footnote 34]

The scorecard's categories are illustrative rather than comprehensive. 
Since our scope and methodology did not include investigating OSHA's 
data in detail, many items in OSHA's assessment of the scaffold rule 
were identified as "not applicable." Therefore, we were not able to 
render an opinion on that assessment. The categories in the scorecard 
nonetheless present a consistent format for evaluating the extent to 
which an economic assessment adhered to accepted principles and 
guidelines. 

Table 9: A Scorecard for Evaluating Economic Performance Assessments: 

General principle[A]: Accounting entity; 
Primary principle: The responsible unit--the source causing the impact 
(i.e., the federal program); 
How assessment meets principle[B]: OSHA scaffold rule: Fully meets 
requirement; 
How assessment meets principle[B]: USDA Cotton Program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: Accounting entity; 
Primary principle: Measures nationwide impact; 
How assessment meets principle[B]: OSHA scaffold rule: Fully meets 
requirement; 
How assessment meets principle[B]: USDA Cotton Program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: Accounting entity; 
Primary principle: Accounts for net impacts and not transfers; 
How assessment meets principle[B]: OSHA scaffold rule: Fully meets 
requirement; 
How assessment meets principle[B]: USDA Cotton Program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: Reliability; 
Primary principle: Results of assessment are verifiable; 
How assessment meets principle[B]: OSHA scaffold rule: Not applicable; 
How assessment meets principle[B]: USDA Cotton Program: Partially meets 
requirement; 
How assessment meets principle[B]: WIC: Partially meets requirement. 

General principle[A]: Reliability; 
Primary principle: Data and assumptions used are a faithful 
representation of what actually happened; 
How assessment meets principle[B]: OSHA scaffold rule: Not applicable; 
How assessment meets principle[B]: USDA Cotton Program: Partially meets 
requirement; 
How assessment meets principle[B]: WIC: Partially meets requirement. 

General principle[A]: Reliability; 
Primary principle: Precision of results is made explicit; 
How assessment meets principle[B]: OSHA scaffold rule: Meets not at 
all; 
How assessment meets principle[B]: USDA Cotton Program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: Reliability; 
Primary principle: Data, assumptions, and descriptions are unbiased; 
How assessment meets principle[B]: OSHA scaffold rule: Not applicable; 
How assessment meets principle[B]: USDA Cotton Program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: Comparable; 
Primary principle: Similar methods and assumptions are used when 
analyzing different entities; 
How assessment meets principle[B]: OSHA scaffold rule: Not applicable; 
How assessment meets principle[B]: USDA Cotton Program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: Consistent; 
Primary principle: Similar methods and assumptions are used for 
analyzing similar events in different time periods; 
How assessment meets principle[B]: OSHA scaffold rule: Not applicable; 
How assessment meets principle[B]: USDA Cotton Program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Not applicable. 

General principle[A]: Revenue and benefits recognition; 
Primary principle: Accounts for revenues and benefits when they are 
realized and earned; 
How assessment meets principle[B]: OSHA scaffold rule: Fully meets 
requirement; 
How assessment meets principle[B]: USDA Cotton Program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: General measurement standard; 
Primary principle: Estimates dollar value of material impact resulting 
from, or affected by, program; 
How assessment meets principle[B]: OSHA scaffold rule: Meets not at 
all; 
How assessment meets principle[B]: USDA Cotton Program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: General measurement standard; 
Primary principle: Estimates quantitative material impacts but does not 
monetize them; 
How assessment meets principle[B]: OSHA scaffold rule: Partially meets 
requirement; 
How assessment meets principle[B]: USDA Cotton Program: Partially meets 
requirement; 
How assessment meets principle[B]: WIC: Partially meets requirement. 

General principle[A]: Alternative plans; 
Primary principle: Evaluates most likely conditions expected, with and 
without the program; 
How assessment meets principle[B]: OSHA scaffold rule: Fully meets 
requirement; 
How assessment meets principle[B]: USDA Cotton Program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: Alternative plans; 
Primary principle: Analyzes all reasonable alternative courses of 
action; 
How assessment meets principle[B]: OSHA scaffold rule: Meets not at 
all; 
How assessment meets principle[B]: USDA Cotton Program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: Alternative plans; 
Primary principle: Considers extent to which entities comply with 
related laws and regulations; 
How assessment meets principle[B]: OSHA scaffold rule: Fully meets 
requirement; 
How assessment meets principle[B]: USDA Cotton Program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Partially meets requirement. 

General principle[A]: Discount rate; 
Primary principle: Discount rate is based on OMB guidance or other rate 
developed through appropriate techniques; 
How assessment meets principle[B]: OSHA scaffold rule: Meets not at 
all; 
How assessment meets principle[B]: USDA Cotton Program: Not applicable; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: Uncertainty; 
Primary principle: Considers the effect of uncertainty on results; 
How assessment meets principle[B]: OSHA scaffold rule: Meets not at 
all; 
How assessment meets principle[B]: USDA Cotton Program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: Clear rationale; 
Primary principle: Presents justification for program (e.g., market 
failure, legislative requirement); 
How assessment meets principle[B]: OSHA scaffold rule: Not applicable; 
How assessment meets principle[B]: USDA Cotton Program: Not applicable; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: Consistent format used; 
Primary principle: Presentation summarizes key results consistently; 
How assessment meets principle[B]: OSHA scaffold rule: Meets not at 
all; 
How assessment meets principle[B]: USDA Cotton Program: Meets not at 
all; 
How assessment meets principle[B]: WIC: Meets not at all. 

General principle[A]: Transparent; 
Primary principle: Presentation explicitly identifies and evaluates 
data, models, inferences, and assumptions; 
How assessment meets principle[B]: OSHA scaffold rule: Meets not at 
all; 
How assessment meets principle[B]: USDA Cotton Program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: Transparent; 
Primary principle: Presentation and documentation are enough to permit 
readers to replicate and quantify the effects of key assumptions; 
How assessment meets principle[B]: OSHA scaffold rule: Not applicable; 
How assessment meets principle[B]: USDA Cotton Program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Not applicable. 

General principle[A]: Comprehensive monetization; 
How assessment meets principle[B]: OSHA scaffold rule: Partially meets 
requirement; 
How assessment meets principle[B]: USDA Cotton Program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Partially meets requirement. 

General principle[A]: Economic performance; 
Primary principle: Net benefits or cost effectiveness are reported; 
How assessment meets principle[B]: OSHA scaffold rule: Fully meets 
requirement; 
How assessment meets principle[B]: USDA Cotton Program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: Internal quality control; 
How assessment meets principle[B]: OSHA scaffold rule: Not applicable; 
How assessment meets principle[B]: USDA Cotton Program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: External quality control; 
Primary principle: Peer review was done; 
How assessment meets principle[B]: OSHA scaffold rule: Not applicable; 
How assessment meets principle[B]: USDA Cotton Program: Fully meets 
requirement; 
How assessment meets principle[B]: WIC: Fully meets requirement. 

General principle[A]: Opinion of economic analysis; 
How assessment meets principle[B]: OSHA scaffold rule: [C]; 
How assessment meets principle[B]: USDA Cotton Program: Unqualified; 
How assessment meets principle[B]: WIC: Unqualified. 

Source: GAO analysis. 

[A] Based on OMB guidelines and GAO analysis. 

[B] A = fully meets requirement; P = partially meets requirement; N = 
meets not at all; NA = not applicable. 

[C] No opinion, since the OSHA example was not a complete economic 
assessment. 

[End of table]
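
For illustration and discussion only, the sketch below (in Python) 
shows one way the scorecard's ratings and the audit-style opinion might 
be represented. The rating codes follow note [B]; the opinion rule and 
the significance judgment are hypothetical assumptions of ours, since, 
as footnote 26 notes, standards for rendering an economics opinion have 
not been formally developed. 

# Hypothetical sketch of a scorecard column and an illustrative opinion
# rule; rating codes follow note [B]: A, P, N, and NA.

def render_opinion(ratings, limitations_significant=False):
    """Return an illustrative audit-style opinion for one assessment."""
    applicable = [r for r in ratings.values() if r != "NA"]
    # When most items are "not applicable" (as with the OSHA scaffold
    # rule column), decline to render an opinion (note [C]).
    if len(applicable) < len(ratings) / 2:
        return "no opinion"
    # Per the judgment described in footnote 26, "N" and "P" ratings
    # lead to a qualified opinion only if judged significant.
    if limitations_significant and any(r in ("N", "P") for r in applicable):
        return "qualified"
    return "unqualified"

# Example: a fragment of the WIC column from table 9.
wic = {"accounting entity": "A", "verifiable results": "P",
       "consistent format used": "N", "discount rate": "A"}
print(render_opinion(wic))  # "unqualified", as rendered in the table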

[End of section]

Enclosure IV: Assessing OSHA's Scaffold Rule by Retrospective Analysis: 

Comparing a retrospective analysis of OSHA's scaffold rule with the 
prospective analysis in enclosure II shows how the net benefits 
estimated before the rule was implemented compare with the net benefits 
realized after the rule had been in effect for some time. Most economic 
performance measures are estimated prospectively for regulatory or 
capital spending purposes. Feedback on what occurs after a program has 
been implemented can assist in a program's oversight and modification, 
if appropriate, and can help improve the quality of other prospective 
studies. 

Seong and Mendeloff recently reported a retrospective analysis of 
OSHA's scaffold rule.[Footnote 35] Their study focused on benefits; no 
retrospective information on costs is known to be available. In table 
10, the prospective assessment is compared with the retrospective 
assessment. 

Table 10: Prospective and Retrospective Assessments of OSHA's Scaffold 
Rule Compared: 

Dollars in millions. 

Key quantitative measure: Injuries avoided; 
Expected value: Prospective: Quantity: 4,455; 
Expected value: Retrospective: Quantity: 1,564. 

Key quantitative measure: Fatalities avoided; 
Expected value: Prospective: Quantity: 47; 
Expected value: Retrospective: Quantity: 17. 

Benefit of scaffold rule: Injuries avoided: Gain; 
Expected value: Prospective: Value[A]: $90; 
Expected value: Retrospective: Value: $32. 

Benefit of scaffold rule: Fatalities avoided: Gain; 
Expected value: Prospective: Value[A]: $127; 
Expected value: Retrospective: Value: $45. 

Benefit of scaffold rule: Total annual monetized benefit; 
Expected value: Prospective: Value[A]: $217; 
Expected value: Retrospective: Value: $76. 

Cost of scaffold rule: Inspections; 
Expected value: Prospective: Value[A]: $5; 
Expected value: Retrospective: Value: $5. 

Cost of scaffold rule: Training; 
Expected value: Prospective: Value[A]: $2; 
Expected value: Retrospective: Value: $2. 

Cost of scaffold rule: Protection against falls; 
Expected value: Prospective: Value[A]: $6; 
Expected value: Retrospective: Value: $6. 

Total annual monetized cost; 
Expected value: Prospective: Value[A]: $13; 
Expected value: Retrospective: Value: $13. 

Performance measure: Net monetized benefits (annual); 
Expected value: Prospective: Value[A]: $204; 
Expected value: Retrospective: Value: $63. 

Performance measure: Cost-effectiveness (cost per fatality avoided); 
Expected value: Prospective: Value[A]: 0 cost per life saved; 
Expected value: Retrospective: Value: 0 cost per life saved. 

Performance measure: Present value of net benefits at 7%; 
Expected value: Prospective: Value[A]: $2,918; 
Expected value: Retrospective: Value: $908. 

Nonmonetizable impact: Benefits; 
Expected value: Prospective: Value[A]: [B]; 
Expected value: Retrospective: Value: [B]. 

Nonmonetizable impact: Costs; 
Expected value: Prospective: Value[A]: [B]; 
Expected value: Retrospective: Value: [B]. 

Nonmonetizable impact: Size of nonmonetized benefits needed to change 
sign; 
Expected value: Prospective: Value[A]: -$204; 
Expected value: Retrospective: Value: -$63. 

Source: GAO analysis. 

Note: Values are average annual values. 

[A] Value based on Department of Transportation value of a statistical 
life of $2.7 million. 

[B] Not identified. 

[End of table]

As table 10 shows, in the prospective assessment, injuries avoided were 
estimated at 4,455 and fatalities avoided at 47; in the retrospective 
assessment, injuries avoided were estimated at 1,564 and fatalities 
avoided at 17. In addition, the annual net benefits of the program were 
projected prospectively to be $204 million; retrospectively, they were 
estimated at $63 million.[Footnote 36] 

These estimates, based on realized deaths in the construction industry, 
indicate that the expected benefits of the OSHA scaffold rule have not 
been fully realized, since the number of fatalities has not decreased 
as much as expected. Even with the lower realization of safety benefits 
in the retrospective assessment, however, the rule still receives a 
favorable economic performance evaluation. The retrospective analysis 
also suggests that (1) additional safety benefits may be obtained from 
the rule and (2) OSHA may usefully investigate the difference between 
the expected number of fatalities avoided and the estimated number 
actually avoided. If the difference is found to be an understandable 
forecasting error, that result could inform future estimates for this 
and other related programs. 
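
The comparison in table 10 can be summarized with the same arithmetic, 
as in the following illustrative Python sketch; the figures and the DOT 
value of a statistical life come from the table, the "realization 
ratio" label is ours, and small differences from the table reflect 
rounding. 

# Illustrative sketch comparing table 10's prospective and retrospective
# assessments; dollars in millions, DOT VSL of $2.7 million (note [A]).

VSL_DOT = 2.7
ANNUAL_COST = 13.0  # total annual monetized cost, the same in both cases

cases = {
    "prospective":   {"fatalities": 47, "injury_benefit": 90.0},
    "retrospective": {"fatalities": 17, "injury_benefit": 32.0},
}

net = {}
for name, values in cases.items():
    benefits = values["injury_benefit"] + values["fatalities"] * VSL_DOT
    net[name] = benefits - ANNUAL_COST
    print(f"{name}: benefits ${benefits:.0f}M, net ${net[name]:.0f}M")

# Roughly $63M realized of an expected $204M: about a third of the
# expected net benefits, although the rule's net benefits remain positive.
print(f"realization ratio: {net['retrospective'] / net['prospective']:.0%}")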

(460571): 

FOOTNOTES

[1] GAO, 21st Century Challenges: Reexamining the Base of the Federal 
Government, GAO-05-325SP (Washington, D.C.: Feb. 2005). 

[2] See GAO, Regulatory Reform: Agencies Could Improve Development, 
Documentation, and Clarity of Regulatory Economic Analyses, GAO/RCED- 
98-142 (Washington, D.C.: May 26, 1998), and Clean Air Act: 
Observations on EPA's Cost-Benefit Analysis of Its Mercury Control 
Options, GAO-05-252 (Washington, D.C.: Feb. 28, 2005). 

[3] GAO, Results-Oriented Government: GPRA Has Established a Solid 
Foundation for Achieving Greater Results, GAO-04-38 (Washington, D.C.: 
Mar. 10, 2004). 

[4] GAO, Performance Budgeting: Observations on the Use of OMB's 
Program Assessment Rating Tool for the Fiscal Year 2004 Budget, GAO-04- 
174 (Washington, D.C.: Jan. 30, 2004). 

[5] Cost-effectiveness analysis identifies the least costly way of 
achieving an objective. It is typically used when outcomes can be 
quantified but not monetized. 

[6] See OMB Circular No. A-4, 68 Fed. Reg. 58366 (Oct. 9, 2003); OMB 
Circular No. A-11, Preparation, Submission, and Execution of the Budget 
(May 17, 2005); OMB Circular No. A-94, 57 Fed. Reg. 53519 (Nov. 10, 
1992). Under the Unfunded Mandates Reform Act of 1995, agencies are 
required to prepare a qualitative and quantitative assessment of the 
anticipated costs and benefits before issuing a regulation that may 
result in expenditures by state, local, and tribal governments or by 
the private sector of $100 million or more annually. Under Executive 
Order 12866 and OMB Circular A-11, certain federal agencies are 
required to consider the benefits and costs of proposed regulatory 
actions and capital expenditures. 

[7] The sidebars appearing in the margins of this section (Workshop 
Discussion) are excerpts from the background paper distributed to the 
workshop participants. The full context of these excerpts can be seen 
in appendix II. 

[8] Scott Farrow, Tim Guinane, Carol Bray, Phillip Calder, Elizabeth 
Curda, Andrea Levine, Robert Martin, and Don Neff prepared this paper 
for discussion at the December 17, 2004, GAO Workshop on Economic 
Performance Measures, with assistance from Pat Dalton, Joe Kile, Nancy 
Kingsbury, Paul Posner, and Jeff Steinhoff. We are grateful to Jay 
Fountain, Edward Gramlich, Aidan Vining, and Richard Zerbe for their 
help in reviewing the paper. It has been edited for this report. 

[9] GAO/RCED-98-142. 

[10] See, for example, Unfunded Mandates Reform Act of 1995, 2 U.S.C. 
§§1501-56, and Executive Order 12866. 

[11] The six agencies were the Department of Agriculture (USDA), 
Department of Education (Education), Department of Energy (DOE), 
Department of Labor (DOL), Department of Transportation (DOT), and 
Environmental Protection Agency (EPA). 

[12] GAO, Cotton Program: Costly and Complex Government Program Needs 
to Be Reassessed, GAO/RCED-95-107 (Washington, D.C.: June 20, 1995), 
and Early Intervention: Federal Investments Like WIC Can Produce 
Savings, GAO/HRD-92-18 (Washington, D.C.: Apr. 7, 1992). 

[13] GAO-04-38. 

[14] Cost-effectiveness analysis is closely related to net-benefit 
analysis, but the two types of analyses ask different questions. Cost 
effectiveness asks, what is the least costly way of achieving a 
particular objective? Cost-effectiveness analysis is used when there 
are difficulties in assigning monetary values to the outcomes of 
projects but the outcomes can be quantified along one nonmonetary 
dimension. 

[15] Under Executive Order 12866, agencies are required to assess the 
benefits and costs of proposed regulations that are expected to have an 
annual effect on the economy of $100 million or more. 

[16] GAO/RCED-95-107 and GAO/HRD-92-18. 

[17] One example of retrospective analysis from GAO's work is 
Environmental Protection: Assessing Impacts of EPA's Regulations 
through Retrospective Studies, GAO/RCED-99-250 (Washington, D.C.: Sept. 
14, 1999). 

[18] GAO-04-38. Only 31 percent of the federal managers we surveyed 
reported having performance measures that linked product or service 
costs with the results achieved to a "great" or "very great" extent. 

[19] These numbers depend on how the agencies enumerated their measures 
in 2002, the year of our review; deriving them involved evaluating the 
text of the Results Act reports. The evaluation required a degree of 
professional judgment to determine the total number of indicators and 
measures linking cost to program outcome. Nonetheless, the general 
result did not depend on the specific numbers used. 

[20] See S. Rep. No. 103-58, at 29-30 (1993). 

[21] Typically, economists use an estimate of the value of a 
statistical life to estimate the value of reduced risk of mortality. 
This is the amount people are willing to pay to avoid the risk of one 
more death in a population. 

[22] These are late-1990s values, which would generally have increased 
with inflation by 2005. 

[23] We did not retrospectively investigate assumptions of the 
prospective assessment, other than the evidence on the changes in 
fatalities. 

[24] OMB, Office of Information and Regulatory Affairs, Informing 
Regulatory Decisions: 2003 Report to Congress on the Costs and Benefits 
of Federal Regulations and Unfunded Mandates on State, Local, and 
Tribal Entities (Washington, D.C.: Sept. 2003). 

[25] In a direct comparison of the net benefits, it is assumed that the 
methodologies used to measure those benefits have been standardized, 
making such comparisons feasible. All the studies, for example, would 
have had to include the same value of a statistical life, if 
applicable. 

[26] Since GAO developed the criteria and the initial reports on the 
Cotton and WIC programs, this evaluation was not independent. In 
addition, the standards for rendering an economics opinion have not 
been formally developed. Nonetheless, for illustration and discussion 
purposes only, we rendered an opinion of "unqualified" since the 
limitations ("N" and "P" in table 4) did not appear to be significant. 

[27] GAO/RCED-95-107 and GAO/HRD-92-18. 

[28] OMB, Informing Regulatory Decisions. 

[29] In general, consumers did not gain from the program--they paid 
higher prices than they would have paid in the absence of the program. 
The assessment shows a small gain for consumers for one year that 
affected the average. The gain occurred because the government released 
cotton, accumulated under the program in previous years, from 
government stock, lowering prices from what they would have been 
otherwise. 

[30] There are no economic gains from a pure transfer payment because 
the benefits to those who receive such a transfer are matched by the 
costs borne by those who pay for it. Therefore, transfers should be 
excluded from the calculation of net present value. It should also be 
recognized that a transfer program might have benefits that are less 
than the program's real economic costs because of inefficiencies that 
can arise in program delivery of benefits and in financing. 

[31] We did not investigate agency material cited as being publicly 
available in the regulation docket; we used only information from the 
Federal Register notice and other published sources. Consequently, the 
OSHA example is for illustration and might be materially different if 
the supporting information were investigated. 

[32] Since we completed this analysis, OSHA has used the EPA value of a 
statistical life for a proposed regulation. 

[33] This uses CDC's methodology for a 35-year-old man. 

[34] Since GAO developed both the standards and the reports, which were 
evaluated using the standards, the evaluation was clearly not 
independent. Recognizing this, we provide the scorecard for 
illustration and discussion. 

[35] Si Kyung Seong and John Mendeloff, "Assessing the Accuracy of 
OSHA's Estimation of the Benefit of Safety Standards," paper presented 
at the Research Conference of the Association for Public Policy 
Analysis and Management, Dallas, Texas, November 7-9, 2002; a revised 
version is available at www.aei-brookings.org (December 3, 2003). 

[36] We did not retrospectively verify other assumptions in the 
prospective analysis. 

GAO's Mission: 

The Government Accountability Office, the investigative arm of 
Congress, exists to support Congress in meeting its constitutional 
responsibilities and to help improve the performance and accountability 
of the federal government for the American people. GAO examines the use 
of public funds; evaluates federal programs and policies; and provides 
analyses, recommendations, and other assistance to help Congress make 
informed oversight, policy, and funding decisions. GAO's commitment to 
good government is reflected in its core values of accountability, 
integrity, and reliability. 

Obtaining Copies of GAO Reports and Testimony: 

The fastest and easiest way to obtain copies of GAO documents at no 
cost is through the Internet. GAO's Web site (www.gao.gov) contains 
abstracts and full-text files of current reports and testimony and an 
expanding archive of older products. The Web site features a search 
engine to help you locate documents using key words and phrases. You 
can print these documents in their entirety, including charts and other 
graphics. 

Each day, GAO issues a list of newly released reports, testimony, and 
correspondence. GAO posts this list, known as "Today's Reports," on its 
Web site daily. The list contains links to the full-text document 
files. To have GAO e-mail this list to you every afternoon, go to 
www.gao.gov and select "Subscribe to e-mail alerts" under the "Order 
GAO Products" heading. 

Order by Mail or Phone: 

The first copy of each printed report is free. Additional copies are $2 
each. A check or money order should be made out to the Superintendent 
of Documents. GAO also accepts VISA and Mastercard. Orders for 100 or 
more copies mailed to a single address are discounted 25 percent. 
Orders should be sent to: 

U.S. Government Accountability Office

441 G Street NW, Room LM

Washington, D.C. 20548: 

To order by Phone: 

Voice: (202) 512-6000: 

TDD: (202) 512-2537: 

Fax: (202) 512-6061: 

To Report Fraud, Waste, and Abuse in Federal Programs: 

Contact: 

Web site: www.gao.gov/fraudnet/fraudnet.htm

E-mail: fraudnet@gao.gov

Automated answering system: (800) 424-5454 or (202) 512-7470: 

Public Affairs: 

Jeff Nelligan, managing director,

NelliganJ@gao.gov

(202) 512-4800

U.S. Government Accountability Office,

441 G Street NW, Room 7149

Washington, D.C. 20548: