This is the accessible text file for GAO report number GAO-04-550T 
entitled 'Performance Budgeting: OMB's Performance Rating Tool Presents 
Opportunities and Challenges For Evaluating Program Performance' which 
was released on March 11, 2004.

This text file was formatted by the U.S. General Accounting Office 
(GAO) to be accessible to users with visual impairments, as part of a 
longer term project to improve GAO products' accessibility. Every 
attempt has been made to maintain the structural and data integrity of 
the original printed product. Accessibility features, such as text 
descriptions of tables, consecutively numbered footnotes placed at the 
end of the file, and the text of agency comment letters, are provided 
but may not exactly duplicate the presentation or format of the printed 
version. The portable document format (PDF) file is an exact electronic 
replica of the printed version. We welcome your feedback. Please E-mail 
your comments regarding the contents or accessibility features of this 
document to Webmaster@gao.gov.

This is a work of the U.S. government and is not subject to copyright 
protection in the United States. It may be reproduced and distributed 
in its entirety without further permission from GAO. Because this work 
may contain copyrighted images or other material, permission from the 
copyright holder may be necessary if you wish to reproduce this 
material separately.

Testimony:

Before the Subcommittee on Environment, Technology, and Standards, 
Committee on Science, House of Representatives:

United States General Accounting Office:

GAO:

For Release on Delivery Expected at 10:00 a.m. EST:

Thursday, March 11, 2004:

PERFORMANCE BUDGETING:

OMB's Performance Rating Tool Presents Opportunities and Challenges For 
Evaluating Program Performance:

Statement of Paul L. Posner:

Managing Director, Federal Budget Issues:

Strategic Issues:

GAO-04-550T:

GAO Highlights:

Highlights of GAO-04-550T, a testimony before the Subcommittee on 
Environment, Technology, and Standards, Committee on Science, House of 
Representatives 

Why GAO Did This Study:

The Office of Management and Budget’s (OMB) Program Assessment 
Rating Tool (PART) is meant to provide a consistent approach to 
evaluating federal programs during budget formulation. The subcommittee 
asked GAO to discuss its overall findings and recommendations 
concerning PART, based on a recent report, Performance Budgeting: 
Observations on the Use of OMB’s Program Assessment Rating Tool for the 
Fiscal Year 2004 Budget (GAO-04-174).

What GAO Found:

PART helped structure OMB’s use of performance information for internal 
program and budget analysis and stimulated agency interest in budget 
and performance integration. Moreover, it illustrated the potential to 
build on GPRA’s foundation to more actively promote the use of 
performance information in budget decisions. OMB deserves credit for 
inviting scrutiny of its federal program performance reviews and 
sharing them on its Web site.

The goal of PART is to evaluate programs systematically, consistently, 
and transparently. OMB went to great lengths to encourage consistent 
application of PART in the evaluation of government programs, including 
pilot testing the instrument, issuing detailed guidance, and conducting 
consistency reviews. Although there is undoubtedly room for continued 
improvement, any tool is inherently limited in providing a single 
performance answer or judgment on complex federal programs with 
multiple goals.  

Performance measurement challenges in evaluating complex federal 
programs make it difficult to meaningfully interpret a single bottom-
line rating. The individual section ratings for each PART review 
provided a better understanding of areas needing improvement than the 
overall rating alone. Moreover, any tool that is sophisticated enough 
to take into account the complexity of the U.S. government will always 
require some interpretation and judgment. Therefore it is not 
surprising that OMB staff were not fully consistent in interpreting 
complex questions about agency goals and results. 

The lack of program performance information at the agency level also 
creates challenges in effectively measuring program performance. PART 
provides an opportunity to consider strategically targeting the 
assessments on groups of related programs contributing to common 
outcomes to more efficiently use scarce analytic resources and focus 
decision makers’ attention on the most pressing performance issues 
cutting across individual programs and agencies.

The relationship between PART and the broader GPRA strategic planning 
process is still evolving and highlights the critical importance of 
defining the unit of analysis for program evaluation. Although PART can 
stimulate discussion on program-specific performance measurement 
issues, it is not a substitute for GPRA’s strategic, longer-term focus 
on thematic goals, and department- and governmentwide crosscutting 
comparisons. 

PART clearly serves OMB’s needs, but questions remain about whether it 
serves the various needs of other key stakeholders. If PART results are 
to be considered in the congressional debate, it will be important for 
OMB to (1) involve congressional stakeholders early in providing input 
on the focus of the assessments; (2) clarify any significant 
limitations in the assessments and underlying performance information; 
and (3) initiate discussions with key congressional committees about 
how they can best leverage PART information in congressional 
authorization, appropriations, and oversight processes.

What GAO Recommends:

In the recent report on PART, GAO recommended that the Director of OMB 
(1) address the capacity demands of PART, (2) strengthen PART guidance, 
(3) address evaluation information scope and availability issues, (4) 
focus program selection on critical operations and crosscutting 
comparisons, (5) expand the dialogue with Congress, and (6) articulate 
and implement a complementary relationship between PART and GPRA. 

OMB generally agreed with GAO’s findings, conclusions, and 
recommendations and said it is already taking actions to address many 
of the recommendations. 

GAO also suggested that Congress consider the need for a structured 
approach to articulating its perspective and oversight agenda on 
performance goals and priorities for key programs. 

[End of section]

Mr. Chairman and Members of the Subcommittee:

I am pleased to be here today to discuss performance budgeting and the 
Office of Management and Budget's (OMB) Program Assessment Rating Tool 
(PART). Since the 1950s, the federal government has attempted several 
governmentwide initiatives designed to better align spending decisions 
with expected performance--what is commonly referred to as "performance 
budgeting." The consensus is that prior efforts--including the Hoover 
Commission, the Planning-Programming-Budgeting System, Management by 
Objectives, and Zero-Based Budgeting--did not succeed in significantly 
shifting the focus of the federal budget process from its long-standing 
concentration on the items of government spending to the results of its 
programs. However, the persistent attempts reflect a long-standing 
interest in linking resources to results.

In the 1990s, Congress and the executive branch laid out a statutory 
and management framework that provides the foundation for strengthening 
government performance and accountability, with the Government 
Performance and Results Act of 1993[Footnote 1] (GPRA) as its 
centerpiece. GPRA is designed to inform congressional and executive 
decision making by providing objective information on the relative 
effectiveness and efficiency of federal programs and spending. A key 
purpose of the act is to create closer and clearer links between the 
process of allocating scarce resources and the expected results to be 
achieved with those resources. We have learned from prior initiatives 
that this type of integration is critical; those efforts failed in part 
because they did not prove to be relevant to budget decision makers in 
the executive branch or Congress.[Footnote 2] GPRA requires both a 
connection to the structures used in congressional budget presentations 
and consultation between the executive and legislative branches on 
agency strategic plans; this gives Congress an oversight stake in 
GPRA's success.[Footnote 3]

This administration has made the integration of performance and budget 
information one of five governmentwide management priorities under the 
President's Management Agenda (PMA).[Footnote 4] Central to this 
initiative is PART. OMB developed PART as a diagnostic tool meant to 
provide a consistent approach to evaluating federal programs and 
applied it in formulating the President's fiscal years 2004 and 2005 
budget requests. PART covers four broad topics for all 
"programs"[Footnote 5] selected for review: (1) program purpose and 
design, (2) strategic planning, (3) program management, and (4) program 
results (i.e., whether a program is meeting its long-term and annual 
goals) as well as additional questions that are specific to one of 
seven mechanisms or approaches used to deliver the program.[Footnote 6]

GPRA expanded the supply of performance information generated by 
federal agencies, although as the PART assessments demonstrate, more 
must be done to develop credible performance information. However, 
improving the supply of performance information is in and of itself 
insufficient to sustain performance management and achieve real 
improvements in management and program results. Rather, it needs to be 
accompanied by a demand for that information by decision makers and 
managers alike. PART may mark a new chapter in performance-based 
budgeting by more successfully stimulating demand for this information-
-that is, using the performance information generated through GPRA's 
planning and reporting processes to more directly feed into executive 
branch budgetary decisions.

My statement today focuses on seven points:

* PART helped structure OMB's use of performance information for its 
internal program and budget analysis, made the use of this information 
more transparent, and stimulated agency interest in budget and 
performance integration. Moreover, it illustrated the potential to 
build on GPRA's foundation to more actively promote the use of 
performance information in budget decisions.

* The goal of PART is to evaluate programs systematically, 
consistently, and transparently. OMB went to great lengths to encourage 
consistent application of PART in the evaluation of government 
programs, including pilot testing the instrument, issuing detailed 
guidance, and conducting consistency reviews. Although there is 
undoubtedly room for continued improvement, any tool is inherently 
limited in providing a single performance answer or judgment on complex 
federal programs with multiple goals.

* Performance measurement challenges in evaluating complex federal 
programs make it difficult to meaningfully interpret a bottom-line 
rating. The individual section ratings for each PART review provided a 
better understanding of areas needing improvement than the overall 
rating alone.

* As is to be expected with any new reform, PART is a work in progress, 
and our work has noted areas where OMB might make improvements. Any 
tool that is sophisticated enough to take into account the complexity 
of the U.S. government will require some exercise of judgment. 
Therefore it is not surprising that we found some inconsistencies in 
how OMB staff interpreted and applied PART.

* PART provides an opportunity to more efficiently use scarce analytic 
resources, to focus decision makers' attention on the most pressing 
policy issues, and to consider comparisons and trade-offs among related 
programs by more strategically targeting PART assessments based on such 
factors as the relative priorities, costs, and risks associated with 
related clusters of programs and activities. The first year PART 
assessments underscored the long-standing gaps in performance and 
evaluation information throughout the federal government. By reaching 
agreement on areas in which evaluations are most essential, decision 
makers can help ensure that limited resources are applied wisely.

* The relationship between the PART process and the broader GPRA 
strategic planning process is still evolving. Although PART can 
stimulate discussion on program-specific performance measurement 
issues, it is not a substitute for GPRA's strategic, longer-term focus 
on thematic goals and department- and governmentwide crosscutting 
comparisons. PART and GPRA serve different but complementary needs, so 
a strategy for integrating the two could help strengthen both.

* Federal programs are designed and implemented in dynamic environments 
where competing program priorities and stakeholders' needs must be 
balanced continually and new needs must be addressed. While PART 
clearly serves the needs of OMB in budget formulation, questions remain 
about whether it serves the various needs of other key stakeholders. If 
the President or OMB wants PART and its results to be considered in the 
congressional debate, it will be important for OMB to (1) involve 
congressional stakeholders early in providing input on the focus of the 
assessments; (2) clarify any significant limitations in the assessments 
as well as the underlying performance information; and (3) initiate 
discussions with key congressional committees about how they can best 
leverage PART information in congressional 
authorization, appropriations, and oversight processes. Moreover, 
Congress needs to consider ways it can articulate its oversight 
priorities and performance agenda.

My statement is based on our recently published report on OMB's 
PART[Footnote 7] in which we reviewed the first year of the PART 
process--fiscal year 2004--and changes in the PART process initiated 
for fiscal year 2005. We have not reviewed or analyzed the PART results 
for the fiscal year 2005 budget request. For this testimony, this 
subcommittee asked us to discuss our overall findings and 
recommendations concerning PART to help frame today's hearing. We 
conducted our work in accordance with generally accepted government 
auditing standards.

Strengths and Weaknesses of PART in Its First Year of Implementation:

Through its development and use of PART, OMB has more explicitly 
infused performance information into the budget formulation process; 
increased the attention paid to performance information and program 
evaluations; and ultimately, we hope, increased the value of this 
information to decision makers and other stakeholders. By linking 
performance information to the budget process, OMB has provided 
agencies with a powerful incentive for improving both the quality and 
availability of performance information. The level of effort and 
involvement by senior OMB officials and staff clearly signals the 
importance of this strategy in meeting the priorities outlined in the 
PMA. OMB should be credited with opening up for scrutiny--and potential 
criticism--its review of key areas of federal program performance and 
then making its assessments available to a potentially wider audience 
through its Web site.

As OMB and others recognize, performance is not the only factor in 
funding decisions. Determining priorities--including funding 
priorities--is a function of competing values and interests. 
Accordingly, we found that while PART scores were generally positively 
related to proposed funding changes in discretionary programs, the 
scores did not automatically determine funding changes. That is, for 
some programs rated "effective" or "moderately effective" OMB 
recommended funding decreases, while for several programs judged to be 
"ineffective" OMB recommended additional funding in the President's 
budget request with which to implement changes. In fact, the more 
important role of PART was not in making resource decisions but in 
supporting recommendations to improve program design, assessment, and 
management. Our analysis of the fiscal year 2004 PART 
found that 82 percent of the recommendations addressed program 
assessment, design, and management issues; only 18 percent of the 
recommendations had a direct link to funding matters.[Footnote 8]

OMB's ability to use PART to identify and address future program 
improvements and measure progress--a major purpose of PART--depends on 
its ability to oversee the implementation of PART recommendations. As 
OMB has recognized, following through on these recommendations is 
essential for improving program performance and ensuring 
accountability. Currently, OMB plans to assess an additional 20 percent 
of all federal programs annually. As the number of recommendations from 
previous years' evaluations grows, a system for monitoring their 
implementation will become more critical. However, OMB does not have a 
centralized system to oversee the implementation of such 
recommendations or evaluate their effectiveness.

The goal of PART is to evaluate programs systematically, consistently, 
and transparently. OMB went to great lengths to encourage consistent 
application of PART in the evaluation of government programs, including 
pilot testing the instrument, issuing detailed guidance, and conducting 
consistency reviews. Although there is undoubtedly room for continued 
improvement, any tool is inherently limited in providing a single 
performance answer or judgment on complex federal programs with 
multiple goals.

OMB recognized the complexity inherent in evaluating federal programs 
by differentiating its rating tool for seven mechanisms or approaches 
used to deliver services, ranging from block grants to research and 
development. However, judgment is involved in classifying programs by 
these categories since many programs fit into more than one of these 
groupings. OMB guidance, for instance, acknowledges that some research 
and development programs can also be evaluated as competitive grants 
and capital assets.

Performance measurement challenges in evaluating complex federal 
programs make it difficult to meaningfully interpret a bottom-line 
rating. OMB published both a single, bottom-line rating for PART 
results and individual section scores. It is these latter scores that 
are potentially more useful for identifying information gaps and 
program weaknesses. For example, in the fiscal year 2004 PART, one 
program rated "adequate" overall received high scores for purpose 
(80 percent) and planning (100 percent) but low scores for 
demonstrating results (39 percent) and for program management (46 percent). In 
a case like this, the individual section ratings provided a better 
understanding of areas needing improvement than the overall rating 
alone. In addition, bottom-line ratings may force raters to choose 
among several important but disparate goals and encourage a 
determination of program effectiveness even when performance data are 
unavailable, the quality of those data is uneven, or they convey a 
mixed message on performance.

Any tool that is sophisticated enough to take into account the 
complexity of the U.S. government will always require some 
interpretation and judgment. Therefore it is not surprising that OMB 
staff were not fully consistent in interpreting complex questions about 
agency goals and results. Many PART questions contain subjective terms 
that are open to interpretation, such as the requirement that 
performance measures be "ambitious." Because 
the appropriateness of a performance measure depends on the program's 
purpose, and because program purposes can vary immensely, an ambitious 
goal for one program might be unrealistic for a similar but more 
narrowly defined program. Without further guidance, it is unclear how 
OMB staff can be expected to be consistent.

We also found inconsistencies in how the definition of acceptable 
performance measures was applied. Our review of the fiscal year 2004 
PART surfaced several instances in which OMB staff inconsistently 
defined appropriate measures--outcome versus output--for programs. 
Agency officials also told us that OMB staff used different standards 
to define measures as outcome-oriented. Outputs are the products and 
services delivered by a program, whereas outcomes refer to the results 
produced by those outputs. For example, in the employment and training area, OMB 
accepted short-term outcomes, such as obtaining high school diplomas or 
employment, as a proxy for long-term goals for the Department of Health 
and Human Services' Refugee Assistance program, which aims to help 
refugees attain economic self-sufficiency as soon as possible. However, 
OMB did not accept the same employment measure as a proxy for long-term 
goals for the Department of Education's Vocational Rehabilitation 
program because it had not set long-term targets beyond a couple of 
years. In other words, although neither program contained long-term 
outcomes, such as participants gaining economic self-sufficiency, OMB 
accepted short-term outcomes in one instance but not the other.

The yes/no format employed throughout most of the PART questionnaire 
resulted in oversimplified answers to some questions. Although OMB 
believes the format promoted standardization, it was particularly 
troublesome for questions containing multiple criteria for a "yes" 
answer. Agency officials have commented that the yes/no format can 
oversimplify reality, in which progress in planning, management, or 
results is more likely to resemble a continuum than an on/off switch. 
Our review of the fiscal year 2004 PART found several instances in 
which some OMB staff gave a "yes" answer for successfully achieving 
some but not all of the multiple criteria, while others gave a "no" 
answer when presented with a similar situation. For example, OMB judged 
the Department of the Interior's (DOI) Water Reuse and Recycling 
program "no" on whether a program has a limited number of ambitious, 
long-term performance goals, noting that although DOI set a long-term 
goal of 500,000 acre-feet per year of reclaimed water, it failed to 
establish a time frame for when it would reach the target. However, OMB 
judged the Department of Agriculture's and DOI's Wildland Fire programs 
"yes" on this question even though the programs' long-term goals of 
improved conditions in high-priority forest acres are not accompanied 
by specific time frames.

The lack of program performance information also creates challenges in 
effectively assessing program performance. According to OMB, about half 
of the programs assessed for fiscal year 2004 lacked "specific, 
ambitious long-term performance goals that focus on outcomes" and 
nearly 40 percent lacked sufficient "independent, quality evaluations." 
Nearly 50 percent of programs assessed for fiscal year 2004 received 
ratings of "results not demonstrated" because OMB decided that program 
performance information, performance goals, or both were insufficient 
or inadequate. While the validity of these assessments may be subject 
to interpretation and debate, our previous work[Footnote 9] has raised 
concerns about the capacity of federal agencies to produce evaluations 
of program effectiveness as well as credible data.

In our report on PART, we note that several factors have limited the 
availability of performance data and evaluations of federal programs, 
including the lack of statutory mandates and funding to support data 
collection and analysis. Our work has recognized that research programs 
pose particular and long-standing challenges for performance 
assessments and evaluations.[Footnote 10] For instance, in both applied 
and basic research, projects take several years to complete and require 
more time before their meaning for the field can be adequately 
understood and captured in performance reporting systems. These 
challenges can be, and have been, addressed by federal and private research 
organizations. One evaluation approach we have identified in our review 
of leading practices is the use of peer review to evaluate the quality 
of research outcomes.[Footnote 11] For example, the National Science 
Foundation (NSF) convenes panels of independent experts as external 
advisers--a Committee of Visitors (COV)--to peer review the technical 
and managerial stewardship of a specific program or cluster of programs 
periodically. The COV compares research plans with progress made and 
evaluates outcomes to determine whether the research contributes to 
NSF's mission and goals.

The Relationship between GPRA and PART:

PART was designed for and is used in the executive branch budget 
preparation and review process. As a result, the goals and measures 
used in PART must meet OMB's needs. By comparison, GPRA--the current 
statutory framework for strategic planning and reporting--is a broader 
process involving the development of strategic and performance goals 
and objectives to be reported in strategic and annual plans and 
reports. OMB said that GPRA plans were organized at too high a level to 
be meaningful for program-level budget analysis and management review. 
OMB acknowledges that GPRA was the starting point for PART, but as I 
will explain, it appears that OMB's emphasis is shifting such that over 
time the performance measures developed for PART and used in the budget 
process may also come to drive agencies' strategic planning processes.

The fiscal year 2004 PART process came to be a parallel, competing 
structure to the GPRA framework as a result of OMB's desire to collect 
performance data that better align with budget decision units. OMB's 
most recent Circular A-11 guidance clearly requires that each agency 
submit a performance budget for fiscal year 2005 and that this budget 
replace the annual GPRA performance plan.[Footnote 12] These 
performance budgets are to include information from the PART 
assessments, where available, including all performance goals used in 
the assessment of program performance done under the PART process. 
Until all programs have been assessed using PART, the performance 
budget will also include performance goals for agency programs that 
have not yet been assessed. OMB's movement from GPRA to PART is further 
evident in the fiscal year 2005 PART guidance, which states that while 
existing GPRA performance goals may be a starting point for developing 
PART performance goals, the goals in agency GPRA documents are to be 
revised, as needed, to reflect OMB's instructions for developing the 
PART goals. Lastly, this same guidance 
states that GPRA plans should be revised to include any new performance 
measures used in PART and that unnecessary measures should be deleted 
from GPRA plans. In its comments to another recently issued GAO report, 
OMB stated that it will revise its guidance for both GPRA and PART to 
clarify the integrated and complementary relationship between the two 
initiatives.[Footnote 13]

Although there is potential for complementary approaches to GPRA and 
PART, the following examples clearly illustrate the importance of 
carefully considering the implications of selecting a unit of analysis, 
including its impact on the availability of performance data. They also 
reveal some of the unresolved tensions between the President's budget 
and performance initiative--a detailed budget perspective--and GPRA--a 
more strategic planning view. Experience with PART highlighted the fact 
that defining a "unit of analysis" useful for both program-level budget 
analysis and agency planning purposes can be difficult. For example, 
disaggregating programs for PART purposes could ignore the 
interdependence of programs recognized by GPRA by artificially 
isolating programs from the larger contexts in which they operate. 
Agency officials described one program assessed with the fiscal year 
2004 PART--Projects for Assistance in Transition from Homelessness--
that was aimed at a specific aspect of homelessness, that is, referring 
persons with emergency needs to other agencies for housing and needed 
services. OMB staff wanted the agency to produce long-term outcome 
measures for this program to support the PART review process. Agency 
officials argued that chronically homeless people require many services 
and that this federal program often supports only some of the services 
needed at the initial stages of intervention. GPRA--with its focus on 
assessing the relative contributions of related programs to broader 
goals--is better designed to consider crosscutting strategies to 
achieve common goals. Federal programs cannot be assessed in isolation. 
Performance also needs to be examined from an integrated, strategic 
perspective.

One way of improving the links between PART and GPRA would be to 
develop a more strategic approach to selecting and prioritizing areas 
for assessment under the PART process. Targeting PART assessments based 
on such factors as the relative priorities, costs, and risks associated 
with related clusters of programs and activities addressing common 
strategic and performance goals not only could help ration scarce 
analytic resources but also could focus decision makers' attention on 
the most pressing policy and program issues. Moreover, such an approach 
could facilitate the use of PART assessments to review the relative 
contributions of similar programs to common or crosscutting goals and 
outcomes established through the GPRA process.

The Importance of Congressional and Other Stakeholder Involvement:

We have previously reported[Footnote 14] that stakeholder involvement 
appears critical for getting consensus on goals and measures. In fact, 
GPRA requires agencies to consult with Congress and solicit the views 
of other stakeholders as they develop their strategic plans.[Footnote 
15] Stakeholder involvement can be particularly important for federal 
agencies because they operate in a complex political environment in 
which legislative mandates are often broadly stated and some 
stakeholders may strongly disagree about the agency's mission and 
goals.

The relationship between the PART process and the broader GPRA 
strategic planning process is still evolving. As part of the executive 
branch budget formulation process, PART must clearly serve the 
President's interests. Some tension about the amount of stakeholder 
involvement in the internal deliberations surrounding the development 
of PART measures and the broader consultations more common to the GPRA 
strategic planning process is inevitable. Compared to the relatively 
open-ended GPRA process, any budget formulation process is likely to 
seem closed.

Yet, we must ask whether the broad range of congressional officials 
with a stake in how programs perform will use PART assessments unless 
they believe the reviews reflect a consensus about performance goals 
among a community of interests, target performance issues that are 
important to them as well as the administration, and are based on an 
evaluation process in which they have confidence. Similarly, the 
measures used to demonstrate progress toward a goal, no matter how 
worthwhile, cannot serve the interests of a single stakeholder or 
purpose without potentially discouraging use of this information by 
others.

Congress has a number of opportunities to provide its perspective on 
performance issues and performance goals, such as when it establishes 
a new program or reauthorizes an existing one, during the annual appropriations 
process, and in its oversight of federal operations. In fact, these 
processes already reflect GPRA's influence. Reviews of language in 
public laws and committee reports show an increasing number of 
references to GPRA-related provisions. What is missing is a mechanism 
to systematically coordinate a congressional perspective and promote a 
dialogue between Congress and the President in the PART review process.

In our report, we have suggested steps for both OMB and the Congress to 
take to strengthen the dialogue between executive officials and 
congressional stakeholders. We have recommended that OMB reach out to 
key congressional committees early in the PART selection process to 
gain insight about which program areas and performance issues 
congressional officials believe warrant PART review. Engaging Congress 
early in the process may help target reviews with an eye toward those 
areas most likely to be on the agenda of Congress, thereby better 
ensuring the use of performance assessments in resource allocation 
processes throughout government. We have also suggested that Congress 
consider the need to develop a more systematic vehicle for 
communicating its top performance concerns and priorities; develop a 
more structured oversight agenda to prompt a more coordinated 
congressional perspective on crosscutting performance issues; and use 
this agenda to inform its authorization, appropriations, and oversight 
processes.

Concluding Observations:

The PART process is the latest initiative in a long-standing series of 
reforms undertaken to improve the link between performance information 
and budget decisions. Although each of the initiatives of the past 
appears to have met with an early demise, in fact, subsequent reforms 
were strengthened by building on the legacy left by their predecessors. 
Prior reforms often failed because they were not relevant to resource 
allocation and other decision-making processes, thereby eroding the 
incentives for federal agencies to improve their planning, data, and 
evaluations.

Unlike many of those past initiatives, GPRA has been sustained since 
its passage 10 years ago, and evidence exists that it has become more 
relevant than its predecessors. PART offers the potential to build on 
the infrastructure of performance plans and information ushered in by 
GPRA and the law's intent to promote the use of these plans in resource 
allocation decision making. GPRA improved the supply of plans and 
information, while PART can prompt greater demand for this information 
by decision makers. Enhancing interest and use may bring about greater 
incentives for agencies to devote scarce resources to improving their 
information and evaluations of federal programs as well.

Increasing the use and usefulness of performance data is not only 
important to sustain performance management reforms, but to improve the 
processes of decision making and governance. Many in the United States 
believe there is a need to establish a comprehensive portfolio of key 
national performance indicators. This will raise complex issues ranging 
from agreement on performance areas and indicators to getting and 
sharing reliable information for public planning, decision making, and 
accountability. In this regard, the entire agenda of management reform 
at the federal level has been focused on shifting the attention of 
decision makers and agency management from process to results. Although 
PART is based on changing the orientation of budgeting, other 
initiatives championed by Congress and embodied in the PMA are also 
devoted to improving the accountability for performance goals in agency 
human capital management, financial management, competitive sourcing, 
and other key management areas.

In particular, we have reported that human capital--or people--is at 
the center of any serious change management initiative. Thus, strategic 
human capital management is at the heart of government transformation. 
High-performing organizations strengthen the alignment of their GPRA 
strategic and performance goals with their daily operations. In that 
regard, performance management systems can be a vital tool for aligning 
an organization's operations with individual day-to-day activities, but 
they are currently largely unused. As we move forward to strengthen 
government performance and accountability, effective performance 
management systems can be a strategic tool to drive internal change and 
achieve desired results.

The question now is how to enhance the credibility and use of the PART 
process as a tool to focus decisions on performance. In our report, we 
make seven recommendations to OMB and a suggestion to Congress to 
better support the kind of collaborative approach to performance 
budgeting that very well may be essential in a separation of powers 
system like ours. Our suggestions cover several key issues that need to 
be addressed to strengthen and help sustain the PART process. We 
recommend that the OMB Director take the following actions:

* Centrally monitor agency implementation and progress on PART 
recommendations and report such progress in OMB's budget submission to 
Congress. Governmentwide councils may be effective vehicles for 
assisting OMB in these efforts.

* Continue to improve the PART guidance by (1) expanding the discussion 
of how the unit of analysis is to be determined to include trade-offs 
made when defining a unit of analysis, implications of how the unit of 
analysis is defined, or both; (2) clarifying when output versus outcome 
measures are acceptable; and (3) better defining an "independent, 
quality evaluation."

* Clarify OMB's expectations to agencies regarding the allocation of 
scarce evaluation resources among programs, the timing of such 
evaluations, and the evaluation strategies it wants for PART; and 
consider using internal agency evaluations as evidence on a case-
by-case basis--whether conducted by agencies, contractors, or other 
parties.

* Reconsider plans for 100 percent coverage of federal programs and, 
instead, target for review a significant percentage of major and 
meaningful government programs based on such factors as the relative 
priorities, costs, and risks associated with related clusters of 
programs and activities.

* Maximize the opportunity to review similar programs or activities in 
the same year to facilitate comparisons and trade-offs.

* Attempt to generate, early in the PART process, an ongoing, 
meaningful dialogue with congressional appropriations, authorization, 
and oversight committees about what they consider to be the most 
important performance issues and program areas that warrant review.

* Seek to achieve the greatest benefit from both GPRA and PART by 
articulating and implementing an integrated, complementary 
relationship between the two.

In its comments on our report, OMB outlined actions it is taking to 
address several of these recommendations, including refining the 
process for monitoring agencies' progress in implementing the PART 
recommendations, seeking opportunities for dialogue with Congress on 
agencies' performance, and continuing to improve executive branch 
implementation of GPRA plans and reports.

Our recommendations to OMB are partly directed at fortifying and 
enhancing the credibility of PART itself and the underlying data used 
to make the judgments. Decision makers across government are more 
likely to rely on PART data and assessments if the underlying 
information and the rating process are perceived as being credible, 
systematic, and consistent. Enhanced OMB guidance and improved 
strategies for obtaining and evaluating program performance data are 
vital elements.

The PART process can be made more sustainable if the use of analytic 
resources at OMB and the agencies is rationalized by reconsidering the 
goal of 100 percent coverage of all federal programs. Instead, we 
suggest a more strategic approach to target assessments on related 
clusters of programs and activities. A more targeted approach stands a 
better chance of capturing the interest of decision makers throughout 
the process by focusing their attention on the most pressing policy and 
program issues and on how related programs and tools affect broader 
crosscutting outcomes and goals. Unfortunately, the governmentwide 
performance plan required by GPRA has never been used to drive 
budgeting in this way.

Improving the integration of inherently separate but interrelated 
strategic planning and performance budgeting processes can help support 
a more strategic focus for PART assessments. GPRA's strategic planning 
goals could be used to anchor the selection and review of programs by 
providing a foundation to assess the relative contribution of related 
programs and tools to broader performance goals and outcomes.

Finally, refining the PART questionnaire and review process and 
improving the quality of data are important, but the question of whose 
interests drive the process is perhaps paramount in our system. 
Ultimately, the impact of PART on decision making will be a function 
not only of the President's decisions, but of congressional decisions 
as well.

Much is at stake in the development of a collaborative performance 
budgeting process. Not only might the PART reviews ultimately come to 
be disregarded absent congressional involvement, but more important, 
Congress will lose an opportunity to use the PART process to improve 
its own decision-making and oversight processes.

This is an opportune time for the executive branch and Congress to 
carefully consider how agencies and committees can best leverage the 
new information and perspectives coming from the 
reform agenda under way in the executive branch. Ultimately, the 
specific approach or process is not important. We face a long-term 
fiscal imbalance, which will require us to reexamine our existing 
policies and programs. It is all too easy to accept "the base" as given 
and to subject only new proposals to scrutiny and analysis. The norm 
should be to reconsider the relevance or "fit" of any federal program, 
policy, or activity in today's world and for the future.

Mr. Chairman, this concludes my prepared statement. I would be pleased 
to answer any questions you or the other Members of the Subcommittee 
may have at this time.

For future contacts regarding this testimony, please contact Paul L. 
Posner, Managing Director, Federal Budget Issues, at (202) 512-9573. 
Individuals making key contributions to this testimony included Denise 
M. Fantone, Kristeen McLain, and Tiffany Tanner.

FOOTNOTES

[1] Pub. L. No. 103-62 (1993).

[2] U.S. General Accounting Office, Performance Budgeting: Past 
Initiatives Offer Insights for GPRA Implementation, GAO/AIMD-97-46 
(Washington, D.C.: Mar. 27, 1997).

[3] See Pub. L. No. 103-62 § 2 (1993), 5 U.S.C. § 306 (2003), and 31 
U.S.C. §§ 1115-1116 (2003).

[4] In addition to budget and performance integration, the other four 
priorities under the PMA are strategic management of human capital, 
expanded electronic government, improved financial performance, and 
competitive sourcing. 

[5] There is no standard definition for the term "program." For 
purposes of PART, OMB described the unit of analysis (program) as (1) 
an activity or set of activities clearly recognized as a program by the 
public, OMB, or Congress; (2) having a discrete level of funding 
clearly associated with it; and (3) corresponding to the level at which 
budget decisions are made.

[6] The seven major categories are competitive grants, block/formula 
grants, capital assets and service acquisition programs, credit 
programs, regulatory-based programs, direct federal programs, and 
research and development programs. 

[7] U.S. General Accounting Office, Performance Budgeting: Observations 
on the Use of OMB's Program Assessment Rating Tool for the Fiscal Year 
2004 Budget, GAO-04-174 (Washington, D.C.: Jan. 30, 2004).

[8] The 234 programs assessed for fiscal year 2004 contained a total of 
612 recommendations.

[9] U.S. General Accounting Office, Program Evaluation: Agencies 
Challenged by New Demand for Information on Program Results, GAO/
GGD-98-53 (Washington, D.C.: Apr. 24, 1998).

[10] U.S. General Accounting Office, Transportation Research: Actions 
Needed to Improve Coordination and Evaluation of Research, GAO-03-500 
(Washington, D.C.: May 1, 2003).

[11] U.S. General Accounting Office, Program Evaluation: An Evaluation 
Culture and Collaborative Partnerships Help Build Agency Capacity, GAO-
03-454 (Washington, D.C.: May 2, 2003).

[12] OMB Circular A-11, Preparation, Submission, and Execution of the 
Budget.

[13] U.S. General Accounting Office, Results-Oriented Government: GPRA 
Has Established a Solid Foundation for Achieving Greater Results, 
GAO-04-38 (Washington, D.C.: Mar. 10, 2004).

[14] U.S. General Accounting Office, Agencies' Strategic Plans Under 
GPRA: Key Questions to Facilitate Congressional Review (Version 1), 
GAO/GGD-10.1.16 (Washington, D.C.: May 1997).

[15] 5 U.S.C. § 306(d) (2003).