This is the accessible text file for GAO report number GAO-15-25 entitled 'Program Evaluation: Some Agencies Reported that Networking, Hiring, and Involving Program Staff Help Build Capacity' which was released on November 13, 2014. This text file was formatted by the U.S. Government Accountability Office (GAO) to be accessible to users with visual impairments, as part of a longer term project to improve GAO products' accessibility. Every attempt has been made to maintain the structural and data integrity of the original printed product. Accessibility features, such as text descriptions of tables, consecutively numbered footnotes placed at the end of the file, and the text of agency comment letters, are provided but may not exactly duplicate the presentation or format of the printed version. The portable document format (PDF) file is an exact electronic replica of the printed version. We welcome your feedback. Please E-mail your comments regarding the contents or accessibility features of this document to Webmaster@gao.gov. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. Because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. United States Government Accountability Office: GAO: Report to Congressional Committees: November 2014: Program Evaluation: Some Agencies Reported that Networking, Hiring, and Involving Program Staff Help Build Capacity: GAO-15-25: GAO Highlights: Highlights of GAO-15-25, a report to congressional committees. Why GAO Did This Study: To improve federal government performance and accountability, GPRAMA aims to ensure that agencies use performance information in decision making and holds them accountable for achieving results. The Office of Management and Budget (OMB) has encouraged agencies to strengthen their program evaluations--systematic studies of program performance--and expand their use in management and policy making. This report is one of a series in which GAO, as required by GPRAMA, examines the act's implementation. GAO examined federal agencies' capacity to conduct and use program evaluations and the activities and resources, including some related to GPRAMA, agencies found useful for building that capacity. GAO reviewed the literature to identify the key components and measures of evaluation capacity. GAO surveyed the PIOs of the 24 federal agencies subject to the Chief Financial Officers Act regarding their organizations' characteristics, expertise, and policies, and their observations on the usefulness of various resources and activities for building evaluation capacity. All 24 responded. GAO also interviewed OMB and Office of Personnel Management (OPM) staff about their capacity-building efforts. What GAO Found: In a governmentwide survey of agency Performance Improvement Officers (PIO), GAO found uneven levels of evaluation expertise, organizational support within and outside the organization, and use across the government. The Government Performance and Results Act of 1993 (GPRA) is a key component of the enabling environment for federal evaluation capacity, having established a solid foundation of agency performance reporting and leadership commitment to using evidence in decision making.
However, only half the agencies reported congressional interest in or requests for program evaluation studies. Eleven of the 24 agencies reported committing resources to obtain evaluations by establishing a central office responsible for evaluation of agency programs, operations, or projects, although only half these offices were reported to have a stable source of funding. Seven agencies reported having a high-level official responsible for oversight of evaluation. A quarter of agencies reported having agency-wide policies or guidance concerning key issues in study design, evaluator independence and objectivity, report transparency, or implementing findings. Two-thirds of the agencies reported evaluation coverage of less than half their performance goals. Over a third reported using evaluations to a moderate or greater extent as evidence in support of budget or policy changes or program management. Those agencies with centralized evaluation authority reported greater evaluation coverage and use of the results in decision making. Since the GPRA Modernization Act of 2010 (GPRAMA) was passed, 2 to 4 agencies established a central evaluation office or leader. Half the agencies reported increased efforts to improve their evaluation capacity through hiring, training, conference participation, and consulting experts, but 4 to 5 reported declines in hiring and conference participation. About half reported increased use of evaluations as supporting evidence for management and policy decisions. About a quarter of PIOs were not familiar with their agencies' various capacity-building activities, but many of those who did respond rated hiring, professional networking, consulting with experts, reviewing progress on priority goals, and holding goal leaders accountable under GPRAMA most useful for building capacity to conduct evaluations. They rated engaging program staff in evaluation design, conduct, and reporting, and the GPRAMA priority goal review and accountability provisions most useful for building capacity to use evaluation. Based on our survey results, GAO observes that: * Promoting information sharing in professional networks and engaging program managers and staff in evaluation studies and priority goal reviews offer promise for building capacity in a constrained budget environment. * Engaging congressional and other stakeholders in evaluation planning might increase their interest in and adoption of evaluation recommendations. * Congressional committees can communicate their interest in evaluation by consulting with agencies on their strategic plans and priority goals, reviewing agency annual evaluation plans to ensure they address issues that will inform congressional decision making, and requesting evaluations to address specific questions of interest. What GAO Recommends: GAO is not making recommendations. OMB staff provided technical comments on a draft of this report that were incorporated as appropriate. OPM provided no comments. View [hyperlink, http://www.gao.gov/products/GAO-15-25]. For more information, contact Nancy Kingsbury, (202) 512-2700, KingsburyN@gao.gov.
[End of section] Contents: Letter: Background: Federal Agencies' Capacity to Conduct and Use Program Evaluation Is Uneven: Some Agencies Improved Their Evaluation Capacity after GPRAMA Was Enacted: Some Agencies Reported that Professional Networking, Hiring, Engaging Program Staff, and Some GPRAMA Provisions Were Useful for Building Evaluation Capacity: Concluding Observations: Agency Comments: Appendix I: Methodology for the Survey of Performance Improvement Officers in CFO Act Agencies: Appendix II: Results from 2014 Survey of Performance Improvement Officers: Appendix III: GAO Contacts and Staff Acknowledgments: References: Related GAO Products: Figures: Figure 1: Number of Agencies Citing Evaluation Evidence to Support Various Decisions: Figure 2: Agencies Reporting Change since 2010 in Efforts to Improve Their Capacity to Conduct Evaluations: Figure 3: Agencies Reporting Change since 2010 in Citing Evaluation as Supporting Evidence in Decisions: Figure 4: PIOs' Views on Usefulness of Activities and Resources for Improving Agency Capacity to Conduct Credible Evaluations: Figure 5: PIOs' Views on Usefulness of Activities and Resources for Improving Agency Capacity to Use Evaluations in Decision Making: Abbreviations: AAAS: American Association for the Advancement of Science: AEA: American Evaluation Association: CFO: Chief Financial Officer: COO: Chief Operating Officer: CAP: Cross-Agency Priority goal: GAO: Government Accountability Office: GPRA: Government Performance and Results Act: GPRAMA: GPRA Modernization Act: HHS: Department of Health and Human Services: OMB: Office of Management and Budget: OPM: Office of Personnel Management: PIC: Performance Improvement Council: PIO: Performance Improvement Officer: [End of section] United States Government Accountability Office: GAO: 441 G St. N.W. Washington, DC 20548: November 13, 2014: The Honorable Thomas R. Carper: Chairman: The Honorable Tom Coburn, M.D. Ranking Member: Committee on Homeland Security and Governmental Affairs: United States Senate: The Honorable Elijah E. Cummings: Ranking Member: Committee on Oversight and Government Reform: House of Representatives: The federal government faces a number of significant fiscal, financial management, and performance management challenges in responding to the diverse and increasingly complex issues it seeks to address. The reporting requirements of the Government Performance and Results Act of 1993 (GPRA) were intended to provide both congressional and executive decision makers with more objective information on the relative effectiveness and efficiency of federal programs and spending.[Footnote 1] Although GPRA helped improve the availability of agency performance information, federal managers reported limited use of performance data for decision making.[Footnote 2] The GPRA Modernization Act of 2010 (GPRAMA, or the act) aims to ensure that agencies use performance information in decision making and holds them accountable for achieving results and improving government performance.[Footnote 3] The Office of Management and Budget (OMB), too, has encouraged agencies to strengthen their program evaluations-- systematic studies of program performance--and expand their use of evidence and evaluation in budget, management, and policy decisions to improve government effectiveness. 
However, in our 2013 survey of federal managers, we found that their use of performance information had stagnated since our 2007 survey and that, among other things, their use was hindered by inadequate staff expertise in performance measurement and analysis as well as a widespread lack of program evaluations.[Footnote 4] This report is one of a series responding to GPRAMA's mandate that we examine implementation of the act.[Footnote 5] In this report, we assess federal agencies' current evaluation capacity and identify how some GPRAMA provisions and other activities have contributed to its improvement. Specifically, our objectives were to learn: 1. What are the key elements and extent of agency evaluation capacity-- that is, the ability to obtain and use evaluations in decision making? 2. What progress, if any, has been made since 2010 across the government in improving evaluation capacity? 3. What activities, if any, especially those related to GPRAMA provisions, have agencies found useful in building their evaluation capacity? To answer our first objective, we reviewed published domestic and international research and commentary on key components of organizational capacity for program evaluation, including GAO reports and recommendations of national and international evaluation organizations. We identified the key organizational characteristics, expertise, and policies believed either to be required for or to indicate the ability to obtain credible evaluations of agency programs and policies and to use the results in management and policy decisions. We also identified strategies used or proposed for building an organization's evaluation capacity. To answer all three objectives, we surveyed the Performance Improvement Officers (PIO) of the 24 executive branch agencies covered by the Chief Financial Officers (CFO) Act of 1990, as amended. [Footnote 6] The survey questionnaire was designed to obtain information on their agencies' elements of evaluation capacity as described above, and their observations and perceptions of the usefulness of various resources and activities for building their agencies' capacity to produce evaluations and use the results in decision making. We administered the web-based survey from May through June 2014, receiving responses from all 24 agencies. Throughout this report except where specifically noted, when we refer to agencies, we are referring to both cabinet departments and independent agencies. (More information on the survey is in appendix I. The survey questions and summarized results are in appendix II.) In addition, we reviewed examples of agency evaluation plans and policies that the survey respondents provided. We also interviewed OMB and Office of Personnel Management (OPM) staff about their capacity-building efforts, reviewed agency guidance and memorandums, and attended related interagency information-sharing forums. We conducted this performance audit from September 2013 to November 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Background: Program evaluations are systematic studies that use research methods to address specific questions about program performance. 
Evaluation is closely related to performance measurement and reporting. Whereas performance measurement entails the ongoing monitoring and reporting of program progress toward preestablished goals, program evaluation typically assesses the achievement of a program's objectives and other aspects of performance in the context in which the program operates. In particular, evaluations can be designed to isolate the causal impacts of programs from other external economic or environmental conditions in order to assess a program's effectiveness. Thus, an evaluation study can provide a valuable supplement to ongoing performance reporting by measuring results that are too difficult or expensive to assess annually, explaining the reasons why performance goals were not met, or assessing whether one approach is more effective than another.[Footnote 7] Evaluation can play a key role in program planning, management, and oversight by providing feedback on both program design and execution to program managers, legislative and executive branch policy officials, and the public. In our 2013 survey of federal managers, we found that while only about a third had recent evaluations of their programs or projects, the majority of those who had evaluations reported that they contributed to understanding program performance, sharing what works with others, and making changes to improve program management or performance.[Footnote 8] GPRAMA Established an Expectation That Evidence Would Have a Greater Role in Agency Decision Making: GPRAMA made changes to agency performance management roles, planning and review processes, and reporting intended to ensure that agencies used performance information in decision making and were held accountable for achieving results and improving government performance.[Footnote 9] The act required the 24 CFO Act agencies and OMB to establish agency and governmentwide cross-agency priority goals, review progress on those goals quarterly, and report publicly on their progress and strategies to improve performance, as needed, on a governmentwide performance website. It also encouraged a more detailed and comprehensive understanding of those strategies by requiring agencies to identify and coordinate the program activities, organizations, regulations, policies, and other activities--both internal and external--that contribute to each agency priority goal. GPRAMA, along with related OMB guidance, established and defined performance management responsibilities for agency officials in key management roles: the Chief Operating Officer (COO), the PIO, and a goal leader responsible for coordinating efforts to achieve the cross-agency and agency priority goals. The PIO role was created in 2007 by executive order.[Footnote 10] GPRAMA established the role in law and specified that it be given to a "senior executive" at each agency who reports directly to the agency's COO or to its deputy agency head. The PIO is to advise the head of the agency and the COO on goal setting, measurement, and reviewing progress on the agency priority goals. OMB guidance gave PIOs a central role in promoting agency use of evaluation and other evidence to improve program performance, describing their roles as: * "... driving performance improvement efforts across the organization, by using goal-setting, measurement, analysis, evaluation and other research, data-driven performance reviews on progress, cross-agency collaboration, and personnel performance appraisals aligned with organizational priorities."
* "Help components, program office leaders and goal leaders to identify and promote adoption of effective practices to improve outcomes, responsiveness and efficiency, by supporting them in ... securing evaluations and other research as needed ... and creating a network for learning and knowledge sharing about successful outcome- focused, data-driven performance improvement methods across all levels of the organization and with delivery partners."[Footnote 11] The act also charged the Performance Improvement Council (PIC), the Office of Personnel Management (OPM), and OMB with responsibilities to improve agency performance management capacity. The PIC is an interagency council that was created by executive order, but GPRAMA established it in law and specified that it would be chaired by the OMB Deputy Director for Management and that membership would include the PIOs from all 24 CFO Act agencies, as well as any others. The PIC's duties include facilitating agencies' exchange of successful practices and the development of tips and tools to strengthen agency performance management, and assisting OMB in implementing certain GPRAMA requirements. The PIC holds "principals only" and broader meetings open to other agency staff, has formed several working groups that focus on issues relating to implementing GPRAMA and related guidance, and provides a networking forum for staff from different agencies who are working on similar issues. In 2012 through 2014, OMB and the PIC supported several interagency forums on evaluation and evidence that were open to all federal agency staff. The act charged OPM with (1) identifying key skills and competencies needed by federal employees for developing goals, evaluating programs, and analyzing and using performance information for improving governmental efficiency and effectiveness; (2) incorporating those skills and competencies into relevant position classifications; and (3) working with agencies to incorporate these skills and competencies into agency training. OPM identified core competencies for performance management staff, PIOs, and goal leaders and published them in a January 2012 memorandum.[Footnote 12] OPM identified relevant existing position classifications that are related to the competencies and worked with the PIC Capacity Building working group to develop related guidance and tools for agencies. In December 2012, the PIC released a draft Performance Analyst position design, recruitment, and selection toolkit. OPM worked with the Chief Learning Officers Council and the PIC Capacity Building working group to develop a website--the Training and Development Policy wiki--that lists some resources for personnel performance management and implementing GPRAMA. OPM is currently conducting pilot studies through 2015, in collaboration with the Chief Human Capital Officers Council, of how to build staff capacity in several competencies identified as mission critical across government, including data analysis. OPM officials also noted that they make databases, such as the Federal Employee Viewpoint Survey, available to agencies for their staff to use in program evaluations.[Footnote 13] OMB Efforts to Improve Agency Evaluation Capacity: OMB has taken several steps to help agencies develop evaluation capacity by issuing guidance, promoting the exchange of evaluation expertise through the PIC, and working selectively with certain agencies. 
Since 2009, OMB has issued several memorandums urging efforts to strengthen the use of rigorous impact evaluation, and demonstrate the use of evidence and evaluation in budget submissions, strategic plans, and performance plans.[Footnote 14] In May 2012, OMB encouraged agencies to designate a high-level official responsible for evaluation who could develop and manage the agency's research agenda and provide independent input to agency policymakers on resource allocation and to program leaders on program management. In July 2013, the Directors of OMB, the Domestic Policy Council, the Office of Science and Technology Policy, and the Chairman of the Council of Economic Advisers, jointly issued a memorandum encouraging agencies to adopt an "evidence and innovation agenda": applying existing evidence on what works, generating new knowledge, and using experimentation and innovation to test new approaches to program delivery. In particular, the memorandum encouraged agencies to exploit existing administrative data to conduct low-cost experiments, and implement outcome-focused grant designs and research clearinghouses to catalyze innovation and learning.[Footnote 15] OMB staff established an interagency group to promote sharing of evaluation expertise, and organized a series of workshops and interagency collaborations. The workshops addressed issues such as potential procedural barriers to evaluation (e.g., the Paperwork Reduction Act information collection reviews) and promising practices for collecting evidence (e.g., developing a common evidence framework). OMB staff facilitated the collaboration of staff from the Department of Education and the National Science Foundation in developing common standards of evidence for reviewing research proposals, and another group of agencies in developing a common framework of standards for reviewing completed evaluations. Federal Agencies' Capacity to Conduct and Use Program Evaluation Is Uneven: Studies of organization or government evaluation capacity have found that it requires analytic expertise and access to credible data as well as organizational support both within and outside the organization to ensure that credible, relevant evaluations are produced and used. Our survey found levels of evaluation expertise, support, and use uneven across the government. For example, 7 of the 24 agencies have central leaders responsible for evaluation; in contrast, 7 agencies reported having no recent evaluations for any of their performance goals. An Agency's Evaluation Capacity Depends on Both Policy Makers' Requests for Information and the Agency's Ability to Produce Credible, Relevant Information: To address our first objective and guide our assessment of agency evaluation capacity, we reviewed the research and policy literature on evaluation capacity, including assessments of agencies in Canada and the United Kingdom, and guidance from the American Evaluation Association (AEA) and the United Nations Evaluation Group.[Footnote 16] While the details vary, these frameworks commonly emphasize three general categories of elements of organizational, especially national, evaluation capacity: * An enabling environment supporting the use of evidence in management and policymaking: credible information and statistical systems, legislation or policies to institutionalize monitoring and evaluation, public interest in evidence of government performance, and senior leadership commitment to transparency, accountability, and managing for results. 
* Organizational resources to support the supply and use of credible evaluations: a senior evaluation leader; an evaluation office with clearly defined roles and responsibilities, a stable source of funding, and independence; an evaluation agenda, policies and tools to ensure study credibility and utility; staff expertise and access to experts; and collaboration with program managers and stakeholders. * Evaluation results and use: evaluation quality and credibility; coverage of the agency's key programs or goals; transparent reporting and public dissemination of reports; recommendation follow-up; and the use of evaluation results in program management, policy making, and budgeting. Our Survey Respondents: To learn about federal agencies' evaluation capacity, we surveyed the PIOs or their deputies at the 24 CFO Act agencies because of the central role GPRAMA and OMB assigned them to promote agency performance assessment and improvement efforts.[Footnote 17] Our 2012 survey of PIOs found that they held senior leadership positions and that most of them were involved in the central aspects of agency performance management to a large extent.[Footnote 18] Although the PIO position was created in 2007, only one of the initial PIOs continued to hold this position at the time of our 2014 survey. Half had started serving in this position within the past 2 years. Many of our survey respondents held key senior leadership positions in their agencies: 8 PIOs served as the agency's Chief Financial Officer, another 4 as Assistant Secretary or Deputy for Administration or Management. Seventeen reported to their agency's COO, 2 to the agency's administrator or commissioner, and 3 to the agency's CFO. In order to report on the policies and practices of offices throughout these agencies, we encouraged the PIOs to consult with others when completing the survey, and several indicated that they did so. Most Federal Agencies Reported Access to and Commitment to Using Evidence but Fewer Reported Congressional Requests for Program Evaluation: GPRA represents a central component of the enabling environment for U.S. government evaluation capacity by providing, for over 20 years, a statutory framework for performance management and accountability across the government. Accordingly, most PIOs reported that their senior leadership demonstrated a commitment to using evidence in management and policy making through agency guidance (17), internal agency memorandums (12), congressional hearings (9), and speeches (8). Other avenues offered in comments included budget justifications (10) and town hall meetings or videos for agency managers and staff (2). Moreover, as we have noted previously, GPRA has produced a solid foundation of generating and reporting performance information. Three-quarters of the agencies (18) said that reliable performance data are available on outcomes for all their priority goals, and 3 more said data are available for more than half their priority goals. (One of the independent agencies was exempt from developing priority goals.) However, our survey respondents indicated that congressional interest in and requests for program evaluation are not widespread. Although the federal government has long invested in evaluation, about half the agencies (13) reported having explicit agency-wide authority to use appropriated funds for evaluation.
Some pointed to specific legislative authorities, while one PIO commented, "Evaluation is considered inherent to responsible management and programs use appropriated fund[s] for this purpose." Less than half the agencies (10) indicated that they had congressional mandates to evaluate specific programs. However, one-third (7) indicated that they had neither explicit agency-wide authority nor a program-specific requirement to conduct evaluations. This matters because, in a prior study, agency evaluators told us that the lack of explicit evaluation authority was a barrier to using program funds for evaluation.[Footnote 19] Half of Federal Agencies Report Committing Resources to Obtain Credible, Relevant Program Evaluations: Our survey asked the PIOs about the agency resources and policies committed to obtaining credible, relevant evaluations. Their responses indicated uneven levels of development across the agencies. About half the agencies (11) reported committing resources to obtain evaluations by establishing a central office responsible for evaluating agency programs, operations, or projects. However, less than a third of agencies have an evaluation plan or agency-wide policies or guidance for ensuring study credibility. Central Evaluation Leadership and Resources: About one-third of the agencies (7) reported having assigned responsibility to a single high-level official to oversee their evaluation studies. Although agencies do not need a central evaluation leader in order to conduct credible evaluations, establishing such a position with clear responsibilities sends a message about the importance of evaluation to agency managers. Almost all these individuals (6) were responsible for setting these agencies' evaluation agendas, but only half (3) were responsible for following up on evaluation recommendations. Similar numbers of departments and independent agencies reported having such officials with titles such as Chief Evaluation Officer, Chief Strategic Officer, and Assistant Secretary. According to AEA guidance, a central evaluation office can promote an agency's evaluation capacity and provide a stable organizational framework for planning, conducting, or procuring evaluation studies. All the agencies with a single official responsible for overseeing evaluations also reported having a central office responsible for evaluating agency programs, operations, or projects, but only about half the agencies in total (11) had a central office. The central offices could have other responsibilities as well, such as strategic planning. Most of these offices were said to be independent of program offices in making decisions about evaluation design, conduct, and reporting and to have access to analytic expertise through external experts or contractors, but only about half (6) were reported to have a stable source of funding. Funding generally came through regular appropriations, although two agencies reported having evaluation set-asides--that is, the ability to tap a percentage of operating divisions' appropriations for evaluation. A larger proportion of independent agencies (5 of 9) than departments (6 of 15) reported having central offices. As discussed earlier, having analytic expertise is a critical element of evaluation capacity.
Most agencies with a central office responsible for evaluations (7 to 8 of 11) reported that the evaluation staff had training and experience to a great or very great extent in each of the following areas: research design and methods, data management and statistical analysis, performance measurement and monitoring, and translating evaluation results into actionable recommendations. Slightly fewer reported that central evaluation office staff had great or very great subject matter expertise (5). Three survey respondents also volunteered that their staff had additional expertise, including economic analysis, geographical information systems, and Lean cost reduction analysis. Few Agencies Have Policies to Ensure Credible and Relevant Evaluations: Organizations, whether government agencies or professional societies, develop written policies or standards in order to provide benchmarks for ensuring the quality of their processes and products. AEA has published guides for the individual evaluator's practice and for developing and implementing U.S. government evaluation programs.[Footnote 20] About one-quarter of agencies reported having agency-wide written policies or guidance for key issues addressed in those guides: * selecting and prioritizing evaluation topics; * consulting program staff and subject matter experts; * ensuring internal or external evaluator independence and objectivity; * selecting evaluation approaches and methods; * ensuring completeness and transparency of evaluation reports; * timely, public dissemination of evaluation findings and recommendations; or: * tracking implementation of evaluation findings. A few more agencies, but less than half, reported having policies on ensuring quality of data collection and analysis, which could apply to research as well as program evaluation. Central evaluation leadership was not a prerequisite for adopting evaluation policies, as only about half of the agencies with agency-wide evaluation policies had a central evaluation office. Agencies provided us with examples of guidance on information quality or scientific integrity as well as program evaluations specifically.[Footnote 21] We, along with OMB and AEA, have all noted that developing an evaluation agenda is important for ensuring that an agency's often scarce research and evaluation resources are targeted to the most important issues and can shape budget and policy priorities and management practices.[Footnote 22] Less than a third of the agencies (7) reported having an agency-wide evaluation plan. Most such plans were reported to cover multiple years and programs across all major agency components. Senior agency officials and program managers were said to have been consulted in developing all these plans, but few agencies reported consulting congressional stakeholders or researchers. All but 1 of the 7 agencies that had a plan also had a central evaluation office. Because we found in a previous report that stakeholder involvement facilitates the use of evaluation studies, we asked whether stakeholders were consulted in designing and conducting evaluation studies, either formally or informally.[Footnote 23] Almost all the PIOs reported consulting senior agency officials (20) and program managers (21) and three-quarters consulted researchers, but few (5) reported consulting congressional staff, fewer than consulted local program providers or regulated entities.
Agency Components' Resources and Policies: Agency evaluation offices are located at different organizational levels, which we have previously found affects the scope of their program and analytic responsibilities as well as the range of issues they consider. In a previous study, we found that evaluators in central research and evaluation offices described having a broader and more flexible choice of topics than did evaluators in program offices.[Footnote 24] In our 2014 survey, half the federal agencies (12) reported that some agency components (such as an administration or bureau) had a central office responsible for evaluation and that the number of such components ranged from 1 to 12 within a department or independent agency. These offices generally existed in addition to, rather than instead of, an agency-wide office responsible for evaluation; as a result, 10 agencies had neither type of office. As might be expected, component offices were less likely than central offices to be considered independent of program offices (6 of 12 agencies reported that all or many of their offices had independence in decision making), but 10 of 12 reported that all or many of these offices had access to external experts, and, like the central offices, few reported having a stable source of funds. About half the agencies with component central offices for evaluation reported that the evaluation staff had training and experience to a great or very great extent in research design and methods, data management and statistical analysis, performance measurement and monitoring, and translating evaluation results into actionable recommendations. These were slightly lower than the ratings for the central office staff's training. As one might expect, staff were characterized as having great to very great subject matter expertise more often in component offices (9 of 12) than in central evaluation offices (5 of 11). Only a few PIOs (2 to 4) reported that many or all component central offices for evaluation had written evaluation policies or guidance for any of the issues we listed. More often PIOs (2 to 6) reported not knowing if they had those specific policies. Agencies Report Moderate Use of Evaluations in Managing Programs, Setting Policy, or Allocating Resources: To assess the results or outcomes of agency evaluation activity, our survey asked the PIOs about the characteristics of the evaluations they produced and their use in decision making. In line with the level of resources they committed to evaluation, the availability and use of program evaluations were uneven across the 24 federal agencies. Even though agencies may not have many evaluations, more than a third reported using them from a moderate to a very great extent to support several aspects of program management and policy making. Because agencies use the term "program" in different ways, we chose to assess agencies' evaluation coverage of key programs and missions by the proportion of performance goals for which evaluations had been completed in the past 5 years or were in progress. The number of performance goals may vary across agencies but, per OMB guidance, they are supposed to be specific, near-term, realistic targets that an agency seeks to influence to advance its mission and that it reports publicly. Only four agencies reported full evaluation coverage of their performance goals.
Two-thirds of the agencies reported evaluation coverage of less than half their performance goals, including 7 that reported having evaluations for none of their performance goals. Evaluation coverage was greater in agencies that established centralized authority for evaluation. Three of the 4 agencies with full coverage of their performance goals had both a central evaluation leader and central evaluation office, while all 7 agencies with no coverage had neither. Interestingly, 2 of the 7 agencies that reported having no evaluations of their performance goals did report having component evaluation offices, so they might have had some evaluations that simply did not address topics considered key to advancing their mission. GAO guidance notes that strong evaluations rely on sufficient and appropriate evidence; document their assumptions, procedures, and modes of analysis; and rule out competing explanations.[Footnote 25] Thus, transparent reporting of data sources and analyses is critical for ensuring that evaluations are considered credible and trustworthy. About half the PIOs (10) reported that their evaluation reports are transparent to a great or very great extent in describing the data sources used and the analyses performed that form the basis of their conclusions; another 7 indicated that they did not know or did not respond to the question. According to the evaluation capacity literature, timely, public dissemination of evaluation findings is important to support government accountability for results to the legislature and the public and to ensure that findings are available to inform decision making. Half the agencies (11) reported publicly disseminating their evaluation results by posting reports to a searchable database on their websites; fewer reported presenting findings at professional conferences (9), sending a notice and link to the report through electronic mailing lists (7), or conducting webinars on findings for the policy community (6). A couple of the PIOs commented that they post some, but not all, reports on the agency website. Of the 11 agencies posting evaluation reports to a website, half reported that they did so within 3 months of completion, although 1 indicated it can take from 6 months to a year. In addition, a few agencies sponsor research clearinghouses that review evaluations of social interventions and provide the results in searchable databases on their websites to help managers and policy makers identify and adopt effective practices. Responding to or Using Evaluation Findings: If program evaluations or any form of performance information are to lead to performance improvement, they must be acted on. Seven agencies reported that they had procedures for obtaining management's response to evaluation recommendations, and 8 had procedures for obtaining follow-up action on those recommendations. In their comments, a few PIOs noted that they had policies for responding to reports or recommendations from GAO or the Inspector General. Another PIO reported that a number of internal briefings are held to ensure management awareness of evaluation findings, and that a cross-agency research utilization committee, composed of staff from program, public affairs, and congressional and intergovernmental relations offices, decides on the appropriate level of publicity effort for the report.
Over a third of the agencies (9 to 10) reported that evaluations were used to a moderate or greater extent to support policy changes, budget changes, or internal proposals for change in resource allocation or management, or to award competitive grants (figure 1). Five agencies reported using evaluation to support all these activities to a moderate or greater extent on average. In comments, PIOs described a variety of ways in which evaluation evidence could be used in awarding competitive grants: reviewing the merit of research proposals, evaluating grantee prior performance and outcomes, assessing creditworthiness, and allocating tiered evidence-based funding, which varies the level of funding based on the extent and quality of the evaluation evidence supporting a program's effectiveness. Figure 1: Number of Agencies Citing Evaluation Evidence to Support Various Decisions: [Refer to PDF for image: stacked horizontal bar graph] Number of agencies: Budget changes: Don't know/no response: 5; Little or no extent: 2; Some extent: 7; Moderate extent: 5; Great to very great extent: 5. Policy changes: Don't know/no response: 6; Little or no extent: 1; Some extent: 8; Moderate extent: 4; Great to very great extent: 5. Resource allocation or program management: Don't know/no response: 5; Little or no extent: 5; Some extent: 4; Moderate extent: 5; Great to very great extent: 5. Competitive grant awards: Don't know/no response: 7; Little or no extent: 3; Some extent: 4; Moderate extent: 5; Great to very great extent: 5. Source: GAO. GAO-15-25. Note: Survey items were abbreviated. [End of figure] Agencies with centralized evaluation authority, independence, and expertise reported greater evaluation use in management and policy making, demonstrating its importance. More than half of the 7 agencies that reported great use of evaluation had a senior evaluation leader or a central evaluation office. Moreover, the agencies whose central offices were independent of the program office, those with access to external experts or contractors, and those whose staff were rated as having great or better expertise in research methods and subject matter reported greater use of evaluation in decision making. Some Agencies Improved Their Evaluation Capacity after GPRAMA Was Enacted: GPRAMA was enacted in January 2011, revising existing GPRA provisions and adding new reporting requirements. Around the same time, OMB increased its outreach to agencies to encourage them to conduct program evaluations. We assessed change in agency evaluation capacity in this period through survey questions about when an office started conducting evaluations and whether the frequency of certain activities had changed. While organizational changes in evaluation capacity were few during this period, half the agencies reported a greater use of evaluation in decision making since 2010. A Few Agencies Centralized Their Evaluation Responsibilities after GPRAMA Was Enacted: Organizational evaluation capacity has grown somewhat since 2010. One-third of the agencies have a high-level official responsible for oversight of the agency's evaluation studies, and 2 of those 7 positions were created after 2010, both in 2013.
In fact, in its May 2012 memorandum, OMB encouraged agencies to designate a high-level official responsible for evaluation who can: * "Develop and manage the agency's research agenda; * Conduct or oversee rigorous and objective studies; * Provide independent input to agency policymakers on resource allocation and to program leaders on program management; * Attract and retain talented staff and researchers, including through flexible hiring authorities such as the Intergovernmental Personnel Act; and: * Refine program performance measures, in collaboration with program managers and the Performance Improvement Officer."[Footnote 26] In addition, 4 of 11 agencies with a central office responsible for evaluation reported that this office started conducting evaluations after 2010. One agency added both a central leader and a central office in 2013; 3 others just added a central office. Of the 12 agencies that reported having evaluation offices in their major components, most existed before GPRAMA was enacted, but 5 agencies have established new component evaluation offices since then. Half the Agencies Reported Increasing Their Capacity Building Activities after GPRAMA Was Enacted: Presumably in response to greater administration attention to program evaluation, half the agencies reported that efforts to improve their capacity to conduct credible evaluations had increased at least somewhat since GPRAMA was enacted in January 2011. About half the PIOs reported increases in staff participation in evaluation conferences and knowledge sharing forums, hiring staff with research and analysis expertise, training staff in research and evaluation skills, and consultation with external research and evaluation specialists. Nine agencies reported increases in all these activities. Most of the remaining agencies reported no change in training or consultation with specialists (4 to 5), or decreases in hiring or participating in conferences (4 to 5) in this period. These decreases may reflect federal budget constraints and the general decline in federal hiring in recent years. Figure 2: Agencies Reporting Change since 2010 in Efforts to Improve Their Capacity to Conduct Evaluations: [Refer to PDF for image: stacked horizontal bar graph] Number of agencies: Participation in evaluation conferences and forums: Don't know/no response: 3; Decreased: 5; Remained the same: 2; Increased: 13. Hiring staff with research and evaluation expertise: Don't know/no response: 4; Decreased: 4; Remained the same: 3; Increased: 12. Training in research and evaluation skills: Don't know/no response: 4; Decreased: 3; Remained the same: 4; Increased: 12. Consultation with research and evaluation experts: Don't know/no response: 5; Decreased: 2; Remained the same: 5; Increased: 11. Source: GAO. GAO-15-25. Note: Survey items were abbreviated. [End of figure] Half the Agencies Reported Increasing Their Use of Evaluations as Evidence in Decision Making after 2010: In line with the increases reported in capacity building activities and organizational resources, about half the agencies reported that their use of evaluation as supportive evidence had increased at least somewhat since 2010 (only a few reported great increases). About half the PIOs reported that the use of evaluation had increased for implementing changes in program management or performance, designing or supporting program reforms, sharing what works or other lessons learned with others, allocating resources within a program, or supporting program budget requests. 
The rest reported that their use of evaluation evidence remained about the same in this period, with none reporting a decline in use of evaluation as evidence. Eight agencies reported increased use in all these activities, and an equal number reported that their use remained the same on all. Since, in a separate question, 5 agencies either provided no opinion or reported little or no current use of evaluation evidence to support budget, policy, or program management, we conclude that this group has continued to make little or no use of evaluations since 2010. Figure 3: Agencies Reporting Change since 2010 in Citing Evaluation as Supporting Evidence in Decisions: [Refer to PDF for image: stacked horizontal bar graph] Number of agencies: Improve program management or performance: Don't know/no response: 4; Decreased: 0; Remained the same: 8; Increased: 12. Design or support program reforms: Don't know/no response: 4; Decreased: 0; Remained the same: 8; Increased: 12. Support budget requests: Don't know/no response: 4; Decreased: 0; Remained the same: 10; Increased: 10. Share what works with others: Don't know/no response: 4; Decreased: 0; Remained the same: 10; Increased: 10. Allocate resources within the program: Don't know/no response: 4; Decreased: 0; Remained the same: 10; Increased: 10. Source: GAO. GAO-15-25. Note: Survey items were abbreviated. [End of figure] Some Agencies Reported that Professional Networking, Hiring, Engaging Program Staff, and Some GPRAMA Provisions Were Useful for Building Evaluation Capacity: Our survey asked the PIOs how useful various activities or resources were for improving their agency's capacity to conduct credible evaluations. Several PIOs did not answer these questions, in part because they were not familiar with such activities. Many of those who did respond found that hiring, professional networking, consulting with experts, and training, as well as some of the GPRAMA accountability provisions, were very useful for improving capacity to conduct evaluations. Our survey also asked about the usefulness of various activities or resources for improving an agency's capacity to use evaluations in decision making. Again, several agencies did not respond, but most of those that did reported that engaging program staff, conducting quarterly progress reviews, and holding goal leaders accountable for progress on agency priority goals were very useful in improving agency capacity to make use of evaluation information. Some other GPRAMA-related activities were not found as useful for enhancing evaluation use. In addition, agencies had not taken full advantage of available technology to disseminate evaluation results, thus potentially limiting their influence on decision making. PIOs Reported Hiring, Professional Networking, and Consulting with Experts Were Very Useful in Improving the Capacity to Conduct Evaluations: Our survey asked the PIOs about the usefulness of 14 different actions or resources for improving their capacity to conduct evaluations, drawn from the literature and some GPRAMA provisions related to building agency capacity. About a third of the respondents either indicated that they had no opinion or did not respond to these questions, similar to the number not responding or reporting no change in the use of capacity-building activities since 2010.
About two-thirds of agencies (15) reported hiring staff with research and analysis expertise, and 11--nearly half of the PIOs--thought it was very useful for improving agency capacity to conduct credible evaluations. Almost half the agencies used special hiring authorities, such as the Presidential Management Fellows, Intergovernmental Personnel Act, or American Association for the Advancement of Science (AAAS) fellows program, and generally found them useful for improving agency evaluation capacity. Other agency-specific means of obtaining staff were mentioned in comments--for example, an Evaluation Fellowship Program at the Centers for Disease Control and Prevention. Figure 4 summarizes agencies' reports on the usefulness of the full range of activities and resources posed for building capacity to conduct evaluations. Figure 4: PIOs' Views on Usefulness of Activities and Resources for Improving Agency Capacity to Conduct Credible Evaluations: [Refer to PDF for image: stacked horizontal bar graph] Number of agencies: Hire staff: No opinion/no response: 7; Did not use: 2; Somewhat to not useful: 0; Moderately useful: 4; Very useful: 11. Professional conferences: No opinion/no response: 7; Did not use: 1; Somewhat to not useful: 2; Moderately useful: 5; Very useful: 9. Consult with external experts: No opinion/no response: 9; Did not use: 0; Somewhat to not useful: 4; Moderately useful: 2; Very useful: 9. Hold goal leaders accountable: No opinion/no response: 7; Did not use: 1; Somewhat to not useful: 2; Moderately useful: 7; Very useful: 7. Quarterly progress reviews: No opinion/no response: 5; Did not use: 2; Somewhat to not useful: 3; Moderately useful: 8; Very useful: 6. Agency training: No opinion/no response: 7; Did not use: 5; Somewhat to not useful: 3; Moderately useful: 3; Very useful: 6. Special hiring authorities: No opinion/no response: 8; Did not use: 5; Somewhat to not useful: 3; Moderately useful: 3; Very useful: 5. External training: No opinion/no response: 8; Did not use: 4; Somewhat to not useful: 3; Moderately useful: 6; Very useful: 3. Exchange tips through network: No opinion/no response: 8; Did not use: 2; Somewhat to not useful: 5; Moderately useful: 8; Very useful: 1. Provide non-program funds: No opinion/no response: 12; Did not use: 5; Somewhat to not useful: 2; Moderately useful: 2; Very useful: 3. OMB forums: No opinion/no response: 9; Did not use: 2; Somewhat to not useful: 8; Moderately useful: 3; Very useful: 2. Consult with stakeholders: No opinion/no response: 10; Did not use: 11; Somewhat to not useful: 1; Moderately useful: 1; Very useful: 1. Performance Analyst toolkit: No opinion/no response: 8; Did not use: 9; Somewhat to not useful: 4; Moderately useful: 3; Very useful: 0. OPM competencies: No opinion/no response: 10; Did not use: 7; Somewhat to not useful: 6; Moderately useful: 1; Very useful: 0. Source: GAO. GAO-15-25. Note: Survey items were abbreviated. [End of figure] The PIO survey respondents also gave high marks to professional networking for building staff capacity. Two-thirds of the PIOs reported that staff participation in professional conferences or evaluation interest groups for knowledge sharing was useful, with 9 PIOs citing these activities as very useful in improving agency capacity to conduct credible evaluations. Examples mentioned included the Association for Public Policy Analysis and Management research conference and an Evaluation Day conference sponsored by the U.S. 
Department of Health and Human Services (HHS) Office of the Assistant Secretary for Planning and Evaluation. The exchange of evaluation tips and leading practices through the PIC or other networks was considered moderately useful for capacity building by a third of the PIOs. PIOs provided examples of information-sharing networks besides the PIC, such as OMB's Evaluation Working Group, which holds governmentwide meetings on government performance topics; Federal Evaluators, an informal association of evaluation officials across government; Washington Evaluators, a local affiliate of the American Evaluation Association; and the National Academy of Public Administration. Some agencies, such as HHS and the U.S. Department of Labor, have established informal networks to share information internally. Also mentioned were communities of practice that engage both public and private sectors but are focused on a specific domain--for example, the Organisation for Economic Co-operation and Development's EvalNet, which focuses on international development, and the Environmental Evaluators Network. Consultation with external experts for conceptual or technical support was rated as very useful for improving the capacity to conduct evaluations by most of those using it (9 of 15). However, this did not apply to other forms of external consultation. Seven agencies reported having an annual or multi-year evaluation agenda, and 3 of them reported consulting with congressional or other external stakeholders on their plan. These 3 found consultation useful to varying degrees for building their agency's capacity to conduct evaluations. PIOs Reported Training Is Needed to Build Skills: Training in specific skills and knowledge--for example, types of evidence, assessing evidence quality, report writing, and communication--is frequently cited in the evaluation literature as a way to build organizational or individual evaluation capacity. Besides asking about participating in professional conferences and networks, our survey asked about the usefulness of training in evaluation skills--for example, describing program logic models, choosing appropriate evaluation designs, and collecting and analyzing data. Half the agencies reported engaging in internal or external training--whether delivered in a classroom, online, or in webinars. Half the agencies using internal training reported that it was very useful for improving capacity to conduct credible evaluations. PIOs who reported on agency experience with external evaluation training were less enthusiastic, but still considered the training useful for developing evaluation skills overall. OMB, in addition to encouraging agencies to conduct evaluations through guidance, sponsored a number of governmentwide open forums on performance issues. About half the PIOs reported a range of opinions on the usefulness of the OMB forums on the Paperwork Reduction Act, procurement, data sharing, and related rules and procedures to help improve agency capacity for conducting credible evaluations. Nevertheless, 7 or more of the agencies identified training or guidance in several skills as still needed to a great or very great extent to improve their agencies' capacity to conduct credible evaluations. These skills included: translating evaluation results into actionable recommendations--a requirement for getting evaluation results used--data management and statistical analysis, and performance measurement and monitoring.
Few reported that more training in research design and methods or subject matter expertise was greatly needed. Our survey asked what other types of training or guidance might be needed to improve agency capacity. A few PIOs commented that training is needed in preparing statements of work for evaluation contracts, in data analytics and visualization of information, and in how to use evidence and evaluation information effectively. PIOs Reported Some GPRAMA-Related Activities Useful for Building the Capacity to Conduct Evaluation: Our survey asked about several activities and resources related to GPRAMA provisions linked to creating an enabling environment for agency evaluation capacity. Majorities of PIOs stated that conducting quarterly progress reviews on their priority goals and holding goal leaders accountable for progress on those goals were moderately to very useful in improving their agency's ability to conduct credible evaluations. In response to GPRAMA provisions to improve agency performance management capacity, the PIC and OPM developed a Performance Analyst position design, recruitment, and selection toolkit to assist agencies' hiring. Seven PIOs reported that their agencies used the toolkit, but 3 of them did not find it useful for building agency evaluation capacity. About a third of the PIOs reported that their agencies made an effort to incorporate the core competencies that OPM identified for performance management staff into internal agency training. However, 2 of the 7 agencies did not find the effort useful for improving staff evaluation capacity. The competencies primarily address general management skills and define planning and evaluating fairly simply--as setting and monitoring progress on performance goals--so they do not address some of the specific analytic skills PIOs reported were still needed for conducting evaluations. GAO previously recommended that OPM, in coordination with the PIC and the Chief Learning Officer Council, identify performance management competency areas needing improvement and work with agencies to share information about available agency training in those areas.[Footnote 27] OPM agreed with those recommendations and has embarked on a 2-year pilot program to test how to build capacity in several mission-critical competencies identified across government, such as strategic thinking, problem solving, and data analysis, to ensure that both program staff and management can use evaluation and analysis of program performance. OMB senior officials also engaged with agency officials on the Performance Improvement Council to collaborate on improving program performance. Eight of the 14 agencies that responded considered the exchange of evaluation tips and leading practices through the PIC or other networks at least moderately useful for improving their evaluation capacity. For example, the PIC developed a guide to best practices for setting milestones and a guide and evaluation tool to help agencies set their agency priority goals. PIOs Reported Engaging Program Staff and Holding Goal Leaders Accountable Were Very Useful for Building Capacity to Use Evaluations in Decision Making: Previously, we found that experienced evaluators emphasized three basic strategies to facilitate evaluation's influence on program management and policy: demonstrate leadership support of evaluation for accountability and program improvement, build a strong body of evidence, and engage stakeholders throughout the evaluation process. 
[Footnote 28] Accordingly, our survey asked the PIOs how useful various activities or resources were for improving their agency's capacity to use evaluations in decision making. Several did not answer these questions because they did not use the particular activity or resource or had no opinion. The PIOs who responded most often cited engaging program staff, conducting quarterly progress reviews, and holding goal leaders accountable for progress on agency priority goals as very useful for improving agency capacity to use evaluation information in decision making. Over two-thirds of the PIOs responded that involving program staff in planning and conducting evaluation studies was useful for improving agency use of evaluation; 11 saw it as very useful. Engaging staff throughout the process can help gain their buy-in on the relevance and credibility of evaluation findings; providing program staff with interim results or lessons learned from early program implementation can help ensure timely data for program decisions. Majorities of PIOs affirmed that other forms of program staff engagement were also useful: providing program staff and grantees with technical assistance on evaluation and its use, and agency peer-to-peer presentations of evaluation studies to discuss methods and findings. Figure 5: PIOs' Views on Usefulness of Activities and Resources for Improving Agency Capacity to Use Evaluations in Decision Making: [Refer to PDF for image: stacked horizontal bar graph] Number of agencies: Involve program staff: No opinion/no response: 6; Did not use: 1; Somewhat to not useful: 0; Moderately useful: 6; Very useful: 11. Provide technical assistance: No opinion/no response: 6; Did not use: 2; Somewhat to not useful: 4; Moderately useful: 1; Very useful: 11. Hold goal leaders accountable: No opinion/no response: 5; Did not use: 0; Somewhat to not useful: 6; Moderately useful: 5; Very useful: 8. Peer-to-peer presentations: No opinion/no response: 6; Did not use: 2; Somewhat to not useful: 6; Moderately useful: 3; Very useful: 7. Quarterly progress reviews: No opinion/no response: 6; Did not use: 1; Somewhat to not useful: 6; Moderately useful: 5; Very useful: 6. Exchange tips through network: No opinion/no response: 8; Did not use: 3; Somewhat to not useful: 5; Moderately useful: 3; Very useful: 5. Coordinate CAP goals: No opinion/no response: 6; Did not use: 3; Somewhat to not useful: 10; Moderately useful: 2; Very useful: 3. Share data on Data.gov: No opinion/no response: 7; Did not use: 8; Somewhat to not useful: 6; Moderately useful: 0; Very useful: 3. Post reports to website: No opinion/no response: 7; Did not use: 5; Somewhat to not useful: 6; Moderately useful: 4; Very useful: 2. Electronic mailing lists: No opinion/no response: 9; Did not use: 6; Somewhat to not useful: 4; Moderately useful: 3; Very useful: 2. Consult with stakeholders: No opinion/no response: 12; Did not use: 7; Somewhat to not useful: 4; Moderately useful: 0; Very useful: 1. Report on Performance.gov: No opinion/no response: 5; Did not use: 2; Somewhat to not useful: 13; Moderately useful: 4; Very useful: 0. Source: GAO. GAO-15-25. Note: Survey items were abbreviated. 
[End of figure] As mentioned earlier, majorities of PIOs viewed the new GPRAMA activities of conducting quarterly reviews and holding goal leaders accountable as moderately to very useful for improving agency capacity to conduct credible evaluations. Majorities of the responding PIOs also viewed those same activities as moderately to very useful for improving agency capacity to use evaluations in decision making. However, another GPRAMA provision--coordinating with OMB and other agencies to review progress on cross-agency priority (CAP) goals--met with a range of opinions. Equal numbers reported that it was moderately to very useful, somewhat useful, or not useful at all for improving an agency's use of evaluation. Because the 14 CAP goals for this period cover 5 general management improvement and 9 cross-cutting but specific policy areas, some of the 24 PIOs may have been more involved than others in those reviews. Other activities potentially useful for improving the capacity to use information from evaluations rely on leveraging resources. A third of the PIOs reported that exchanging leading practices, tips, and tools for using evidence to improve program or agency performance through the PIC or other network was moderately or very useful in improving agency capacity to use evaluation results in decision making. Many of the same networks named as helping to improve their capacity to conduct credible evaluations were also named with regard to improving capacity to use evaluations in decision making. These included the Environmental Evaluators Network, Federal Evaluators, the National Academy of Public Administration, and the OMB Evaluation Working Group. Seven agencies reported having an agency-wide annual or multi-year evaluation plan or agenda of planned studies, and 6 PIOs reported consulting with congressional and other external stakeholders on that plan. However, these consultations were generally not viewed as useful for improving their agencies' capacity to use evaluations in decision making. The absence of consultation may represent a missed opportunity to ensure that evaluations will address the questions of greatest interest to congressional decision makers and will be perceived as credible support for proposed policy or budget changes. In previous work, we found that dialogue between congressional committees and executive branch agencies was necessary to achieve a mutual understanding that would allow agencies to provide useful information for oversight.[Footnote 29] Agencies May Not Be Taking Full Advantage of Technology to Disseminate Evaluation Results: Previously, we found that a key strategy for promoting the use of evaluation findings was to make them digestible and usable and to proactively disseminate them. Our survey asked about various approaches agencies could take to publicly disseminate their evaluation findings. Half the respondents reported posting evaluation reports in a searchable database on their websites, and half of them viewed this practice as moderately to very useful for improving their agencies' capacity to use evaluations in decision making. However, 3 did not find the practice useful. Electronic mailing lists are more proactive than posting a report to a website and permit tailoring the message to different audiences. A third of all respondents disseminated evaluation reports by electronic mailing lists, which most saw as somewhat to very useful for facilitating the use of evaluations in decision making. 
Tailoring messages for particular audiences--for example, federal policy makers, state and local agencies, and local program affiliates--may, however, increase the applicability and use of evaluation findings by these other audiences. GPRAMA requires OMB to provide quarterly updates on agency and cross-agency priority goals on a central, government-wide website, Performance.gov, to make federal program and performance information more accessible to the Congress and the public. In our survey, PIOs' views were mixed on the usefulness of this website for improving agency capacity to use evaluations in decision making. Almost half the agencies found the practice somewhat to moderately useful for improving their use of evaluation findings in decision making, but one-fourth of the agencies did not. In 2013, GAO reviewed Performance.gov and recommended that OMB work with the General Services Administration and the PIC to clarify specific ways that intended audiences could use the website and specify changes to support these uses.[Footnote 30] OMB staff agreed with our recommendations, and Performance.gov continues to evolve. Currently, each agency has a home page that provides links to the agency's strategic plan, annual performance plans and reports, and other progress reviews. Data.gov is a federal government website that provides descriptions of datasets generated or held by the federal government so that the public can more readily locate, download, and use those datasets. A third of the PIOs reported that sharing databases in public repositories such as Data.gov for researchers and the public to use helped improve agency capacity to use evaluations in decision making, but 1 thought it was not useful. However, a third of the PIOs stated that their agencies did not use this vehicle. Vehicles such as Data.gov and Performance.gov are primarily intended to improve government transparency and expand the use of information by the Congress and the public, but they can also help support agency requests for budget and policy changes to improve government performance. Concluding Observations: Although OMB and several agencies have taken steps since 2010 to expand federal evaluation efforts, most agencies demonstrate rather modest evaluation capacity. Those with centralized evaluation authority reported greater evaluation coverage and use in decision making, but additional effort will be required to expand agencies' evaluation capacity beyond those that already possess evaluation expertise. In addition to hiring and training staff and consulting experts, promoting information sharing through informal and formal evaluation professionals' networks offers promise for building agencies' capacity to conduct evaluations in a constrained budget environment. Engaging program staff, regularly reviewing progress on agency priority goals, and holding goal leaders accountable can help build agency use of evaluation in decision making, as our survey results show. While timely public dissemination of performance and evaluation results may not directly influence agency decision making, it is important for supporting government transparency and accountability for results to the Congress and the public. Directly engaging intended users (for example, involving program staff in planning and conducting evaluations and holding regular progress reviews) was strongly associated with increasing evaluation use in internal agency decision making. 
In contrast, few agencies reported consulting congressional and other external stakeholders in conducting their evaluation studies or developing their evaluation agendas. However, some program reforms require program partners and legislators to take action. Engaging congressional and other stakeholders in evaluation planning might increase their interest in evaluation as well as their adoption of evaluation findings and recommendations. In the absence of explicit authority or congressional request, agencies may be reluctant to spend increasingly scarce funds on evaluation studies that are perceived as resource intensive. A stable source of evaluation funding could help maintain a viable evaluation program that produced a steady stream of information to guide program management and policy making. Even so, only a quarter of the agencies in our survey reported that their evaluation offices had a stable source of funding. Congressional appropriators could direct the use of program or agency funds for evaluating federal programs and policies. As we have noted before, congressional committees can also communicate their interest in evaluation in a variety of ways to encourage agencies to produce credible, relevant studies that inform decision making:[Footnote 31] * consult with agencies on proposed revisions to their strategic plans and priority goals, as GPRAMA requires them to do every 2 years, to ensure that agency missions are focused, goals are specific and results-oriented, and strategies and funding expectations are appropriate and reasonable; * request agency evaluations to address specific questions about the implementation and results of major program or policy reforms, in time to consider their results in program reauthorization; and: * review agencies' annual evaluation plans or agendas to ensure that they address issues that will inform budgeting, reauthorization, and ongoing program management. Agency Comments: We requested comments on a draft of this report from the Director of the Office of Management and Budget, whose staff provided technical comments that we incorporated as appropriate, and from the Director of the Office of Personnel Management, who provided none. We are sending copies of this report to other interested congressional committees, and the Director of the Office of Management and Budget and the Director of the Office of Personnel Management. In addition, the report will be available on our web site at [hyperlink, http://www.gao.gov]. If you or your staff have any questions about this report, please contact me at (202) 512-2700 or by e-mail at kingsburyn@gao.gov. Contacts for our Office of Congressional Relations and Office of Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Signed by: Nancy Kingsbury, Ph.D. Managing Director: Applied Research and Methods: [End of section] Appendix I: Methodology for the Survey of Performance Improvement Officers in CFO Act Agencies: We administered a web-based questionnaire from May 2, 2014, to June 19, 2014, on federal agency evaluation capacity resources and activities to the Performance Improvement Officers (PIO) or their deputies at the 24 agencies covered by the Chief Financial Officers Act of 1990 (CFO act).[Footnote 32] We received responses from all 24 agencies (listed at the end of this appendix.) 
The survey gave us information about agencies' evaluation resources, policies, and activities, and the activities and resources they have found useful in building their evaluation capacity. (The survey questions and summarized results are in appendix II.) We sent respondents an e-mail invitation to complete the survey on a secure GAO web server. Each e- mail contained a unique username and password. During the data collection period, we sent follow-up e-mails and, if necessary, called nonresponding agencies on the telephone. Because this was not a sample survey, it has no sampling errors. In practice, however, any survey may introduce nonsampling errors that stem from differences in how a particular question is interpreted, the availability of sources of information, or how the survey data are analyzed. All can introduce unwanted variability into the survey results. We took a number of steps to minimize these nonsampling errors. A social science survey specialist designed the questionnaire, in collaboration with our staff who had subject matter expertise. In addition, we pretested the questionnaire in person with PIOs at three federal agencies to make sure that the questions were relevant, clearly stated, easy to comprehend, and unbiased. We also affirmed that data and information the PIOs would need to answer the survey were readily obtainable and that answering the questionnaire did not place an undue burden on them. Additionally, a senior methodologist within our agency independently reviewed a draft of the questionnaire before we administered it. We made appropriate revisions to its contents and format after the pretests and independent review. When we analyzed data from the completed survey, an independent analyst reviewed all computer programs used in our analysis. Since this was a web-based survey, respondents entered their answers directly into the electronic questionnaire; thus, we did not key the data into a database, avoiding data entry errors. Additionally, in reviewing the agencies' answers, we confirmed that the PIOs had correctly bypassed inapplicable questions (such as questions we expected them to skip). We concluded from our review that the survey data were sufficiently reliable for the purposes of this report. The 24 agencies subject to the CFO Act include: * Agency for International Development: * Department of Agriculture: * Department of Commerce: * Department of Defense: * Department of Education: * Department of Energy: * Department of Health and Human Services: * Department of Homeland Security: * Department of Housing and Urban Development: * Department of the Interior: * Department of Justice: * Department of Labor: * Department of State: * Department of Transportation: * Department of the Treasury: * Department of Veterans Affairs: * Environmental Protection Agency: * General Services Administration: * National Aeronautics and Space Administration: * National Science Foundation: * Nuclear Regulatory Commission: * Office of Personnel Management: * Small Business Administration: * Social Security Administration. [End of section] Appendix II: Results from 2014 Survey of Performance Improvement Officers: [End of section] United States Government Accountability Office: GAO: Survey of Federal Evaluation Capacity: The GPRA Modernization Act (GPRAMA), enacted in January 2011, modifies the Government Performance and Results Act of 1993, and mandates GAO to review how GPRAMA is influencing agency performance management. 
As a part of the response to this mandate, GAO is studying improvements in agency evaluation capacity--the ability to obtain and use evaluations in decision making--and the activities and resources agencies have found useful in building their evaluation capacity. To address these objectives, we are surveying Performance Improvement Officers (PIOs) in the 24 agencies covered by the Chief Financial Officers Act of 1990. Most of the questions in this survey can be answered by checking boxes or filling in blanks, and several ask for documents or their hyperlinks to be uploaded or emailed. This survey should take no longer than one hour to complete; however, additional time may be needed to consult with others. You do not need to complete the survey in one sitting, as the survey will allow you at any point to save your responses so that you can consult with others and log in again and complete the rest of it at a later time. To learn more about completing the questionnaire, printing your responses, and whom to contact if you have questions, click here. (Link directed respondent to instructions popup.) The results of this survey generally will be provided in summary form in a GAO report. Individual answers may be discussed in our reporting, but we will not include any information that could be used to identify individuals' names. We will not release individually identifiable data outside of GAO, unless compelled by law or requested by the Congress. Thank you for your time and assistance. Performance Improvement Officer (PIO): 1. Besides Performance Improvement Officer, what other job titles do you have? data intentionally not reported. 2. When did you start serving as Performance Improvement Officer (PIO) in your agency? (Select month and year): 0 to less than 1 year: 8; 1 to less than 2 years: 5; 2 to less than 3 years: 2; 3 to less than 4 years: 3; 4 years or more: 5; Number of respondents: 23. 3. Who do you report to in your role as PIO? (Check all that apply.) 1. Chief Operating Officer (e.g., Deputy Secretary or equivalent): Not checked: 5; Checked: 17; Number of respondents: 22. 2. Chief Financial Officer: Not checked: 19; Checked: 3; Number of respondents: 22. 3. Other: Not checked: 20; Checked: 2; Number of respondents: 22. If you selected other, please specify: data intentionally not reported. Definition of Program Evaluation and Providing Documents to GAO: Definition of Program Evaluation: Program evaluations are individual, systematic studies using research methods to assess how well; a program, operation, or project is achieving its objectives and the reasons why it may, or may not, be performing as expected. Program evaluation as defined here does not include your agency's routine program monitoring activities or quarterly performance reviews. Agency program evaluation capacity is defined here as the ability to obtain credible program; evaluations and use their results in decision making. Capacity involves organizational resources, policies and practices, and a supportive environment. Providing Documents to GAO: In the questions that follow where we request a document, you may either upload the document; directly to the Web survey, provide a hyperlink to it, or email the document to us at; (respondents were provided a GAO e-mail address to send documents). Supportive Environment for Agency Evaluation: 4. What legislative provisions, if any, does your agency have to conduct program evaluations? (Check all that apply): 1. 
Agency-wide authorization to use appropriated funds for evaluation: Not checked: 10; Checked: 13; Number of respondents: 23. 2. Mandate(s) to evaluate specific program(s): Not checked: 13; Checked: 10; Number of respondents: 23. 3. Other: Not checked: 20; Checked: 3; Number of respondents: 23. 4. None of the above: Not checked: 16; Checked: 17; Number of respondents: 23. Please provide the names of the programs mandated for evaluation. data intentionally not reported. If "Other," please specify the legislative provisions: data intentionally not reported. 5. In the past 3 years, in what ways has your agency's senior leadership (Secretary, Administrator or Deputy) demonstrated their commitment to using evidence in management and policy making? (Check all that apply and provide an example.) 1. Internal agency memos: Not checked: 10; Checked: 12; Number of respondents:22. 2. Agency guidance: Not checked: 5; Checked: 17; Number of respondents: 22. 3. Public statements or speeches: Not checked: 14; Checked: 8; Number of respondents: 22. 4. Congressional hearings: Not checked: 13; Checked: 9; Number of respondents: 22. 5. Other: Not checked: 14; Checked: 8; Number of respondents: 22. If "Other," please specify: data intentionally not reported. Please provide an example of where your agency's senior leadership demonstrated their commitment to using evidence in management and policy making. data intentionally not reported. 6. For how many of your agency's priority goals are systems currently in place to provide reliable performance information on outcomes? (Select one): For all: 18; For more than half: 3; For about half: 1; For less than half: 1; For none: 0; Not applicable, our agency does not have priority goals: 1; Number of respondents:24. Evaluation Resources - Agency-Wide: 7. Does your agency have a single high-level official (e.g., a senior coordinator or the head of a central analysis office) responsible for oversight of the agency's evaluation studies? (Select one.) Yes: 7; No; 17; Number of respondents: 24. 7a. (If yes) Is that official also responsible for the following? (Check all that apply): 1. The agency's evaluation agenda: Not checked: 1; Checked: 6; Number of respondents: 7. 2. Follow-up of evaluation recommendations: Not checked: 4; Checked: 3; Number of respondents: 7. 3. None of the above: Not checked: 6; Checked: 1; Number of respondents: 7. 7b. What is this person's job title? data intentionally not reported. [End of table] 7c. When was this position created? (Select one): 2010 or earlier: 5; 2011: 0; 2012: 0; 2013: 2; 2014: 0; Number of respondents: 7. 8. Does your department or agency have a central office that is responsible for evaluation of agency programs, operations, or projects? (The office may have other responsibilities as well.) Yes: 11; No (Skip to section: Evaluation Resources - Agency Components): 13; Number of respondents: 24. 8a. (If yes) When did this office start conducting evaluations? 2010 or earlier: 7; 2011: 1; 2012: 1; 2013: 1; 2014: 1; Number of respondents: 11. 9. Does this central office for evaluation have the following attributes? (Select one answer in each row.) a. Independence from program offices with respect to making decisions about evaluation design, conduct, and reporting; ; Yes: 9; No: 2; Don't know: 0; Number of respondents: 11. b. A stable source of funding through either a program fund set-aside or regular appropriations; Yes: 6; No: 5; Don't know: 0; Number of respondents: 11. c. 
Access to analytic expertise through use of external experts or contractors; Yes: 9; No: 2; Don't know: 0; Number of respondents: 11. If you answered "Yes" to item b above, please identify the funding source(s). data intentionally not reported. 10. To what extent, if at all, does the evaluation staff in this central office for evaluation have training and experience in each of the following competencies? (Select one answer in each row.) a. Research design and methods (e.g., conducting surveys); Very great extent: 4; Great extent: 4; Moderate extent: 1; Some extent: 0; Little or no extent: 1; No opinion/Don't know: 0; Number of respondents: 10. b. Data management or statistical analysis; Very great extent: 5; Great extent: 2; Moderate extent: 2; Some extent: 1; Little or no extent: 0; No opinion/Don't know: 0; Number of respondents: 10. c. Performance measurement and monitoring; Very great extent: 4; Great extent: 3; Moderate extent: 1; Some extent: 1; Little or no extent: 0; No opinion/Don't know: 0; Number of respondents: 9. d. Translating evaluation results into actionable; recommendations; Very great extent: 4; Great extent: 4; Moderate extent: 1; Some extent: 0; Little or no extent: 1; No opinion/Don't know: 0; Number of respondents: 10. e. Subject matter expertise; Very great extent: 4; Great extent: 1; Moderate extent: 3; Some extent: 2; Little or no extent: 0; No opinion/Don't know: 0; Number of respondents: 10. f. Other expertise - Please select an answer and; specify below; Very great extent: 4; Great extent: 1; Moderate extent: 0; Some extent: 2; Little or no extent: 0; No opinion/Don't know: 2; Number of respondents: 9. [End of table] Please specify the other expertise. data intentionally not reported. Evaluation Resources - Agency Components: For the following questions (questions 11-15), please answer for your agency's major; components. * For Departments, the term "component" means major agencies within a Department. For example, the Internal Revenue Service within the Treasury Department. * If you are located in a non-departmental agency, the term "component" means any; major Division, Office, Service, Bureau, etc. within the agency. 11. How many major agency components currently have a central office that is responsible for evaluation of agency programs, operations, or projects? (The office may have other responsibilities as well.) (Enter number. If none, enter 0): One: 1; Two: 1; Three: 2; Four: 3; Six: 2; Eleven: 2; Twelve: 1' Number of respondents: 12. Instruction: If you entered "0" (zero) in question 11, skip to; the next section (Agency Plans and Policies) of this questionnaire. 12 respondents skipped to the next section. 12. Since January 2011, have any major agency components created central offices for evaluation? Yes: 5; No: 7; Number of respondents: 12. 13. Taken as a whole, about how many of the component central offices for evaluation have the following attributes? (Select one answer in each row): a. Independence from program offices with respect to making; decisions about evaluation design, conduct, and reporting; All: 3; Many: 3; Some: 3; A few: 2; None: 1; Don't know: 0; Number of respondents: 12. b. A stable source of funding through either a program fund; set-aside or regular appropriations; All: 3; Many: 1; Some: 2; A few: 3; None: 3; Don't know: 0; Number of respondents: 12. c. Access to analytic expertise through use of external experts; or contractors; All: 6; Many: 4; Some: 1; A few: 1; None: 0; Don't know: 0; Number of respondents: 12. 
If you answered all, many, some, or a few to item b above, please identify the funding source(s). data intentionally not reported. 14. Taken as a whole, to what extent, if at all, does the evaluation staff in the component central offices for evaluation have training and experience in each of the following competencies? (Select one answer in each row): a. Research design and methods (e.g., conducting surveys); Very great extent: 2; Great extent: 4; Moderate extent: 5; Some extent: 1; Little or no extent: 0; No opinion/Don't know: 0; Number of respondents: 12. b. Data management or statistical analysis; Very great extent: 2; Great extent: 3; Moderate extent: 6; Some extent: 1; Little or no extent: 0; No opinion/Don't know: 0; Number of respondents: 12. c. Performance measurement and monitoring; Very great extent: 2; Great extent: 3; Moderate extent: 5; Some extent: 1; Little or no extent: 1; No opinion/Don't know: 0; Number of respondents: 12. d. Translating evaluation results into actionable recommendations; Very great extent: 2; Great extent: 4; Moderate extent: 4; Some extent: 2; Little or no extent: 0; No opinion/Don't know: 0; Number of respondents: 12. e. Subject matter expertise; Very great extent: 3; Great extent: 6; Moderate extent: 3; Some extent: 0; Little or no extent: 0; No opinion/Don't know: 0; Number of respondents: 12. f. Other expertise - Please select an answer and specify below; Very great extent: 0; Great extent: 1; Moderate extent: 0; Some extent: 0; Little or no extent: 0; No opinion/Don't know: 7; Number of respondents: 8. [End of table] Please specify the other expertise. data intentionally not reported. 15. Taken as a whole, about how many of the component central offices for evaluation have written evaluation policies or guidance that focus on each of the following activities? (Select one answer in each row.) a. Selecting and prioritizing evaluation topics; All: 1; Many: 2; Some: 1; A few: 4; None: 2; Don't know: 2; Number of respondents: 12. b. Consulting program staff and subject matter experts; All: 2; Many: 1; Some: 2; A few: 3; None: 1; Don't know: 3; Number of respondents: 12. c. Ensuring internal evaluator (e.g. staff) independence and objectivity; All: 1; Many: 1; Some: 3; A few: 3; None: 0; Don't know: 4; Number of respondents: 12. d. Ensuring external evaluator (e.g. contractor) independence and objectivity; All: 1; Many: 1; Some: 2; A few: 2; None: 0; Don't know: 6; Number of respondents: 12. e. Selecting evaluation approaches and methods; All: 1; Many: 1; Some: 2; A few: 4; None: 0; Don't know: 4; Number of respondents: 12. f. Ensuring quality of data collection and analysis; All: 1; Many: 3; Some: 4; A few: 3; None: 0; Don't know: 1; Number of respondents: 12. g. Ensuring completeness and transparency of evaluation reports; All: 1; Many: 1; Some: 3; A few: 3; None: 0; Don't know: 4; Number of respondents: 12. h. Timely, public dissemination of evaluation findings and recommendations; All: 2; Many: 1; Some: 1; A few: 3; None: 1; Don't know: 4; Number of respondents: 12. i. Tracking implementation of evaluation findings; All: 1; Many: 2; Some: 1; A few: 4; None: 0; Don't know: 4; Number of respondents: 12. j. Other activity; All: 0; Many: 0; Some: 1; A few: 1; None: 1; Don't know: 5; Number of respondents: 8. Please specify the other activity. data intentionally not reported. 
If you answered all, many, some, or a few in any row of question 15 above, please provide examples of the written evaluation policies or guidance for the component central offices for evaluation. data intentionally not reported. Note: Survey respondents were able to upload a file directly to the web-based survey, provide hyperlinks to the information, and/or e-mail the information to a specified GAO e-mail address as attachments. Agency Plans and Policies: The questions that follow refer to your agency as a whole. [End of table] 16. Does your agency have an agency-wide evaluation plan or agenda of planned studies? Yes: 7; No: 17; Number of respondents: 24. (If yes) Please provide a copy of the agency-wide evaluation plan or agenda of planned studies. data intentionally not reported. Note: Survey respondents were able to upload a file directly to the web-based survey, provide hyperlinks to the information, and/or e-mail the information to a specified GAO e-mail address as attachments. 16a. (If yes) Does this agency-wide evaluation plan address programs from across all major agency components? Yes: 5; No: 1; Number of respondents: 6. 16b. How many years does this agency-wide plan cover? One year: 2; Multiple years: 4; Number of respondents: 6. 16c. Were the following program stakeholders consulted in developing this agency-wide evaluation plan? (Select one answer in each row.) a. Senior agency officials; Yes: 6; No: 0; Not sure: 0; Number of respondents: 6. b. Program managers; Yes: 6; No: 0; Not sure: 0; Number of respondents: 6. c. Congressional staff; Yes: 2; No: 3; Not sure: 1; Number of respondents: 6. d. Researchers; Yes: 3; No: 3; Not sure: 0; Number of respondents: 6. e. Local program providers; Yes: 1; No: 5; Not sure: 0; Number of respondents: 6. f. Regulated entities; Yes: 1; No: 5; Not sure: 0; Number of respondents: 6. g. Other stakeholders; Yes: 1; No: 4; Not sure: 1; Number of respondents: 6. Please specify other stakeholder(s). data intentionally not reported. 17. Does your agency have agency-wide written evaluation policies or guidance that focus on each of the following activities? (Select one answer in each row): a. Selecting and prioritizing evaluation topics; Yes: 6; No: 18; Number of respondents: 24. b. Consulting program staff and subject matter experts; Yes: 6; No: 18; Number of respondents: 24. c. Ensuring internal evaluator (e.g. staff) independence and objectivity; Yes: 7; No: 17; Number of respondents: 24. d. Ensuring external evaluator (e.g. contractor) independence and objectivity; Yes: 7; No: 17; Number of respondents: 24. e. Selecting evaluation approaches and methods; Yes: 6; No: 18; Number of respondents: 24. f. Ensuring quality of data collection and analysis; Yes: 10; No: 14; Number of respondents: 24. g. Ensuring completeness and transparency of; evaluation reports; Yes: 7; No: 17; Number of respondents: 24. h. Timely, public dissemination of evaluation findings and recommendations; Yes: 6; No: 18; Number of respondents: 24. i. Tracking implementation of evaluation findings; Yes: 5; No: 18; Number of respondents: 23. j. Other activity; Yes: 1; No: 15; Number of respondents: 16. Please specify other activity. data intentionally not reported. If you answered yes in any row of question 17 above, please provide examples of the agency-wide written evaluation policies or guidance. data intentionally not reported. 
Note: Survey respondents were able to upload a file directly to the web-based survey, provide hyperlinks to the information, and/or e-mail the information to a specified GAO e-mail address as attachments. 18. Are the following stakeholders consulted in designing or conducting evaluation studies? (Select one answer in each row): (Please note: Consultation could be conducted formally, through advisory boards, or informally): a. Senior agency officials; Yes: 20; No: 2; Number of respondents: 22. b. Program managers; Yes: 21; No: 1; Number of respondents: 22. c. Congressional staff; Yes: 5; No: 16; Number of respondents: 21. d. Researchers; Yes: 18; No: 4; Number of respondents: 22. e. Local program providers; Yes: 12; No: 9; Number of respondents: 21. f. Regulated entities; Yes: 8; No: 12; Number of respondents: 20. g. Other stakeholders; Yes: 8; No: 10; Number of respondents: 18. Please specify other stakeholder(s). data intentionally not reported. 19. For how many of your agency's performance goals have evaluations been completed within the past 5 years or are ongoing? For every agency performance goal: 4; For more than half of the agency performance goals: 2; For about half of the agency performance goals: 2; For less than half of the agency performance goals: 9; For none of the agency performance goals: 7; Number of respondents: 24. Use of Evaluation Results: 20. Does your agency have procedures in place to obtain management response to, and follow-up action on, evaluation recommendations? (Select one answer in each row): a. Procedures in place to obtain management response; to evaluation recommendations; Yes: 7; No: 16; Number of respondents: 23. b. Procedures in place to obtain follow-up action on; evaluation recommendations; Yes: 8; No: 15; Number of respondents: 23. If you answered yes in any row of question 20 above, please provide a document describing the procedure(s). data intentionally not reported. Note: Survey respondents were able to upload a file directly to the web-based survey, provide hyperlinks to the information, and/or e-mail the information to a specified GAO e-mail address as attachments. 21. To what extent, if at all, do agency evaluation reports describe the data sources used and the analyses performed forming the basis of conclusions? (Select one answer in each row.) a. Describe data sources used; Very great extent: 8; Great extent: 2; Moderate extent: 5; Some extent: 1; Little or no extent: 1; Don't know: 5; Number of respondents: 22. b. Describe analyses performed; Very great extent: 8; Great extent: 2; Moderate extent: 4; Some extent: 2; Little or no extent: 1; Don't know: 5; Number of respondents: 22. 22. To what extent, if at all, is evaluation evidence cited to support proposed policy and budget changes submitted as part of your agency's annual budget process? (Select one answer in each row.) a. Cited to support policy changes; Very great extent: 2; Great extent: 3; Moderate extent: 4; Some extent: 8; Little or no extent: 1; Don't know: 4; Number of respondents: 22. b. Cited to support budget changes; Very great extent: 2; Great extent: 3; Moderate extent: 5; Some extent: 7; Little or no extent: 2; Don't know: 3; Number of respondents: 22. 23. To what extent, if at all, is evaluation evidence cited to support internal proposals for changes in resource allocations or program management? Very great extent: 1; Great extent: 4; Moderate extent: 5; Some extent: 4; Little or no extent: 5; Don't know: 4; Number of respondents: 23. 24. 
To what extent, if at all, do competitive grant programs in your agency use evaluation evidence in awarding grants? Very great extent: 1; Great extent: 4; Moderate extent: 5; Some extent: 4; Little or no extent: 3; Don't know: 4; Not applicable, our agency does not have competitive grant programs: 2; Number of respondents: 23. Please provide an example of a competitive grant program that uses evaluation evidence in awarding grants. data intentionally not reported. 25. How does your agency publicly disseminate evaluation findings? (Check all that apply): 1. Post report in a searchable database on the agency's website: Not checked: 7; Checked: 11; Number of respondents:18. 2. Send notice and link to report through electronic mailing lists: Not checked: 11; Checked: 7; Number of respondents:18. 3. Present findings at professional conferences: Not checked: 9; Checked: 9; Number of respondents:18. 4. Conduct webinars on findings for the policy community: Not checked: 12; Checked: 6; Number of respondents:18. 5. Other: Not checked: 10; Checked: 8; Number of respondents:18. Please specify the other way(s) your agency disseminates evaluation findings. data intentionally not reported. 25a. On average, how long does it take for an evaluation report to be posted on the website after completion? From 1 to 3 months: 6; More than 3 to 6 months: 3; More than 6 months to 1 year: 1; More than 1 year: 0; Number of respondents: 10. 26. Since January 2011, has the use of evaluation as supportive evidence for the following activities increased, decreased, or remained about the same in your agency? (Select one answer in each row.) a. Implementing changes to improve program management or performance; Increased greatly: 1; Increased somewhat: 11; Remained about the same: 8; Decreased somewhat: 0; Decreased greatly: 0; Don't know: 2; Number of respondents: 22. b. Allocating resources within a program; Increased greatly: 1; Increased somewhat: 9; Remained about the same: 10; Decreased somewhat: 0; Decreased greatly: 0; Don't know: 2; Number of respondents: 22. c. Sharing what works or other lessons learned with others; Increased greatly: 4; Increased somewhat: 6; Remained about the same: 10; Decreased somewhat: 0; Decreased greatly: 0; Don't know: 2; Number of respondents: 22. d. Designing or supporting program reforms; Increased greatly: 1; Increased somewhat: 11; Remained about the same: 8; Decreased somewhat: 0; Decreased greatly: 0; Don't know: 2; Number of respondents: 22. e. Supporting program budget requests; Increased greatly: 2; Increased somewhat: 8; Remained about the same: 10; Decreased somewhat: 0; Decreased greatly: 0; Don't know: 2; Number of respondents: 22. Building Agency Evaluation Capacity: 27. Since January 2011, has the use of the following activities to improve your agency's capacity to conduct credible evaluations increased, remained the same, or decreased? (Select one answer in each row): a. Hiring staff with research and analysis expertise; Increased greatly: 3; Increased somewhat: 9; Remained about the same: 3; Decreased somewhat: 3; Decreased greatly: 1; No opinion/Don't know: 4; Number of respondents: 23. b. Training staff in research and evaluation skills; Increased greatly: 4; Increased somewhat: 8; Remained about the same: 4; Decreased somewhat: 2; Decreased greatly: 1; No opinion/Don't know: 4; Number of respondents: 23. c. 
Staff participation in evaluation conferences and knowledge sharing forums; Increased greatly: 5; Increased somewhat: 8; Remained about the same: 2; Decreased somewhat: 4; Decreased greatly: 1; No opinion/Don't know: 3; Number of respondents: 23. d. Consultation with external research and evaluation experts; Increased greatly: 3; Increased somewhat: 8; Remained about the same: 5; Decreased somewhat: 2; Decreased greatly: 0; No opinion/Don't know: 5; Number of respondents: 23. 28. How useful have the following activities or resources been for improving your agency's capacity to conduct credible evaluations? (Select one answer in each row. If not used, select column 1 "Did not use."): a. Hiring staff with research and analysis expertise; Did not use: 2; Very useful: 11; Moderately useful: 4; Somewhat useful: 0; Not useful: 0; No opinion/No basis to judge: 6; Number of respondents: 23. b. The PIC-OPM Performance Analyst design, recruitment and selection toolkit, including position descriptions, for hiring evaluation staff; Did not use: 9; Very useful: 0; Moderately useful: 3; Somewhat useful: 1; Not useful: 3; No opinion/No basis to judge: 7; Number of respondents: 23. c. Special hiring authorities (e.g., Presidential Management Fellows, Intergovernmental, Personnel Act, AAAS Fellows program); Did not use: 5; Very useful: 5; Moderately useful: 3; Somewhat useful: 2; Not useful: 1; No opinion/No basis to judge: 7; Number of respondents: 23. d. Agency training (classroom, online, webinars) in evaluation skills (e.g., describing program logic models, choosing appropriate evaluation designs, data collection and analysis, etc.); Did not use: 5; Very useful: 6; Moderately useful: 3; Somewhat useful: 2; Not useful: 1; No opinion/No basis to judge: 6; Number of respondents: 23. e. Incorporating OPM performance management competencies into internal agency training; Did not use: 7; Very useful: 0; Moderately useful: 1; Somewhat useful: 4; Not useful: 2; No opinion/No basis to judge: 9; Number of respondents: 23. f. External training (classroom, online, webinars) in evaluation skills (e.g., developing program logic models, choosing appropriate evaluation designs, data collection and analysis, etc.); Did not use: 4; Very useful: 3; Moderately useful: 6; Somewhat useful: 3; Not useful: 0; No opinion/No basis to judge: 7; Number of respondents: 23. g. Staff participation in professional conferences or evaluation interest groups for knowledge sharing; Did not use: 1; Very useful: 9; Moderately useful: 5; Somewhat useful: 2; Not useful: 0; No opinion/No basis to judge: 5; Number of respondents: 22. h. Exchange of evaluation tips and leading practices through the PIC or other network; Did not use: 2; Very useful: 1; Moderately useful: 8; Somewhat useful: 4; Not useful: 1; No opinion/No basis to judge: 7; Number of respondents: 23. i. OMB forums on the Paperwork Reduction Act, procurement, data- sharing and related rules and procedures; Did not use: 2; Very useful: 2; Moderately useful: 3; Somewhat useful: 3; Not useful: 5; No opinion/No basis to judge: 8; Number of respondents: 23. j. Consulting with external experts for conceptual or technical support; Did not use: 0; Very useful: 9; Moderately useful: 2; Somewhat useful: 4; Not useful: 0; No opinion/No basis to judge: 8; Number of respondents: 23. k. Providing non-program funds for evaluation contracts; Did not use: 5; Very useful: 3; Moderately useful: 2; Somewhat useful: 2; Not useful: 0; No opinion/No basis to judge: 11; Number of respondents: 23. l. 
Consulting with congressional and other external stakeholders on an annual or multi-year evaluation agenda; Did not use: 11; Very useful: 1; Moderately useful: 1; Somewhat useful: 1; Not useful: 0; No opinion/No basis to judge: 9; Number of respondents: 23. m. Conducting quarterly progress reviews on your agency's priority goals; Did not use: 2; Very useful: 6; Moderately useful: 8; Somewhat useful: 3; Not useful: 0; No opinion/No basis to judge: 4; Number of respondents: 23. n. Holding goal leaders accountable for progress on agency priority goals; Did not use: 1; Very useful: 7; Moderately useful: 7; Somewhat useful: 2; Not useful: 0; No opinion/No basis to judge: 6; Number of respondents: 23. o. Other activities or resources; Did not use: 5; Very useful: 3; Moderately useful: 1; Somewhat useful: 0; Not useful: 0; No opinion/No basis to judge: 9; Number of respondents: 18. If you use special hiring authorities (see item c), please specify which program(s). data intentionally not reported. If you use a network other than the PIC to exchange evaluation tips and leading practices (see item h), please specify. data intentionally not reported. If you use other activities or resources (see item o), please specify. data intentionally not reported. 29. To what extent, if at all, do you think that the following types of additional agency training or guidance is needed to improve your agency's capacity to conduct credible evaluations? (Select one answer in each row): a. Research design and methods (e.g., conducting surveys); Very great extent: 0; Great extent: 4; Moderate extent: 7; Some extent: 6; Little or no extent: 3; No opinion/No basis to judge: 3; Number of respondents: 23. b. Data management and statistical analysis; Very great extent: 1; Great extent: 7; Moderate extent: 6; Some extent: 5; Little or no extent: 1; No opinion/No basis to judge: 3; Number of respondents: 23. c. Performance measurement and monitoring; Very great extent: 1; Great extent: 6; Moderate extent: 7; Some extent: 4; Little or no extent: 1; No opinion/No basis to judge: 3; Number of respondents: 22. d. Translating evaluation results into actionable recommendations; Very great extent: 3; Great extent: 8; Moderate extent: 4; Some extent: 2; Little or no extent: 3; No opinion/No basis to judge: 3; Number of respondents: 23. e. Subject matter expertise; Very great extent: 0; Great extent: 4; Moderate extent: 7; Some extent: 3; Little or no extent: 6; No opinion/No basis to judge: 3; Number of respondents: 23. f. Other training or guidance; Very great extent: 0; Great extent: 4; Moderate extent: 1; Some extent: 3; Little or no extent: 3; No opinion/No basis to judge: 11; Number of respondents: 22. If you believe other types of training or guidance are needed, please specify. data intentionally not reported. 30. Since 2011, how useful have the following activities or resources been for improving your agency's capacity to use evaluations in decision making? (Select one answer in each row. If not used, select column 1 "Did not use."): a. Agency peer-to-peer presentations of evaluation studies to discuss methods and findings; Did not use: 2; Very useful: 7; Moderately useful: 3; Somewhat useful: 6; Not useful: 0; No opinion/No basis to judge: 5; Number of respondents: 23. b. Involving program staff in planning and conducting evaluation studies; Did not use: 1; Very useful: 11; Moderately useful: 6; Somewhat useful: 0; Not useful: 0; No opinion/No basis to judge: 5; Number of respondents: 23. c. 
Providing program staff and grantees with technical assistance on evaluation and its use; Did not use: 2; Very useful: 11; Moderately useful: 1; Somewhat useful: 4; Not useful: 0; No opinion/No basis to judge: 5; Number of respondents: 23. d. Exchange of leading practices, tips and tools for using evidence to improve program or agency performance through the PIC or other network; Did not use: 3; Very useful: 5; Moderately useful: 3; Somewhat useful: 4; Not useful: 1; No opinion/No basis to judge: 7; Number of respondents: 23. e. Consulting with congressional and other external stakeholders on an annual or multi-year evaluation agenda; Did not use: 7; Very useful: 1; Moderately useful: 0; Somewhat useful: 1; Not useful: 3; No opinion/No basis to judge: 11; Number of respondents: 23. f. Conducting quarterly progress reviews on your agency's priority goals; Did not use: 1; Very useful: 6; Moderately useful: 5; Somewhat useful: 6; Not useful: 0; No opinion/No basis to judge: 5; Number of respondents: 23. g. Holding goal leaders accountable for progress on agency priority goals; Did not use: 0; Very useful: 8; Moderately useful: 5; Somewhat useful: 5; Not useful: 1; No opinion/No basis to judge: 4; Number of respondents: 23. h. Coordinating with OMB and other agencies to review progress on cross agency priority (CAP) goals; Did not use: 3; Very useful: 3; Moderately useful: 2; Somewhat useful: 5; Not useful: 5; No opinion/No basis to judge: 5; Number of respondents: 23. i. Posting evaluation reports to a searchable database on your agency's website; Did not use: 5; Very useful: 2; Moderately useful: 4; Somewhat useful: 3; Not useful: 3; No opinion/No basis to judge: 6; Number of respondents: 23. j. Reporting on agency and program performance through Performance.gov; Did not use: 2; Very useful: 0; Moderately useful: 4; Somewhat useful: 7; Not useful: 6; No opinion/No basis to judge: 4; Number of respondents: 23. k. Disseminating evaluation reports through electronic mailing lists; Did not use: 6; Very useful: 2; Moderately useful: 3; Somewhat useful: 3; Not useful: 1; No opinion/No basis to judge: 8; Number of respondents: 23. l. Sharing databases in public repositories (e.g., Data.gov) for use by researchers and the public; Did not use: 8; Very useful: 3; Moderately useful: 0; Somewhat useful: 5; Not useful: 1; No opinion/No basis to judge: 6; Number of respondents: 23. m. Other activities or resources; Did not use: 3; Very useful: 1; Moderately useful: 0; Somewhat useful: 1; Not useful: 0; No opinion/No basis to judge: 12; Number of respondents: 17. If you exchange leading practices, tips, and tools through a network other than the PIC (see item d), please specify. data intentionally not reported. If you use other activities or resources (see item m), please specify. data intentionally not reported. Comments and Survey Completion Question: 31. If you have any other comments about any of the topics covered in this questionnaire, please use the space below. data intentionally not reported. 32. Are you ready to submit your final completed survey to GAO? (This is equivalent to mailing a completed paper survey to us. It tells us that your answers are official and final.) Yes, my survey is complete: 24; No, my survey is not complete: 0; Number of respondents: 24. 
[End of section] Appendix III: GAO Contacts and Staff Acknowledgments: GAO Contact: Nancy Kingsbury, (202) 512-2700 or kingsburyn@gao.gov: Staff Acknowledgments: In addition to the contact named above, Stephanie Shipman (Assistant Director), Thomas Beall, Valerie Caracelli, Timothy Carr, Joanna Chan, Stuart Kaufman, and Penny Pickett made key contributions to this report. [End of section] References: Administration for Children and Families. Evaluation Policy. Washington, D.C.: Department of Health and Human Services, November 2012. Accessed September 24, 2014. [hyperlink, http://www.acf.hhs.gov/programs/opre/resource/acf-evaluation-policy]. America Achieves. "Investing in What Works Index: Better Results for Young People, Their Families, and Communities." Results for America, Washington, D.C., May 2014. Accessed September 11, 2014. [hyperlink, http://www.InvestInWhatWorks.org/policy-hub]. American Evaluation Association. An Evaluation Roadmap for a More Effective Government. N.p.: Revised October 2013. Accessed September 22, 2014. [hyperlink, http://www.eval.org/d/do/472]. Auditor General of Canada. 2013 Spring Report of the Auditor General of Canada. Ch. 1. "Status Report on Evaluating the Effectiveness of Programs." Ottawa: 2013. Accessed September 15, 2014. [hyperlink, http://www.oag- bvg.gc.ca/internet/English/parl_oag_201304_01_e_38186.html]. Bourgeois, Isabelle, and J. Bradley Cousins. "Understanding Dimensions of Organizational Evaluation Capacity," American Journal of Evaluation, 34: 3 (2013): 299--319. Chapel, Thomas. "Building and Sustaining Evaluation Capacity in a Diverse Federal Agency." Paper presented at Federal Evaluators Conference, Washington, D.C., November 1, 2012. Accessed September 11, 2014. [hyperlink, http://www.fedeval.net/presen.htm]. Clapp-Wincek, Cindy. "The Complexity of Building Capacity at USAID." Paper presented at Federal Evaluators Conference, Washington, D.C., November 1, 2012. Accessed September 11, 2014. [hyperlink, http://www.fedeval.net/presen.htm]. Cousins, J. Bradley, Swee C. Goh, Catherine J. Elliott, and Isabelle Bourgeois. "Framing the Capacity to Do and Use Evaluation," New Directions for Evaluation, 133 (Spring 2014): 7--24. Dawes, Katherine. "Program Evaluation at EPA." Paper presented at Federal Evaluators Conference, Washington, D.C., November 1, 2012. Accessed September 11, 2014. [hyperlink, http://www.fedeval.net/presen.htm]. Goldman, Ian. "Developing a National Evaluation System in South Africa," eVALUatiOn Matters: A quarterly knowledge publication of the African Development Bank, 2(3) (September 2013): 42--49. Labin, Susan N., Jennifer L. Duffy, Duncan C. Meyers, Abraham Wandersman, and Catherine A. Lesesne. "A Research Synthesis of the Evaluation Capacity Building Literature," American Journal of Evaluation, 33: 307 (2012). National Audit Office. Cross-Government: Evaluation in Government. Report by the National Audit Office. London, Eng. December 2013. Accessed September 24, 2014. [hyperlink, http://www.nao.org.uk]. Partnership for Public Service and Grant Thornton. A Critical Role at a Critical Time: A Survey of Performance Improvement Officers. Washington, D.C.: April 2011. Accessed September 16, 2014. [hyperlink, http://ourpublicservice.org/OPS/publications/viewcontentdetails.php?id=1 60]. Partnership for Public Service and Grant Thornton. Taking Measure: Moving from Process to Practice in Performance Management. Washington, D.C.: September 2013. Accessed September 16, 2014. 
[hyperlink, http://www.ourpublicservice.org/OPS/publications/viewcontentdetails.php?id=232].

Partnership for Public Service and IBM Center for the Business of Government. From Data to Decisions III: Lessons from Early Analytics Programs. Washington, D.C.: November 2013. Accessed September 16, 2014. [hyperlink, http://ourpublicservice.org/OPS/publications/viewcontentdetails.php?id=233].

Pew Charitable Trusts and MacArthur Foundation. States' Use of Cost-Benefit Analysis: Improving Results for Taxpayers. Philadelphia: Pew-MacArthur Results First Initiative, July 29, 2013. Accessed October 31, 2014. [hyperlink, http://www.pewtrusts.org/en/research-and-analysis/reports/2013/07/29/states-use-of-costbenefit-analysis].

Rist, Ray C., Marie-Helene Boily, and Frederic Martin. Influencing Change: Building Evaluation Capacity to Strengthen Governance. Washington, D.C.: The World Bank, 2011. Accessed September 24, 2014. [hyperlink, https://openknowledge.worldbank.org/].

Segone, Marco, Caroline Heider, Riitta Oksanen, Soma de Silva, and Belen Sanz. "Towards a Shared Framework for National Evaluation Capacity Development," eVALUatiOn Matters: A Quarterly Knowledge Publication of the African Development Bank, 2(3) (September 2013): 7--25.

Segone, Marco, and Jim Rugh (eds.). Evaluation and Civil Society: Stakeholders' Perspectives on National Evaluation Capacity Development. New York: UNICEF, EvalPartners, IOCE, 2013. Accessed September 24, 2014. [hyperlink, http://www.mymande.org/Evaluation_and_Civil_Society].

Treasury Board of Canada. 2011 Annual Report on the Health of the Evaluation Function. Ottawa: 2012. Accessed September 24, 2014. [hyperlink, http://www.tbs-sct.gc.ca/report/orp/2012/arhef-raefetb-eng.asp].

United Nations Evaluation Group. National Evaluation Capacity Development: Practical Tips on How to Strengthen National Evaluation Systems. A report for the United Nations Evaluation Group Task Force on National Evaluation Capacity Development. New York: 2012. Accessed September 24, 2014. [hyperlink, http://www.uneval.org/document/detail/1205].

U.S. Agency for International Development. Evaluation: Learning from Experience. USAID Evaluation Policy. Washington, D.C.: January 2011. Accessed September 25, 2014. [hyperlink, http://www.usaid.gov/evaluation].

U.S. Department of Health and Human Services, Centers for Disease Control and Prevention. Improving the Use of Program Evaluation for Maximum Health Impact: Guidelines and Recommendations. Atlanta: November 2012. Accessed September 24, 2014. [hyperlink, http://www.cdc.gov/eval].

U.S. Department of Labor. U.S. Department of Labor Evaluation Policy. Washington, D.C.: November 2013. Accessed September 25, 2014. [hyperlink, http://www.dol.gov/asp/evaluation/EvaluationPolicy.htm].

U.S. Department of State. Department of State Program Evaluation Policy. Washington, D.C.: February 23, 2012. Accessed September 24, 2014. [hyperlink, http://www.state.gov/s/d/rm/rls/evaluation/].

[End of section]

Related GAO Products:

Managing for Results: Agencies' Trends in the Use of Performance Information to Make Decisions. [hyperlink, http://www.gao.gov/products/GAO-14-747]. Washington, D.C.: September 26, 2014.

Managing for Results: Enhanced Goal Leader Accountability and Collaboration Could Further Improve Agency Performance. [hyperlink, http://www.gao.gov/products/GAO-14-639]. Washington, D.C.: July 22, 2014.

Managing for Results: OMB Should Strengthen Reviews of Cross-Agency Goals. [hyperlink, http://www.gao.gov/products/GAO-14-526]. Washington, D.C.: June 10, 2014.
Education Research: Further Improvements Needed to Ensure Relevance and Assess Dissemination Efforts. [hyperlink, http://www.gao.gov/products/GAO-14-]. Washington, D.C.: December 5, 2013.

Managing for Results: Executive Branch Should More Fully Implement the GPRA Modernization Act to Address Pressing Governance Challenges. [hyperlink, http://www.gao.gov/products/GAO-13-518]. Washington, D.C.: June 26, 2013.

Program Evaluation: Strategies to Facilitate Agencies' Use of Evaluation in Program Management and Policy Making. [hyperlink, http://www.gao.gov/products/GAO-13-570]. Washington, D.C.: June 26, 2013.

Managing for Results: Leading Practices Should Guide the Continued Development of Performance.gov. [hyperlink, http://www.gao.gov/products/GAO-13-517]. Washington, D.C.: June 6, 2013.

Managing for Results: Agencies Have Elevated Performance Management Leadership Roles, but Additional Training Is Needed. [hyperlink, http://www.gao.gov/products/GAO-13-356]. Washington, D.C.: April 16, 2013.

Managing for Results: Data-Driven Performance Reviews Show Promise but Agencies Should Explore How to Involve Other Relevant Agencies. [hyperlink, http://www.gao.gov/products/GAO-13-228]. Washington, D.C.: February 27, 2013.

Managing for Results: A Guide for Using the GPRA Modernization Act to Help Inform Congressional Decision Making. [hyperlink, http://www.gao.gov/products/GAO-12-621SP]. Washington, D.C.: June 15, 2012.

President's Emergency Plan for AIDS Relief: Agencies Can Enhance Evaluation Quality, Planning and Dissemination. [hyperlink, http://www.gao.gov/products/GAO-12-673]. Washington, D.C.: May 31, 2012.

Designing Evaluations: 2012 Revision. [hyperlink, http://www.gao.gov/products/GAO-12-208G]. Washington, D.C.: January 2012.

Employment and Training Administration: More Actions Needed to Improve Transparency and Accountability of Its Research Program. [hyperlink, http://www.gao.gov/products/GAO-11-285]. Washington, D.C.: March 15, 2011.

Program Evaluation: Experienced Agencies Follow a Similar Model for Prioritizing Research. [hyperlink, http://www.gao.gov/products/GAO-11-176]. Washington, D.C.: January 14, 2011.

Employment and Training Administration: Increased Authority and Accountability Could Improve Research Program. [hyperlink, http://www.gao.gov/products/GAO-10-243]. Washington, D.C.: January 29, 2010.

Program Evaluation: A Variety of Rigorous Methods Can Help Identify Effective Interventions. [hyperlink, http://www.gao.gov/products/GAO-10-30]. Washington, D.C.: November 23, 2009.

Program Evaluation: An Evaluation Culture and Collaborative Partnerships Help Build Agency Capacity. [hyperlink, http://www.gao.gov/products/GAO-03-454]. Washington, D.C.: May 2, 2003.

Program Evaluation: Improving the Flow of Information to Congress. [hyperlink, http://www.gao.gov/products/GAO/PEMD-95-1]. Washington, D.C.: January 30, 1995.

[End of section]

Footnotes:

[1] Pub. L. No. 103-62, 107 Stat. 285 (Aug. 3, 1993).

[2] GAO, Managing for Results: Executive Branch Should More Fully Implement the GPRA Modernization Act to Address Pressing Governance Challenges, [hyperlink, http://www.gao.gov/products/GAO-13-518] (Washington, D.C.: June 26, 2013).

[3] Pub. L. No. 111-352, 124 Stat. 3866 (Jan. 4, 2011).

[4] [hyperlink, http://www.gao.gov/products/GAO-13-518] and Program Evaluation: Strategies to Facilitate Agencies' Use of Evaluation in Program Management and Policy Making, [hyperlink, http://www.gao.gov/products/GAO-13-570] (Washington, D.C.: June 26, 2013).
[5] In [hyperlink, http://www.gao.gov/products/GAO-13-518] we summarize our work in 2013 on implementation of the act. Other reports we issued pursuant to this mandate are listed at the end of this report.

[6] 31 U.S.C. § 901(b). The 24 CFO Act agencies, generally the largest federal agencies, are listed in appendix I.

[7] GAO, Designing Evaluations: 2012 Revision, [hyperlink, http://www.gao.gov/products/GAO-12-208G] (Washington, D.C.: January 2012).

[8] [hyperlink, http://www.gao.gov/products/GAO-13-570].

[9] For additional information on the GPRAMA requirements, see our web page on leading practices for results-oriented management at [hyperlink, http://www.gao.gov/key_issues/managing_for_results_in_government]. For more information on these roles, see GAO, Managing for Results: Enhanced Goal Leader Accountability and Collaboration Could Further Improve Agency Performance, [hyperlink, http://www.gao.gov/products/GAO-14-639] (Washington, D.C.: July 22, 2014).

[10] Executive Order No. 13,450, Improving Government Program Performance (Nov. 13, 2007).

[11] OMB, Preparation, Submission, and Execution of the Budget. Part 6. Strategic Plans, Annual Performance Plans, Performance Reviews, and Annual Program Performance Reports, OMB Circular No. A-11 (Washington, D.C.: August 2012, updated July 2014).

[12] OPM, "Memorandum for Chief Human Capital Officers: Government Performance and Results Act Modernization Act of 2010 Functional Competencies" (Washington, D.C.: Jan. 3, 2012).

[13] The Federal Employee Viewpoint Survey measures employees' perceptions of whether, and to what extent, conditions characterizing successful organizations are present in their agencies. See [hyperlink, http://www.fedview.opm.gov].

[14] OMB, Increased Emphasis on Program Evaluations, M-10-01. Memorandum for the Heads of Executive Departments and Agencies (Washington, D.C.: Oct. 27, 2009); and Use of Evidence and Evaluation in the 2014 Budget, M-12-14. Memorandum for the Heads of Executive Departments and Agencies (Washington, D.C.: May 18, 2012).

[15] OMB, Next Steps in the Evidence and Innovation Agenda, M-13-17. Memorandum for the Heads of Executive Departments and Agencies (Washington, D.C.: July 26, 2013).

[16] The reference list names the sources we reviewed.

[17] The number of agency PIOs who responded to survey questions depended on skip instructions contained in the survey, and some PIOs chose not to answer certain questions. See appendix II for the actual number of respondents for each question.

[18] GAO, Managing for Results: Agencies Have Elevated Performance Management Leadership Roles, but Additional Training Is Needed, [hyperlink, http://www.gao.gov/products/GAO-13-356] (Washington, D.C.: Apr. 16, 2013).

[19] GAO, Program Evaluation: Experienced Agencies Follow a Similar Model for Prioritizing Research, [hyperlink, http://www.gao.gov/products/GAO-11-176] (Washington, D.C.: Jan. 14, 2011), p. 19.

[20] American Evaluation Association, An Evaluation Roadmap for a More Effective Government (Oct. 2013); Guiding Principles for Evaluators (2004), accessible at [hyperlink, http://www.eval.org].

[21] For an example of a comprehensive set of evaluation policies, see U.S. Agency for International Development, Evaluation: Learning from Experience. USAID Evaluation Policy (Washington, D.C.: Jan. 2011). [hyperlink, http://www.usaid.gov/evaluation].

[22] [hyperlink, http://www.gao.gov/products/GAO-11-176]; OMB, Evaluating Programs for Efficacy and Cost-Efficiency, M-10-32.
Memorandum for the Heads of Executive Departments and Agencies (Washington, D.C.: July 29, 2010); American Evaluation Association (2013).

[23] [hyperlink, http://www.gao.gov/products/GAO-13-570].

[24] [hyperlink, http://www.gao.gov/products/GAO-11-176], p. 20.

[25] [hyperlink, http://www.gao.gov/products/GAO-12-208G].

[26] OMB M-12-14, p. 4.

[27] [hyperlink, http://www.gao.gov/products/GAO-13-356].

[28] [hyperlink, http://www.gao.gov/products/GAO-13-570].

[29] GAO, Program Evaluation: Improving the Flow of Information to the Congress, [hyperlink, http://www.gao.gov/products/GAO/PEMD-95-1] (Washington, D.C.: Jan. 30, 1995), and Managing for Results: A Guide for Using the GPRA Modernization Act to Help Inform Congressional Decision Making, [hyperlink, http://www.gao.gov/products/GAO-12-621SP] (Washington, D.C.: June 15, 2012).

[30] GAO, Managing for Results: Leading Practices Should Guide the Continued Development of Performance.gov, [hyperlink, http://www.gao.gov/products/GAO-13-517] (Washington, D.C.: June 6, 2013).

[31] [hyperlink, http://www.gao.gov/products/GAO-12-621SP], and [hyperlink, http://www.gao.gov/products/GAO-13-570].

[32] The CFO Act agencies are listed at 31 U.S.C. § 901(b).

[End of section]

GAO's Mission:

The Government Accountability Office, the audit, evaluation, and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO's commitment to good government is reflected in its core values of accountability, integrity, and reliability.

Obtaining Copies of GAO Reports and Testimony:

The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO's website [hyperlink, http://www.gao.gov]. Each weekday afternoon, GAO posts on its website newly released reports, testimony, and correspondence. To have GAO e-mail you a list of newly posted products, go to [hyperlink, http://www.gao.gov] and select "E-mail Updates."

Order by Phone:

The price of each GAO publication reflects GAO's actual cost of production and distribution and depends on the number of pages in the publication and whether the publication is printed in color or black and white. Pricing and ordering information is posted on GAO's website, [hyperlink, http://www.gao.gov/ordering.htm]. Place orders by calling (202) 512-6000, toll free (866) 801-7077, or TDD (202) 512-2537. Orders may be paid for using American Express, Discover Card, MasterCard, Visa, check, or money order. Call for additional information.

Connect with GAO:

Connect with GAO on facebook, flickr, twitter, and YouTube. Subscribe to our RSS Feeds or E-mail Updates. Listen to our Podcasts. Visit GAO on the web at [hyperlink, http://www.gao.gov].

To Report Fraud, Waste, and Abuse in Federal Programs:

Contact:
Website: [hyperlink, http://www.gao.gov/fraudnet/fraudnet.htm];
E-mail: fraudnet@gao.gov;
Automated answering system: (800) 424-5454 or (202) 512-7470.

Congressional Relations:

Katherine Siggerud, Managing Director, siggerudk@gao.gov:
(202) 512-4400:
U.S. Government Accountability Office:
441 G Street NW, Room 7125:
Washington, DC 20548.

Public Affairs:

Chuck Young, Managing Director, youngc1@gao.gov:
(202) 512-4800:
U.S. Government Accountability Office:
441 G Street NW, Room 7149:
Washington, DC 20548.

[End of document]