This is the accessible text file for GAO report number GAO-14-75 entitled 'Clinical Data Registries: HHS Could Improve Medicare Quality and Efficiency through Key Requirements and Oversight' which was released on December 16, 2013. This text file was formatted by the U.S. Government Accountability Office (GAO) to be accessible to users with visual impairments, as part of a longer term project to improve GAO products' accessibility. Every attempt has been made to maintain the structural and data integrity of the original printed product. Accessibility features, such as text descriptions of tables, consecutively numbered footnotes placed at the end of the file, and the text of agency comment letters, are provided but may not exactly duplicate the presentation or format of the printed version. The portable document format (PDF) file is an exact electronic replica of the printed version. We welcome your feedback. Please E-mail your comments regarding the contents or accessibility features of this document to Webmaster@gao.gov. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. Because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. United States Government Accountability Office: GAO: Report to Congressional Committees: December 2013: Clinical Data Registries: HHS Could Improve Medicare Quality and Efficiency through Key Requirements and Oversight: GAO-14-75: GAO Highlights: Highlights of GAO-14-75, a report to congressional committees. Why GAO Did This Study: The American Taxpayer Relief Act of 2012 instructed HHS to establish a new program to designate “qualified” CDRs—-entities that would work with physicians treating Medicare patients to collect clinical information and use it to improve the quality and efficiency of care. The act also mandated GAO to report on the potential for CDRs to improve quality and efficiency. This report examines (1) improvements demonstrated by CDRs in quality and efficiency of care, (2) HHS’s plans for requirements and oversight for qualified CDRs, (3) actions HHS could take to facilitate the development of qualified CDRs, and (4) actions HHS could take to facilitate CDRs’ use of health IT. GAO reviewed relevant studies and documents, and interviewed HHS and CDR officials. GAO also convened an expert meeting with the assistance of the Institute of Medicine and synthesized input from experts and other sources to assess the likely effect of potential program requirements, approaches to oversight, and other actions HHS could take. What GAO Found: Clinical data registries (CDR) have demonstrated a particular strength in assessing physician performance through their capacity to track and interpret trends in health care quality over time. Studies examining results reported by several long-established CDRs demonstrate the utility of CDR data sets for analyzing trends in both outcomes and treatments. CDR efforts to improve outcomes typically involve a combination of performance improvement activities including feedback reports to participating physicians, benchmarking physician performance relative to that of their peers, and related educational activities designed to stimulate changes in clinical practice. Studies GAO reviewed provided less insight on ways to improve the efficiency of care. 
The Department of Health and Human Services’ (HHS) plans for implementing the qualified CDR program offer little specificity and provide substantial leeway for CDRs seeking to become qualified. According to officials, HHS plans to have its program requirements and structure evolve over time, and a key question is the extent to which this evolutionary process will focus on harnessing the potential of CDRs to promote quality and efficiency. GAO’s synthesis of input from experts and from other relevant sources identified several key requirements that would make it more likely that qualified CDRs promote improved quality and efficiency, which HHS’s current plans for the program would do little to address. These requirements include directing CDRs to focus data collection on performance measures that address the key opportunities for improvement in quality and efficiency for each CDR’s target population and requiring CDRs to demonstrate improvement over time on the quality and efficiency measures that they collect. In addition, effective oversight of these requirements depends on expert judgment to take account of variation among CDRs in their circumstances and opportunities for improvement. Experts indicated that HHS can also help qualified CDRs to improve the quality and efficiency of care provided to Medicare patients by taking actions that could reduce potential barriers to the development of qualified CDRs, such as concerns about complying with privacy regulations and the difficulty of funding CDRs. GAO’s synthesis of input from experts and from other relevant sources identified several specific actions that HHS could take. They include developing guidance to clarify federal privacy requirements for physicians participating in CDRs and testing one or more models of shared savings between Medicare and qualified CDRs that achieve reduced Medicare expenditures with improved quality of care. In addition, input from experts and other relevant sources suggests that HHS can take actions to facilitate CDRs’ use of health information technology (IT). According to CDR officials, some CDRs have developed approaches to electronically capture and transmit large amounts of detailed clinical data from a wide variety of electronic health record (EHR) systems. CDRs could benefit from new IT standard setting that focuses on data elements needed for the measures that CDRs collect. One way HHS can influence whether EHR vendors use IT standards to design EHR systems that are compatible with CDR needs is through its setting of meaningful use requirements in its EHR incentive programs. What GAO Recommends: GAO recommends that HHS (1) focus its requirements for qualified CDRs on improving quality and efficiency, (2) require qualified CDRs to demonstrate improvement in quality and efficiency, (3) draw on expert judgment to monitor qualified CDRs, (4) reduce barriers to the development of qualified CDRs, and (5) include, if feasible, key data elements needed by qualified CDRs in its requirements under the EHR incentive programs. HHS agreed with GAO’s recommendations. View [hyperlink, http://www.gao.gov/products/GAO-14-75]. For more information, contact Linda T. Kohn at (202) 512-7114 or kohnl@gao.gov. 
[End of section] Contents: Letter: Scope and Methodology: Background: Clinical Data Registries Have Enabled Sophisticated Assessments of Quality of Care, but Have Less to Report on Efficiency and Have Other Limitations: HHS's Plans for Requirements and Oversight for Qualified CDRs Need to Focus on Quality and Efficiency: HHS Actions Could Reduce Barriers to the Development of Qualified CDRs: Health IT, Including EHRs, Can Support the Collection and Sharing of CDR Data, and Those Activities Could Be Strengthened by HHS Actions to Align Health IT Policies with CDR Needs: Conclusions: Recommendations for Executive Action: Agency Comments and Our Evaluation: Appendix I: List of Participants at GAO's Expert Meeting Hosted by the Institute of Medicine, June 10-11, 2013: Appendix II: Comments from the Department of Health and Human Services: Appendix III: GAO Contact and Staff Acknowledgments: Abbreviations: ACC: American College of Cardiology: ATRA: American Taxpayer Relief Act of 2012: CDR: clinical data registry: CG-CAHPS: Consumer Assessment of Healthcare Providers and Systems- Clinician & Group Surveys: CMS: Centers for Medicare & Medicaid Services: CQM: clinical quality measure: EHR: electronic health record: HHS: Department of Health and Human Services: HIPAA: Health Insurance Portability and Accountability Act: HITECH: Health Information Technology for Economic and Clinical Health Act: IOM: Institute of Medicine: IRB: institutional review board: IT: information technology: LOINC: Logical Observation Identifiers Names and Codes: NQF: National Quality Forum: ONC: Office of the National Coordinator for Health Information Technology: PQRS: Physician Quality Reporting System: RFD: Retrieve Form for Data Capture: SNOMED CT: Systematized Nomenclature of Medicine Clinical Terms: STS: Society of Thoracic Surgeons: [End of section] GAO: United States Government Accountability Office: 441 G St. N.W. Washington, DC 20548: December 16, 2013: Congressional Committees: Both the federal government and private entities are attempting to enhance the quality and efficiency of health care by shifting from rewarding physicians and other providers based on the volume of their services to rewarding them based on the value of those services-- quality and efficiency of care. However, finding a practical way to accurately and credibly identify and then promote high quality and efficient care is a complex task. An approach adopted by some groups, such as medical specialty societies and regional health improvement collaboratives, has been to develop clinical data registries (CDR). CDRs are entities that collect and analyze detailed information on the therapies that patients receive and changes in their clinical condition over time in order to evaluate and improve care practices and outcomes. In recognition of the potential of CDRs to promote the quality and efficiency of care in the Medicare program,[Footnote 1] the American Taxpayer Relief Act of 2012 (ATRA) instructed the Department of Health and Human Services (HHS) to establish a new program to designate "qualified" CDRs. 
This program could encourage more physicians treating Medicare beneficiaries to participate in CDRs and thereby engage in the process of collecting detailed clinical information and using it to improve the quality and efficiency of care.[Footnote 2] CDRs can analyze variations in treatment and outcomes; examine factors that influence prognosis and quality of life; describe care patterns, including appropriateness of care and disparities in the delivery of care; assess effectiveness; measure the quality of care; and study quality improvement.[Footnote 3] Proponents contend that qualified CDRs would generate data of greater relevance, depth, and credibility to physicians--particularly specialist physicians--than current federal performance assessment programs, and thereby improve quality and efficiency. The new qualified CDR program will provide physicians with an alternative to participation in HHS's existing Physician Quality Reporting System (PQRS). PQRS allows physicians to report quality data on services they provide to Medicare beneficiaries using measures that physicians select from a menu of HHS-defined quality measures.[Footnote 4] HHS provides incentive payments to physicians who satisfactorily report quality data to PQRS, and has announced that physicians who do not satisfactorily meet PQRS submission requirements in 2013 will have their Medicare payments reduced by 1.5 percent for services provided in 2015.[Footnote 5] Under the new CDR program, physicians who satisfactorily participate in a qualified CDR would also receive the incentive payments and would avoid the penalties without submitting data to PQRS.[Footnote 6] HHS issued a final rule and preamble on December 10, 2013, that included information on how the qualified CDR program will function. In the preamble, HHS set out plans for implementing the ATRA requirements and the definition of, requirements for, and process for being designated a qualified CDR.[Footnote 7] The program is scheduled to take effect in January 2014. ATRA mandated that GAO report on the potential of CDRs to improve the quality and efficiency of care in the Medicare program and the role of health information technology (IT) in facilitating CDRs. For this report, we examined: 1. Ways CDRs have demonstrated the capacity to improve the quality and efficiency of physician care; 2. HHS's plans for requirements and oversight of qualified CDRs to maximize CDRs' potential impact on the quality and efficiency of care; 3. Barriers, if any, to the development of qualified CDRs, and actions HHS can take to minimize their potential impact; and: 4. The potential of health IT to enhance CDR operations and actions HHS can take to facilitate CDR use of health IT. Scope and Methodology: To examine ways CDRs have demonstrated the capacity to improve the quality and efficiency of physician care, we conducted internet searches and reviewed literature to identify studies assessing the impact of CDRs on quality and efficiency of physician care. We synthesized the results of these studies in terms of the type, scope, and magnitude of changes in quality and efficiency attributed to CDRs, and assessed the strength and limitations of the evidence produced by those studies. We also interviewed officials from seven organizations that operate existing CDRs, including regional health care collaboratives, and officials from a major health plan. 
These interviews generally included discussion of the type, scope, and magnitude of any changes in quality and efficiency of physician care that they have observed stemming from the activities of CDRs. We selected CDRs that cover patient care across a range of medical conditions, focusing on those that have operated for a number of years and on organizations that have extensive experience using CDR data. To examine HHS's plans for requirements and oversight of qualified CDRs to maximize CDRs' potential impact on the quality and efficiency of care, we interviewed HHS officials and reviewed HHS documents on the department's plans for implementing the qualified CDR program, including both a proposed and final rule and the accompanying preambles, which were published in the Federal Register during the period of our review. In addition, we convened an expert meeting with the assistance of the National Academies' Institute of Medicine (IOM) to discuss potential requirements for qualified CDRs, the advantages and disadvantages of these requirements, and the oversight that HHS could provide to qualified CDRs to promote the quality and efficiency of care. We worked with staff at IOM to identify experts to participate in the meeting. Generally, participants were chosen for their expertise in operating or launching a CDR, as health plan officials or other users of CDR data, as health IT professionals, or as clinical researchers. Representatives from the Centers for Medicare & Medicaid Services (CMS)--the HHS agency charged with implementing the program--and HHS's Office of the National Coordinator for Health Information Technology (ONC) also attended the meeting. To ensure that participants represented a broad range of views and interests and that we fully understood those interests, we required that participants complete a conflict of interest form. See appendix I for a list of experts who participated in the meeting. We synthesized the experts' comments at the meeting together with other relevant sources, including related published literature, to assess the potential impact of different program requirements and approaches to providing oversight of those requirements on the effectiveness of qualified CDRs in promoting quality and efficiency of care. To identify barriers, if any, to the development of qualified CDRs and actions HHS can take to minimize their impact, we interviewed officials and reviewed documents from CMS on its plans for implementing the qualified CDR program. We also obtained input from participants during the expert meeting on barriers to the development of qualified CDRs and on what support HHS could provide to reduce these barriers. We synthesized the experts' comments at the meeting together with other relevant sources, including related published literature, to assess the potential impact of selected actions HHS could take on overcoming the identified barriers to the development of qualified CDRs. To examine the potential of health IT to enhance CDR operations and actions HHS can take to facilitate CDR use of health IT, we reviewed documents and interviewed officials from CMS and ONC on the agencies' plans related to IT for the qualified CDR program. We also interviewed officials to determine how CDRs interact with electronic health record (EHR) systems used by many providers.[Footnote 8] In addition, we obtained input from participants during the expert meeting on the potential of health IT to facilitate CDR operations. 
In addition to receiving input from meeting participants, we also interviewed health IT experts who have been involved in applying health IT to the operations of existing CDRs. We conducted this performance audit from March 2013 to December 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Background: Over the past 25 years, a broad range of entities--encompassing the federal Medicare and Medicaid programs, private health insurers, and various provider organizations--have created different systems for assessing physician performance, of which PQRS and CDRs are examples. Early efforts largely focused on the quality of care (i.e., the extent to which patients received care that was likely to improve their health status). More recently, the focus of many of these systems has expanded to include the efficiency of care (i.e., the extent to which high-quality care was provided without using more resources than necessary). In concert with these performance assessment systems, some public and private payers have begun to provide incentives to physicians based on their performance to stimulate improvement over time. These physician performance assessment systems have developed a wide range of performance measures. Some are process measures, which assess the extent to which physicians effectively implement clinical practices (or treatments) that have been shown to result in high- quality or efficient care. Others are outcome measures, which track the results of physician care, such as mortality, infections, and how patients experience that care.[Footnote 9] To assess performance on such measures, these systems have collected information from administrative data sets, including billing data, as well as from patient medical records and patient surveys. Measures used to assess physician performance are composed of a number of clinical data elements, or pieces of data, that must be collected in order to determine performance. For example, the performance measure endorsed by the National Quality Forum (NQF) for acute stroke mortality rate comprises two data elements--the number of stroke patients treated and the number of deaths among those patients.[Footnote 10] Other measures are more complex and require more data elements. Many of these assessment systems evolved independently and therefore are very different from one another. For example, there is great variability among existing CDRs, which range from those developed by medical specialty societies to those developed by regional health care improvement collaboratives. One of the longest-standing CDRs focused on physician care is the Society of Thoracic Surgeons' (STS) Adult Cardiac Surgery Database, which was established in 1989 in response to HHS's publication of mortality rates for individual thoracic surgery programs. According to the STS, HHS's published rates were misleading because they had not been adjusted adequately for variations in the complexity of patients treated by different programs. The most complicated and highest-risk cases typically have the highest mortality rates, independent of the quality of the surgeon's performance. 
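The following simplified sketch illustrates this problem and the basic logic of risk adjustment; the patients and the logistic risk model it uses are invented for illustration and are not drawn from any actual registry. A registry typically compares each physician's observed mortality with the mortality expected given the risk profile of that physician's patients, often expressed as an observed-to-expected ratio.

# Simplified illustration only: hypothetical patients and an invented
# logistic risk model, not data or coefficients from any actual registry.
import math

def expected_mortality(patient):
    # Toy risk model: predicted probability of death from a few risk factors.
    score = (-4.0
             + 0.04 * (patient["age"] - 65)
             + 1.2 * patient["emergency"]   # 1 if emergency operation
             + 0.9 * patient["low_ef"])     # 1 if low ejection fraction
    return 1 / (1 + math.exp(-score))

def observed_to_expected(patients):
    # Observed deaths divided by the sum of predicted probabilities of death.
    observed = sum(p["died"] for p in patients)
    expected = sum(expected_mortality(p) for p in patients)
    return observed / expected

# Surgeon A: 100 low-risk elective cases, 2 deaths.
surgeon_a = ([{"age": 62, "emergency": 0, "low_ef": 0, "died": 0}] * 98
             + [{"age": 62, "emergency": 0, "low_ef": 0, "died": 1}] * 2)
# Surgeon B: 100 older, sicker, emergency cases, 8 deaths.
surgeon_b = ([{"age": 78, "emergency": 1, "low_ef": 1, "died": 0}] * 92
             + [{"age": 78, "emergency": 1, "low_ef": 1, "died": 1}] * 8)

for name, cases in [("Surgeon A", surgeon_a), ("Surgeon B", surgeon_b)]:
    raw_rate = sum(p["died"] for p in cases) / len(cases)
    print(name, f"unadjusted mortality {raw_rate:.1%},",
          f"observed-to-expected ratio {observed_to_expected(cases):.2f}")

In this sketch, the surgeon treating the older, higher-risk patients has the higher unadjusted mortality rate (8 percent versus 2 percent) but the lower observed-to-expected ratio (0.40 versus 1.25), so a comparison based on unadjusted rates alone would penalize the surgeon who takes on the most complex cases.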
So the STS developed nationally benchmarked performance data with empirically tested risk adjustment models based on detailed clinical variables. Since then, additional CDRs have been developed by medical specialty societies, such as the American College of Cardiology (ACC) and the American College of Surgeons, as well as by regional health care improvement collaboratives, such as Minnesota Community Measurement and the Wisconsin Collaborative for Healthcare Quality. The profusion of these different systems has created difficulties for those involved in using and maintaining them. For example, physicians, along with other providers, have found it burdensome to provide data on multiple performance measures to multiple public and private physician performance assessment systems, which has led to efforts to align these systems. For example, CMS has announced its intention to maximize the extent to which physicians can satisfy its different performance assessment programs by submitting one set of data. There have also been efforts to develop a consensus among public and private groups concerning top priority objectives for improvement. For example, in 2011 the Secretary of HHS issued a National Quality Strategy based on input from major health care stakeholders, which established six broad priority domains. However, efforts to bring various systems into greater alignment based on specific national priorities are complicated by the diversity of care that physicians provide. For example, primary care physicians treat patients with conditions that fall under nearly 400 different diagnostic categories, making it difficult to assess their performance appropriately with a limited number of measures. Collectively, specialist physicians also encompass a broad range of conditions and treatments. While some dimensions of quality and efficiency may apply broadly across most physicians, such as the extent to which they coordinate their care effectively with a patient's other care providers, other important aspects of physician performance are distinct for different medical conditions. Physicians are increasingly using EHRs to collect and report the data needed for performance assessments, in part as a result of direct governmental encouragement. HHS's Medicare and Medicaid EHR programs provide incentive payments to eligible participants as they adopt, implement, or upgrade certified EHR technology and demonstrate its meaningful use.[Footnote 11] To receive these incentives, eligible providers, including physicians, must first adopt EHR technology that is certified specifically for the EHR Incentive Programs. Certified EHR systems must meet specific criteria, including having the ability to store data in a structured format to facilitate retrieval and use of the data by other systems, and having the capability to collect and report data on a large number of clinical quality measures defined by HHS. Physicians must then demonstrate that they are using their certified EHR systems in meaningful ways that can positively affect the care of their patients, including conducting quality assessments using some of the clinical quality measures. The EHR incentive programs, which began in 2011, are scheduled to be implemented in three stages. Stage 1 is focused primarily on data capture and sharing. Stage 2 is scheduled to begin in 2014 and will focus on improving selected clinical processes. Stage 3 is expected to begin in 2016 and to focus on improving quality, safety, and efficiency outcomes. 
Clinical Data Registries Have Enabled Sophisticated Assessments of Quality of Care, but Have Less to Report on Efficiency and Have Other Limitations: CDRs have demonstrated a particular strength in assessing physician performance through their capacity to track and interpret trends in health care quality over time. This strength derives from the fact that CDRs typically collect extensive, standardized clinical data on large numbers of patients that provide more clinical detail than can be obtained from administrative data. Those clinical data provide the basis for developing sophisticated risk models, which enable CDRs to compare physician performance with appropriate adjustments for variations in the severity level and other attributes of the patients they treat. CDRs also collect extensive, standardized data on treatments provided to large numbers of patients over extended periods of time, which permits CDRs to explore in depth how treatment variations affect patient outcomes. Moreover, CDRs collect information about many different types of patients encountered by physicians, including those with complex combinations of medical conditions, who are often excluded from clinical research studies. This enables CDRs to analyze trends for the full population of patients that participating physicians actually treat, and examine variations across many different subgroups of patients. Studies examining outcomes reported by several long-established CDRs demonstrate the utility of CDR data for analyzing trends in both outcomes and treatments. For example, CDR data on treatment of acute myocardial infarction over 15 years show major increases in guideline- recommended treatment and a 39 percent reduction in overall mortality. [Footnote 12] Similarly, results from the STS Adult Cardiac Surgery Registry show substantial improvements in mortality and morbidity rates for coronary artery bypass graft surgery--24.4 percent lower mortality and 26.4 percent lower postoperative stroke over 10 years-- that were linked to refinements in surgical techniques.[Footnote 13] Data from other CDRs have highlighted clinical areas where quality improvements have been more mixed, such as a medical oncology CDR that showed marked improvements over 5 years only on measures of whether providers had adopted new clinical practices, while other measures of clinical quality remained relatively high on some dimensions and low on others.[Footnote 14] Another study presented comparable results for a state-based CDR assessing primary care. It found major improvements over 6 years on some measures, such as kidney function monitoring for diabetic patients, but considerably less improvement on others, such as lipid testing and control for patients with coronary artery disease.[Footnote 15] CDR efforts to improve outcomes typically involve a combination of performance improvement activities, including feedback reports to participating physicians, benchmarking physician performance relative to that of their peers, and related educational activities designed to stimulate changes in clinical practice. Officials of the CDRs from which these results were reported described to us a range of educational materials and related activities that they have developed targeted to physicians who do relatively poorly on specific measures. They generally view these additional education activities as essential to achieving improved performance. 
Using a within-registry, national-level randomized controlled trial, one study of the STS registry demonstrated a positive impact from providing surgeons with educational materials targeted to key process measures related to cardiac surgery.[Footnote 16] Another study used CDR performance data to assess the effectiveness of a health plan's interventions to promote changes in practice. Specifically, it found that physician groups participating in the health plan's regional collaboratives had lower risk-adjusted mortality and better composite quality scores over time than physician groups in other states that participate in these CDRs.[Footnote 17] Compared with their efforts related to quality, CDRs have typically provided less insight on ways to improve the efficiency of care. For example, none of the studies of CDR results that we examined addressed changes in the cost of care directly. However, studies of CDRs focusing on surgical care reported improvements on several measures related directly or indirectly to the use of resources, such as complication rates. The potential to draw inferences about costs from rates of complications was demonstrated by researchers affiliated with Michigan Blue Cross and Blue Shield. They used CDR data to estimate that their regional surgical collaborative had led to 2,500 fewer patients with surgical complications per year, which--when considered with the average cost of those complications--translated to annual savings of about $20 million.[Footnote 18] One reason why CDRs have not typically assessed the cost of care is that their data have usually been limited to information available from patient medical records recorded during the course of treatment, including patient risk factors, process measures concerning the treatments provided, and short-term outcomes such as inpatient mortality and morbidity. These data do not typically include information on costs or other information relevant to assessing longer-term outcomes and changes in patient functioning (e.g., patient-reported outcomes). To obtain this information, CDRs need to turn to other data sources, as some have started to do. For example, the STS has begun, for specific research projects, to obtain Medicare claims data and merge those data with its own CDR data to examine costs and long-term outcomes. The results reported to date from existing CDRs are also limited in terms of their scope and capacity to determine the independent impact of CDRs on physician performance. Most of the studies examining the outcomes of CDRs that we found come from a relatively small number of CDRs run by certain medical specialty societies--including the STS for cardiac surgery, the American College of Surgeons for general and vascular surgery, and the American Society for Clinical Oncology for medical oncology--plus one ambulatory care CDR run by a regional health care improvement collaborative, the Wisconsin Collaborative for Healthcare Quality. Most of these are CDRs that have been in operation for close to a decade or more, and therefore have substantial longitudinal data from which to analyze trends over time. Even within the given specialty or region targeted by the CDR, the scope of care addressed by the studies we examined was typically limited. 
For example, the study of cardiac surgery focused on one procedure-- coronary artery bypass graft surgery--while the study of ambulatory care in Wisconsin examined diabetes, coronary artery disease, and hypertension management plus three cancer screenings and a vaccine. Moreover, CDRs by design collect observational data with no predetermined designation of treatment and control groups, as would be done in a randomized controlled trial. Therefore, it is difficult to use CDR data to assess the independent effect of the CDR on physician performance, relative to other factors. A few studies have compared CDR results to control groups and found that the CDRs had at least a modest impact on outcomes relative to the controls.[Footnote 19] However, these analyses were limited to measures where the same data were available from other sources for both CDR patients and control groups, which means that the data used in those analyses did not have the clinical detail (or validation) of the CDR-collected data. Moreover, patients in the control groups also differed to some extent from patients in the CDRs in ways that may affect the results reported. HHS's Plans for Requirements and Oversight for Qualified CDRs Need to Focus on Quality and Efficiency: HHS's plans for CMS's implementation of the qualified CDR program include little specificity on how CDRs will improve quality and efficiency. Setting key requirements, with greater specificity, for CDRs to become qualified could help promote improved quality and efficiency of care. In addition, effective oversight of these requirements depends on expert judgment to take account of variation among CDRs in their circumstances and opportunities for improvement. CMS's Plans for Implementing the Qualified CDR Program Include Little Specificity on How CDRs Will Improve Quality and Efficiency: CMS's plans for implementing the qualified CDR program, when it begins in 2014, offer little specificity concerning CDR objectives or results and provide substantial leeway for CDRs seeking to become qualified. CMS's plans include certain minimal attributes for entities to be considered, such as having been in existence with at least 50 participants for no less than one year. In addition, CMS's plans include a number of largely procedural performance requirements that qualified CDRs would have to satisfy. Among the most important are: * Data collection: A qualified CDR must collect and report to CMS data for at least nine quality measures, at least one of which must be an outcome measure. Collectively, these measures must cover at least three of the six domains of HHS's National Quality Strategy.[Footnote 20] The measures should include patient data from multiple payers and be risk-adjusted, where appropriate. * Data validation: A qualified CDR must attest to CMS that all the data it submitted were accurate and complete, submit a data validation strategy for verifying the accuracy and completeness of data it collected, perform the steps described in its validation strategy and provide CMS with evidence of successful results, and make available to CMS samples of patient data that CMS could use for its own audits of data validity. * Data security: A qualified CDR must have a plan to maintain data security and privacy, have appropriate business associate agreements with participating physicians to satisfy federal patient privacy requirements, and use specified methods to transmit quality data to CMS in one of two specified data formats. 
* Transparency: A qualified CDR must make publicly available information about its measures, including their supporting evidence or rationale, data elements, and criteria for including and excluding patients. * Improvement activities: A qualified CDR must provide feedback reports to participating physicians on their performance at least four times a year, with benchmarks derived from the CDR's own database or external sources. As a whole, CMS's plans provide substantial leeway in key areas regarding what could constitute satisfactory CDR performance. For example, CMS's plans grant CDRs wide latitude in selecting which measures they will collect, as long as they cover three of six National Quality Strategy domains and at least one is an outcome measure.[Footnote 21] Similarly, CMS's plans state that CDR feedback to participating physicians should include benchmark information, but CDRs will have discretion to determine the appropriate benchmark, either derived from the CDR's own data or drawn from an external source. The extensive leeway in CMS's plans would allow diverse registries to become qualified. Because registries are typically designed to focus on a particular set of patients, defined by medical condition, type of treatment received, or geographical location, they will inevitably vary substantially in their opportunities to promote quality and efficiency of care. Therefore, the broad parameters in CMS's plans are compatible with a wide range of CDRs potentially addressing diverse types of physician care. However, the flexible approach for qualifying CDRs may at the same time provide minimal impetus to CDRs to take full advantage of their specific opportunities to promote the quality and efficiency of care. For example, CMS's plans do not include a process or criteria for assessing the extent to which the measures selected by a CDR in fact address the key opportunities that could result in improved care for its particular target population. In addition, CMS has not provided any details on how it plans to interpret or enforce program requirements for CDRs. For example, CMS has not described what CDRs would need to do to make their data validation strategies acceptable to CMS. Nor has it described the minimal thresholds of accuracy and completeness that CDRs would need to attain, which could help CMS to audit CDR data as necessary in the future. CMS has also not described how it intends to provide oversight to ensure that CDRs comply with the requirements, beyond having CDRs submit a self-nomination statement, initially on an annual basis. Greater specificity in both the requirements for CDRs and the mechanisms for enforcing them is likely to develop with time. CMS has not yet implemented the qualified CDR program, but in the preamble that accompanied its final rule, CMS stated that, as it gains programmatic experience, it anticipates making changes in future rulemaking to the requirements for becoming a qualified CDR.[Footnote 22] However, CMS has not yet articulated the direction or ultimate goals that it seeks to accomplish through this evolution, except that, to the extent possible, it will seek to align the requirements for CDRs more closely over time with requirements for other federal quality programs. 
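To illustrate how minimal and largely procedural these threshold criteria are, the following sketch encodes them as a simple screening check. The data structure, field names, and paraphrased domain labels are assumptions made for illustration; they do not represent CMS's actual review process.

# Hypothetical sketch of the threshold screening implied by CMS's plans:
# nine measures, at least one outcome measure, coverage of three of the six
# National Quality Strategy domains, 50 or more participants, and at least
# one year in existence. Domain labels are paraphrased for illustration.

NQS_DOMAINS = {
    "making care safer", "person- and family-centered care",
    "care coordination", "prevention and treatment of leading causes of mortality",
    "healthy living in communities", "making care more affordable",
}

def meets_threshold_criteria(registry):
    measures = registry["measures"]
    domains_covered = {m["nqs_domain"] for m in measures} & NQS_DOMAINS
    return (len(measures) >= 9
            and any(m["type"] == "outcome" for m in measures)
            and len(domains_covered) >= 3
            and registry["participants"] >= 50
            and registry["years_in_existence"] >= 1)

# A registry with nine measures, one of them an outcome measure, can still
# fail the screen if its measures cluster in only two domains.
example = {
    "measures": ([{"type": "process", "nqs_domain": "making care safer"}] * 8
                 + [{"type": "outcome", "nqs_domain": "care coordination"}]),
    "participants": 50,
    "years_in_existence": 1,
}
print(meets_threshold_criteria(example))  # False: only two domains covered

A registry could pass such a screen without showing that its measures target the most important improvement opportunities for its patient population or that performance on those measures is improving. Those gaps are the focus of the key requirements discussed below.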
Key Requirements Could Improve the Quality and Efficiency of Care through Qualified CDRs: We identified several key requirements for qualified CDRs that, based on our synthesis of the input from experts at the meeting we convened with the assistance of IOM together with other relevant sources, would contribute to improved quality and efficiency of care for Medicare patients. Such requirements could affect quality and efficiency both by determining which entities are designated as qualified CDRs and by encouraging certain activities by CDRs after they are designated as qualified. We identified the following key requirements and assessed the extent to which they are addressed by CMS's plans for implementing the qualified CDR program: 1. Performance measures to address key opportunities: Having qualified CDRs focus their data collection on performance measures that address specific opportunities to improve quality and efficiency for each CDR's target population enhances their effectiveness in promoting quality and efficiency overall. Input from experts and other relevant sources indicates that appropriate performance measures would encompass broadly defined measures of patient outcomes, such as patient experience and function, and consider the appropriateness of the chosen treatment, compared to available alternatives. Rationale: For any given patient population, defined by medical condition, treatments received, geographic location, or other attribute, there are a wide range of existing or potential performance measures on which a CDR could focus. If those measures are not well selected, they may divert the attention of participating physicians to clinical issues that are overly narrow or fail to uncover actual differences in quality and efficiency. Every CDR faces the choice of where it should focus its data collection and analytical resources, though the specific clinical issues that offer the greatest opportunity for improved quality and efficiency will vary from one CDR to another, depending on their target populations and the depth of the evidence base currently established for its field of clinical practice. CDRs need to make strategic choices that make the most of existing knowledge and strategies for improving quality and efficiency while also helping to incrementally expand that evidence base over time. Comparison to CMS plans: CMS plans to leave measure selection to the discretion of each CDR, within the broad parameters of covering three National Quality Strategy domains and including at least one outcome measure. CMS has not described expectations regarding how well targeted those measures are relative to the specific quality or efficiency deficiencies of that CDR's target population. Nor has CMS required that CDRs collect information on patient experience and functional outcomes or address the appropriateness of treatments provided. 2. Core set of measures: Input from experts and other relevant sources indicates that having qualified CDRs collect data for a minimum set of core performance measures with standardized definitions and specifications as part of their overall data collection effort would enable CDRs to address broad, shared objectives regarding both quality and efficiency. Rationale: While CDRs are free to collect a wide range of measures reflecting quality and efficiency opportunities in their particular target populations, there are certain measures that apply across the patient populations covered by different CDRs. 
Some of these relate to national-level quality improvement objectives such as improving care coordination. In order for CDRs to contribute to these broader national priorities, CDRs could collect the relevant data for their patients in a standardized fashion that permits sharing and aggregating of the data across CDRs and other sources of quality data. A core measure set would align CDRs with key national level priorities on quality and efficiency, while still allowing for innovation and permitting CDRs to collect other data that address regional or specialty-specific concerns. Comparison to CMS plans: CMS has not established common measures across qualified CDRs. To the extent that registries report on different measures within the six National Quality Strategy domains, they would not produce results that could be aggregated to assess progress overall. In addition, results could not be compared across different CDRs, which may be useful, for example, to examine cardiac patients receiving medical or surgical treatments. 3. Data accuracy and completeness: Input from experts and other relevant sources indicates that the credibility of CDR results relies on having CDRs implement a systematic and rigorous process for ensuring the accuracy and completeness of the data they collect and analyze. Rationale: Assessing physician performance with inaccurate or incomplete data is likely to produce misleading and invalid results. Therefore several existing CDRs have instituted regular external audits of the data submitted to their databases. However, the appropriate form of systemic and rigorous checking of the data may vary depending on the CDR's focus and method of data collection. For example, one long-standing CDR has annual external audits conducted of the data it collects, auditing 8 percent of participating physicians in 2013, to ensure that reported data are accurate compared to the original records from which the data were collected. Auditors also check hospital logs to make sure that data on all eligible cases were submitted. By contrast, an official for a different CDR that relies on electronic data extraction from EHRs described the use of statistical methods to identify outliers in the data that may indicate a data collection error. Comparison to CMS plans: CMS's plans state that CDRs must submit a data validation strategy that is acceptable to CMS, but CMS has not described either the approach or the intensity of the CDR efforts expected. CMS has also not detailed how it would evaluate the strategies for acceptability, or how it might evaluate CDR data for validity. Because data validation tends to be a labor-intensive and expensive activity for CDRs, the absence of specific validation requirements once the program is implemented may cause some CDRs to curtail their validation efforts. 4. Participation levels: Input from experts and other relevant sources indicates that CDRs need to achieve a substantial level of participation to ensure that their results represent the physicians that make up their target population, but for newly established CDRs it often takes time to achieve this level of participation. Rationale: Registries that recruit a relatively low proportion of physicians within their target population may not have the data needed to support accurate risk adjustments and benchmarking. However, historically it has taken time for registries to become well established. 
Rather than setting a minimum proportion, a requirement to disclose the level of participation in a CDR may partially compensate for low levels of participation by alerting potential users of the data to take those limitations into account. Comparison to CMS plans: CMS has not addressed the issue of how well a CDR represents physicians treating its targeted patient population. The planned required minimum of 50 participants may constitute only a very small fraction of those physicians. However, CDRs may use benchmarks developed with data from external organizations, such as the National Committee for Quality Assurance, which could help registries with low participation to achieve more accurate benchmarking. 5. Performance improvement: Input from experts and other relevant sources indicates that CDRs improve quality and efficiency by supplementing timely feedback on physician performance with information that targets needed practice changes. Rationale: The potential for CDRs to promote quality and efficiency improvements depends in large part on their ability to provide physicians with "actionable information" that identifies not only where performance is deficient but also specific changes in behavior that a physician could take to improve their outcomes. For example, one CDR official told us that in addition to performance feedback and benchmarking, the CDR teaches, provides leadership, and supports hospitals and providers in quality improvement and change management. Another CDR official explained that they use CDR data to determine where additional tools are needed for physician development. The CDR provides virtual education programs and develops improvement tools for providers. The CDR officials we spoke with generally agreed that it is vital for CDRs to use data to inform quality improvement initiatives, rather than simply collecting the data. Comparison to CMS plans: CMS's plans would require that qualified CDRs provide participating physicians at least four feedback reports per year with benchmarks of some kind, but they do not require qualified CDRs to undertake any quality initiatives beyond feedback reports. 6. Public reporting: Input from experts and other relevant sources indicates that having CDRs provide some form of public reporting can promote greater quality and efficiency. However, to avoid unintended adverse effects the public reporting may be limited to selected measures that are particularly useful to patients and/or be phased in over time. Rationale: Public reporting can often help to motivate quality and efficiency improvement, but under some circumstances may also diminish physicians' receptivity to negative information and their willingness to participate in CDRs. For example, a CDR may encourage competing providers to collaboratively examine their performance data to identify patterns and sources of suboptimal care. Some of these providers may not be willing to participate in such quality improvement efforts if doing so involves publicly reporting data that could put them at a competitive disadvantage. In this way, differences across CDRs in the kind of data they collect and how they use them may affect the results available to be shared with the public and the possible ramifications of doing so. Comparison to CMS plans: CMS initially proposed that qualified CDRs have a plan to publicly report results for individual physicians, with benchmarks. 
In response to public comments that raised concerns about the cost and time associated with public reporting, CMS did not adopt this requirement. Instead, the preamble to the final rule states that CMS encourages qualified CDRs to move toward public reporting, and that it will revisit this proposed requirement in the future. 7. Demonstrating results: Input from experts and other relevant sources indicates that CDRs are more likely to achieve improvements in physician performance if they have specific incentives to do so. Therefore, requiring qualified CDRs to demonstrate improvement over time on the quality and efficiency measures that they collect would help to focus their attention on achieving results. Rationale: Both the financial incentives that CDRs will extend to participating physicians and the flexibility allowed in how they choose to operate are intended to promote improved quality and efficiency of care. Therefore, successful CDRs will begin to realize their potential to improve care by demonstrating results on key improvement opportunities for their target populations. Because those opportunities vary across CDRs, the magnitude of improvement that can be expected of different CDRs will also vary. At a minimum, each CDR has the ability to identify its key targets for improvement and begin to make incremental progress toward them. Comparison to CMS plans: CMS has not described any expectations regarding the results of qualified CDR activities. Effective Oversight of CDRs Depends on Expert Judgment to Account for Variation among Them: To effectively implement requirements for qualified CDRs that focus on improving quality and efficiency, expert judgment is needed to interpret those requirements in accordance with the CDRs' differing circumstances and opportunities for improvement. In particular, according to experts and other relevant sources, assessing both potential and actual effects of individual CDRs on quality and efficiency of care requires an understanding of what those particular CDRs could do to change physician practice and achieve improved performance. This will depend on the state of clinical research and other factors that affect what is currently known about opportunities to improve quality and efficiency in each CDR's area of medical practice. For example, expert judgment is needed to determine whether the particular set of measures adopted by a CDR effectively addresses the key quality and efficiency opportunities for improvement for the target population of that CDR. In addition, expert judgment could help to determine what adjustments to make in performance expectations for recently established CDRs, which, compared to CDRs that have been in operation over a longer period and have achieved a higher level of physician participation, may need time to build their capacity to promote improvements in quality and efficiency. Experts and other sources we consulted suggest a range of potential sources that CMS could draw on to provide this expert judgment for assessing qualified CDRs. They include relying on staff within CMS, contracting with outside experts, and delegating certain aspects of oversight to independent organizations. For example, one variation of the latter option might be to set up a deeming process to select one or more outside entities that meet CMS-determined criteria for carrying out all or part of this oversight function. 
Each of those options has strengths and limitations in terms of, for example, its resource requirements, adaptability to varying situations, and responsiveness to agency priorities (such as promoting alignment with other quality programs). CMS could consider these different strengths and limitations in building an organizational structure for monitoring qualified CDRs that draws on expertise from one or more of these sources. HHS Actions Could Reduce Barriers to the Development of Qualified CDRs: Based on our synthesis of the input from experts at the meeting we convened with the assistance of IOM together with other relevant sources, there are several actions that HHS could take that could help reduce potential barriers to the development of qualified CDRs. Reducing these barriers would make it easier for qualified CDRs to get started and expand the scope of their activities and thereby improve the quality and efficiency of physician care provided to Medicare beneficiaries, according to the input from experts and other relevant sources. Concerns about Complying with Privacy Regulations: Some CDR officials report that the recruitment of new participants is made more difficult by widespread concerns among physicians that submission of data to a CDR risks violation of the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule.[Footnote 23] Under the Privacy Rule, protected health information may be used or disclosed only for specified permitted purposes. Because CDR data are often used for the purposes of both quality improvement activities and clinical research, it may often not be clear whether, or which, permitted use or disclosure applies. This lack of clarity can make it more difficult for CDRs to collect and analyze clinical data for either purpose. CDR officials stated that a particular concern of potential CDR participants is the perceived need for individual patient authorization or approval by an Institutional Review Board (IRB) to ensure compliance with HIPAA requirements.[Footnote 24] CMS has indicated that CDRs must enter into an appropriate business associate agreement with participating physicians that provides for receipt of patient data and public disclosure of quality measure results. [Footnote 25] However, it has not addressed physician concerns regarding the perceived need to meet HIPAA Privacy Rule requirements for research uses and disclosures. The HHS Office for Civil Rights monitors compliance with HIPAA requirements and issues various types of guidance to explain how those requirements apply under different circumstances. Input from experts and other relevant sources suggests that physicians and CDRs could benefit from guidance that provides a detailed explanation of what CDRs need to include in their business associate agreements with participating physicians and what activities would trigger the need for individual patient authorization or IRB approval of their data collection and analysis activities. Limited Ability to Link Patient Data across Sources: CDRs often need to supplement the data that they collect themselves with data obtained from other sources, but their ability to do so is inhibited by limitations in existing processes for matching and linking data on individual patients from multiple sources. In order to link patients' data from multiple sources into a CDR database, the CDR must first be able to identify records for the same patient from each source--a process known as matching. 
Earlier attempts to implement a unique personal identifier as part of patients' records to enable matching were abandoned due to concerns about its potential impact on patient privacy. Alternative methods exist for matching patient data without using a unique patient identifier, including algorithms that make probabilistic matches based on several discrete data elements. However, these approaches often fall short of matching data from multiple sources for all patients, due in part to variations in the algorithms themselves and the data elements they use for performing these matches.[Footnote 26] Input from experts and other relevant sources suggests that HHS could work on developing a standardized process for matching and linking patient data that does not require the use of a unique patient identifier, including a uniform algorithm and associated data specifications. HHS could then work with other health care entities to adopt this standardized approach across the spectrum of relevant data sources to better address the need of CDRs to link data from other sources in order to perform a more complete assessment of physician performance. Lack of Patient Cost Data: Because CDRs derive most of their data from patient medical records, they typically lack information about the cost of patient care needed to address questions about the efficiency of care. The most fundamental problem with obtaining cost data is that those data are fragmented among the various public and private payers for health care, including private health insurers as well as Medicare and Medicaid. Even when CDRs limit their focus to the Medicare population, they have had to negotiate with CMS for access to Medicare claims data for each particular research project. To facilitate and encourage CDR analysis of the efficiency of physician care, input from experts and other relevant sources suggests that CMS could make its cost data for Medicare and Medicaid patients generally available to qualified CDRs. In addition, although HHS has less direct control over the cost data collected by private health insurers, some health insurers have begun to work with states, HHS, and others to assemble "all-payer" claims databases that combine public and private health care spending data. HHS could examine the potential for making these "all-payer" claims databases available to qualified CDRs. Difficulty of Funding CDRs: CDRs frequently have difficulty finding a sustainable flow of funding from the participating physicians to maintain the resource-intensive activities necessary for their work, including collecting and validating detailed clinical data, which requires highly trained staff. Under the new program, participation in a qualified CDR will entitle physicians to the same incentive payments and exemption from penalties provided to PQRS participants, which could help to encourage physicians to participate in CDRs and to fund their operations. However, experts report that participation in PQRS remains a cheaper and easier way to obtain those benefits. HHS is looking into expanding incentives for physician participation in CDRs by coordinating with additional federal programs, such as the EHR incentive program, as well as possible coordination with related nongovernmental activities, such as maintenance of certification requirements established by various boards of medical specialties. 
An alternative approach for providing additional funding to qualified CDRs, raised at our expert meeting, would be to share with them some of the financial benefits that the CDRs may generate for the Medicare program. Doing so could reward CDRs that are successful in producing these benefits while promoting program savings for Medicare. For example, HHS could consider testing models of "shared savings" programs--possibly through CMS's Center for Medicare and Medicaid Innovation--that would provide CDRs or their participating providers with a portion of any cost savings for the Medicare program that resulted from their activities. To do this, CMS would have to develop a credible methodology for determining the extent of savings that a qualified CDR's activities had produced for Medicare. Need for Technical Assistance: The first CDRs established by medical specialty societies reported taking many years to work out how best to accomplish the complex technical tasks needed to get a new CDR up and running. These include procedures for deciding what measures to collect, appropriate and feasible data collection and submission processes, implementation of risk adjustment, provisions for maintaining data security and protecting patient privacy, and effective data validation procedures. Several CDRs established since then have turned to those first CDRs for informal guidance, to learn from their experience. Input from experts suggests that HHS could consider creating or facilitating the development of a CDR resource center that would offer qualified CDRs, or CDRs seeking to become qualified, technical assistance in the initial phases of setting up a CDR. Such a CDR resource center could draw on expertise from existing CDRs or other relevant sources and could help new registries launch successfully and more quickly achieve an adequate level of physician participation.[Footnote 27] Health IT, Including EHRs, Can Support the Collection and Sharing of CDR Data, and Those Activities Could Be Strengthened by HHS Actions to Align Health IT Policies with CDR Needs: In recent years, some CDRs have developed different approaches to electronically capture data from a wide variety of health IT applications, particularly EHR systems. Input from experts and other relevant sources suggests that HHS could help CDRs overcome barriers that impede the electronic collection and transmission of clinical data by supporting standard setting and adjusting meaningful use requirements. CDRs Can Benefit from Using Health IT to Electronically Collect and Transmit Data, but Face Limits in Implementation: Health IT applications, including EHRs, could offer CDRs substantial support in collecting and transmitting large amounts of detailed clinical data from participating physicians' medical records. CDR officials report that, without such IT support, data collection is a time-consuming process in which specially trained staff must manually abstract data from medical records and format them for transmission to the CDR, a process that includes training those staff to synthesize information from patient charts and other records. These trained data abstractors must often make judgments on how to interpret certain information in the record to meet the CDR's data specifications and definitions. For example, the word "pneumonia" may not appear in the medical record for all patients with the condition.
Therefore, an abstractor may need to interpret the record's data on patient encounters, chest x-ray results, or stethoscope breath sounds to determine whether a patient had pneumonia as defined by the CDR. In addition, most data collection is performed days or weeks after care is provided, rather than at the time of care, which can substantially delay feedback to physicians. Input from experts and other relevant sources suggests that EHR systems, if appropriately designed and implemented, have the potential to greatly increase the efficiency of extracting data from patient records and transmitting these data to CDRs. The use of EHR systems across the country is growing; the proportion of office-based physicians using any type of EHR system increased from 51 percent in 2010 to 72 percent in 2012.[Footnote 28] If CDRs could receive and aggregate electronically extracted data from EHR systems, the need for manual abstraction by trained professional staff could be reduced or eliminated. Reducing the burden of manual data abstraction could have a number of long-term benefits, including reducing costs for physicians to participate in the CDR, reducing the amount of time a practice spends on CDR data collection activities, and increasing overall participation of physicians in CDRs. Health IT experts also note that automated data collection from EHR systems makes it possible for CDRs to provide physicians with more timely feedback on care they have recently provided, compared to manual data collection. In addition, EHR systems as well as other health IT applications have the potential to facilitate information sharing among CDRs and other potential users of health care quality and efficiency data, allowing for comparison across CDRs and providing a more comprehensive and long-term view of the outcomes of patient care. Some CDRs have adopted IT approaches that allow them to automatically extract at least some information from their participating members' EHR systems into the CDR's database. However, these approaches have some important limitations. For example, experts reported that some CDRs use a method called Retrieve Form for Data Capture (RFD), which alerts physicians through a trigger in their EHR system when a patient may be eligible for inclusion in the CDR database. RFD uses data from the EHR system to automatically prepopulate the CDR's web-based data collection form, but it then requires that the physician interrupt work to enter the remaining information that was not automatically captured from the EHR. RFD also works only with EHR systems from a few vendors. Another example was described by an official from the ACC's Pinnacle registry, which has implemented a more comprehensive system for electronically capturing data from a wider variety of EHR systems. Within each physician practice, system integration software is installed on the same server that hosts the EHR system. The software is designed and implemented to automatically extract data directly from the physician's EHR system for transmission to the CDR database. After a period of testing and adjustment to adapt the software to the EHR system's specific data structure, it can automatically capture 75 to 90 percent of the desired information. The ACC has determined that this electronic data collection results in higher levels of physician participation, and therefore is worth the tradeoff of doing without the portion of data that cannot be captured electronically.
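The ACC's integration software is proprietary and its internal design is not detailed in this report; the following is a minimal Python sketch, under stated assumptions, of the general pattern described above: querying the EHR system's data store, relabeling fields to match the CDR's data element names, and assembling records for transmission. The table name, column names, CDR element names, and sample values are hypothetical, and an in-memory database stands in for the EHR.

import json
import sqlite3

# Stand-in for the EHR system's database on the practice's server (hypothetical schema and values).
ehr = sqlite3.connect(":memory:")
ehr.execute("CREATE TABLE encounters (patient_id TEXT, encounter_date TEXT, dx_code TEXT, ef_percent REAL)")
ehr.execute("INSERT INTO encounters VALUES ('P001', '2013-06-10', '428.0', 35.0)")

# Assumed mapping from the EHR's column names to the CDR's data element names.
field_map = {"patient_id": "patient_id", "encounter_date": "service_date",
             "dx_code": "diagnosis_code", "ef_percent": "ejection_fraction"}

def extract_for_cdr(conn, mapping):
    # Pull each encounter row and relabel its columns using the CDR's element names.
    cursor = conn.execute("SELECT patient_id, encounter_date, dx_code, ef_percent FROM encounters")
    columns = [col[0] for col in cursor.description]
    for row in cursor:
        record = dict(zip(columns, row))
        yield {mapping[name]: value for name, value in record.items()}

# In practice the reshaped records would be transmitted securely to the CDR;
# here they are simply serialized to show the resulting submission format.
print(json.dumps(list(extract_for_cdr(ehr, field_map)), indent=2))

[End of illustrative example]

Adapting such a mapping to each vendor's data structure corresponds to the period of testing and adjustment described above, and elements with no counterpart in a given EHR system correspond to the portion of data that cannot be captured electronically.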
However, according to an ACC official, the ACC's system does not work with EHR systems produced by certain vendors, has been costly to implement, and may not be feasible for CDRs in other fields of medicine, where there is less consistent use of clinical terminology than in cardiology. Variation among EHRs Affects CDR Ability to Use Health IT and Could Be Addressed through Health IT Standards and HHS Adjustments to the EHR Incentive Programs: Experts and other relevant sources indicate that variation in EHR systems on several key dimensions impairs CDRs' ability to collect electronic data from participating physicians.

* EHRs can differ in which data elements they collect.[Footnote 29] Some EHR systems collect more information on some topics than others, because physicians in different specialties have different needs and interests.

* EHR systems can differ in how they store data. In order to automatically extract data from the EHR, CDRs must develop methods for converting the data in each EHR to a format that the CDR's IT system can accept and accurately interpret. For example, an ACC official told us that one reason the ACC's system has been costly to implement and does not work with EHR systems from certain vendors is that data are stored differently in different EHR systems.

* Finally, even if EHR systems collect the same basic content and use compatible storage methods, their data elements may be specified or defined differently. For example, an EHR may identify a smoker based on whether a person smoked any number of cigarettes in the last year, while another may count as a smoker anyone who has smoked at least 100 cigarettes in the past and still currently smokes. While both of these definitions may serve various purposes, the information collected from each EHR on smoking would not be fully comparable.

These variations in EHR data content, storage, and specifications can affect a CDR's ability to extract data electronically from physician EHR systems. In order to assess physician performance, CDRs have to collect all the data elements needed for their performance measures and ensure that those data elements are consistent with the CDR's data specifications. Consequently, CDRs cannot take full advantage of EHR systems to facilitate data collection and transmission unless they can overcome these variations in content, storage, and specifications across existing EHR systems.[Footnote 30] One way to reduce variation across health IT applications, including EHR systems, and thereby facilitate collection and transmission of clinical data, is to develop and implement relevant health IT standards. According to experts, CDRs could benefit from health IT standards that reduce variation across EHR systems on the data elements needed for the measures used by CDRs. Where such standards are in place, they are available for vendors to use in designing and implementing EHR systems. As a result, different vendors would be more likely to develop EHR systems with consistent clinical data, in terms of their content and specification. Such consistency could make it easier for CDRs to collect these data from different EHR systems, as long as the standards aligned with the CDR's own data specifications and needs. However, standards may not always align with CDR needs.
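The smoking-status example above illustrates the kind of specification difference that such standards would need to resolve. The following minimal Python sketch shows, under assumed field names and an assumed CDR definition (any cigarette use within the past year), how a CDR might harmonize two differently specified smoking-status elements; it is illustrative only and does not reflect any particular CDR's or vendor's specification.

def smoker_per_cdr_definition(ehr_record):
    # Classify smoking status under the assumed CDR definition: any reported
    # cigarette use within the past year counts as a current smoker.
    if "smoked_any_past_year" in ehr_record:  # assumed field name for the first EHR's definition
        return bool(ehr_record["smoked_any_past_year"])
    if "lifetime_100_and_current" in ehr_record:  # assumed field name for the second EHR's definition
        # The second definition (at least 100 lifetime cigarettes and currently
        # smoking) is narrower: a "yes" maps cleanly to the CDR definition, but a
        # "no" is ambiguous and is returned as None to flag it for manual review.
        return True if ehr_record["lifetime_100_and_current"] else None
    return None  # element not captured electronically; cannot be determined

print(smoker_per_cdr_definition({"smoked_any_past_year": True}))       # True
print(smoker_per_cdr_definition({"lifetime_100_and_current": False}))  # None (needs review)

[End of illustrative example]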
One CDR official provided an example of such misalignment between standards and CDR needs: the existing health IT standard for cancer staging does not provide the level of detail needed by the Quality Oncology Practice Initiative, an oncology CDR, to assess physician compliance with treatment guidelines targeted by the CDR. Several independent organizations play a role in setting the health IT standards that apply to physician EHR systems. They include international standards-setting groups, each of which creates detailed coding systems, such as SNOMED CT and LOINC, designed to provide a standard way to electronically record one or more categories of clinical information.[Footnote 31] According to agency officials, while HHS interacts with these groups and may be able to influence the development of new or revised health IT standards, the process for doing so can take as long as 2 years. Therefore, at any given time, the extent of existing health IT standards largely constrains what developers of EHR systems can do to implement standardized data elements. A second major factor that experts report affects the design and implementation of EHR systems used by physicians is the meaningful use requirements established by HHS for its EHR incentive programs. HHS establishes two sets of requirements for the EHR incentive programs that potentially affect CDRs: (1) a list of specific clinical quality measures (CQM) that physicians are required to collect using EHR-collected data elements, and (2) certification criteria that specify certain capabilities that EHR systems are required to demonstrate, including the ability to collect the data needed for physicians to report the specified CQMs. To receive an incentive payment, physicians must demonstrate, among other things, that they have used a certified EHR system to collect data for a minimum number of the specified CQMs. Through its setting of these meaningful use requirements, HHS could influence the extent to which EHR systems are designed and implemented to collect data needed by CDRs to assess physician performance. According to IT experts at our expert meeting, EHR vendors place a high priority on developing EHR systems that are able to collect the CQMs prescribed by meaningful use requirements; without that capability, the systems would not qualify for EHR incentive payments. The current set of 64 CQMs focuses predominantly on primary care and generally does not include measures relevant to CDRs, many of which focus on assessing specialty care. HHS has stated its intention to consider revisions to the meaningful use requirements under Stage 3 of the EHR incentive program implementation, scheduled to take effect in 2016. These revisions would give qualified CDRs greater flexibility in meeting the EHR programs' quality reporting requirements. By also including the data needed by CDRs in its revised meaningful use requirements, HHS could increase the motivation for vendors to include the capacity to collect data elements for measures relevant to CDRs in their EHR systems. Conclusions: Qualified CDRs have the potential to improve the quality and efficiency of care for Medicare beneficiaries by encouraging physicians to submit extensive, standardized data to CDRs, enabling the CDRs to provide feedback to physicians on their performance relative to that of their peers. Studies show that CDRs have great potential to improve quality, and to a lesser extent efficiency, but often that potential is not realized.
While implementation of the program is just getting under way and HHS plans to have its program requirements and structure evolve over time, a key question is the extent to which that evolutionary process focuses on harnessing the potential of CDRs to promote quality and efficiency. The extent to which HHS's new program can help CDRs realize their potential to improve quality and efficiency will depend in large part on the content and oversight of the requirements that HHS sets for qualified CDRs and the support that HHS provides. To date, HHS plans have focused on largely procedural requirements for CDRs that collectively would do little to base qualification of CDRs on their potential to affect quality and efficiency or hold them accountable for achieving improvements in those domains. Our analysis identified certain key requirements that HHS could adopt that would make it substantially more likely that qualified CDRs actually would improve quality and efficiency. Some of these key requirements are more important than others to have in place as the program is implemented. From the beginning, the effectiveness of CDRs will depend on their selecting measures that focus the CDRs' assessment and performance improvement activities on the specific opportunities for improvement that exist for their particular target populations. At the same time, CDRs can also collect a limited core data set that contributes to achieving national quality and efficiency objectives. The credibility of those data depends on CDRs establishing from the start systematic and rigorous processes to validate their accuracy and completeness. HHS can most clearly ensure that each qualified CDR focus on improvements in quality and efficiency by requiring that each CDR demonstrate improvements in key measures of quality and efficiency for its target population. Effective monitoring of these requirements will depend on applying expert judgment that can take account of the variation across CDRs in their target opportunities for improvement. HHS can also enhance the effect of qualified CDRs on quality and efficiency by taking steps to reduce barriers to their development and, in particular, taking account of CDRs in its ongoing efforts to promote health IT. Certain steps would be particularly useful as the program gets under way, including clarifying the application of HIPAA privacy requirements to physicians participating in qualified CDRs, addressing the lack of access to multipayer cost data, expanding potential sources of funding to support sustained CDR operations, and providing technical assistance to newly established CDRs. Meanwhile, efforts by some CDRs to adapt health IT to make their data collection less costly and more timely have run into significant barriers related both to gaps in existing health IT standards and to the failure of many current EHRs to apply existing standards to collect data needed by CDRs in a structured format. Changes to EHR capabilities that would enable them to collect such data within existing standards are clearly feasible, but are not high priorities for providers and IT vendors because they are not included in the current set of meaningful use requirements for the EHR incentive program. 
As HHS determines what the next cycle of meaningful use requirements should comprise, identifying data elements for measures commonly needed by CDRs and including them in meaningful use requirements could substantially assist qualified CDRs in adapting health IT to make data collection less costly and more timely. Recommendations for Executive Action: To help ensure that qualified CDRs promote improved quality and efficiency of physician care for Medicare beneficiaries, we recommend that the Secretary of Health and Human Services take the following five actions:

* Direct CMS to establish key requirements for qualified CDRs that focus on improving quality and efficiency. These requirements could include, for example, having CDRs (1) identify key areas of opportunity to improve quality and efficiency for their target populations and collect additional measures designed to address them, (2) collect a core set of measures established by CMS, and (3) demonstrate that their processes for auditing the accuracy and completeness of the data they collect are systematic and rigorous.

* Direct CMS to establish a requirement for qualified CDRs to demonstrate improvement on key measures of quality and efficiency for their target populations.

* Direct CMS to establish a process for monitoring compliance with requirements for qualified CDRs that draws on relevant expert judgment. This process should assess CDR performance on each requirement in a way that takes into account the varying circumstances of CDRs and their available opportunities to promote quality and efficiency improvement for their target populations.

* Determine and implement actions to reduce barriers to the development of qualified CDRs, such as (1) developing guidance that clarifies HIPAA requirements to promote participation in qualified CDRs; (2) working with private sector entities to make relevant multipayer cost data available to qualified CDRs; (3) testing one or more models of shared savings between Medicare and qualified CDRs that achieve reduced Medicare expenditures with improved quality of care; and (4) providing technical assistance to qualified CDRs.

* Determine key data elements needed by qualified CDRs--such as those relevant for a required core set of measures--and direct ONC and CMS to include these data elements, if feasible, in the requirements for certification of EHRs under the EHR incentive programs.

Agency Comments and Our Evaluation: We provided a draft of this report to HHS for review, and HHS provided written comments, which are reprinted in appendix II. In its comments, HHS concurred with our recommendations and stated its intention to apply the experience it gains in implementing the qualified CDR program to facilitate changes that lead to improved quality and efficiency. For example, HHS stated that it saw value in providing greater specificity in the expectations it sets for qualified CDRs, in particular with respect to having them demonstrate improvement in quality and efficiency, once HHS has sufficient experience with the program to establish a baseline against which to assess their performance. HHS also stated its intention to establish a process to monitor the qualified CDR program that would draw on relevant and appropriate expert judgment and to do what it could to reduce barriers to the development of qualified CDRs.
In addition, HHS agreed to have CMS and ONC work together to consider the inclusion of key data elements for qualified CDRs as they develop enhanced health IT criteria for the next stage of the EHR incentive programs. Meanwhile, HHS noted several other efforts that it currently has under way to improve health IT systems in general, which can also provide assistance to qualified CDRs attempting to use health IT to facilitate their operations. While HHS concurred with each of our recommendations, its comments also noted some challenges that it expects to face. For example, HHS stated that it will examine the possibility of establishing a core measure set for qualified CDRs, but it observed that doing so could prove difficult given the number of different clinical specialties on which qualified CDRs may focus. As noted in the draft report, a minimum set of core measures--even if small--could help CDRs to promote national-level quality improvement objectives such as improving care coordination by permitting the sharing and aggregating of the data across CDRs and other sources of quality data. HHS also provided us with technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, the National Coordinator for Health Information Technology, and other interested parties. In addition, the report is available at no charge on the GAO website at [hyperlink, http://www.gao.gov]. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or at kohnl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Signed by: Linda T. Kohn: Director, Health Care: List of Committees: The Honorable Max Baucus: Chairman: The Honorable Orrin G. Hatch: Ranking Member: Committee on Finance: United States Senate: The Honorable Fred Upton: Chairman: The Honorable Henry A. Waxman: Ranking Member: Committee on Energy and Commerce: House of Representatives: The Honorable Dave Camp: Chairman: The Honorable Sander M. Levin: Ranking Member: Committee on Ways and Means: House of Representatives: [End of section] Appendix I: List of Participants at GAO's Expert Meeting Hosted by the Institute of Medicine, June 10-11, 2013:

Name: Kevin Bozic, MD, MBA; Title: William R. Murray Professor and Vice Chair, Department of Orthopaedic Surgery; Affiliation: University of California, San Francisco; Title: Chair, Steering Committee; Affiliation: California Joint Replacement Registry.

Name: Patrick Conway, MD, MSc, SFHM; Title: Chief Medical Officer and Director, Center for Clinical Standards and Quality; Affiliation: Centers for Medicare & Medicaid Services.

Name: Jason Alexander Efstathiou, MD, PhD; Title: Assistant Professor of Radiation Oncology; Affiliation: Massachusetts General Hospital.

Name: Richard E. Gliklich, MD; Title: President; Affiliation: Quintiles Outcome.

Name: Kate Goodrich, MD, MHS; Title: Director, Quality Measurement and Health Assessment Group, Center for Clinical Standards and Quality; Affiliation: Centers for Medicare & Medicaid Services.

Name: J. Michael McGinnis, MD, MA, MPP (Co-Moderator); Title: Senior Scholar and Executive Director, Roundtable on Value & Science-Driven Health Care; Affiliation: Institute of Medicine.
Name: Arnold Milstein, MD, MPH (Co-Moderator); Title: Professor of Medicine and Director, Clinical Excellence Research Center; Affiliation: Stanford University; Title: Medical Director; Affiliation: Pacific Business Group on Health.

Name: Elizabeth Mitchell; Title: CEO; Affiliation: Network for Regional Healthcare Improvement.

Name: Peggy E. O'Kane; Title: President; Affiliation: National Committee for Quality Assurance.

Name: J. Marc Overhage, MD, PhD; Title: Chief Medical Informatics Officer; Affiliation: Siemens Medical Solutions, Inc.

Name: Eric D. Peterson, MD, MPH; Title: Fred Cobb, MD, Distinguished Professor of Medicine; Affiliation: Duke University Medical Center; Title: Director; Affiliation: Duke Clinical Research Institute.

Name: Jacob Reider, MD; Title: Chief Medical Officer; Affiliation: Office of the National Coordinator for Health Information Technology.

Name: David M. Shahian, MD; Title: Professor of Surgery; Affiliation: Harvard Medical School; Title: Vice President, Center for Quality and Safety; Affiliation: Massachusetts General Hospital; Title: Chair, Workforce on National Databases; Affiliation: Society of Thoracic Surgeons.

Name: David Share, MD; Title: Senior Vice President, Value Partnerships; Affiliation: Blue Cross and Blue Shield of Michigan.

Source: GAO. [End of table] [End of section] Appendix II: Comments from the Department of Health and Human Services: Department of Health & Human Services: Office of the Secretary: Assistant Secretary for Legislation: Washington, DC 20201: November 22, 2013: Linda Kohn: Director, Health Care: U.S. Government Accountability Office: 441 G Street NW: Washington, DC 20548: Dear Ms. Kohn: Attached are comments on the U.S. Government Accountability Office's (GAO) report entitled, "Clinical Data Registries: HHS Could Improve Medicare Quality and Efficiency Through Key Requirements and Oversight" (GAO-14-75). The Department appreciates the opportunity to review this report prior to publication. Sincerely, Signed by: Jim R. Esquea: Assistant Secretary for Legislation: Attachment: General Comments Of The Department Of Health And Human Services (HHS) On The Government Accountability Office's (GAO) Draft Report Entitled, "Clinical Data Registries: HHS Could Improve Medicare Quality And Efficiency Through Key Requirements And Oversight" (GAO-14-75): The Department appreciates the opportunity to review and comment on this draft report. GAO Recommendation 1: GAO recommends that the Secretary of HHS direct CMS to establish key requirements for qualified CDRs that focus on improving quality and efficiency. These requirements could include, for example, having CDRs: (1) identify key areas of opportunity to improve quality and efficiency for their target populations and collect data for additional measures designed to address them; (2) collect data for a core set of measures established by CMS; and (3) demonstrate that their processes for auditing the accuracy and completeness of the data they collect are systematic and rigorous. HHS Response: HHS concurs that QCDRs can act as an important tool in our efforts to improve health outcomes. We look forward to working with clinical data registry stakeholders to evaluate opportunities to ensure that QCDRs are improving quality and efficiency. CMS intends to apply the experience gained on the QCDR program on an ongoing basis to facilitate the changes in QCDRs to improve quality and efficiency, including those related to a potential QCDR core measure set.
Establishing a core measure set across QCDRs of various specialties will require experience over time with the different QCDRs to determine what measures and types of measures could be potentially included in a possible QCDR core measure set. It will be difficult to mandate a set of QCDR core measures, given that QCDRs can cross many clinical specialties. GAO Recommendation 2: GAO recommends that the Secretary of HHS direct CMS to establish a requirement for qualified CDRs to demonstrate improvement on key measures of quality and efficiency for their target populations. HHS Response: HHS concurs with this recommendation for a future year. We do see value in providing additional specificity on QCDRs demonstrating improvement in quality and efficiency; however, a baseline must be established, built on our experience with the QCDR program, for this to be a valuable indicator. GAO Recommendation 3: GAO recommends that the Secretary of HHS direct CMS to establish a process for monitoring compliance with requirements for qualified CDRs that draws on relevant expert judgment. This process should assess qualified CDR performance on each requirement in a way that takes into account the varying circumstances of qualified CDRs and their available opportunities to promote quality and efficiency for their target populations. HHS Response: HHS concurs with this recommendation and intends to have monitoring capabilities for the QCDR program. CMS anticipates that in future years the application of relevant and appropriate expert judgment in the oversight of QCDRs could be expanded through rulemaking to the extent possible while maintaining compliance with appropriate laws and regulations. GAO Recommendation 4: GAO recommends that the Secretary of HHS determine and implement actions to reduce barriers in the development of qualified CDRs, such as (1) developing guidance that clarifies HIPAA requirements to promote participation in qualified CDRs; (2) working with private sector entities to make relevant multi-payer cost data available to qualified CDRs; (3) testing one or more models of shared savings between Medicare and qualified CDRs that achieve reduced Medicare expenditures with improved quality of care; and (4) providing technical assistance to qualified CDRs. HHS Response: HHS concurs and agrees to reduce barriers for the development of QCDRs to the extent possible while maintaining compliance with appropriate laws and regulations. GAO Recommendation 5: GAO recommends that the Secretary of HHS determine key data elements needed by qualified CDRs--such as those relevant for a required core set of measures--and direct ONC and CMS to include these data elements, if feasible, in the requirements for certification of electronic health records (EHRs) under the EHR Incentive Programs. HHS Response: HHS concurs and agrees to include, if feasible, in the QCDR requirements, key data elements needed by QCDRs. CMS will continue to work closely with ONC in the establishment of criteria for EHR certification.
To date, HHS has taken steps and initiated activities that align with this goal, including the Data Framework Initiative [hyperlink, http://www.hhs.gov/open/initiatives/hdi/about.html], the Structured Data Capture Initiative [hyperlink, http://www.healthit.gov/buzz-blog/electronic-health-and-medical-records/ehr-interoperabilitystructured-data-capture-initiative/], the Clinical Quality Measure Data Standards Initiative [hyperlink, http://www.healthit.gov/policy-researchers-implementers/clinical-quality-measures], and the Supplemental Data File Initiative [hyperlink, http://www.healthdata.gov/]. CMS and ONC believe that these efforts will improve health IT systems in general and will benefit QCDR efforts going forward. Establishing QCDR key data elements will be a matter for future rulemaking. [End of section] Appendix III: GAO Contact and Staff Acknowledgments: GAO Contact: Linda T. Kohn, (202) 512-7114 or kohnl@gao.gov: Staff Acknowledgments: In addition to the contact named above, Will Simerl, Assistant Director; Emily Binek; Monica Perez-Nelson; Eric Peterson; and Roseanne Price made key contributions to this report. [End of section] Footnotes: [1] Medicare is the nation's largest health care program, covering 51 million beneficiaries at a cost of $574 billion in 2012. [2] Pub. L. No. 112-240, § 601, 126 Stat. 2313, 2345-2347 (2013). [3] R. E. Gliklich and N. A. Dreyer, eds., Registries for Evaluating Patient Outcomes: A User's Guide, 2nd ed., AHRQ Publication No. 10-EHC049 (Rockville, MD: Agency for Healthcare Research and Quality, September 2010), 10. [4] Providers have the option of reporting PQRS data on their selected measures to CMS using PQRS registries. These registries differ from CDRs in that their sole function is to compile and submit data to PQRS. [5] In addition to physicians, other professionals eligible to participate in PQRS include nurse anesthetists, nurse practitioners, optometrists, physical therapists, and physician assistants. Physicians made up approximately three quarters of the eligible professionals who received PQRS incentive payments in 2011. In 2016 the payment reduction is set to increase to 2.0 percent for eligible professionals who do not meet PQRS requirements in 2014. [6] Section 601 of ATRA states that for 2014 and subsequent years HHS shall treat an eligible professional as satisfactorily submitting data on quality measures under PQRS if, in lieu of reporting measures, the eligible professional is satisfactorily participating in a qualified CDR. Section 601 also requires that HHS establish requirements for an entity to be considered a qualified CDR and establish a process to determine whether the entity meets those requirements. [7] 78 Fed. Reg. 74230 (December 10, 2013). The preamble discussion of provisions related to satisfactory participation in a qualified CDR by individual eligible professionals begins on 78 Fed. Reg. 74465. [8] EHRs are digital versions of patients' medical records that contain information about a patient's medical history, diagnoses, and treatments including medications, immunization dates, allergies, radiology images, and lab and test results. [9] For example, the Consumer Assessment of Healthcare Providers and Systems-Clinician & Group Surveys (CG-CAHPS) are a set of surveys that gather patient input on, for example, the effectiveness of physician communication and responsiveness to patient concerns.
[10] NQF is a nonprofit organization that endorses measures--that is, determines which measures should be recognized as the national performance standard for a given aspect of care--and encourages their use over other measures. NQF has endorsed over 700 measures. [11] Beginning in 2015, the Medicare EHR program is to begin applying a payment adjustment, or penalty, for professionals that do not meet the Medicare EHR program requirements. The Health Information Technology for Economic and Clinical Health (HITECH) Act, enacted as part of the American Recovery and Reinvestment Act of 2009, among other things, provided funding for various activities to promote the use of EHRs. The HITECH Act was enacted as title XIII of division A and title IV of division B of the Recovery Act. Pub. L. No. 111-5, div. A, tit. XIII, 123 Stat. 115, 226-279 and div. B, tit. IV, 123 Stat. 115, 467-496 (2009). [12] E. D. Peterson et al., "Trends in Quality of Care for Patients with Acute Myocardial Infarction in the National Registry of Myocardial Infarction from 1990 to 2006," American Heart Journal, vol. 156, no. 6 (2008), 1045-1055. [13] A. W. ElBardissi et al., "Trends in Isolated Coronary Artery Bypass Grafting: An Analysis of the Society of Thoracic Surgeons Adult Cardiac Surgery Database," The Journal of Thoracic and Cardiovascular Surgery, vol. 143, no. 2 (2012), 273-281. [14] M. N. Neuss et al., "Measuring the Improving Quality of Outpatient Care in Medical Oncology Practices in the United States," Journal of Clinical Oncology, vol. 31, no. 11 (2013), 1471-1477. [15] G. C. Lamb et al., "Publicly Reported Quality-of-Care Measures Influenced Wisconsin Physician Groups to Improve Performance," Health Affairs, vol. 32, no. 3 (2013), 536-542. [16] T. B. Ferguson et al., "Use of Continuous Quality Improvement to Increase Use of Process Measures in Patients Undergoing Coronary Artery Bypass Graft Surgery," Journal of the American Medical Association, vol. 290, no. 1 (2003), 49-56. [17] D. A. Share et al., "How a Regional Collaborative of Hospitals and Physicians in Michigan Cut Costs and Improved the Quality of Care," Health Affairs, vol. 30, no. 4 (2011), 636-644. [18] Share et al., "Regional Collaborative," 641. [19] Lamb et al., "Publicly Reported Measures," 538-540; W. R. Lewis et al., "An Organized Approach to Improvement in Guideline Adherence for Acute Myocardial Infarction," Archives of Internal Medicine, vol. 168, no. 16 (2008), 1813-1819. [20] The six National Quality Strategy domains are (1) Person and Caregiver-Centered Experience and Outcomes, (2) Patient Safety, (3) Communication and Care Coordination, (4) Community/Population Health, (5) Efficiency and Cost Reduction, and (6) Effective Clinical Care. [21] One additional requirement in CMS's plans is that CDR measures come from one of the following categories: CG-CAHPS, NQF-endorsed measures, current PQRS measures, measures used by medical specialty boards or specialty societies, and measures used by regional quality collaboratives. As NQF alone has endorsed over 700 measures, having the option of selecting measures from any of these sources provides CDRs with a large number of potential measures to choose from. [22] 78 Fed. Reg. 74474 (December 10, 2013). [23] Among other things, HIPAA provided for the establishment of national privacy and security standards for certain health information. Pub. L. No. 104-191, Title II, Subtitle F, 110 Stat. 1936, 2021 (1996).
Under HIPAA, the Secretary of Health and Human Services is authorized to promulgate regulations that establish privacy and security standards, and HHS implemented these provisions through its issuance of the HIPAA Rules--the Privacy Rule, the Security Rule, and the Enforcement Rule. The HIPAA Privacy and Security Rules are promulgated at 45 C.F.R. Parts 160 and 164 and have recently been updated. See 78 Fed. Reg. 5566 (Jan. 25, 2013). The Privacy Rule established a category of health information, called "protected health information," which may be used or disclosed to others by "covered entities" only under specified circumstances or conditions. The Privacy Rule establishes different conditions for uses and disclosures for quality improvement purposes and for research purposes, and HHS has issued guidance concerning the HIPAA research disclosure provisions. See Health Services Research and the HIPAA Privacy Rule, NIH Publication Number 05-5308, May 2005. [24] An IRB is an entity designated to review and monitor biomedical and behavioral research in clinical trials involving human subjects, with the intended purpose of protecting the rights and welfare of the research subjects. Under certain circumstances, the Privacy Rule permits a covered entity to use or disclose protected health information for research without an individual's authorization. One way that a covered entity can use or disclose protected health information for research without an authorization is by obtaining documentation of IRB approval of a waiver of the individual authorization requirement. [25] A business associate for a health care provider is a person or entity who performs functions or activities on behalf of the provider that involve access to protected health information. Federal privacy rules generally require that health care providers enter into contracts or agreements with their business associates to ensure that the business associates appropriately safeguard protected health information. This agreement also specifies the permissible uses and disclosures of protected health information by the business associate. [26] R. Hillestad et al., Identity Crisis: An Examination of the Costs and Benefits of a Unique Patient Identifier for the U.S. Health Care System (Santa Monica, CA: RAND Corporation, 2008), 7. [27] Among the relevant sources that these CDR resource centers could draw on are resources produced by the AHRQ-funded Electronic Data Methods Forum, whose activities include support for selected CDRs, available at [hyperlink, http://www.edm-forum.org]. [28] C. J. Hsiao et al., "Office-Based Physicians Are Responding to Incentives and Assistance by Adopting and Using Electronic Health Records," Health Affairs, vol. 32, no. 8 (2013). [29] Data elements are discrete data fields that record a specific piece of clinical information, such as primary diagnosis at admission. [30] A task force developing clinical standards with representation from multiple cardiovascular specialty organizations has stipulated the need for common vocabulary and definitions in order to aggregate and compare data collected by different EHR systems. C. P. Cannon et al., "2013 ACCF/AHA Key Data Elements and Definitions for Measuring the Clinical Management and Outcomes of Patients with Acute Coronary Syndromes and Coronary Artery Disease," Journal of the American College of Cardiology, vol. 61, no. 9 (2013). 
[31] SNOMED CT (Systematized Nomenclature of Medicine Clinical Terms) is a comprehensive clinical terminology developed and maintained by the International Health Terminology Standards Development Organisation based in Denmark. LOINC (Logical Observation Identifiers Names and Codes) is a clinical terminology, produced by the Regenstrief Institute in Indianapolis, Indiana, that focuses on laboratory tests and other clinical observations. [End of section] GAO's Mission: The Government Accountability Office, the audit, evaluation, and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO's commitment to good government is reflected in its core values of accountability, integrity, and reliability. Obtaining Copies of GAO Reports and Testimony: The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO's website [hyperlink, http://www.gao.gov]. Each weekday afternoon, GAO posts on its website newly released reports, testimony, and correspondence. To have GAO e-mail you a list of newly posted products, go to [hyperlink, http://www.gao.gov] and select "E-mail Updates." Order by Phone: The price of each GAO publication reflects GAO's actual cost of production and distribution and depends on the number of pages in the publication and whether the publication is printed in color or black and white. Pricing and ordering information is posted on GAO's website, [hyperlink, http://www.gao.gov/ordering.htm]. Place orders by calling (202) 512-6000, toll free (866) 801-7077, or TDD (202) 512-2537. Orders may be paid for using American Express, Discover Card, MasterCard, Visa, check, or money order. Call for additional information. Connect with GAO: Connect with GAO on facebook, flickr, twitter, and YouTube. Subscribe to our RSS Feeds or E-mail Updates. Listen to our Podcasts. Visit GAO on the web at [hyperlink, http://www.gao.gov]. To Report Fraud, Waste, and Abuse in Federal Programs: Contact: Website: [hyperlink, http://www.gao.gov/fraudnet/fraudnet.htm]; E-mail: fraudnet@gao.gov; Automated answering system: (800) 424-5454 or (202) 512-7470. Congressional Relations: Katherine Siggerud, Managing Director, siggerudk@gao.gov: (202) 512-4400: U.S. Government Accountability Office: 441 G Street NW, Room 7125: Washington, DC 20548. Public Affairs: Chuck Young, Managing Director, youngc1@gao.gov: (202) 512-4800: U.S. Government Accountability Office: 441 G Street NW, Room 7149: Washington, DC 20548. [End of document]