No Child Left Behind Act:

Enhancements in the Department of Education's Review Process Could Improve State Academic Assessments

GAO-09-911: Published: Sep 24, 2009. Publicly Released: Sep 24, 2009.

Contact:

Cornelia M. Ashby
(202) 512-3000
contact@gao.gov

Office of Public Affairs
(202) 512-4800
youngc1@gao.gov

The No Child Left Behind Act of 2001 (NCLBA) requires states to develop high-quality academic assessments aligned with state academic standards. Since 2002, Education has provided states with about $400 million annually to implement NCLBA assessments. GAO examined (1) changes in reported state expenditures on assessments, and how states have spent funds; (2) factors states have considered in making decisions about question (item) type and assessment content; (3) challenges states have faced in ensuring that their assessments are valid and reliable; and (4) the extent to which Education has supported state efforts to comply with assessment requirements. GAO surveyed assessment directors in the states and the District of Columbia; analyzed Education and state documents; and interviewed assessment officials in four states (Maryland, Rhode Island, South Dakota, and Texas) and eight school districts, as well as assessment vendors and experts.

States reported that their overall annual expenditures for assessments have increased since passage of the No Child Left Behind Act of 2001 (NCLBA), which amended the Elementary and Secondary Education Act of 1965 (ESEA), and that assessment development was the largest expense for most states. Forty-eight of the 49 states that responded to GAO's survey said that annual expenditures for ESEA assessments have increased since NCLBA was enacted, and over half of the states reported that overall expenditures grew because of the development of new assessments. Test and question (also referred to as item) development was the expense states most frequently reported as their largest for ESEA assessments, followed by scoring. Officials in selected states reported that alternate assessments for students with disabilities were more costly than general population assessments. In addition, 19 states reported that state fiscal cutbacks had reduced their assessment budgets.

Cost and time pressures have influenced state decisions about assessment item type, such as multiple choice or open/constructed response, and about assessment content. States most often chose multiple choice items because they can be scored inexpensively within the tight time frames created by the NCLBA requirement to release results before the next school year. State officials also reported facing trade-offs between assessing highly complex content and accommodating cost and time pressures. As an alternative to relying mostly on multiple choice, some states have developed practices, such as pooling resources from multiple states to take advantage of economies of scale, that let them reduce costs and use more open/constructed response items.

The challenges states faced in ensuring valid and reliable assessments involved staff capacity, alternate assessments, and assessment security. State capacity to oversee vendors varied, both in the number of state staff and in their measurement-related expertise, and states have been challenged to ensure the validity and reliability of alternate assessments. In addition, GAO identified several gaps in assessment security policies, gaps that could affect validity and reliability but were not addressed in Education's process for reviewing state assessments. An Education official said that assessment security was not a focus of the review; the review process was developed before recent efforts to identify best practices in assessment security.

Education has provided assistance to states, but issues remain with communication during the review process. Education provided assistance in a variety of ways, and states reported that they most often used written guidance and Education-sponsored meetings and found them helpful. However, Education's review process did not allow states to communicate with reviewers to clarify issues during the review, which led to miscommunication. In addition, state officials were in some cases unclear about which review issues they were required to address because Education did not explain to states why its decisions differed from the reviewers' written comments.

Recommendations for Executive Action

  1. Status: Closed - Implemented

    Comments: Education stated that it recognized the value of this recommendation. On May 31, 2013, Education provided monitoring questions, based on best practices in test security, that it was field testing for inclusion in new monitoring protocols.

    Recommendation: To help ensure the validity and reliability of ESEA assessments, the Secretary of Education should update Education's peer review protocols to incorporate best practices in assessment security when they become available in spring 2010.

    Agency Affected: Department of Education

  2. Status: Closed - Implemented

    Comments: Education said that it was looking into using a secure server as a means for state officials to communicate with reviewers during the peer review process and that this would strengthen the process. In 2012, Education reported that it had incorporated real-time e-mail communication during the peer review process, using staff's secure e-mail accounts within the Department's system.

    Recommendation: To improve the efficiency of Education's peer review process, the Secretary of Education should develop methods for peer reviewers and states to communicate directly during the peer review process so questions that arise can be addressed quickly. For example, peer reviewers could be assigned a generic e-mail address that would allow them to remain anonymous but still allow them to communicate directly with states.

    Agency Affected: Department of Education

  3. Status: Closed - Not Implemented

    Comments: The Department of Education (ED) indicated that, in response to the recommendation, it would conduct a conference call in advance of upcoming peer reviews to clarify why ED's decisions in some cases differed from peer reviewers' written comments. In 2012, ED reported that it had been taking this action since March 2009. However, March 2009 predates the release of GAO's September 2009 report, and as of the report's release GAO still found a need for improvement. In May 2013, Education stated that it was suspending peer review in order to revise the process.

    Recommendation: To improve the transparency of its approval decisions pertaining to states' standards and assessment systems and help states understand what they need to do to improve their systems, in cases where the Secretary of Education's peer review decisions differed from those of the reviewers, the Secretary should explain why they differed.

    Agency Affected: Department of Education
