This is the accessible text file for GAO report number GAO-08-874 
entitled 'Elections: States, Territories, and the District Are Taking a 
Range of Important Steps to Manage Their Varied Voting System 
Environments' which was released on September 25, 2008.

This text file was formatted by the U.S. Government Accountability 
Office (GAO) to be accessible to users with visual impairments, as part 
of a longer term project to improve GAO products' accessibility. Every 
attempt has been made to maintain the structural and data integrity of 
the original printed product. Accessibility features, such as text 
descriptions of tables, consecutively numbered footnotes placed at the 
end of the file, and the text of agency comment letters, are provided 
but may not exactly duplicate the presentation or format of the printed 
version. The portable document format (PDF) file is an exact electronic 
replica of the printed version. We welcome your feedback. Please E-mail 
your comments regarding the contents or accessibility features of this 
document to Webmaster@gao.gov. 

This is a work of the U.S. government and is not subject to copyright 
protection in the United States. It may be reproduced and distributed 
in its entirety without further permission from GAO. Because this work 
may contain copyrighted images or other material, permission from the 
copyright holder may be necessary if you wish to reproduce this 
material separately. 

Report to the Chairman, Committee on Rules and Administration, U.S. 
Senate: 

United States Government Accountability Office: 
GAO: 

September 2008: 

Elections: 

States, Territories, and the District Are Taking a Range of Important 
Steps to Manage Their Varied Voting System Environments: 

GAO-08-874: 

GAO Highlights: 

Highlights of GAO-08-874, a report to the Chairman, Committee on Rules 
and Administration, U.S. Senate. 

Why GAO Did This Study: 

Our Nation’s overall election system depends on all levels of 
government and the interplay of people, processes, and technology, 
which includes the voting systems that are used during an election. GAO 
has previously reported on issues and challenges associated with 
ensuring that voting systems are secure and reliable. The states, 
territories, and the District of Columbia (District) each play a 
pivotal role in managing voting systems to ensure that they perform as 
intended. 

In light of this role, GAO was asked to answer the following questions 
relative to states, territories, and the District: (1) what voting 
methods and systems are these entities using in federal elections and 
what changes are underway; (2) how do they certify or otherwise approve 
voting systems; (3) what other steps do they take to ensure the 
accuracy, reliability, and security of voting systems; (4) how do they 
identify, evaluate, and respond to voting system problems; and (5) how 
do they view federal voting system-related resources and services. 

To accomplish this, GAO conducted a Web-based survey of election 
officials in all 50 states, the four U.S. territories, and the District 
and received responses from all but three states; contacted the 
officials to better understand their approaches and issues; and 
reviewed documentation provided by survey respondents and other 
contacts. 

What GAO Found: 

The mix of voting methods and systems that were used in the 2006 
general election varied across states, territories, and the District, 
and this mix is not expected to change substantially for the 2008 
general election. This variety is due to several factors, but 
particularly the degree of influence that these governments have 
exerted over local jurisdictions in selecting systems. 

In establishing their voting environments, states, territories, and the 
District reported approving or otherwise certifying their systems 
against requirements and described largely similar approaches in doing 
so. Further, they reported facing some of the same challenges, such as 
ensuring that vendors meet requirements and completing the approval 
process on time; and identified steps they have taken to address these 
challenges. 

To further ensure that their approved systems performed as intended, 
these entities also reported conducting one or more types of 
postapproval voting system testing—acceptance, readiness, Election Day 
parallel, postelection audit, and security. Certain types of tests—such 
as acceptance and readiness—were reported as being conducted by many 
states, territories, and the District, while others—such as 
parallel—were reported as being employed by only a handful. The manner 
of performing the tests also varied. 

Notwithstanding their system approval and testing efforts, most states, 
territories, and the District have reported experiencing 
problems on Election Day. While these entities largely described the 
problems as isolated and having minimal impact, a few reported that 
they experienced problems that were more widespread and significant. 
However, the full scope of the problems that may have been experienced 
is not clear because states and others reported that local 
jurisdictions were generally not required to report problems. To 
address this, a few states and territories reported that they are 
becoming more active in identifying and resolving problems, for 
instance, by developing policies and procedures to address them. 
However, election officials also cited related challenges, such as 
determining the cause of the problems and appropriate corrective 
actions. 

To aid states, territories, and the District in managing their voting 
system environments, the federal government, through the Election 
Assistance Commission, provides a number of services and resources, 
such as federal certification of systems and guidance. With the 
exception of the timing of the certification process, most entities 
reported that they are largely satisfied with these services and 
resources, although some are not satisfied. 

While these governments follow largely similar approaches to approving and 
testing systems and to resolving voting system problems, differences in how 
each executes these approaches offer important opportunities to share 
knowledge and experience. To the extent that this occurs, the manner in 
which systems perform on Election Day can only improve. 

To view the full product, including the scope and methodology, click on 
[hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-08-874]. To view 
GAO’s survey of election officials, click on GAO-08-1147SP. For more 
information, contact Randolph C. Hite at (202) 512-3439 or 
hiter@gao.gov. 

[End of section] 

Contents: 

Letter: 

Results in Brief: 

Background: 

States', Territories', and the District's Voting Environments Largely 
Consist of Multiple Methods and Systems, and Have Been Influenced by 
Various Factors: 

States, Territories, and the District Have Largely Defined Similar 
Approaches and Face Common Challenges in Approving Voting Systems: 

States, Territories, and the District Required and Conducted a Range of 
Tests after System Approval and Faced a Variety of Testing Challenges: 

States, Territories, and the District Generally Reported Minor Voting 
System Problems, Diverse Responses, and Challenges in Addressing Them: 

States, Territories, and the District Are Largely Satisfied with 
Federal Voting System Resources and Services, but Their Use Varies: 

Concluding Observations: 

Appendix I: Objectives, Scope, and Methodology: 

Appendix II: GAO Contact and Staff Acknowledgments: 

Glossary: 

Related GAO Products: 

Tables: 

Table 1: Capabilities Provided by Prevalent Voting Methods and Systems: 

Table 2: Types of Voting System Testing: 

Table 3: Voting Methods that Survey Respondents Plan to Use by Voting 
Stage for the 2008 General Election: 

Table 4: Example of Voting Methods, Manufacturers, and Voting System 
Models Planned for Use in One State for the 2008 General Election: 

Table 5: Types, Purposes, and Circumstances of Qualified Approval with 
Number of States that Have Provisions for Each Type: 

Table 6: Circumstances Reported by States and Others for Revoking 
Voting System Approval: 

Table 7: Types and Purposes of Approval-Related Testing: 

Table 8: States' Approaches for Addressing Federal Certification 
Requirements for the 2008 Election: 

Table 9: EAC's Guidance Applicable to Voting Systems: 

Table 10: Method Used to Contact States, Territories, and the District: 

Figures: 

Figure 1: DRE System: 

Figure 2: Precinct Count Optical Scan Tabulator and Central Count 
Optical Scan Tabulator: 

Figure 3: Ballot Marking Device: 

Figure 4: Conceptual Depiction of a Voting System Life Cycle Model: 

Figure 5: Number of Voting Methods That Survey Respondents Plan to Use 
for the 2008 General Election: 

Figure 6: Reported Involvement by States and Others in the Selection of 
Voting Systems for the 2004 and 2008 General Elections: 

Figure 7: Number of Voting Systems Planned for Use in the 2008 General 
Election in Relation to the Reported Type of Involvement by States and 
Others in Voting System Selection: 

Figure 8: Voting System Approval Requirements Reported by States and 
Others for 2008: 

Figure 9: Circumstances for Reapproving Voting Systems as Reported by 
States and Others: 

Figure 10: General Steps that States and Others Follow in Approving 
Voting Systems: 

Figure 11: Voting System Approval Challenges Reported by States and 
Others: 

Figure 12: Number of Required Test Types Reported by States and Others 
for the 2006 General Election: 

Figure 13: Types of Postapproval Testing Performed for the 2006 General 
Election as Reported by States and Others: 

Figure 14: Acceptance Testing Requirements Reported by States and 
Others for the 2006 General Election: 

Figure 15: Responsibilities for Performing Acceptance Testing Reported 
by States and Others for the 2006 General Election: 

Figure 16: Readiness Testing Requirements Reported by States and Others 
for the 2006 General Election: 

Figure 17: Responsibilities for Performing Readiness Testing Reported 
by States and Others for the 2006 General Election: 

Figure 18: Parallel Testing Requirements Reported by States and Others 
for the 2006 General Election: 

Figure 19: Responsibilities for Performing Parallel Testing Reported by 
States and Others for the 2006 General Election: 

Figure 20: Postelection Audit Requirements Reported by States and 
Others for the 2006 General Election: 

Figure 21: Responsibilities for Performing Postelection Audits Reported 
by States and Others for the 2006 General Election: 

Figure 22: Security Testing Requirements Reported by States and Others 
for the 2006 General Election: 

Figure 23: Responsibilities for Performing Security Testing Reported by 
States and Others for the 2006 General Election: 

Figure 24: Testing Challenges Reported by States and Others for the 
2006 General Election: 

Figure 25: Voting System Problems Reported by States and Others for the 
2006 General Election: 

Figure 26: Extent of Voting System Problems Reported by States and 
Others for the 2006 General Election by Problem Type: 

Figure 27: Extent of Voting System Problems Reported by States and 
Others for the 2006 General Election: 

Figure 28: Voting System Problem Reporting Requirements Reported by 
States and Others for the 2006 General Election: 

Figure 29: Sources of Information on Voting System Problems Reported by 
States and Others for the 2006 General Election: 

Figure 30: Actions Taken in Evaluating Voting System Problems as 
Reported by States and Others for the 2006 General Election: 

Figure 31: Participation by States and Local Jurisdictions in Problem 
Evaluation Activities as Reported by States and Others for the 2006 
General Election: 

Figure 32: Responsibilities for Corrective Actions to Address Voting 
System Problems as Reported by States and Others for the 2006 General 
Election: 

Figure 33: Recipients of Communications about Voting System Problems as 
Reported by States and Others for the 2006 General Election: 

Figure 34: Challenges Reported by States and Others in Addressing 
Voting System Problems for the 2006 General Election: 

Abbreviations: 

District: District of Columbia: 

DRE: direct recording electronic: 

EAC: Election Assistance Commission: 

FEC: Federal Election Commission: 

HAVA: Help America Vote Act of 2002: 

NASED: National Association of State Election Directors: 

NIST: National Institute of Standards and Technology: 

[End of section] 

United States Government Accountability Office:
Washington, DC 20548: 

September 25, 2008: 

The Honorable Dianne Feinstein: 
Chairman: 
Committee on Rules and Administration: 
United States Senate: 

Dear Madam Chairman: 

Following the 2000 and 2004 general elections, we issued a series of 
reports and testified on virtually every aspect of our nation's overall 
election system, including the many challenges and opportunities 
associated with various types of voting systems.[Footnote 1] In this 
regard, we emphasized that voting systems alone were neither the sole 
contributor nor the solution to the problems that were experienced 
during the 2000 and 2004 elections, and that the election system as a 
whole depended on the effective interplay of people, 
processes, and technology and involved all levels of government. During 
this period, the Congress passed the Help America Vote Act of 2002 
(HAVA),[Footnote 2] which authorized funding for local and state 
governments to make improvements in election administration, including 
upgrading antiquated voting systems. In addition, HAVA created the 
Election Assistance Commission (EAC) to, among other things, provide 
resources and services that states and localities can use to acquire 
and manage voting systems. 

State, territory, and the District of Columbia (the District) 
governments play a key role in ensuring that the mix of voting systems 
used during an election is accurate, secure, and reliable and that any 
problems with these systems are addressed. Accordingly, you asked us to 
answer the following questions relative to the 50 states, 4 U.S. 
territories, and the District: (1) what voting methods and systems they 
are using in federal elections and what changes are underway; (2) how 
they certify or otherwise approve voting systems for use in federal 
elections; (3) what other steps they take to ensure that voting systems 
are accurate, reliable, and secure; (4) how they identify, evaluate, 
and respond to voting system problems; and (5) how they view federal 
voting system-related resources and services. 

To accomplish this, we conducted a Web-based survey (GAO-08-1147SP) of 
election officials in all 50 states, 4 territories, and the District 
regarding their respective requirements, activities, experiences, 
changes, and views relative to: voting methods and systems used; voting 
system approval, testing, and problem management; and federal resources 
and services.[Footnote 3] Three U.S. territories and one commonwealth 
were selected for this review--American Samoa, Guam, the Commonwealth 
of Puerto Rico, and the U.S. Virgin Islands--based on their federally 
mandated requirement to comply with the provisions of HAVA. We obtained 
responses from 47 states, all 4 territories, and the District.[Footnote 
4] Three states (Michigan, New Jersey, and Utah) chose not to respond 
to our survey.[Footnote 5] We also contacted election officials in 
almost every state and territory, and the District, to better 
understand and illustrate their respective approaches and issues, and 
obtained and reviewed relevant documentation from these officials and 
their Web sites. The scope of this work did not include contacting 
election officials from local jurisdictions to verify survey responses 
or other information provided by state officials. 

We conducted this performance audit from October 2007 to September 2008 
in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain 
sufficient, appropriate evidence to provide a reasonable basis for our 
findings and conclusions based on our audit objectives. We believe that 
the evidence obtained provides a reasonable basis for our findings and 
conclusions based on our audit objectives. Further details of our 
objectives, scope, and methodology are included in appendix I. 

Results in Brief: 

The mix of voting methods and systems that were used in the 2006 
general election, and the mix that is expected to be used in the 
upcoming 2008 general election, vary across states, territories, and 
the District. These mixes were due largely to several factors, 
particularly the degree of influence that the states, territories, and 
the District have exerted over local jurisdictions in selecting 
systems. 

In establishing their voting system environments, states, territories, 
and the District reported approving or otherwise certifying their 
systems against their respective requirements. Moreover, they reported 
that they employed similar basic approval approaches, and they have 
faced some of the same challenges. To further ensure that their 
approved mixes of systems performed as intended during an election, 
most of these entities also reported conducting one or more types of 
postapproval tests. While some of these tests were conducted by almost 
all states, territories, and the District, others were confined to only 
a handful of these entities. Notwithstanding their efforts to approve 
and subsequently test their systems, they reported experiencing 
problems on Election Day. Most states and territories, and the 
District, described these problems as isolated and as having minimal 
impact on elections, although a few states reported more widespread and 
significant problems. Overall, however, the full scope of voting system 
problems that have been experienced is unclear because local 
jurisdictions generally do not have to report problems. To address 
this, a few states and territories have become more active in 
identifying and resolving problems, and a number have reported taking 
actions to overcome a range of challenges that many states and 
territories share. 

To aid states, territories, and the District in managing their 
respective voting system environments, the federal government, through 
EAC, provides voting system-related services and resources, such as 
federal certification of systems and guidance pertaining to systems. 
With the exception of the timing of federal certification of systems, 
most states, the territories, and the District reported that they are 
largely satisfied with these services and resources. 

Multiple Voting Methods and Systems Continue to Be Used in Elections, 
with the Mix Being Heavily Influenced by the Roles States, Territories, 
and the District Play in Selecting Systems: 

States, territories, and the District reported using a mix of voting 
methods and systems for the 2006 general election, and few changes to 
this mix are expected for the 2008 general election. For most states 
and one territory, this mix will typically consist of at least 
two different methods across the election stages,[Footnote 6] with the 
most common number being four. Moreover, the mix of systems planned for 
the 2008 elections continues to mostly include direct recording 
electronic (DRE), precinct count optical scan, and central count 
optical scan, although ballot marking devices and vote-by-phone systems 
are becoming more prevalent. 

A key factor that has influenced each mix of systems is the level of 
state, territory, and District involvement in the selection of voting 
systems for their local jurisdictions. For the 2008 general election, 
most states and all four territories reported that they will either 
select voting systems for jurisdictions or provide jurisdictions with a 
list of approved voting systems from which to select. Moreover, states 
and territories that select voting systems for local jurisdictions 
generally plan to use fewer voting systems for the 2008 general 
election than do states that use other approaches. Other factors that 
have influenced selection of voting methods and systems for 2008 and 
may continue to do so are compliance with state and federal 
requirements, availability of funding to purchase voting equipment, and 
voter concerns with existing systems. 

Approval of Voting Systems Is Governed by Largely Similar Approaches 
and Generally Affected by the Same Challenges: 

State, territory, and District statutes largely specify requirements 
and responsibility for approving voting systems to be used in an 
election. Specifically, 43 states, 2 territories, and the District 
reported having requirements for approving or otherwise certifying 
voting systems, and their respective requirements are mostly captured 
in statute. The remaining states and territories have requirements that 
have been administratively established. 

Regardless of the basis for their approval requirements, states, 
territories, and the District largely follow a similar series of basic 
steps in approving voting systems. These steps are (1) establishing 
standards or criteria; (2) evaluating documentation; (3) testing 
systems to state standards and examining test results; and (4) making 
an approval decision--all while involving the public and resolving any 
system problems that arise during the process. However, the nature and 
extent of the specific approval activities conducted as part of these 
broad steps vary. For example, the testing performed by 
some states ranges from system demonstrations using mock elections to 
source code reviews. 

In addition, responsibility for performing approval activities varies 
across states, territories, and the District. For example, the approval 
authorities for 12 states and 1 territory rely solely on their election 
staff to perform the various approval activities, while the approval 
authorities in 28 states, 1 territory, and the District rely on two or 
more stakeholders. The approval authority is typically the state's 
secretary of state or the state's election board or committee, although 
the approval authority may delegate responsibility for performing 
certain approval steps to other stakeholders, such as the state chief 
information officer or chief technology officer. 

States and territories also face similar challenges in approving voting 
systems. The most frequently cited challenges are ensuring that vendors 
meet system requirements; ensuring that voters' concerns are 
considered; having sufficient qualified staff and facilities to conduct 
tests; and ensuring that the approval process is timely. 

A Range of Tests Were Required and Performed after Voting System 
Approval, and the Scope and Approach to Performing Them Varied: 

For the 2006 general election, most states and others reported that 
they required more than one type of postapproval voting system testing 
to be performed. Of the five types of testing--acceptance, readiness 
(logic and accuracy), parallel, postelection audit, and security--about 
one-third of the states, territories, and the District reported 
requirements for at least four types, in addition to the testing 
required as part of system approval. In contrast, a small number of 
states reported that they required only readiness testing, which was 
the most frequently cited type of testing performed, as it is intended 
to determine a system's readiness just prior to use in an election. 
Moreover, those entities that required readiness testing typically 
reported similar testing approaches (i.e., using test ballots to 
exercise system recording, tabulation, and reporting functions; 
verifying the completeness and accuracy of test results; and sealing 
the systems until they were activated on Election Day). 
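To make this general approach concrete, the following sketch (written in 
Python, with hypothetical ballot and contest names) runs a predefined test 
deck through a tally function and compares the results to the totals the 
deck was designed to produce. It is a simplified illustration of logic and 
accuracy testing under assumed data structures, not any jurisdiction's 
actual procedure. 

from collections import Counter 

def tally(ballots): 
    # Stand-in for the system's recording and tabulation functions: 
    # count the selection marked for the single contest on each test ballot. 
    return Counter(ballot["Governor"] for ballot in ballots) 

# Predefined test deck whose expected totals are known in advance. 
test_deck = ( 
    [{"Governor": "Candidate A"}] * 3 + 
    [{"Governor": "Candidate B"}] * 2 
) 
expected = {"Candidate A": 3, "Candidate B": 2} 

results = tally(test_deck) 
if dict(results) == expected: 
    print("Readiness test passed; seal the system until Election Day.") 
else: 
    print("Readiness test failed:", dict(results), "expected", expected) 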

With respect to the other four types of testing, many states, one 
territory, and the District reported employing acceptance testing, 
which determines whether the delivered voting equipment meets state or 
local requirements. Further, many states, territories, and the District 
reported that they conducted security tests. Relatively few states 
reported performing parallel testing during elections, primarily 
because they were not statutorily required to do so, or they did not 
have sufficient voting units or funding. Several states and the 
District also reported requirements for postelection audit testing, 
which largely consisted of verifying election totals by recounting the 
recorded votes. For example, one state manually recounted a random 
sample of at least one percent of the precincts, while another state 
used voter-verified paper audit trails to verify election totals. 
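As an illustration of the kind of sampling and comparison such an audit 
involves, the following sketch (written in Python, using hypothetical 
precinct names and vote totals) selects at least one percent of precincts 
at random and flags any precinct whose hand recount does not match the 
machine count. It is a simplified illustration of the general technique, 
not a description of any particular state's procedure. 

import math 
import random 

def select_audit_sample(precinct_ids, fraction=0.01, seed=None): 
    # Choose at least `fraction` of the precincts at random for a hand recount. 
    sample_size = max(1, math.ceil(len(precinct_ids) * fraction)) 
    return random.Random(seed).sample(precinct_ids, sample_size) 

def find_discrepancies(machine_totals, hand_totals): 
    # Return the precincts whose hand recount differs from the machine count. 
    return [p for p, votes in machine_totals.items() if hand_totals.get(p) != votes] 

# Hypothetical data: machine tallies and the totals entered by auditors 
# after a hand recount of the sampled precincts. 
machine_totals = {"P-001": 412, "P-002": 388, "P-003": 510} 
hand_totals = {"P-001": 412, "P-002": 388, "P-003": 510} 
sampled = select_audit_sample(list(machine_totals), fraction=0.01, seed=2006) 
print(find_discrepancies({p: machine_totals[p] for p in sampled}, 
                         {p: hand_totals[p] for p in sampled})) 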

Across all types of testing, the states, territories, and the District 
varied as to the timing, scope, and activities performed, as well as 
the personnel involved. For instance, several states reported that 
their security testing focused on assessing the physical security of 
the systems and the facilities in which they were stored, while a few 
others also performed a wide range of security reviews, such as risk 
assessments, source code reviews, and penetration tests.[Footnote 7] 
Also, while some states and territories reported testing all voting 
system units, others tested only selected units. Moreover, while most 
testing was performed by local jurisdictions with guidance from the 
states, several states also performed these tests using state staff, 
vendors, or contractors. 

States, territories, and the District generally characterized challenges 
related to having sufficient testing resources and executing testing 
activities in a timely manner as minor. Nevertheless, roughly half of the 
respondents reported experiencing such challenges, and a handful of states 
viewed them as major. 

Nature and Extent of Reported Voting System Problems Were Not Viewed as 
Significant, Although Related Challenges Suggest Complete Information 
May Not Be Available: 

States, territories, and the District reported experiencing a variety 
of problems with their voting systems during the 2006 general election, 
but identified few instances of problems occurring at multiple 
locations and largely characterized the problems as occurring to little 
extent and with little impact. The most frequently reported problems 
were systems where paper jammed or was improperly fed or imprinted; 
systems that stopped operating or would not operate at all during the 
election; systems with slow response time; and systems that did not 
tabulate votes correctly. Furthermore, 12 states reported that they had 
experienced these problems and one other to a moderate or great extent. 

The extent to which states and others are aware of system problems is 
unclear because less than one-half of them required local jurisdictions 
to report problems that arose during the 2006 election, relying instead 
on voluntary reporting by local jurisdictions, voters, and voting 
system vendors. Nevertheless, many respondents reported that they and 
their local jurisdictions evaluated problems after the election, for 
example, through reviews of system logs and reports, audits, 
investigations, recounts of election results, and system retests. They 
also reported that both levels of government were involved in 
implementing corrective actions, and that many respondents developed 
new policies and procedures to address and correct the problems. 

About one-half of the states and the District reported facing multiple 
challenges in managing voting system problems that arose in the 2006 
election. The most-reported challenges were determining the causes of 
problems and identifying, evaluating, and selecting corrective actions, 
but challenges with adequate funding, staffing, and training to correct 
problems were also reported. State officials also described various 
actions they have taken to overcome these challenges. 

Federal Voting System Services and Resources Generally Are Viewed 
Favorably: 

The federal government, through EAC, has made various products and 
services available to our nation's elections community, 
including federal certification of voting systems, voluntary voting 
system guidelines, accredited voting system testing laboratories, and 
election administration and voting system management guidance. With 
respect to certification, approximately one-third of the states reported 
plans to purchase new systems for use in the 2008 election, which would 
require federal certification. Because none of these systems had been 
certified by EAC as of May 2008, these states reported that they intend 
either to forgo planned system replacements and upgrades for the 2008 
general election or to seek other ways to satisfy state statutes or 
directives that require federal certification. 

Except for the timing of EAC's certification of systems, most states, 
territories, and the District reported that they were generally 
satisfied with EAC services and resources to the extent that they 
expressed any view on them. For example, over one-half reported 
satisfaction with the comprehensiveness, clarity, or ease of use of the 
voluntary voting system guidelines, although one state noted that the 
guidelines may be too demanding to allow any voting systems to be 
certified within a reasonable time frame. Most respondents reported 
that they were also satisfied with EAC's quick start management guides, 
which provide recommended practices for state and local election 
officials in areas such as voting system certification, acceptance 
testing, ballot preparation and printing and pre-election testing, and 
voting system security. With respect to accredited test laboratories, 
two states reported that they were using them in support of their 
respective voting system approval processes. 

The role that states, territories, and the District play in ensuring 
that their unique voting system environments perform as intended on Election 
Day is significant. While the general approaches that each follows to 
carry out this role relative to approving and testing systems and 
resolving system problems are largely similar, the details surrounding 
how these approaches are executed show differences. These differences 
offer important opportunities for states, territories, and the District 
to leverage shared knowledge and experience in evolving their 
respective approaches. Other opportunities exist to learn from and 
address state, territory, and District views and perspectives on 
federal services and resources. To the extent that this occurs, 
the manner in which voting systems perform on Election Day can only 
improve. 

Background: 

The fairness and accuracy of the U.S. election system is a foundation 
of our democracy. Within this system, each of the 50 states, 4 
territories, and the District plays a pivotal role and has a somewhat 
distinct approach to accomplishing these goals. The U.S. election 
system also involves the interaction of people at all levels of 
government, year-round preparation and planning, and a range of 
technologies, such as electronic voting systems. 

Following the 2000 general election, we issued a series of reports 
addressing a range of issues and challenges associated with voting 
systems.[Footnote 8] These reports also identified challenges that 
election officials reported facing in major stages of the election 
process. Subsequently, the Congress passed the Help America Vote Act of 
2002 (HAVA) to help states upgrade antiquated voting equipment and 
technologies and support them in making federally mandated improvements 
to their voting systems. Since the 2004 general election, we have 
issued voting system-related reports on system security and reliability 
and on evolving voting system methods, technologies, and management 
practices. 

The Overall U.S. Election System Relies on All Levels of Government and 
the Interplay of People, Processes, and Technology: 

All levels of government--federal, state, and local--share 
responsibilities for elections and voting systems. Regardless of the 
level of government, election administration is a year-round activity, 
involving varying groups of people and a range of technologies 
performing activities within each stage of the election process. 

Election Authority and Responsibility Spans All Levels of Government: 

Election authority and responsibility in the United States is shared by 
federal, state, and local governments. At the federal level, the 
Congress has authority under the Constitution to regulate the 
administration of presidential and congressional elections. In this 
regard, it has passed legislation affecting the administration of state 
elections in several major areas of the voting process, such as HAVA. 
However, the Congress does not have general constitutional authority 
over the administration of state and local elections. 

Individual states, territories, and the District are responsible for 
the administration of both their own elections and federal elections. 
Each regulates its respective elections through legislation, 
administrative codes, executive directives, or other mechanisms, which 
establish requirements, policies, and procedures for adopting voting 
system standards, testing voting systems, ensuring ballot access, 
establishing registration procedures, determining absentee voting 
requirements, establishing voting locations, providing Election Day 
workers, and counting and certifying the vote. Thus, the U.S. election 
process can be seen as an assemblage of 55 somewhat distinct election 
systems--one for each of the 50 states, the 4 territories, and the 
District. 

Further, although election policy and procedures are legislated 
primarily at the state level, states typically have decentralized 
election administration so that the details are carried out at the city 
or county levels. This is important because there are more than 10,000 
local election jurisdictions and their sizes vary enormously--from a 
rural county with about 200 voters to a large urban county, such as Los 
Angeles County, where the total number of registered voters for the 
2000 elections exceeded the registered voter totals in 41 states. 
[Footnote 9] 

Election Administration Is a Multi-step Process: 

Election administration is a year-round process, involving key 
activities that are performed within four stages of the election 
process.[Footnote 10] These stages, and the activities that comprise 
them, are as follows: 

* Voter registration. Among other things, local election officials 
register eligible voters and maintain voter registration lists, 
including updates to registrants' information and deletions of the 
names of registrants who are no longer eligible to vote. 

* Absentee and early voting. This type of voting allows eligible 
persons to vote in person or by mail before Election Day. Election 
officials must design ballots and other systems to permit this type of 
voting and educate voters on how to vote by these methods. 

* Election Day voting. In preparation for Election Day, a range of 
activities are performed, such as arranging locations for polling 
places, recruiting and training poll workers, designing ballots, and 
preparing and testing voting equipment for use in casting and 
tabulating votes. On Election Day, key activities include opening and 
closing polling places and assisting voters in casting votes. 

* Vote counting and certification. Once polls are closed, the cast 
ballots are tabulated, decisions are made whether and how to count 
ballots that cannot be read by the vote-counting equipment, the final 
vote counts are certified, and recounts or audits are performed, if 
required. 

Voting systems are primarily involved in the last three of these 
stages, during which votes are recorded, cast, and counted. 

Electronic Voting Systems Support Vote Casting and Counting: 

The technology used to cast and count votes is one essential part of 
the multifaceted U.S. election process. In the United States today, 
votes are cast, and in some instances counted, by electronic voting 
methods: optical scan, direct recording electronic, ballot marking 
device, and vote-by-phone.[Footnote 11] In addition, some jurisdictions 
use election management systems to integrate vote casting and 
tabulating functions for a given election with other election 
management functions. Table 1 shows the critical vote casting and 
tabulating functions offered by different systems. 

Table 1: Capabilities Provided by Prevalent Voting Methods and Systems: 

Voting method or system: Direct recording electronic; 
Marks ballot: [Check]; 
Casts ballot: [Check]; 
Tabulates ballot: [Check]. 

Voting method or system: Optical scan; 
Marks ballot: [Empty]; 
Casts ballot: [Check]; 
Tabulates ballot: [Check]. 

Voting method or system: Ballot marking device; 
Marks ballot: [Check]; 
Casts ballot: [Empty]; 
Tabulates ballot: [Empty]. 

Voting method or system: Vote-by-phone; 
Marks ballot: [Check]; 
Casts ballot: [Empty]; 
Tabulates ballot: [Empty]. 

Voting method or system: Election management system; 
Marks ballot: [Empty]; 
Casts ballot: [Empty]; 
Tabulates ballot: [Check]. 

Source: GAO. 

[End of table] 

Before voting equipment can be used in any given election to perform 
these functions, it must be programmed to accommodate the specific 
characteristics of that election, including preparing a ballot that is 
unique to that election and, depending on the voting equipment, 
programming the equipment to present the ballot to the voter and read 
the ballot as voted. Software then downloads the election-specific 
ballot configuration through the use of memory cartridges or other 
media to produce either a digital or paper ballot that lists the names 
of the candidates and the issues to be voted on. On or before Election 
Day, voters record their choices. Some ballots may include a space for 
write-in choices. When voters have finished marking their ballot 
selections, how the ballot is cast and counted varies by voting method. 

A description of four electronic voting methods and election management 
systems follows. 

Direct recording electronic (DRE). These devices capture votes 
electronically, without the use of paper ballots. DREs come in two 
basic models: pushbutton or touchscreen. DRE ballots are marked by a 
voter pressing a button or touching a screen that highlights the 
selected candidate's name or an issue. Voters can change their 
selections until they hit the final "vote" button or screen, which 
casts their vote (see fig. 1). Although these systems do not use paper 
ballots, they can retain permanent electronic images of all the 
ballots, which can be stored on various media, including internal hard 
disk drives, flash cards, or memory cartridges. 

DREs require the use of software to program the various ballot styles 
and tabulate the votes, which is generally done through the use of 
memory cartridges or other media. For pushbutton models, the software 
assigns the buttons to particular candidates, while for touchscreen 
models, the software defines the size and location on the screen where 
the voter makes the selection. DREs offer various configurations for 
tabulating the votes. Some contain removable storage media that can be 
taken from the voting device and transported to a central location to 
be tallied. Others can be configured to electronically transmit the 
vote totals from the polling place to a central tally location. Vote 
tally software often is used to tabulate the vote totals from one or 
more units. These systems also are designed not to allow overvotes 
(i.e., where the voter votes for two candidates for one office, 
invalidating the vote). 

Figure 1: DRE System: 

[Refer to PDF for image] 

Photograph of DRE system. 

Source: GAO. 

[End of figure] 

Optical scan. This method uses electronic technology to tabulate paper 
ballots. An optical scan system is made up of computer-readable paper 
ballots, appropriate marking devices, privacy booths, and a 
computerized tabulation device. Optical scan ballots are marked using 
an appropriate writing instrument to fill in boxes or ovals, or to 
complete an arrow next to a candidate's name or an issue. To cast the 
ballot, voters deposit their ballots into a sealed box to be counted 
either at the polling place--a precinct count optical scan[Footnote 
12]--or at a central location--a central count optical scan. The 
ballots are tabulated by optical-mark-recognition equipment (see fig. 
2), which counts votes by sensing or reading the marks on the ballot. 
Software instructs the tabulation equipment how to assign each vote 
(i.e., to assign valid marks on the ballot to the proper candidate or 
issue). 

Figure 2: Precinct Count Optical Scan Tabulator and Central Count 
Optical Scan Tabulator: 

[Refer to PDF for image] 

Two photographs of Optical Scan Tabulators. 

Source: GAO. 

[End of figure] 

If ballots are counted at the polling place, voters or election 
officials put the ballots into the tabulation equipment, which tallies 
the votes; these tallies can be captured in removable storage media 
that are transported to a central tally location, or they can be 
electronically transmitted from the polling place to the central tally 
location. Some precinct-based optical scanners also now include a 
digital ballot imaging component that digitally reads a voter's ballot 
selection, tabulates the results, and saves a digital image of the 
marked ballot on a memory card for auditing purposes. In addition, 
precinct-based optical scanners can be programmed to detect overvotes 
and undervotes (where the voter does not vote for all contests or 
issues on the ballot) and to take some action in response (such as 
rejecting the ballot). If election officials program precinct-based 
optical scan systems to detect and reject overvotes and undervotes, 
voters can fix their mistakes before leaving the polling place. 
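The logic involved in this kind of check can be sketched briefly. The 
following example, written in Python with a hypothetical ballot layout, 
flags any contest with more selections than allowed (an overvote) or fewer 
(an undervote) so the ballot can be returned to the voter. It is an 
illustrative sketch under assumed data structures, not the logic of any 
actual tabulator. 

def check_ballot(selections, contests): 
    # `contests` maps each contest to the number of selections allowed; 
    # `selections` maps each contest to the choices marked on the ballot. 
    overvotes, undervotes = [], [] 
    for contest, allowed in contests.items(): 
        marked = len(selections.get(contest, [])) 
        if marked > allowed: 
            overvotes.append(contest) 
        elif marked < allowed: 
            undervotes.append(contest) 
    return overvotes, undervotes 

# Hypothetical ballot with two marks in a vote-for-one contest (an overvote) 
# and no mark for the ballot question (an undervote). 
contests = {"Governor": 1, "Question 1": 1} 
selections = {"Governor": ["Candidate A", "Candidate B"], "Question 1": []} 
overvotes, undervotes = check_ballot(selections, contests) 
if overvotes or undervotes: 
    print("Reject ballot so the voter can correct it:", overvotes, undervotes) 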

By contrast, if ballots are centrally counted, election officials 
transfer the sealed ballot boxes to the central location after the 
polls close, where election officials run the ballots through the 
tabulation equipment in the presence of observers. Central count 
optical scanners thus do not allow voters to correct any mistakes that 
may have been made. 

Ballot marking devices. These devices use electronic technology to mark 
an optical scan ballot at voter direction, interpret the ballot 
selections, communicate the interpretation for voter verification, and 
then print a voter-verified ballot. A ballot marking device integrates 
components such as an optical scanner, printer, touch-screen monitor, 
and a navigational keypad (see fig. 3). 

Figure 3: Ballot Marking Device: 

[Refer to PDF for image] 

Illustration of Ballot Marking Device. 

Source: ES&S (Election Systems & Software). 

[End of figure] 

Voters use the device's accessible interface to record their choices on 
a paper or digital ballot. For example, voters with visual impairments 
can use an audio interface as well as a Braille keypad to make their 
selections. Voters who prefer to vote in an alternate language can also 
use the audio interface. Voters with disabilities can make their 
selections using a foot pedal or a sip-and-puff device. These devices do not 
store or tabulate votes electronically. When votes have been recorded 
and verified, they are printed on a standard optical scan ballot that 
must be read, recorded, and tabulated by a precinct-based or central 
count optical scanner. This technology includes functionality to 
prevent overvotes and undervotes. 

Vote-by-phone. Vote-by-phone systems use electronic technology to mark 
paper ballots. This system is made up of a standard touch-tone 
telephone and a printer. Unlike the other electronic voting systems, 
programming of ballots is done manually by an election official at a 
secured location. When voters call from a polling place to connect to 
the system, the ballot is read to the voters who then make choices 
using the telephone keypad. The system then prints out a paper ballot 
at either a central location (central print) or a polling site (fax 
print). Central print ballots are read back to the voter over the phone 
for verification, after which the voter can decide to cast the ballot 
or discard it and revote. Fax print ballots produce a physical ballot 
at the polling place for the voter to review, verify, and cast in a 
ballot box. The system also informs voters of undervotes. 

Election management systems. These systems, which are used in 
conjunction with one of the other types of voting systems, integrate 
the functions associated with preparing vote-casting and tabulating 
equipment for a given election with other election management 
functions. Election management systems run on jurisdictions' existing 
personal computers or vendor-provided election management system 
computers and generally consist of one or more interactive databases 
containing information about a jurisdiction's precincts, the election 
contest, the candidates, and the issues being decided. They can then be 
used to design and generate various ballots, program vote-casting and 
tabulating equipment, and centrally tally and generate reports on 
election progress and results. 
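A minimal sketch of this idea follows, written in Python with hypothetical 
precincts, contests, and results. It shows one simple way an election 
definition could be stored and per-precinct counts rolled up into a central 
tally; the names and structures are illustrative only and are not modeled 
on any vendor's actual product. 

from collections import defaultdict 

# Hypothetical election definition: contests, candidates, and precincts. 
election = { 
    "contests": { 
        "U.S. Senate": ["Candidate A", "Candidate B"], 
        "Question 1": ["Yes", "No"], 
    }, 
    "precincts": ["P-001", "P-002"], 
} 

# Hypothetical per-precinct results uploaded from tabulating equipment. 
precinct_results = { 
    "P-001": {"U.S. Senate": {"Candidate A": 210, "Candidate B": 190}}, 
    "P-002": {"U.S. Senate": {"Candidate A": 175, "Candidate B": 220}}, 
} 

def central_tally(results): 
    # Aggregate per-precinct counts into election-wide totals. 
    totals = defaultdict(lambda: defaultdict(int)) 
    for counts in results.values(): 
        for contest, candidates in counts.items(): 
            for candidate, votes in candidates.items(): 
                totals[contest][candidate] += votes 
    return totals 

print(dict(central_tally(precinct_results)["U.S. Senate"])) 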

HAVA Was Enacted to Strengthen the Overall U.S. Election Process: 

In October 2002, the Congress passed HAVA to provide states, 
territories, and the District with organizations, processes, and 
resources for improving the administration of future federal elections. 
One of the primary HAVA provisions relates to encouraging states and 
others to upgrade antiquated voting systems and technologies and 
authorizing $3.86 billion over several fiscal years to support states 
in making federally mandated improvements to their voting systems. HAVA 
also includes minimum requirements for such systems, including 
providing voters with the ability to verify their votes before casting 
their ballot, producing permanent paper records for manual auditing of 
voting systems, and complying with ballot counting error rates set out 
in specified federal voting system standards. HAVA also requires that 
such systems provide individuals with disabilities the same opportunity 
for access and participation by providing for the use of at least one 
DRE or other voting system equipped for individuals with disabilities 
at each polling place. The deadline for states and jurisdictions to 
comply with specific minimum requirements for voting systems, such as 
producing a paper record for audit purposes, was January 1, 2006. 

In addition, HAVA established EAC and assigned it wide-ranging duties 
to help improve state and local administration of federal elections. To 
assist EAC in establishing voting system standards and performing its 
responsibilities, HAVA established three organizations and levied new 
requirements on a fourth. Specifically, it established a technical 
guidelines committee to develop and recommend voting system standards 
to EAC. To assist in an independent review of these standards, EAC 
chartered, as required by HAVA, a Standards Board, composed of 110 
state, territory, District, and local election officials, and 
established the Board of Advisors to review the voluntary guidelines 
developed by EAC's guidelines committee and provide comments and 
recommendations to EAC. Finally, the act assigned the National 
Institute of Standards and Technology (NIST) responsibility for 
providing technical support to EAC's guidelines committee and made the 
Director of NIST the committee chair. 

Among other things, EAC is responsible for (1) providing voluntary 
guidance to states implementing certain HAVA provisions, (2) serving as 
a national clearinghouse for election-related information and a 
resource for information with respect to the administration of federal 
elections, (3) conducting studies, (4) administering programs that 
provide federal funds for states to make improvements to some aspects 
of election administration, (5) accrediting independent voting system 
test laboratories, and (6) certifying voting systems. EAC is led by 
four commissioners who are to be appointed by the president and 
confirmed by the Senate. The services and resources that EAC provides 
in discharging its responsibilities are discussed below. 

* Providing voluntary guidance. HAVA requires EAC to adopt a set of 
federal voting system standards. In December 2005, EAC adopted the 
voluntary guidelines, which define a set of specifications and 
requirements against which voting systems are to be designed, 
developed, and tested to determine whether they provide the 
functionality, accessibility, and security capabilities required to 
help ensure the integrity of voting systems. As such, the voluntary 
guidelines specify the functional requirements, performance 
characteristics, documentation requirements, and test evaluation 
criteria for the federal certification of voting systems. In 2007, the 
EAC's guidelines committee submitted to EAC the next update to the 
voluntary guidelines. 

* Serving as an information clearinghouse. HAVA requires EAC to 
maintain a clearinghouse of information on the experiences of state and 
local governments relative to, among other things, implementing the 
voluntary voting system guidelines and operating voting systems. As 
part of this responsibility, EAC has created a space on its Web site to 
post or link to voting system reports and studies that have been 
conducted or commissioned by a state or local government that reflect 
its experience in operating a voting system or implementing the 
voluntary guidelines. EAC does not review the information for quality 
and does not endorse the reports and studies. 

* Administering provision of federal funds. HAVA requires EAC to 
administer a program to disburse funding to states for the replacement 
of older voting equipment and election administration improvements 
under Title III of HAVA. EAC began distributing funds in 2004 for (1) 
helping states meet HAVA's Title III requirements for uniform and 
nondiscriminatory election technology and administration, including the 
act's requirements pertaining to voting system standards; (2) 
provisional voting; (3) voting information; (4) a computerized 
statewide voter registration list; and (5) identification of first-time 
voters who register to vote by mail. 

* Accrediting independent test laboratories. HAVA assigned 
responsibilities for laboratory accreditation to both EAC and NIST. In 
general, NIST focuses on assessing laboratory technical qualifications 
and recommends laboratories to EAC for accreditation. EAC uses NIST's 
assessment results and recommendations, and augments them with its own 
review of related laboratory testing documentation to reach an 
accreditation decision. 

* Certifying voting systems. HAVA requires EAC to provide for the 
testing, certification, decertification, and recertification of voting 
system hardware and software. According to EAC's Testing and 
Certification Program Manual, EAC certification means that a voting 
system has been successfully tested by an accredited, independent 
testing laboratory; meets requirements set forth in a specific set of 
federal voting system standards; and performs according to the vendor's 
specifications.[Footnote 13] 

For fiscal year 2007, EAC's appropriation totaled $16.2 million. EAC 
reported that this included $6.7 million (48.4 percent) for activities 
related to improving voting technology, such as accrediting voting 
system laboratories and managing the voting system certification 
process; $2.7 million (19.5 percent) for EAC administration activities 
and Federal Register notices; $2.4 million (17.1 percent) for HAVA 
funds management activities; and $1.8 million (13.3 percent) for the 
production and distribution of election management guidelines and 
related quick start management guides. The remaining funds went toward 
meetings for the Standards Board and Board of Advisors. EAC's budget 
for fiscal year 2008 is $16.53 million and its budget request for 
fiscal year 2009 is around $16.7 million. 

Management of Voting System Performance Is a Continuous Process: 

As we previously reported,[Footnote 14] the effective management of 
voting systems extends beyond Election Day activities and is a 
continuous process that involves the interplay of people, processes, 
and technology during the entire life of a system. The performance of 
these systems is heavily influenced by a number of factors, including 
how well the system is defined, developed, acquired, tested, operated, 
and managed. 

The development of a voting system starts with an explicit definition 
of what the system is to do and how well it is to do it. These 
requirements are then translated into design specifications that are 
used to develop the system. Electronic voting systems are typically 
developed by vendors, then purchased as commercial, off-the-shelf 
products and operated by state and local election administrators. 
During the three phases of a system (development, acquisition, and 
operations), a range of tests is performed and the process is managed 
to ensure that performance expectations are met. Together, these 
activities form a voting system life cycle (see fig. 4). 

Figure 4: Conceptual Depiction of a Voting System Life Cycle Model: 

[Refer to PDF for image] 

This figure is an illustration of the conceptual depiction of a voting 
system life cycle model. 

Sources: GAO analysis of NIST, IEEE, and EAC publications. 

[End of figure] 

Successful implementation of the three key phases of a voting system's 
life cycle requires the coordinated efforts of vendors, state 
officials, and local governments: 

* Requirements/standards. Voting system standards define the functional 
and performance requirements that must be met, and thus provide the 
baseline against which systems are developed, acquired, and tested. 
They also specify how the systems should be operated and managed. 
Voting system standards apply to system hardware, software, firmware, 
and documentation, and they span prevoting, voting, and postvoting 
activities. In addition to national standards, some states and local 
jurisdictions have specified their own voting system requirements. They 
include the functional and performance requirements that are contained 
in state statutes, administrative codes, policies, procedures, and best 
practices. These requirements also provide the baseline against which 
voting systems are developed, approved, acquired, tested, operated, and 
managed. 

* Development. Product development is performed by the voting system 
vendor and includes defining more detailed system requirements, 
designing the system specifications, developing software, integrating 
hardware and software components, and testing the integrated system. 

* Acquisition. Voting system acquisition activities are performed by 
state and local governments and include publishing a solicitation, 
evaluating offers, choosing a voting system method, choosing a vendor, 
awarding and administering contracts, and testing the acquired system. 

* Operations. Operation of voting systems is typically the 
responsibility of local jurisdictions, whose officials may, in turn, 
rely on or obtain assistance from system vendors. These activities 
include ballot design and programming, setting up systems before 
voting, pre-election testing, vote capture and counting during 
elections, recounts and system audits after elections, and storage of 
systems between elections. Among other things, this phase includes 
activities associated with the physical environments in which the 
system operates. These include ensuring the physical security of the 
polling place and voting equipment and controlling the chain of custody 
for voting system components and supplies. The operations phase also 
includes monitoring the election process by use of system audit logs 
and backups, and the collection, analysis, reporting, and resolution of 
election problems. 

* Testing. Testing is conducted by multiple entities throughout the 
system life cycle. Vendors conduct testing during system development, 
for example. National testing of systems is conducted by the EAC- and 
NIST-accredited voting system testing laboratories. As described in 
depth later in this report, states perform a range of tests prior to 
approving or otherwise certifying a system, as well as after system 
approval but prior to the system's use in an election. Types of voting 
system testing include: certification testing (federal level), 
certification/approval testing (state level), acceptance testing, 
readiness (logic and accuracy) testing, security testing, Election Day 
parallel testing, and postelection voting system audits. Table 2 
summarizes these types of tests; an illustrative sketch of readiness 
testing follows this list. 

Table 2: Types of Voting System Testing: 

Type: Certification (federal)[A]; 
Purpose: To verify compliance of voting equipment with federal 
standards prior to or as a condition of system acceptance. 

Type: Certification/approval (state); 
Purpose: To validate compliance of voting equipment with state specific 
requirements before an election. 

Type: Acceptance; 
Purpose: To verify that voting equipment delivered by a vendor meets 
state or local requirements before an election. 

Type: Readiness (logic and accuracy); 
Purpose: To verify that voting equipment is functioning properly, 
usually by confirming that predictable outputs are produced from 
predefined inputs before an election. 

Type: Security; 
Purpose: To verify that technical security controls embedded in voting 
equipment operate as intended, as well as ensure that security policies 
and procedures governing the testing, operation, and use of the systems 
are properly defined and implemented by the responsible officials 
before an election. 

Type: Election Day parallel; 
Purpose: To verify accurate performance of voting equipment through 
random selection and systematic evaluation of operational equipment 
during an election. 

Type: Postelection audit; 
Purpose: To review and reconcile election records to confirm correct 
conduct of an election or uncover evidence of problems with voting 
equipment or election processes after an election. 

Source: GAO. 

[A] Responsibility for overseeing federal testing of voting systems and 
certifying those that met federal standards was assigned to EAC in HAVA 
§ 231(a)(1) (codified at 42 U.S.C. § 15371(a)(1)). EAC assumed this 
responsibility in August 2005 from the National Association of State 
Election Directors (NASED). Under NASED, national testing against 
federal standards was called qualification testing. 

[End of table] 

* Management. Voting system vendors manage the development of the system, 
while states and/or local jurisdictions manage the acquisition, 
operation, and maintenance of the system. Management activities include 
test management, configuration management, requirements management, and 
risk management. 
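
The readiness (logic and accuracy) testing summarized in table 2 can be 
illustrated with a brief sketch: predefined inputs (a test deck with a 
known, unambiguous outcome) are run through the tally logic, and the 
actual totals are compared with the expected totals. The following 
Python sketch is purely illustrative; the contest, candidate names, 
test deck, and function names are hypothetical and are not drawn from 
any state's actual procedures. 

from collections import Counter

def tally(ballots):
    """Count one vote per ballot for a single contest."""
    return Counter(ballots)

def readiness_check(test_deck, expected_totals):
    """Compare actual totals from a predefined test deck with expected totals."""
    actual = tally(test_deck)
    return actual == Counter(expected_totals), actual

if __name__ == "__main__":
    # Predefined inputs: a small test deck with a known, unambiguous outcome.
    test_deck = ["Candidate A"] * 5 + ["Candidate B"] * 3 + ["Write-in"]
    expected = {"Candidate A": 5, "Candidate B": 3, "Write-in": 1}

    passed, actual = readiness_check(test_deck, expected)
    print("Readiness test passed" if passed else "Mismatch: " + str(dict(actual)))

[End of illustrative example] 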

GAO Has Previously Identified Voting System Related Issues and 
Challenges: 

Since 2000, we have reported on a range of issues and challenges 
associated with voting systems.[Footnote 15] 

* In 2001, we reported[Footnote 16] that the challenges confronting 
local jurisdictions in using voting technologies include having 
reliable measures and objective data to know whether the technology 
being used is meeting the needs of the jurisdiction's user communities; 
ensuring that necessary security, testing, and maintenance activities 
are performed; ensuring that the technology will provide benefits over 
its useful life commensurate with life cycle costs (acquisition as well 
as operations and maintenance) and that these collective costs are 
affordable and sustainable; and ensuring that the three elements of 
people, process, and technology are managed as interrelated and 
interdependent parts of the total voting system. 

* Also in 2001, we reported[Footnote 17] that no federal agency had 
been assigned explicit statutory responsibility for developing voting 
equipment standards, but that the Federal Election Commission (FEC) had 
assumed this role by developing voluntary standards in 1990 for 
computer-based systems. We found that those standards described most-- 
but not all--types of system requirements and that the FEC planned to 
issue revised standards in 2002. Accordingly, we recommended, among 
other things, that the FEC accelerate the development of requirements 
for equipment usability, including considerations for human 
capabilities and limitations. 

* Later that same year, we provided perspective on the challenges 
inherent in our election system, including the difficulty of accurately 
diagnosing and correcting election system problems in an environment 
where people and processes can be more significant factors than voting 
technology. We went on to suggest four criteria[Footnote 18] against 
which proposals could be evaluated: (1) appropriate federal role in 
election reform; (2) balance between accessibility and integrity; (3) 
integration of people, process, and technology; and (4) affordability 
and sustainability of election reforms. 

* Our final report in the series reported[Footnote 19] that funding 
constraints at the local level hindered the acquisition of voting 
equipment that is more accessible to persons with disabilities. In 
addition, we found that expanding the availability of alternative 
voting methods or accommodations can provide voters with additional 
options, but implementing these changes can present election officials 
with legal, administrative, and operational challenges. 

* In 2005, we reported[Footnote 20] that numerous entities had raised 
concerns about voting systems' security and reliability, citing 
instances of weak security controls, system design flaws, inadequate 
system version control, inadequate security testing, incorrect system 
configuration, poor security management, and vague or incomplete voting 
system standards. We recommended that EAC define specific tasks, 
processes, and time frames for improving the nation's voting systems 
standards, testing capabilities, and management support available to 
state and local election officials. 

* Our nationwide study of the 2004-2006 election cycles reported 
[Footnote 21] that larger local election jurisdictions may be replacing 
older equipment with technology-based voting methods to a greater 
extent than small jurisdictions, which continue to use paper ballots 
extensively and are the majority of jurisdictions. We concluded that as 
the elections technology environment evolves, voting system performance 
management, security, and testing will continue to be important to 
ensuring the integrity of the overall elections process. 

* In our 2007 testimony, we explained[Footnote 22] how challenges 
confronting all levels of government in acquiring and operating voting 
systems for future elections are not unlike some of those faced by any 
technology user: adoption and consistent application of standards for 
system capabilities and performance; successful management and 
integration of the people, process, and technology components; rigorous 
and disciplined performance of testing and security activities; and 
reliable measurement to determine whether the systems are performing as 
intended. 

States', Territories', and the District's Voting Environments Largely 
Consist of Multiple Methods and Systems, and Have Been Influenced by 
Various Factors: 

States, territories, and the District report that they plan to rely on 
a variety of voting methods and systems for the 2008 general election. 
For most states and two territories, at least two different methods are 
planned for use across several election stages, with four methods being 
most frequently planned. Moreover, they intend to rely on multiple 
types of voting methods, with the most prevalent types being precinct 
count optical scan, central count optical scan, and DRE. Ballot marking 
devices are also to be commonly used; vote-by-phone is expected to be 
in very limited use. 

A key factor that has influenced the number of system types used is the 
level of state involvement in the selection of voting systems, which 
has increased since the 2004 election. For the 2008 general election, 
the majority of states and territories reported that they will either 
select the voting systems that jurisdictions use or provide 
jurisdictions with a list of approved voting systems from which to 
select. However, a few respondents reported that they will continue to 
use approaches that were more widely used in 2004, such as approving 
local jurisdictions' selections. In general, states and territories 
that select voting systems for local jurisdictions reported that they 
will employ fewer voting systems in the 2008 general election than 
those states that allow local jurisdictions to select their systems. 

Several other factors have influenced the selection of voting methods 
and systems for the 2008 general election or may affect their selection 
in the future. According to election officials, these include meeting 
state or federal requirements, funding availability, and voter concerns 
with existing systems. These are similar to factors that we have 
previously reported as affecting voting system investment decisions. 

States, Territories, and the District to Rely Largely on Several Voting 
Methods and Systems for the 2008 Election: 

For the 2008 general election, most states, territories, and the 
District reported that they will rely on more than one voting method to 
cast and count votes in the three voting stages (absentee, early, and 
Election Day polling place voting). 
For example, 9 states, 1 territory, and the District plan to use two 
methods; 16 states plan to use four methods (the most common number); 
and 9 states plan to use the maximum number of reported methods--five. 
For those states, territories, and the District that plan to use two or 
more methods, the mix of methods consistently includes either DRE or 
optical scan (precinct or central count) methods, or both. Furthermore, 
many states also intend to use emergent voting methods (ballot 
marking devices and vote-by-phone), while others expect to use older 
methods (e.g., lever machines, punch card, and paper ballot). The 
voting method used, as well as the size and demographics of a voting 
jurisdiction, significantly affects the complexity of planning and 
conducting an election, as we previously reported.[Footnote 23] Figure 
5 illustrates the number of methods that respondents plan to use for 
the 2008 election. 

Figure 5: Number of Voting Methods That Survey Respondents Plan to Use 
for the 2008 General Election: 

[Refer to PDF for image] 

This figure is a vertical bar graph depicting the following data: 

Number of voting methods: 1; 
Number of respondents: 3. 

Number of voting methods: 2; 
Number of respondents: 11. 

Number of voting methods: 3; 
Number of respondents: 13. 

Number of voting methods: 4; 
Number of respondents: 16. 

Number of voting methods: 5; 
Number of respondents: 9. 

Source: GAO 2008 survey of state, territory, and the District of 
Columbia election officials. 

[End of figure] 

This mix of methods planned for the 2008 general election included few 
changes from the mix of methods that states, territories, and the 
District reported for the 2006 general election. For example, 8 states, 
1 territory, and the District used two methods; 19 states used four 
methods (the most common number); and 8 states used five methods, which 
was also the maximum number of reported methods. The mix of methods for 
the 2006 general election also consistently included either DRE, 
precinct count optical scan, or central count optical scan methods, and 
many states used emergent methods like ballot marking devices. 

According to survey respondents, some voting methods are to be more 
widely used for multiple vote casting and counting stages, or in a 
particular stage, than others. Specifically, most states, two 
territories, and the District reported that they plan to use precinct 
or central count optical scan systems in at least one stage. Many 
states and the District also plan to use DRE or ballot marking devices, 
and 2 states plan to use vote-by-phone, in more than one stage. Only 
one state plans to use lever machines, while only one other state plans 
to use punch cards. In addition, survey respondents reported that 
precinct count optical scan and DRE systems will be the most widely 
used method for two election stages (polling places on Election Day and 
early voting), while central count optical scan will be the most widely 
used method for absentee voting. The numbers of states, territories, 
and the District that plan to rely on specific voting methods for the 
voting stages are similar to the numbers reported for the 2006 general 
election, although ballot marking devices are becoming more prevalent. 
Table 3 shows the number of respondents planning to use specific voting 
methods for each voting stage. 

Table 3: Voting Methods that Survey Respondents Plan to Use by Voting 
Stage for the 2008 General Election: 

Voting method: DRE; 
Voting stage[A]: Election Day polling place voting: 31; 
Voting stage[A]: Early voting[B]: 17; 
Voting stage[A]: Absentee voting[C]: 9. 

Voting method: Precinct count optical scan; 
Voting stage[A]: Election Day polling place voting: 38; 
Voting stage[A]: Early voting[B]: 18; 
Voting stage[A]: Absentee voting[C]: 24. 

Voting method: Central count optical scan; 
Voting stage[A]: Election Day polling place voting: 17; 
Voting stage[A]: Early voting[B]: 10; 
Voting stage[A]: Absentee voting[C]: 40. 

Voting method: Ballot marking device; 
Voting stage[A]: Election Day polling place voting: 26; 
Voting stage[A]: Early voting[B]: 10; 
Voting stage[A]: Absentee voting[C]: 14. 

Voting method: Vote-by-phone; 
Voting stage[A]: Election Day polling place voting: 6; 
Voting stage[A]: Early voting[B]: 2; 
Voting stage[A]: Absentee voting[C]: [Empty]. 

Voting method: Lever machine; 
Voting stage[A]: Election Day polling place voting: 1; 
Voting stage[A]: Early voting[B]: [Empty]; 
Voting stage[A]: Absentee voting[C]: [Empty]. 

Voting method: Punch card; 
Voting stage[A]: Election Day polling place voting: 1; 
Voting stage[A]: Early voting[B]: 1; 
Voting stage[A]: Absentee voting[C]: 1. 

Voting method: Paper (hand-counted) ballot; 
Voting stage[A]: Election Day polling place voting: 18; 
Voting stage[A]: Early voting[B]: 7; 
Voting stage[A]: Absentee voting[C]: 21. 

Source: GAO 2008 survey of state, territory, and the District of 
Columbia election officials. 

[A] Our survey question asked about each of these voting options 
separately. There may be some overlapping responses for early voting 
and absentee voting, due in part to how these voting stages are defined 
within states' and others' statutory frameworks. 

[B] Early voting is voting generally in-person in advance of Election 
Day at specific polling place locations, separate from absentee voting. 

[C] Absentee voting is voting generally by mail in advance of Election 
Day (although ballots may often be returned up through Election Day and 
dropped off in person). 

[End of table] 

In addition to a variety of voting methods, most states also will rely 
on a mix of voting systems for the 2008 general election. For example, 
30 states reported that they plan to use 2 to 5 systems, 9 states plan 
to use 6 to 10 systems, and 3 states plan to use 11 to 15 systems. A 
few states and the majority of territories plan to rely on a single 
voting system. In contrast, one state plans to use five different 
system models from two different vendors in some counties. Table 4 
provides an example of one state's planned use of systems for the 2008 
general election to illustrate the range and variation in voting system 
models that can be used within a state. 

Table 4: Example of Voting Methods, Manufacturers, and Voting System 
Models Planned for Use in One State for the 2008 General Election: 

Voting method: Optical scan; 
Manufacturer: Premier; 
System model: ACV-OS; 
Number of counties planning to use: 51. 

Voting method: Optical scan; 
Manufacturer: Premier; 
System model: ACV-OSX; (Digital Scan); 
Number of counties planning to use: 20. 

Voting method: Optical scan; 
Manufacturer: ES&S; 
System model: M100; 
Number of counties planning to use: 28. 

Voting method: Ballot marking device; 
Manufacturer: Premier; 
System model: A300; 
Number of counties planning to use: 71. 

Voting method: Ballot marking device; 
Manufacturer: ES&S; 
System model: A100; 
Number of counties planning to use: 28. 

Source: GAO analysis of state-provided data. 

[End of table] 

State Involvement in System Selection Has Increased Since 2004 and Is 
Reflected in the Number of Voting Systems Planned for Use in 2008: 

State involvement in local jurisdictions' selection of voting systems 
has increased since the 2004 general election. Moreover, the level of 
involvement in system selection by states, territories, and the 
District has influenced the number of voting systems planned for use in 
the 2008 election. For the 2004 general election, 29 states and the 
District had selected the voting systems to be used by local 
jurisdictions or provided a list of approved systems from which 
jurisdictions could make selections. The remaining 21 states either 
allowed local jurisdictions to select voting equipment without state 
involvement or required that equipment selected by local jurisdictions 
be approved by the state. 

For the 2008 general election, most states and territories will exercise 
control over the selection of voting systems in one of two ways. First, 
the majority of states reported that they would be involved in voting 
system selection by providing a list of approved systems from which 
local election officials could select. Second, most of the remaining 
states and all of the territories reported that they would actually 
select the systems for local jurisdictions to use. The remaining three 
states and the District reported different approaches for the upcoming 
election, such as requiring that systems selected by local 
jurisdictions be approved by the state, providing a list of systems for 
local jurisdictions to choose from but selecting all the accessible 
systems, or allowing jurisdictions to purchase any system that meets 
state requirements. Figure 6 summarizes the role of each state, 
territory, and the District in selecting voting systems for 
jurisdictions for the 2004 and 2008 elections. 

Figure 6: Reported Involvement by States and Others in the Selection of 
Voting Systems for the 2004 and 2008 General Elections: 

[Refer to PDF for image] 

This figure contains maps of the United States indicating the reported 
involvement by states and others in the selection of voting systems for 
the 2004 and 2008 general elections. The following information is 
depicted: 

Year: 2004; 
State was not involved in equipment selection: 
Arkansas: 
Mississippi: 
Nebraska: 
Ohio: 
Utah: 

Year: 2004; 
State required equipment chosen by local jurisdictions to be approved 
by the state: 
Arizona: 
Colorado: 
Florida: 
Iowa: 
Minnesota: 
Missouri: 
Montana: 
New Hampshire: 
North Dakota: 
Oregon: 
Pennsylvania: 
South Carolina: 
South Dakota: 
Virginia: 
Washington: 
Wyoming: 

Year: 2004; 
State provided list of equipment from which local jurisdictions were 
required to choose: 
Alabama: 
California: 
Connecticut: 
Idaho: 
Illinois: 
Indiana: 
Kansas: 
Kentucky: 
Maine: 
Massachusetts: 
Michigan: 
New Jersey: 
New Mexico: 
New York: 
North Carolina: 
Rhode Island: 
Tennessee: 
Texas: 
West Virginia: 
Wisconsin: 

Year: 2004; 
State required local jurisdictions to use specific method: 
Alaska: 
Delaware: 
District of Columbia: 
Georgia: 
Hawaii: 
Louisiana: 
Maryland: 
Nevada: 
Oklahoma: 
Vermont: 

Year: 2008; 
State was not involved in equipment selection/No response: 
District of Columbia: 
Michigan: 
New Jersey: 
Utah: 

Year: 2008; 
Other: 
Maine: 
Mississippi: 

Year: 2008; 
State required that voting systems selected by local jurisdictions then 
be approved by the state: 
Vermont: 

Year: 2008; 
State provided a list of approved voting systems from which local 
jurisdictions were required to select: 
Alabama: 
Arizona: 
Arkansas: 
California: 
Colorado: 
Florida: 
Idaho: 
Illinois: 
Indiana: 
Iowa: 
Kansas: 
Kentucky: 
Massachusetts: 
Minnesota: 
Missouri: 
Montana: 
New Hampshire: 
New Mexico: 
New York: 
North Carolina: 
Ohio: 
Oregon: 
Pennsylvania: 
Tennessee: 
Texas: 
Virginia: 
Washington: 
West Virginia: 
Wisconsin: 
Wyoming: 

Year: 2008; 
State selected voting system(s) for local jurisdictions: 
Alaska: 
American Samoa: 
Connecticut: 
Delaware: 
Georgia: 
Guam: 
Hawaii: 
Louisiana: 
Maryland: 
Nebraska: 
Nevada: 
North Dakota: 
Oklahoma: 
Puerto Rico: 
Rhode Island: 
South Carolina: 
South Dakota: 
U.S. Virgin Islands: 

Sources: GAO 2005 and 2008 surveys of states, territories, and the 
District of Columbia election officials, MapArt (map). 

Note: Territories were not included in GAO's 2005 survey of state 
election officials. Two states and the District reported individual 
approaches for involvement in local jurisdictions' selection of 
systems; they are listed as Other in the figure. 

[End of figure] 

State officials that we interviewed cited various reasons why 
states have become more involved in voting system selection. For 
example, officials from one state told us that providing a list of 
systems to local jurisdictions was a statutory requirement. Officials 
from another state said that, as a result of HAVA, the state purchased 
systems for jurisdictions, which facilitated control over its 
expenditures of HAVA funding. In addition, another state's officials 
stated that selecting systems provided a mechanism to ensure that 
local jurisdictions were 
using the appropriate systems. 

The number of voting systems planned for use in the 2008 election 
reflects the level of state, territory, and District involvement in 
system selection. 
territories that selected systems for local jurisdictions plan to use 
fewer systems in the upcoming election than states that required 
jurisdictions to choose systems from an approved list. Figure 7 shows 
the number of systems planned for use in the 2008 election with respect 
to two main approaches of survey respondents to voting system 
selection. 

Figure 7: Number of Voting Systems Planned for Use in the 2008 General 
Election in Relation to the Reported Type of Involvement by States and 
Others in Voting System Selection: 

[Refer to PDF for image] 

This figure is a vertical bar graph depicting the following data: 

Number of systems: 1-3; 
State selected voting system(s) for local jurisdictions (number of 
respondents): 17; 
State provided a list of approved systems from which local 
jurisdictions were required to select (number of respondents): 7. 

Number of systems: 4 or more; 
State selected voting system(s) for local jurisdictions (number of 
respondents): 1; 
State provided a list of approved systems from which local 
jurisdictions were required to select (number of respondents): 22. 

Source: GAO 2008 survey of state, territory, and the District of 
Columbia election officials. 

Note: Two states that use alternate approaches to involvement in voting 
system selection and the District plan to use from one to three 
systems. The remaining state plans to use five systems. 

[End of figure] 

Several Additional Factors Influence Selection of Voting Methods and 
Systems: 

Officials in the states and territories that we interviewed identified 
one or more factors beyond the states' involvement in voting system 
selection that have influenced their selection of voting methods and 
systems for the 2008 general election or may affect their selection in 
the future. These factors, which are similar to some of the voting 
system investment considerations that we have previously identified, 
are (1) meeting state and federal requirements, (2) availability of 
funding, and (3) voter concerns with existing systems. 

Meeting state and federal requirements. Election officials from 2 
states and 1 territory told us that they had adopted ballot marking 
devices and vote-by-phone systems for the 2008 general election in 
order to comply with state and federal accessibility requirements. 
Conversely, officials from other states told us that potential changes 
to federal requirements may influence them to postpone selecting any 
new systems. For example, officials with 2 states told us that their 
decision to upgrade their systems or purchase new systems in the future 
could be influenced, in part, by potential new federal voting system 
requirements that are included in bills before the Congress. In 
addition, election officials from one state and one territory told us 
that the anticipated 2007 federal voluntary voting system guidelines 
could necessitate system changes, and thus, in one case, they would 
postpone the selection of new voting systems until the 2007 guidelines 
were finalized. 

Availability of funding. Based on survey responses, 37 states, 1 
territory, and the District reported using all available HAVA funds to 
purchase voting systems. Election officials from two of these and other 
states told us that without additional funding, further investment in 
new or upgraded systems may not be possible. Specifically, officials 
from one state said they are applying for additional HAVA funds to 
replace many of their voting systems except for their optical scan 
equipment. Election officials in another state told us they would like 
to adopt optical scan systems but simply do not have the funds needed 
to purchase the systems. These and other election officials expressed 
concern that any future changes to voting methods and systems will be 
hard to undertake unless they receive additional funding for purchasing 
the systems. 

Concerns of voters. State and other election officials that we 
interviewed reported voter concerns with existing systems as a factor 
in the selection of new systems. For example, election officials from 2 
states and 1 territory told us that because of voter concerns regarding 
DREs, they may either eliminate or limit the systems' use for the 
upcoming election. Instead, they planned to rely more on their other 
voting methods. In contrast, officials from another state that uses DREs 
said that because of high voter confidence in and satisfaction with 
these systems and their reliability, stability, and predictability, 
they had no plans to purchase other systems. Election officials from 2 
other states also reported that they were not planning to acquire new 
systems for the upcoming election because their voters are satisfied 
with their current systems. 

These factors influencing states' selection of voting methods and 
systems are similar to the factors that we previously reported in 2006 
[Footnote 24] as influencing local jurisdictions' purchase of new 
systems. In particular, meeting state requirements was one of the most 
frequent factors cited by local jurisdictions in determining which 
systems to purchase. Other widely influential factors cited by local 
jurisdictions included ease of use and affordability. Meeting HAVA 
requirements and state funding were also cited as factors in the 
purchase of systems. 

States, Territories, and the District Have Largely Defined Similar 
Approaches and Face Common Challenges in Approving Voting Systems: 

Most states, territories, and the District approve or otherwise certify 
voting systems for use in elections, and in doing so, follow a similar 
series of basic steps. The majority of these states also have processes 
in place to qualify an initial approval, and to reapprove or revoke a 
prior approval if certain conditions are met. However, the nature and 
extent of the activities that comprise the basic approval steps, and 
who performs these activities, vary. For example, some states' election 
staff conduct mock elections or system demonstrations while other 
states rely on academic institutions, external experts, or consultants 
to perform a range of system tests and establish a basis for approval. 
States, territories, and the District also reported facing similar 
challenges in approving systems, including ensuring that vendors meet 
requirements, having sufficient qualified staff and facilities to 
conduct testing, and completing the approval process in a timely 
fashion. 

Most States, Territories, and the District Approve Voting Systems, but 
Fewer Provide for Approvals to Be Qualified, Reapproved, or Revoked: 

Most states, two territories, and the District approve or otherwise 
certify voting systems, but fewer provide for qualified approvals, or 
for reapproving or revoking a prior system approval.[Footnote 25] System 
approval processes are largely governed by statutory requirements, but 
the specificity of these requirements varies by state. Most states' 
statutes include a detailed list of requirements that voting systems 
must meet for approval to be granted, and a few state and territory 
statutes include specific approval activities that must be performed. 
Similarly, most states that provide for approval revocation also 
specify circumstances for doing so in statute; a few of these also 
identify specific revocation steps. 

System Approval Processes Are Largely in Place and Governed by 
Requirements, but Specific Requirements and Approval Activities Vary: 

Most states, 2 of the territories, and the District approve or certify 
voting systems for use in elections to ensure they meet specific state 
requirements and standards. Based on responses to our survey and a 
review of statutes, 43 of 50 states, 2 of 4 territories, and the 
District currently have a requirement to approve or certify voting 
systems (see fig. 8), and many have statutory frameworks. The 7 states 
and 1 territory that do not have an approval requirement have reported 
alternative approval approaches that are mostly based on nonstatutory 
requirements, according to election officials. The other territory does 
not use electronic voting systems. The number of states with approval 
requirements is similar to the numbers that were previously reported 
relative to the 2000 and 2004 elections.[Footnote 26] 

Figure 8: Voting System Approval Requirements Reported by States and 
Others for 2008: 

[Refer to PDF for image] 

This figure is a map of the United States indicating voting system 
approval requirements reported by states and others for 2008, as 
follows: 

Does not have requirement: 
American Samoa: 
Guam: 
Maine: 
Mississippi: 
Oklahoma: 

Has requirement: 
Alabama: 
Alaska: 
Arizona: 
Arkansas: 
California: 
Colorado: 
Connecticut: 
Delaware: 
District of Columbia: 
Florida: 
Georgia: 
Hawaii: 
Idaho: 
Illinois: 
Indiana: 
Iowa: 
Kansas: 
Kentucky: 
Louisiana: 
Maryland: 
Massachusetts: 
Michigan: 
Minnesota: 
Missouri: 
Montana: 
Nebraska: 
Nevada: 
New Hampshire: 
New Jersey: 
New Mexico: 
New York: 
North Carolina: 
North Dakota: 
Ohio: 
Oregon: 
Pennsylvania: 
Puerto Rico: 
Rhode Island: 
South Carolina: 
South Dakota: 
Tennessee: 
Texas: 
U.S. Virgin Islands: 
Utah: 
Vermont: 
Virginia: 
Washington: 
West Virginia: 
Wisconsin: 
Wyoming: 

Sources: GAO 2008 survey of state, territory, and the District of 
Columbia election officials and GAO analysis of state statutes for 
Michigan, New Jersey, and Utah; MapArt (map). 

[End of figure] 

Based on our review of state and territory statutes, the requirements 
that voting systems must meet, and the activities to be performed, vary 
as to their specificity. Most statutes include a detailed list of 
requirements that must be met for approval to be granted, such as 
permitting voters to cast ballots for all offices for which they are 
entitled to vote, ensuring secrecy in casting a ballot, and including a 
mechanism to record and tabulate the votes cast. However, a few state 
statutes do not cite detailed requirements, but rather state that an 
approval authority is to specify the appropriate technical standards or 
criteria for approval. Further, few state and territory statutes 
include specific activities to be performed. Rather, the statutes 
typically include such general activities as: (1) testing of voting 
system functions, (2) examining previous testing laboratory results, 
(3) involving the public in the approval process through public 
hearings or periods for public comment, (4) reviewing vendor financial 
information and system maintenance manuals by an approval authority, 
and (5) placing system source code in escrow. 

While 7 states and 1 territory do not have a statutory framework that 
governs approval, they do have statutory voting system requirements, 
and they have defined approaches to selecting systems. More 
specifically, the 7 states' statutes require certain system functions 
or authorize specific voting methods, but they do not have statutory 
requirements that assign responsibility for approving systems to a 
specific entity. To select systems, election officials in the majority 
of the 7 states and 1 territory told us that they develop detailed 
administrative requirements that the voting system must meet and 
processes that govern how systems are selected. To illustrate, one 
state uses a combination of mechanisms and acceptance testing 
procedures to ensure that systems meet state requirements. The 
remaining states and the territory require certain vendor documentation 
that they evaluate to ensure that the system complies with state 
requirements or federal standards. 

State officials that we interviewed cited various reasons for not 
having a statutory framework governing voting system approval. 
Officials in one state said that because the state historically had not 
been involved in local jurisdictions' selection of systems, its 
legislature did not see a need for state approval. Officials for 
another state indicated that the state's current statutes on voting 
systems make a statutory approval process unnecessary because, by law, 
only the state is allowed to purchase systems for local jurisdictions 
to use. Officials with one other state said that an approval process 
was not necessary because of the limited number of voting system units 
in the state and that it was not feasible given resource limitations. 

Qualified Approval Is Permitted in Many States, but Conditions Vary: 

Survey responses from many states identified the use of qualified 
approvals either to approve a voting system under special circumstances 
or to add conditions or procedures that must be met for the system to 
fully comply with state requirements and be permitted for use. 
Specifically, 23 of the states with an approval requirement reported at 
least one of four types of qualified approval--exception, emergency, 
conditional, or provisional. Of these, 2 states' statutes address 
qualified approvals. Neither the territories nor the District has 
qualified approvals (see table 5). 

Table 5: Types, Purposes, and Circumstances of Qualified Approval with 
Number of States that Have Provisions for Each Type: 

Type: Exception; 
Purpose: Allow system to be used in an election without undergoing 
state approval process; 
Circumstances for use: Approve legacy system; system with limited 
functionality; 
Number of states with provisions: 7. 

Type: Emergency; 
Purpose: Grant approval to voting system when standard approval process 
cannot be completed before the election; 
Circumstances for use: Approve system upgrades or modifications within 
limited time frame; 
Number of states with provisions: 12. 

Type: Conditional; 
Purpose: Grant approval to voting system contingent on taking 
additional actions before the system can be used in an election; 
Circumstances for use: Approve system with required administrative 
procedures; approval expires at predetermined time; 
Number of states with provisions: 6. 

Type: Provisional; 
Purpose: Grant approval to voting system contingent on additional 
actions while the system is in use for an election; 
Circumstances for use: Approve system for use in certain capacity 
(maximum number of voters, specific jurisdictions, pilot projects); 
Number of states with provisions: 12. 

Source: GAO 2008 survey of state, territory, and District of Columbia 
election officials. 

Note: The total number of respondents to this question was 45. Of 
these, 2 states and 1 territory reported "Don't know." The rest of the 
territories and the District reported that they did not have any 
qualified approval processes. Although survey respondents that did not 
have a requirement for approval of voting systems were excluded from 
this survey question, 1 state and 1 territory that did not have an 
approval requirement provided responses to this question. 

[End of table] 

As shown in table 5, certain types of qualified approval are more 
prevalent than others, and several states have multiple types in place. 
Specifically, emergency and provisional approvals are more prevalent 
than exception or conditional approvals. In addition, approximately one-
third of the 23 states have multiple types of qualified approvals, with 
the most frequent combination being emergency and provisional. 

While almost one-half of the states have a qualified approval process 
in place, only 12 states reported that they have used these processes 
since December 2004; however, most of these states reported that they 
have done so repeatedly. For example, 2 states used provisional 
approval almost every time they granted approval to a voting system 
because the approval decision also outlined specific conditions for 
local jurisdictions to follow to use the system in an election. In 
another case, a state repeatedly used conditional approval because 
adding the condition allowed for (1) addressing any residual system 
concerns or (2) operating the system for a limited time before 
requiring re-examination. A few other states reported utilizing 
qualified approvals only once to address specific circumstances. In 2 
states, a form of qualified approval was used because there was 
insufficient time to provide an unqualified approval. In another state, 
an exception approval was used because the system was to be used in a 
limited capacity, and thus certain functionality did not need to be 
approved. 

Reapproval Is Largely Required, but Circumstances Vary: 

Almost all states, territories, and the District that require system 
approval also require system reapproval under certain circumstances to 
ensure that systems continue to meet the requirements under which they 
were initially approved, according to state-identified approval 
statutes and other survey responses. The circumstances that prompt 
reapproval and the activities to be performed are typically established 
in state statutes, or in administrative procedures that are set by the 
approval authority, but these circumstances vary. According to survey 
respondents, most require reapproval of a system when system hardware 
or software is modified. In addition, more than one-half also require 
reapproval when examination or testing shows that the system no longer 
meets the requirements under which it was originally approved, or when 
state requirements change. States also responded that other 
circumstances could lead to reapproval, such as state or local 
jurisdiction requests for reapproval or expiration of a prior approval 
(see fig. 9). 

Figure 9: Circumstances for Reapproving Voting Systems as Reported by 
States and Others: 

[Refer to PDF for image] 

This figure is a vertical bar graph depicting the following data: 

Circumstances that lead to reapproval: System software modification or 
upgrade; 
Number of respondents: 39. 

Circumstances that lead to reapproval: System hardware modification or 
upgrade; 
Number of respondents: 38. 

Circumstances that lead to reapproval: System examination requirements 
not met; 
Number of respondents: 31. 

Circumstances that lead to reapproval: State approval requirements 
change; 
Number of respondents: 26. 

Circumstances that lead to reapproval: State or local jurisdiction 
request; 
Number of respondents: 20. 

Circumstances that lead to reapproval: System approval expiration; 
Number of respondents: 10. 

Circumstances that lead to reapproval: Vendor support contract 
expiration; 
Number of respondents: 3. 

Circumstances that lead to reapproval: Other state required system 
reapproval; 
Number of respondents: 2. 

Source: GAO 2008 survey of state, territory, and the District of 
Columbia election officials. 

Note: The total number of survey respondents to this question was 45. 
Although respondents that do not have a requirement for approval were 
excluded from this survey question, 1 state and 1 territory that did 
not have an approval requirement provided responses to this question. 

[End of figure] 

A few statutes also allow an approval authority to determine the 
circumstances for reviewing an existing approval. Typically, approval 
authorities could require reapproval if they: (1) determine changes to 
the system affect its accuracy, efficiency, or capacity; (2) receive a 
request for system re-examination by state electors; or (3) otherwise 
deem it appropriate. In addition, two states require reapproval every 4 
or 8 years, respectively. 

Since December 2004, almost one-half of the respondents reported that 
they have reapproved from one-fourth to all of their voting systems to 
introduce upgrades, make a system accessible to voters with 
disabilities, or incorporate software or firmware changes. Officials 
from one state told us that when any voting system changes are made, 
the approval for any previous version of that system is automatically 
revoked. 

Prior Approval Generally Can Be Revoked, but Specific Requirements 
Vary: 

The majority of states and one territory have a process to rescind an 
existing voting system approval if the system fails to fulfill 
requirements, but the circumstances that prompt it, and the process 
followed to justify it, vary. Specifically, 31 states and 1 territory 
with an approval requirement reported they have a revocation process, 
[Footnote 27] and for the majority of these states, the process is 
established by statute, though a few states and one territory specify 
their respective processes in administrative procedures. Based on our 
review of state statutes and survey responses, most of these states and 
the one territory specify that any unapproved modifications or changes 
that cause a system to no longer comply with state requirements could 
lead to revocation. Other circumstances, such as changes in state 
requirements, irregularities discovered as a result of postelection 
audits, and federal decertification also could lead to revocation, 
based on survey responses (see table 6). 

Table 6: Circumstances Reported by States and Others for Revoking 
Voting System Approval: 

Circumstance: Software modification or upgrade that causes 
noncompliance with state requirements; 
Number of respondents: 29. 

Circumstance: Testing reveals system requirements not met; 
Number of respondents: 29. 

Circumstance: Hardware modification or upgrade causes noncompliance 
with state requirements; 
Number of respondents: 28. 

Circumstance: Postelection audit irregularities; 
Number of respondents: 26. 

Circumstance: Federal decertification; 
Number of respondents: 26. 

Circumstance: State requirement changes; 
Number of respondents: 25. 

Circumstance: State or local requests; 
Number of respondents: 16. 

Circumstance: System approval expiration; 
Number of respondents: 14. 

Circumstance: Another state or jurisdiction approval revoked; 
Number of respondents: 13. 

Circumstance: Other[A];
Number of respondents: 6. 

Source: GAO 2008 survey of state, territory, and District of Columbia 
election officials and GAO review of relevant statutes. 

Note: Respondents that did not report a requirement for revoking 
approval were excluded from this survey question. In addition, 1 or 2 
states responded "Don't know" for some of the options in this question. 

[A] Other circumstances included vendor contract expiration, system 
misrepresentation in the approval application, and system not used 
after initial approval. 

[End of table] 

Although the statutes that we reviewed typically specify when a 
revocation is effective, few specify the steps to be followed. With 
respect to effective dates, several statutes specify that systems are 
to be withdrawn immediately following the revocation, although two 
states allow local jurisdictions anywhere from 6 to 24 months to 
withdraw a nonapproved system from use and implement or purchase a new 
system. For a small number of states, the statutory steps are broadly 
defined, such as (1) holding public hearings; (2) requiring vendors to 
provide written responses on corrective measures or other system 
documentation; and (3) providing written notice to vendors, state or 
local election officials, and the public when revocation is being 
considered. In addition, one state's statute specifies that a 
revocation that occurs 6 months or less before an election will not go 
into effect until after the election. 

States, Territories, and the District Generally Have Defined Similar 
Approaches, but Specific Approval and Revocation Activities and 
Stakeholders Involved Vary: 

Notwithstanding the variability in voting system approval requirements, 
most states, territories, and the District follow a similar series of 
general steps in approving voting systems and revoking a prior 
approval. However, the specific activities that comprise these steps 
vary. For example, based on discussions with election officials, 
approval-related testing can include mock election testing, source code 
review, or function testing. In addition, the stakeholders that perform 
the approval steps vary across states, territories, and the District. 
For example, the approval authorities in 12 states and 1 territory rely 
solely on their election board, committee, or secretary of state to 
perform approval activities, while the approval authorities in 28 
states, 1 territory, and the District rely on two or more stakeholders. 
In addition, the majority of approval authorities also engage state or 
local officials and external experts or consultants (e.g., academic 
institutions) in the approval process in order to augment technical 
expertise, and a few states collaborate with either the state chief 
information officer or chief technology officer. Several states and one 
territory also reported making recent improvements to the approval 
process activities that they had in place for the 2006 election, such 
as adding additional testing requirements and changing who performs 
certain approval activities. 

Basic Steps Governing Approval of Voting Systems Are Largely Similar: 

The primary steps that govern how states, territories, and the District 
are to approve voting systems are generally similar. Based on survey 
responses, approval can be viewed in terms of four steps: (1) 
establishing standards or criteria, (2) evaluating documentation, (3) 
testing systems to state standards and examining test results, (4) 
making an approval decision. Two other activities may interact with 
these steps: involving the public in aspects of the approval, and 
resolving problems that surface during the approval process. Each of 
the general approval process steps and activities, and their relative 
timing, are depicted in figure 10 and described in the following 
sections. 

Figure 10: General Steps that States and Others Follow in Approving 
Voting Systems: 

[Refer to PDF for image] 

This figure is an illustration of general steps that states and others 
follow in approving voting systems. The following information is 
depicted: 

Approval timeline: 

Involve public: Members of the public are kept informed of voting 
system approval activities through one or more mechanisms, including 
public notices, public hearings, invitations to view or participate in 
approval activities, or periods for public comment. 

1. Establish standards or criteria: Minimum system requirements 
(functional or performance) are established that must be met for a 
voting system to be approved. 

2. Evaluate documentation: Voting system documentation is evaluated to 
determine whether the system meets the standards or criteria. 
Documentation may include technical documentation, vendor business 
information, and test plans and results from accredited testing 
laboratories. 

3. Test and examine results: Voting systems are tested to determine 
whether they meet the standards or criteria. The results are examined 
to confirm whether the system successfully passed testing. 

During steps 2 and 3: 
Resolve problems: Problems that arise as a result of the documentation 
evaluation or testing are resolved with the vendor. 

4. Make approval decision: 
The results of voting system testing and documentation evaluation, 
including any recommendations, are reviewed and a decision is made 
regarding system approval. 

Source: GAO analysis. 

[End of figure] 

1. Establish standards or criteria. Based on survey responses and 
contacts with election officials, the majority of states, one 
territory, and the District have established standards or criteria that 
a voting system must satisfy to be approved. For the most part, these 
standards or criteria address system performance; physical, design, or 
environmental characteristics; system security; system auditability; and 
information privacy. A small number of survey respondents also reported 
approval requirements for long-term system sustainability, life cycle 
costs, and system use by other states. 

Election officials described a variety of approaches and resources for 
developing these requirements. For example, several states and a 
territory have committees or focus groups composed of major 
stakeholders--such as state executives, knowledgeable technical experts 
or consultants, and advocacy groups--to determine system approval 
standards or criteria. Other states engage state and local election 
officials. Resources that state and other election officials cited as 
contributing to the development of standards included other states' 
requirements, the federal voluntary voting system guidelines, industry 
technology standards, and studies of voting systems. 

In developing the specific standards, election officials told us that a 
few factors guided the process, including: (1) compliance with state or 
federal requirements, as appropriate; (2) satisfaction of voters' 
needs; and (3) appropriate technical specificity. As noted earlier, 
officials for some states told us that their respective state statutes 
have detailed requirements and thus provided a solid basis for 
developing approval requirements. Officials for other states said that 
their legislatures had vested the approval authority with 
responsibility for developing the appropriate technical standards. 
According to these officials, this provides flexibility for changing 
requirements because it does not require legislative action. Officials 
for some states also noted that having many groups involved in the 
process helps ensure that the standards are comprehensive. 

2. Evaluate documentation. In this step, the approval authority or 
designated staff evaluates vendor-provided system documentation against 
the standards or criteria for approval. Such documentation is submitted 
as part of the approval application. State and other election officials 
told us that they evaluate a range of vendor documentation and other 
sources to assess a system. On the basis of our interviews with state 
officials, and analysis of survey responses and state-provided statutes 
and documentation, the most common types of documents include test 
plans and test results from independent testing laboratories, 
operations manuals, vendor financial and performance information, and 
vendor contracts. Some states also collect other documentation, such as 
software source code, training materials, photographs, and other 
states' approval reports. 

3. Test and examine results. Testing is intended to determine whether 
the system meets specified standards or criteria. As part of testing, 
test plans, procedures, or checklists are developed and executed, and 
test results are examined to confirm whether the system successfully 
passed testing. Based on survey responses, the majority of states and 
territories perform system testing as part of their approval process, 
and the testing generally covers major system functions. Specifically, 
34 states and 3 territories perform approval testing,[Footnote 28] and 
in doing so test a range of functions, such as ballot definition or 
layout, ballot marking, vote casting, tabulation, transmission of 
results, and the election management system. In addition, a few states 
reported testing the integration of the electronic poll book.[Footnote 
29] 

The types of testing and examination of results performed to support 
system approval decision making vary among states and territories. 
These include mock elections, accessibility testing, source code review, and 
volume testing. See table 7 for the types of tests and the purpose of 
each. 

Table 7: Types and Purposes of Approval-Related Testing: 

Type: Accessibility; 
Purpose: Determine whether the system's accessibility functions (e.g., 
audio ballot volume and system interfaces) meet requirements and 
perform as intended. 

Type: Function; 
Purpose: Determine whether system functions (e.g., ballot definition, 
processing ballots, tabulating results) meet requirements and perform 
as intended. 

Type: Software comparison; 
Purpose: Determine whether the certified version of system software has 
been installed on a voting system by comparing the vendor-provided 
version with the certified version. 

Type: Mock election; 
Purpose: Determine whether tabulation software is accurate by 
tabulating marked ballot test decks and comparing the results with 
original paper ballots; confirm the correct ballot presentation and 
vote casting. 

Type: Regression testing; 
Purpose: Determine whether software or firmware upgrades adversely 
affected other system features that were not part of the upgrade. 

Type: Security testing; 
Purpose: Determine whether system components or configurations include 
access controls, and whether system configurations, network 
communications logs, and removable media have security vulnerabilities. 

Type: Source code review; 
Purpose: Determine whether software code is constructed correctly and 
contains no malicious code or security vulnerabilities. 

Type: Volume testing; 
Purpose: Determine whether the system will operate in conditions 
approximating normal use by voters on Election Day. 

Source: GAO analysis of state-provided documentation. 

[End of table] 
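
The software comparison testing summarized in table 7 can also be 
illustrated with a brief sketch. Although the report does not prescribe 
a particular mechanism, one way such a comparison could be performed is 
to compute cryptographic hashes of the files installed on a voting 
system and compare them with hashes recorded for the certified version. 
The following Python sketch assumes that approach; the directory, file 
names, and reference hash shown are hypothetical. 

import hashlib
from pathlib import Path

def sha256_of(path):
    """Return the SHA-256 hex digest of a file."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def compare_with_certified(install_dir, certified_hashes):
    """Return the names of files that are missing or whose hashes do not
    match the hashes recorded for the certified software version."""
    mismatches = []
    for name, expected_hash in certified_hashes.items():
        installed = Path(install_dir) / name
        if not installed.exists() or sha256_of(installed) != expected_hash:
            mismatches.append(name)
    return mismatches

# Hypothetical usage: an empty result would mean every listed file matches
# the certified record; any entries would warrant further examination.
# mismatches = compare_with_certified("/opt/voting-system",
#                                     {"tabulator.bin": "ab12...cd34"})

[End of illustrative example] 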

Depending on the type of testing performed, election officials or their 
designees develop test plans that list specific test activities to be 
performed for each type of test. In general, these plans are developed 
by looking at technical documentation provided by the vendor to 
determine how the systems should be tested, although officials with one 
state noted that their staff used the testing protocols from an 
independent testing laboratory when developing the state's test plan. After 
testing is completed, election officials told us that they review the 
results to ensure that the system successfully passed the testing. 

4. Make approval decision. Approval authorities typically base their 
approval decision on the results of the documentation evaluation and 
the testing performed, as well as stakeholder recommendations. Once a 
decision is reached, a letter is sent to the vendor notifying it of the 
decision. The approval results also are provided to local jurisdictions 
so that they may acquire systems. Voting system approval results also 
may be posted on state Web sites as an information resource for local 
jurisdictions or the public. 

All states, territories, and the District reported that they make an 
approval decision as part of their approval process, though the 
processes for dissemination of the results vary. For example, officials 
from one state reported that the final approval decision is a multiday 
process in which examiners meet with the approval authority to review 
all test results and determine whether the system will be approved or 
will receive a form of qualified approval. 
another state told us that the approval authority reviews the testing 
reports and the recommendations from technical experts to make the 
approval decisions. According to state statutes, some states require 
the approval authority to notify local jurisdictions about the specific 
systems that are approved, while other states also require that the 
basis for the approval be made available to local jurisdictions and the 
public. Several of the election officials that we interviewed confirmed 
that they notify local jurisdictions of the approval decision by 
providing a list of approved voting systems to jurisdictions or placing 
a copy of the list on the state's Web site. A few officials told us 
that the basis for system approval is also posted on the state's Web 
site. 

Involve the public. Involving the public in some manner in the approval 
process was cited by 29 of 47 states, 3 of 4 territories, and the 
District as a related aspect of the approval process. More 
specifically, these respondents reported that they hold public hearings 
or seek public comments during the approval process. Further, election 
officials that we interviewed told us that the public is either invited 
to hearings where a vendor demonstrates the system or is invited to 
participate in some aspect of the testing process. In addition, some 
officials said that the public is involved in developing voting system 
standards or criteria. 

Resolve problems. All but 2 states that had an approval requirement 
identified problem resolution as a part of the approval process. During 
our interviews with state officials, they identified four factors that 
they said facilitate problem resolution: (1) sufficient time, (2) 
effective relationships and communication with vendors, (3) thorough 
documentation and understanding of the problems, and (4) vendor 
understanding of the state's practices and requirements. 

Activities for Revoking Prior System Approval Generally Are Similar: 

As with system approval, there are generally similar activities 
performed by the 31 approving states and 1 territory that have 
processes to revoke a prior voting system approval. Based on survey 
responses and election official interviews, these activities can be 
grouped into four general steps--re-evaluation, decision, withdrawal, 
and reconsideration. 

Re-evaluate the system. These state and territory officials reported 
that they will re-examine a system to determine if it still meets 
approval standards when they receive complaints or information from 
such sources as local jurisdiction reports, postelection audits, or 
other states. Several states also hold a public hearing as part of this 
reevaluation. 

Decide on revocation. Most of these states reported that the decision 
on whether to revoke a prior approval is made by the approval 
authority. However, for one state, a judge makes the decision based on 
a review of the system evaluation report and complaints. Once a 
decision is made, the approval authority provides written notification 
to the vendor that the system's approval has been revoked. 

Withdraw revoked system. As noted earlier, the majority of these states 
and one territory with a revocation process reported that once a 
revocation decision has been made, a system generally is to be 
withdrawn immediately from use, although some states allow more time. 
Local jurisdictions are informed of the withdrawal of approval and its 
effective date. 

Reconsider the decision. The majority of these states reported that 
they allow the vendor to request reconsideration of revocation, but the 
methods for doing so vary. Vendors can submit written requests, provide 
testimony at public hearings, or submit documentation for 
reconsideration. 

States, Territories, and the District Involve Varying Stakeholders in 
Their Approval Processes: 

Overall responsibility for approving the voting system normally rests 
with an approval authority that is established by statute. Typically, 
the approval authority also determines the stakeholders that 
participate in the approval process. In doing so, the approval 
authority may delegate responsibility to one or more other stakeholders 
for performing certain approval steps, depending on statutory 
requirements and available resources. These stakeholders can include 
other state, territory, or local jurisdiction staff, subject matter 
experts or consultants, and the state's chief information officer or 
chief technology officer. 

Based on survey responses and review of state-provided election 
statutes, the approval authority is typically the state's secretary of 
state or the state's election board or commission. Approximately one- 
half of state statutes designate the secretary of state as the approval 
authority, while almost all of the remaining states, as well as the 
territories and the District, require a state election board or 
commission to be the approval authority. For 2 states, the state 
election director is the approval authority. 

The approval authorities' delegation of responsibility for performing 
the approval steps varies in terms of the stakeholders involved and 
their assigned roles and responsibilities. Based on survey responses, 
the approval authorities for 12 states and 1 territory rely solely on 
their election staff to perform the approval steps and activities, 
while the approval authorities in 28 states, 1 territory, and the 
District rely on two or more stakeholders to conduct these steps and 
activities. Election officials for the states and territories that we 
interviewed told us that these stakeholders include other state or 
local officials, experts or consultants who provide technical expertise 
on voting systems, or election and program management. In addition, 
officials for some states stated that these stakeholders assisted the 
approval authority in making the approval decisions. For example: 

* Election officials from one state told us that the state's cyber 
security office, along with experts from state universities, helps 
develop the approval process requirements, particularly in the areas of 
security, telecommunications, and audit capabilities. According to 
state officials, this assistance allows the state to develop more 
comprehensive requirements in these areas. 

* Election officials from another state explained that staff from the 
office of the chief information officer is involved in all aspects of 
the approval process. They said that the staff provides technical 
expertise for system acquisition management and evaluations of voting 
systems. These office staff members also help evaluate requests for 
proposals and participate in reviews of system source code. 

* Another state's election officials told us that they work with county 
clerks and information technology staff to perform system testing. 
According to officials, county clerks provide needed expertise in 
election management processes because they administer the voting 
systems during elections, while county information technology staff 
provide expertise and resources to conduct software comparisons and 
address potential system problems. 

In addition, officials for several states told us they use external 
experts or consultants in performing certain approval process steps. 
These include universities, consulting groups or firms, and technical 
or subject matter experts. For example: 

* Election officials from one state told us they use the staff and 
resources from a state university to evaluate documentation and test 
the systems. More specifically, university staff review technical 
documentation, develop test plans, conduct testing and examine testing 
results, serve as the state escrow agent for voting system source code, 
and provide system training and technical support to local election 
officials. 

* Officials from another state told us they use a private, nonprofit 
company as a technology advisor to develop testing requirements, review 
laboratory test plans, and develop suggested practices for county 
boards. 

* Another state's election officials told us they contract with 
technical consultants to evaluate technical documentation, review 
system source code, ask vendors questions during system demonstrations, 
and conduct testing and examination of results. 

State officials emphasized that experts and consultants are important 
stakeholders in the process because they provide needed technical 
expertise and resources that are not otherwise available. They also 
said that using the external experts and consultants provides a measure 
of impartiality and independence in the reviews because they are viewed 
as independent of the election office. 

Several States and a Territory Have Made Recent Improvements to Their 
Approval Process Activities: 

Several states and one territory have made improvements to approval 
process activities since the 2006 general election relative to 
establishing standards or criteria, evaluating documentation, and 
testing and examination of results. In selected cases, states have made 
improvements to a range of approval steps and activities, while other 
states have made improvements to one process step. For example: 

* Officials from 2 states told us they have developed new or additional 
approval requirements and processes to either accommodate a new voting 
system or add security controls for their systems. 

* Officials from one state stated they have begun to review additional 
technical documentation, such as system configurations, security 
specifications, and operations and maintenance procedures, as well as 
the vendor's configuration management plan and quality assurance 
program. 

* Officials from 2 states said they have expanded the types of testing 
performed to include volume testing, reviews of source code, 
penetration testing, and accessibility testing for voters with 
disabilities. Officials from another state said that staff members from 
their secretary of state's office have begun participating in setting 
up the mock elections testing. 

States and Others Largely Reported Facing Similar Challenges in 
Approving Systems: 

According to survey responses, many states, several territories, and 
the District faced similar challenges in approving voting systems for 
the 2006 general election. Of the 13 challenges identified in our 
survey, the most prevalent ones reported by respondents were ensuring 
that vendors meet requirements, ensuring that voters' concerns are 
considered, having sufficient qualified staff and facilities to conduct 
tests, and ensuring that the approval process is completed in time for 
the election (at least 25 responses for each). The 13 challenges can be 
grouped into three categories: (1) system management, 
(2) resource availability, and (3) stakeholder coordination (see fig. 
11). 

Figure 11: Voting System Approval Challenges Reported by States and 
Others: 

[Refer to PDF for image] 

This figure is a stacked vertical bar graph depicting the following 
data: 

Challenges: System management: Ensuring vendors meet requirements; 
Number of respondents, Minor: 22; 
Number of respondents, Major: 8. 

Challenges: System management: Properly configuring systems; 
Number of respondents, Minor: 15; 
Number of respondents, Major: 5. 

Challenges: System management: Addressing system testing failures; 
Number of respondents, Minor: 16; 
Number of respondents, Major: 2. 

Challenges: System management: Integrating various components of 
system; 
Number of respondents, Minor: 17; 
Number of respondents, Major: 1. 

Challenges: Resource availability: Sufficient qualified staff and 
facilities to conduct tests; 
Number of respondents, Minor: 21; 
Number of respondents, Major: 5. 

Challenges: Resource availability: Ensuring approval process is 
completed in time; 
Number of respondents, Minor: 15; 
Number of respondents, Major: 10. 

Challenges: Resource availability: Having sufficient funding to conduct 
approval; 
Number of respondents, Minor: 13; 
Number of respondents, Major: 4. 

Challenges: Stakeholder coordination: Ensuring voters' concerns are 
considered; 
Number of respondents, Minor: 26; 
Number of respondents, Major: 1. 

Challenges: Stakeholder coordination: Ensuring state officials 
understand their role; 
Number of respondents, Minor: 13; 
Number of respondents, Major: 1. 

Challenges: Stakeholder coordination: Confirming local jurisdictions 
have received approved systems; 
Number of respondents, Minor: 12; 
Number of respondents, Major: 2. 

Challenges: Stakeholder coordination: Ensuring local jurisdictions use 
approved systems; 
Number of respondents, Minor: 13; 
Number of respondents, Major: 1. 

Challenges: Stakeholder coordination: Communicating status of approval 
to local jurisdictions; 
Number of respondents, Minor: 7; 
Number of respondents, Major: 2. 

Challenges: Stakeholder coordination: Decisions by other states; 
Number of respondents, Minor: 9; 
Number of respondents, Major: 1. 

Source: GAO 2008 survey of state, territory, and the District of 
Columbia election officials. 

[End of figure] 

States, territories, and the District typically reported experiencing 
multiple challenges. Of the 46 respondents with approval processes, 41 
reported at least one challenge and most had more than one. The 
majority of respondents reported five or more challenges. 

Although most respondents considered their challenges to be minor, each 
of the 13 challenges was identified as a major challenge by at 
least one respondent. Further, five of the challenges were reported to 
be major by four or more respondents. These five are (1) ensuring that 
vendors meet requirements, (2) properly configuring systems, (3) having 
sufficient qualified staff and facilities, (4) ensuring that approval 
is completed in time for the election, and (5) having sufficient 
funding to conduct approval. These major challenges were confined to 16 
of the respondents. 

Election officials that we interviewed also provided their views on the 
root causes of some of the challenges and shared their approaches for 
addressing them. These approaches included facilitating working 
relationships with vendors, collaborating with university and other 
state or local officials to leverage their expertise and resources, and 
utilizing technological solutions. 

System Management Challenges: 

System management-related challenges include ensuring vendors meet 
requirements, addressing system testing failures, properly configuring 
systems, and integrating system components. More than one-third of the 
respondents reported that they faced at least three of these 
challenges, and nine reported all four challenges. 

The most frequently cited system management challenge was ensuring that 
vendors met requirements (29 states and 1 territory). Moreover, 8 
states considered this to be a major challenge. Election officials from 
several states told us that this challenge stems from the fact that 
vendors have not always provided complete approval applications or 
successfully met testing requirements. For the most part, these 
officials attributed this challenge to vendors' insufficient knowledge 
of state approval requirements, inexperienced vendor staff, or lack of 
commitment to complying with approval requirements. The significance of 
this challenge is evident in the fact that 12 states reported that they 
have denied approval to voting systems because state requirements could 
not be met. 

Many of the respondents reported challenges related to properly 
configuring systems or integrating system components, but more 
respondents considered system configuration to be a major challenge (5 
respondents) than system integration (1 respondent). Election officials 
from one state attributed challenges with system configuration to 
varying degrees of responsibility among local jurisdiction officials for 
handling various county systems. 

Over one-third of respondents reported having faced the remaining 
challenge of addressing system testing failures, but just 2 respondents 
considered this challenge to be major. According to election officials 
that we interviewed, when vendors did not provide needed documentation 
or other technical information, or when system failures occurred during 
vendor demonstrations or system testing, approval delays resulted. 

Officials from the states that have experienced system management- 
related challenges described various steps to deal with them, but 
generally identified one important step as establishing effective 
working relationships with vendors to facilitate access to needed 
information. In addition, officials from one state said that they have 
thoroughly documented testing failures so that vendors would be better 
positioned to address the problems. Officials from one state told us 
that they have revoked an existing system approval, in part, to make 
the vendor more responsive to meeting state requirements. 

Resource Availability Challenges: 

Many of the commonly identified approval-related challenges relate to 
insufficient resources, including lack of qualified staff and 
facilities, inadequate funding, and limited time. Specifically, the 
majority of respondents reported facing at least two of these three 
challenges, and about one-third of the respondents reported having 
faced all three. Collectively, these challenges were also identified as 
major by more respondents than any other category of challenges. 

Over one-half of respondents reported that having an adequate number of 
staff, including staff with the requisite technical expertise, was a 
challenge, as was having sufficient test facilities with adequate space 
or equipment to efficiently or effectively complete such approval 
activities as source code reviews, volume testing, and functional 
testing. Further, more than one-third of respondents cited funding 
limitations as a challenge. However, election officials that we 
interviewed cited different reasons for these challenges. For example, 
officials from one state said that their funding challenge was largely due to lean 
state budgets. In contrast, officials from another state attributed 
their staffing challenge to statutory limitations on what they can pay 
technical experts. According to these officials, this results in 
experts working on their own time. 

The third resource-related challenge is ensuring that the approval 
process is timely. This challenge was cited by 25 respondents and was 
also the most frequently cited major challenge (10 respondents). 
According to election officials from several states, they have not 
always had sufficient time to perform such approval activities as 
testing and problem resolution. Further, officials from a few states 
identified demanding approval time frames that are set in statute or 
administrative requirements as the source of this challenge. 

To overcome staffing and facility challenges, officials from some 
states told us they have relied on EAC certification or voting system 
testing laboratory results. Officials from some other states told us 
that they have formed partnerships with universities to leverage their 
expertise, used staff from other state or local jurisdiction offices, 
or requested additional funding from state legislatures to expand staff 
and facilities. One state official stated that he is considering 
partnering with other states to conduct testing for similar 
requirements. 

Stakeholder Coordination Challenges: 

The stakeholder coordination-related challenges include ensuring that 
voters' concerns are considered and that state officials understand 
their role, confirming that local jurisdictions have received and used 
approved systems, communicating approval status to local jurisdictions, 
and considering revocation decisions made by other states. Overall, 8 
respondents reported that they have experienced more than three of the 
challenges in this category, with one state reporting that it has 
experienced all six. For the most part, these challenges were not as 
widely reported as challenges in the other two categories. 
Nevertheless, ensuring that voters' concerns were addressed was the 
second most frequently reported approval challenge. 

Of the 27 respondents that were challenged in ensuring that voters' 
concerns were considered, all but one considered it a minor challenge. 
Statements by election officials in several states help to explain why 
it was so widely viewed as minor. According to these officials, 
questions and concerns raised by voters regarding the reliability, 
security, or accuracy of the state's voting systems were often not 
relevant to their state. Further, they said that voting system problems 
described in media reports or other states' revocation decisions were 
also not always relevant to them, which may be why only 10 respondents 
reported other states' revocation decisions as a challenge. To address 
voter concerns, state officials explained that they listen to voters' 
concerns and provide information on state systems and processes, as 
appropriate, to alleviate the concerns. However, they added that doing 
so requires time and resources. 

The remaining five challenges in this category were cited by 9 to 14 
respondents and relate to coordination with local jurisdictions or 
considering approval results from other states. For the most part, 
election officials attributed these challenges to inexperienced vendor 
staff, system complexity, or local jurisdiction staff changes. More 
specifically, officials from one state told us that vendors did not 
always install the approved system configurations on local jurisdiction 
systems, while officials from another state told us that it was a challenge 
to keep local jurisdictions fully informed on the status of the 
approval phases and to ensure that the local jurisdictions installed 
the changes to system components. Further, officials from one territory 
told us that local election officials did not always follow the 
territory's administrative procedures. 

To address these challenges, election officials that we interviewed 
described a range of steps that they take. For example, officials from 
a few states told us that they assisted counties by using software 
escrow to save copies of approved voting system configurations and 
comparing these against the versions used by local jurisdictions, thereby 
ensuring use of the approved system configuration. Officials from two 
states said that either state or local officials periodically perform 
system inspections to ensure that the systems are accurate and meet 
state requirements. Officials from one state said that they have 
instituted new approval policies and procedures that help to address 
the approval coordination issues. 

States, Territories, and the District Required and Conducted a Range of 
Tests after System Approval and Faced a Variety of Testing Challenges: 

Beyond the testing performed in support of voting system approval, 
states, territories, and the District required and conducted other 
types of testing to ensure that systems perform as intended during 
elections. While each type of testing was required and conducted to 
some degree, the extent and content of the tests varied considerably 
for the 2006 general election. Most states employed testing prior to 
accepting new, changed, or upgraded systems, as well as readiness 
testing prior to Election Day use. Many states also performed security 
testing at different times and conducted postelection audits. In 
contrast, relatively few states conducted Election Day parallel 
testing. Although states, territories, and the District varied as to 
the personnel that were involved in the various types of testing, they 
reported that most types of testing were conducted by the local 
jurisdictions--sometimes in conjunction with state election officials. 
Several states also used vendors, consultants, or contractors to 
conduct testing. 

The challenges that states faced in testing voting systems varied. 
Overall, about two-thirds of the survey respondents identified at least 
one testing challenge, while 16 states identified five or more 
challenges, with one reporting that it faced eight challenges. Among 
the challenges concerning all types of postapproval testing, roughly 
two-fifths of respondents cited limited staffing, funding, and time to 
complete testing before the election. Election officials also described 
approaches being taken to address these challenges. 

Most States, Territories, and the District Required and Performed 
Postapproval Testing, but Approaches Varied: 

All but one state and two territories augment their approval of voting 
systems with postapproval testing that provides opportunities to 
anticipate and address potential voting system problems before they 
affect election results. As we have previously reported,[Footnote 30] 
rigorous testing at multiple points in the voting system life cycle 
provides important assurance that a system conforms to state and local 
requirements, functions correctly, and is secure and reliable. Five 
types of tests that states or local jurisdictions typically conduct 
when acquiring and operating voting systems are: 

* acceptance testing; 

* readiness testing; 

* parallel testing; 

* postelection audits; and: 

* security testing.[Footnote 31] 

Most states, two territories, and the District reported that they 
required two or more types of postapproval tests. Specifically, 42 of 
the 52 respondents required two or more types of tests, and 
approximately one-third of all respondents required at least four 
types. Many of these states and territories, as well as the District, 
also required approval of voting systems. On the other hand, 7 
respondents required just one type of test, and 3 reported no 
requirements for voting system testing. The types of tests that were 
required were largely specified in statute. In addition, some had 
regulations or directives for specific types of tests and time frames 
for conducting them (see fig. 12). 

Figure 12: Number of Required Test Types Reported by States and Others 
for the 2006 General Election: 

[Refer to PDF for image] 

This figure is a vertical bar graph depicting the following data: 

Number of test types required: 0; 
Number of respondents: 3. 

Number of test types required: 1; 
Number of respondents: 7. 

Number of test types required: 2; 
Number of respondents: 13. 

Number of test types required: 3; 
Number of respondents: 12. 

Number of test types required: 4; 
Number of respondents: 15. 

Number of test types required: 5; 
Number of respondents: 2. 

Source: GAO 2008 survey and interviews of state, territory, and the 
District of Columbia election officials. 

[End of figure] 

For the 2006 general election, survey responses show that some of these 
tests were more widely performed than others. For instance, almost all 
states performed readiness testing, making it the most widely used type 
of test. Acceptance testing was the next most widely used, followed by 
security testing, postelection audits, and Election Day parallel 
testing (see fig. 13). Except for acceptance and security testing, the 
relative prevalence of these tests was consistent with our previously 
reported findings relative to the 2004 election.[Footnote 32] Reasons 
respondents gave for performing or not performing particular test types 
are discussed in the following sections. 

Figure 13: Types of Postapproval Testing Performed for the 2006 General 
Election as Reported by States and Others: 

[Refer to PDF for image] 

This figure is a vertical bar graph depicting the following data: 

Type of test: Acceptance; 
Number of respondents: 41. 

Type of test: Readiness; 
Number of respondents: 49. 

Type of test: Election Day parallel; 
Number of respondents: 10. 

Type of test: Postelection audit; 
Number of respondents: 26. 

Type of test: Security; 
Number of respondents: 35. 

Source: GAO 2008 survey and interviews of state, territory, and the 
District of Columbia election officials. 

[End of figure] 

Based on responses to our survey, contacts with state, territory, and 
District election officials, and our analysis of materials they 
provided, requirements and responsibility for performing testing are 
largely specified in statutes or directives. Further, responsibility for 
postapproval testing was typically assigned to local jurisdictions, 
although many state election officials told us that they provided 
testing guidance to local jurisdictions and that they sometimes 
required the jurisdictions to file test documentation with them. 
Furthermore, our work showed considerable variation in the nature and 
scope of testing for the 2006 general election. 

The following sections provide an overview of the five types of 
postapproval testing and the range of reported approaches. 

Acceptance Testing: 

Acceptance testing validates that the units delivered by the vendor 
perform in accordance with specifications[Footnote 33] and related 
contract provisions. According to EAC guidance, it includes: (1) 
physical analysis to ensure that the system is intact and physical 
components, such as locks and doors, operate properly; (2) diagnostic 
analysis to test and calibrate mechanical and electronic components, 
such as a memory card or other device, printers, readers, and touch 
screens; and (3) functional analysis to test the operation of hardware, 
firmware, and software for election functions, such as voting, ballot 
marking, tabulation, and reporting.[Footnote 34] EAC guidance also 
recommends conducting a mock election as part of acceptance testing. 

Based on our survey, almost two-thirds of states, territories, and the 
District (31 of 52 respondents) required acceptance testing for the 
2006 general election to verify that voting systems delivered by the 
vendor met state requirements (see fig. 14). According to most 
respondents, these requirements are contained in state statutes, codes, 
or regulations. 

Figure 14: Acceptance Testing Requirements Reported by States and 
Others for the 2006 General Election: 

[Refer to PDF for image] 

This figure is a map of the United States indicating acceptance testing 
requirements reported by states and others for the 2006 general 
election, as follows: 

Not Required: 
Alabama: 
Florida: 
Hawaii: 
Illinois: 
Indiana: 
Minnesota: 
Missouri: 
Oregon: 
Pennsylvania: 
Puerto Rico: 
South Dakota: 
Tennessee: 
U.S. Virgin Islands: 

Required: 
Arizona: 
Arkansas: 
California: 
Colorado: 
Connecticut: 
Delaware: 
District of Columbia: 
Georgia: 
Guam: 
Idaho: 
Iowa: 
Kansas: 
Kentucky: 
Louisiana: 
Maryland: 
Massachusetts: 
Montana: 
Nebraska: 
Nevada: 
New Hampshire: 
New Jersey: 
New Mexico: 
New York: 
North Carolina: 
North Dakota: 
Ohio: 
Rhode Island: 
South Carolina: 
Texas: 
Virginia: 
Washington: 
West Virginia: 
Wisconsin: 
Wyoming: 

Don't know/no response: 
Alaska: 
American Samoa: 
Maine: 
Michigan: 
Mississippi: 
Oklahoma: 
Utah: 
Vermont: 

Sources: GAO 2008 survey of state, territory, and the District of 
Columbia election officials; MapArt (map). 

[End of figure] 

Statutory testing provisions range in content from high-level 
requirements to more detailed specifications governing testing timing, 
scope, policies, and responsibilities. In general, states and others 
required acceptance testing for each type of system and for both new 
and modified systems. One state also required that its systems operate 
successfully in an actual election before being accepted. A 
few state statutes address the entire acceptance testing process, from 
initial delivery by the vendor through decision making by election 
officials. For example, one state's statute required the following: 

* After the system has been delivered: "...the local board shall test 
the system to confirm that the system, including all hardware, 
software, and other components: 

(a) Is identical to the system certified by the State Board; 

(b) Is fully functional and capable of satisfying the needs of the 
board; and: 

(c) Satisfies all requirements, terms, and conditions of the contract." 

* The acceptance test shall demonstrate the system's ability to: 

"(1) Process simulated ballots for each precinct or polling place in 
the county; 

(2) Accept valid votes in every ballot position enabled by the ballot 
format; 

(3) Reject over-votes and votes in invalid ballot positions; 

(4) Generate system status and error messages; 

(5) Generate system audit records; 

(6) Comply with all applicable statutes, regulations, and procedures; 
etc." 

* After the acceptance test has been performed: 

"If the system fails the test required of this regulation, the local 
board may not accept the contract." 

* In performing the acceptance test: 

"...the local board may enlist the assistance of State Board personnel 
or independent consultants." 

Several states that did not identify current statutory or regulatory 
requirements for acceptance testing nevertheless indicated that they 
performed such testing for the 2006 general election. In addition to 
the 29 states, 1 territory, and the District that reported that they 
required and performed acceptance tests, 9 states and a territory 
reported that they also performed these tests, even though no 
requirements existed for doing so. Officials from one of these states 
told us that their local jurisdictions performed the tests and that the 
state assisted them by providing guidance. 

In most of the states that reported performing acceptance testing, 
local election officials were identified as either solely or partly 
responsible for conducting the tests. Specifically, in 21 states, local 
officials were the only level of government involved in performing the 
tests.[Footnote 35] In 14 states, responsibility was shared between 
local and state election officials. For example, state officials from 
one state were responsible for providing testing oversight, finalizing 
timetables, coordinating with the counties, and keeping records of the 
test. In another state, each piece of voting equipment was tested 
multiple times at both state and local levels. For 2 states, 1 
territory, and the District, acceptance testing was performed solely at 
the state level (see fig. 15). 

Figure 15: Responsibilities for Performing Acceptance Testing Reported 
by States and Others for the 2006 General Election: 

[Refer to PDF for image] 

This figure is a vertical bar graph depicting the following data: 

Entities responsible for performing testing: Local jurisdictions; 
Number of respondents: 21. 

Entities responsible for performing testing: State and local 
jurisdictions; 
Number of respondents: 14. 

Entities responsible for performing testing: States[A]; 
Number of respondents: 4. 

Source: GAO 2008 survey and interviews of state, territory, and the 
District of Columbia election officials. 

Note: One state reported responsibilities as "Other" and another 
territory reported responsibilities for experts, consultants, and 
contractors. 

[A] Includes responses from one territory and the District. 

[End of figure] 

Other parties frequently assisted state and local officials in 
performing acceptance testing. Of the 41 respondents who reported 
performing acceptance testing, 26 indicated that vendors, contractors, 
and consultants assisted them. In fact, officials with several states 
told us that vendors played a significant role in test planning and 
execution. For example, officials in one state said that vendors 
performed these tests under contracts with local jurisdictions. 
Officials in another state said that vendors helped state officials 
develop the acceptance test and that local election officials were 
invited, but not required, to attend the tests. Officials from another 
state said that their local jurisdictions did not have the technical 
expertise or equipment to do any tests of their own, so they used 
consultants to help them. In another state, local jurisdictions 
developed testing criteria and were assisted by state election 
officials who helped to perform the tests, while another state official 
leveraged existing relationships with technical experts or vendors to 
address any problems. 

Election officials we interviewed generally described similar 
activities as part of acceptance testing: 

* Checklists were used to guide election officials through the test 
steps and application of criteria. For instance, one state election 
official told us that a checklist was completed for each voting system 
and individual unit. The list included instructions for inspecting 
mechanical, electronic, and optical components and identified qualities 
to evaluate. 

* Physical and mechanical aspects of voting units were inspected to 
ensure that system components were working properly. For instance, one 
state official told us that they inspect paper feed paths to ensure 
proper operation. 

* Diagnostic tests were executed to detect malfunctions or failures and 
to ensure proper functionality. One state election official told us 
that these diagnostic tests were run on each voting unit and its 
components. For instance, touch screens were calibrated to accept voter 
selections correctly. 

* Ballot generation, voting, and tabulation were conducted with test 
data to ensure that the system and its components could accurately and 
reliably accept, record, and tabulate the votes. Such testing sometimes 
included large volumes of ballots and votes and involved generating 
vote totals and reconciling them within and among voting units. 

* The accepted configuration of the system was documented and 
electronically captured to establish a baseline for future comparison. 
Furthermore, the voting system's software configuration was verified 
against the state-approved configuration, as illustrated in the sketch 
following this list. 

* An acceptance test report was prepared and submitted for review and 
approval by a higher authority. For instance, one state election 
official said that results of acceptance tests performed by local 
jurisdictions were certified to the secretary of state. 
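
The configuration-verification activity in the list above lends itself 
to a brief illustration. The following sketch is a generic illustration 
rather than any state's actual procedure: it hashes the files of an 
installed voting system and compares the digests against a baseline 
captured from the state-approved (escrowed) version. The function names 
and baseline format are assumptions made for this example.

import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def capture_baseline(install_dir: Path) -> dict:
    """Record a digest for every file in an approved installation."""
    return {str(p.relative_to(install_dir)): hash_file(p)
            for p in sorted(install_dir.rglob("*")) if p.is_file()}

def compare_to_baseline(install_dir: Path, baseline: dict) -> list:
    """List discrepancies between a delivered unit and the approved baseline."""
    delivered = capture_baseline(install_dir)
    problems = []
    for name, digest in baseline.items():
        if name not in delivered:
            problems.append("missing file: " + name)
        elif delivered[name] != digest:
            problems.append("modified file: " + name)
    problems.extend("unexpected file: " + name
                    for name in delivered if name not in baseline)
    return problems

An empty list returned by compare_to_baseline would indicate that the 
installed software matches the configuration captured when the system 
was approved.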

Readiness Testing: 

Readiness testing, also referred to as logic and accuracy testing, 
ensures that voting equipment is functioning properly--usually by 
confirming that predictable outputs are produced from predefined 
inputs. Readiness tests are sometimes conducted publicly in the weeks 
leading up to Election Day to verify the readiness of the system for 
the specific election. Members of the press, the public, and the 
candidates are invited to observe. 

According to EAC guidance,[Footnote 36] an effective readiness test 
should (1) verify all of the conditions previously tested during the 
acceptance test and (2) ensure that each machine is configured for the 
specific election (e.g., the correct ballot information is loaded, 
including the names of all applicable candidates, races, and contests). 
The tabulation functions also should be tested by recording test votes 
on each machine, verifying that it is possible to vote for each 
candidate on the ballot, and confirming that these votes were tabulated 
correctly. 

Most states reported readiness testing requirements for the 2006 
general election, and the conditions for conducting readiness testing 
were typically specified in state statutes. Specifically, 46 states, 2 
territories, and the District reported requirements for readiness 
testing (see fig. 16). 

Figure 16: Readiness Testing Requirements Reported by States and Others 
for the 2006 General Election: 

[Refer to PDF for image] 

This figure is a map of the United States indicating readiness testing 
requirements reported by states and others for the 2006 general 
election as follows: 

Not required: 
American Samoa: 
Puerto Rico: 

Required: 
Alabama: 
Alaska: 
Arizona: 
Arkansas: 
California: 
Colorado: 
Connecticut: 
Delaware: 
District of Columbia: 
Florida: 
Georgia: 
Hawaii: 
Idaho: 
Illinois: 
Indiana: 
Iowa: 
Kansas: 
Kentucky: 
Louisiana: 
Maine: 
Maryland: 
Massachusetts: 
Minnesota: 
Missouri: 
Montana: 
Nebraska: 
Nevada: 
New Hampshire: 
New Mexico: 
New York: 
North Carolina: 
North Dakota: 
Ohio: 
Oklahoma: 
Oregon: 
Pennsylvania: 
Rhode Island: 
South Carolina: 
South Dakota: 
Tennessee: 
Texas: 
Vermont: 
Virginia: 
Washington: 
West Virginia: 
Wisconsin: 
Wyoming: 

Don't know/no response: 
Michigan: 
Mississippi: 
New Jersey: 
Utah: 

Sources: GAO 2008 survey of state, territory, and the District of 
Columbia election officials; MapArt (map). 

[End of figure] 

State statutes or regulations typically specified when readiness 
testing should be conducted, who was to be responsible for conducting 
it, and whether public demonstrations were to be required. Statutory 
requirements ranged from high-level requirements to conduct testing, to 
specific requirements governing test timing, scope, policies, and 
responsibilities. The following examples from various state statutes 
illustrate the specificity of readiness testing requirements: 

* "Electronic ballot tabulating systems shall be tested for logic and 
accuracy within seven days before their use..." 

* "The test shall be conducted by processing a pre-audited group of 
ballots marked to record a predetermined number of valid votes for each 
candidate and on each measure..." 

* "If any error is detected, the cause shall be ascertained and 
corrected and an errorless count shall be made before the machine is 
approved." 

* "...the county board of election commissioners shall certify the 
accuracy of the voting system and file the test results with the county 
clerk." 

* "...in addition to conducting the pre-election test itself, the local 
board shall: 

(1) Conduct a pre-election public demonstration of how the test was 
conducted; 

(2) Allow the public to inspect the printouts of test results." 

Readiness testing also varied in terms of the number of voting units 
tested and when testing was performed. For instance, some states 
performed these tests on all units, while others performed them on a 
certain percentage of units. Furthermore, some state statutes required 
every type of voting system to be tested prior to use in an election, 
while others excluded or were silent on certain voting equipment. For 
instance, one state statute required that at least one memory card from 
each precinct be tested (e.g., uploaded to the county server to ensure 
that the upload features necessary to compile and count the votes were 
working properly). Another state required each tally machine to be 
tested three times for each election--no later than 5 days before the 
election, on the morning of Election Day, and when the polls closed. In 
another state, readiness testing was to be conducted 5 days prior to 
each election and 5 days afterward, unless a recount was in progress. 

All of the states and territories that required readiness testing also 
reported that they actually performed such testing for the 2006 general 
election. In addition, a number of states identified readiness testing 
as the only type of testing they required. 

In general, local jurisdictions were responsible for developing 
readiness testing plans and performing them, although some 
jurisdictions engaged state election officials and vendors in defining 
and conducting the tests. Of the 49 respondents that reported 
performing readiness testing, 33 indicated that local jurisdictions 
were solely responsible for conducting the tests. Responsibilities were 
shared between local jurisdictions and the state in 9 cases. For 
example, copies of test reports in one state were maintained at both 
the state and local levels. Officials from another state told us that 
the state provided local jurisdictions with minimum requirements for 
readiness testing and that the local board of elections was responsible 
for performing the test and ensuring that the minimum requirements were 
met. State or territory election personnel had sole responsibility for 
readiness testing in 4 states, 1 territory, and the District (see fig. 
17). For example, one state had a centralized approach where all 
readiness tests were performed by the state. Officials for this state 
said that local jurisdictions were unable to pay for the test and the 
clerks of court had other duties that made their election 
responsibilities a lower priority. 

Figure 17: Responsibilities for Performing Readiness Testing Reported 
by States and Others for the 2006 General Election: 

[Refer to PDF for image] 

This figure is a vertical bar graph depicting the following data: 

Entities responsible for performing testing: Local jurisdictions; 
Number of respondents: 33. 

Entities responsible for performing testing: State and local 
jurisdictions; 
Number of respondents: 9. 

Entities responsible for performing testing: States[A]; 
Number of respondents: 6. 

Source: GAO 2008 survey and interviews of state, territory, and the 
District of Columbia election officials. 

Note: One state reported responsibilities for an independent testing 
authority. 

[A] Includes responses from one territory and the District. 

[End of figure] 

Based on survey responses, 14 states used vendors, consultants, 
contractors, or other entities to assist with readiness testing for the 
2006 general election. In almost all cases, they shared responsibility 
for performing the test with either state or local election officials, 
or both. For example, one state utilized a committee made up of 
statisticians and mathematics experts to assist with its readiness 
tests. 

State and territory officials that we interviewed described generally 
similar readiness testing policies and procedures. For example, they 
used test ballots to exercise system recording, tabulation, and 
reporting functions; verified that test results were complete and 
accurate; confirmed that the ballot box was empty and vote totals were 
zero after testing; and sealed the systems until they were activated on 
Election Day. Further, survey respondents that required readiness 
testing reported using the actual election definition[Footnote 37] and 
ballot formats for the upcoming election to test system recording, 
tabulation, and reporting functions. 

One state's approach illustrates some of the typical aspects of 
readiness testing. Specifically, the state must test the voting system 
within 14 days before Election Day to ensure that it will correctly 
mark ballots using all methods supported by the system and count the 
votes cast for all candidates and ballot questions. Public notice of 
the time and place of the test must be given at least 2 days in advance 
by publishing it in official newspapers. The test itself must be 
observed by at least two election judges, who are not of the same major 
political party, and must be open to representatives of the political 
parties, candidates, the press, and the public. The test is conducted 
by processing a preaudited group of ballots containing a predetermined 
number of valid votes for each candidate and on each question. Ballots 
that have votes in excess of the number allowed by law must be 
processed to test the ability of the voting system tabulator and 
electronic ballot marker to reject those votes. In addition, test 
ballots that have been marked using the electronic ballot marking 
device, audio ballot reader, or other assistive voting technology are 
processed and verified. 
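
The comparison at the core of such a test--processing a preaudited deck 
with a predetermined number of valid votes and confirming that the 
tabulator reproduces those totals while rejecting overvotes--can be 
sketched as follows. This is a minimal illustration of the verification 
logic only, under assumed data structures, and is not a rendering of any 
state's certified procedure.

from collections import Counter

def tabulate(ballots, contests):
    """Count valid votes per candidate; reject any overvoted contest.

    ballots: list of dicts mapping contest name -> set of selected candidates.
    contests: dict mapping contest name -> number of allowed selections.
    """
    totals = {contest: Counter() for contest in contests}
    rejected = 0
    for ballot in ballots:
        for contest, limit in contests.items():
            selections = ballot.get(contest, set())
            if len(selections) > limit:   # overvote: reject this contest
                rejected += 1
                continue
            for candidate in selections:
                totals[contest][candidate] += 1
    return totals, rejected

def verify_test_deck(ballots, contests, expected_totals, expected_rejections):
    """Compare tabulated results with the predetermined totals for the deck."""
    totals, rejected = tabulate(ballots, contests)
    errors = []
    for contest, expected in expected_totals.items():
        # expected lists only the candidates that receive votes in the deck
        if dict(totals[contest]) != expected:
            errors.append(contest)
    if rejected != expected_rejections:
        errors.append("overvote handling")
    return errors   # an errorless count is expected before approval

In an actual logic and accuracy test, the tabulation step would be 
performed by the voting system itself; the sketch simply shows how its 
reported totals could be checked against the deck's known contents.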

An atypical form of readiness testing was practiced by one state. 
Specifically, officials from this state said that postelection 
readiness testing is performed at the request of state election 
officials when there is a reported discrepancy or error. The 
postelection test is performed after the official count has been 
completed but before votes are reviewed and counted by precinct and an 
official total is produced. The software and data used to set up the 
election, tabulate the ballots, and conduct the pre-election readiness 
test are used to conduct the postelection test. According to state 
officials, the postelection test is intended to demonstrate that no 
changes occurred in the system's software or setup data since the pre- 
election readiness test. 

Election Day Parallel Testing: 

Parallel testing verifies the accurate performance of voting equipment 
through random selection and systematic evaluation of equipment that is 
operated under simulated Election Day voting. It is typically conducted 
on Election Day. According to EAC guidance,[Footnote 38] parallel 
testing should ensure that: (1) ballots used for the parallel test are 
identical to the ballots used in the actual election; (2) the test 
takes place during the hours of the election, using software and 
hardware that is to be used in the election; and (3) a video record is 
created of all voting to determine whether or not any discrepancies in 
the results were caused by data entry errors. 

Few states required Election Day parallel testing for the 2006 general 
election, and neither the territories nor the District did. 
Specifically, only 5 of 47 states reported a requirement for parallel 
testing (see fig. 18). In addition, these 5 states' statutes are not 
specific relative to parallel testing scope, policies, or 
responsibilities. For example, one statute only states that: 

"...the local board shall: Conduct parallel testing according to the 
parallel testing plan developed by the State Administrator." 

Because the statutes are not specific, the scope of parallel testing 
that these states performed varied. For example, one state conducted 
Election Day parallel testing on 5 percent of its voting systems or a 
minimum of one per county. Another state conducted parallel testing 
twice for each election--first in each county during the pre-election 
public demonstration, then again on Election Day. 

Figure 18: Parallel Testing Requirements Reported by States and Others 
for the 2006 General Election: 

[Refer to PDF for image] 

This figure is a map of the United States indicating parallel testing 
requirements reported by states and others for the 2006 general 
election, as follows: 

Not required: 
Alabama: 
Alaska: 
American Samoa: 
Arizona: 
Arkansas: 
Colorado: 
Connecticut: 
Delaware: 
District of Columbia: 
Florida: 
Guam: 
Hawaii: 
Idaho: 
Illinois: 
Indiana: 
Iowa: 
Kansas: 
Kentucky: 
Louisiana: 
Maine: 
Massachusetts: 
Minnesota: 
Mississippi: 
Missouri: 
Nebraska: 
New Hampshire: 
New Mexico: 
New York: 
North Carolina: 
North Dakota: 
Ohio: 
Oklahoma: 
Pennsylvania: 
Puerto Rico: 
Rhode Island: 
South Carolina: 
South Dakota: 
Tennessee: 
U.S. Virgin Islands: 
Vermont: 
Virginia: 
Washington: 
West Virginia: 
Wisconsin: 
Wyoming: 

Required: 
California: 
Georgia: 
Maryland: 
Montana: 
Texas: 

Don't know/no response: 
Michigan: 
Nevada: 
New Jersey: 
Oregon: 
Utah: 

Sources: GAO 2008 survey of state, territory, and the District of 
Columbia election officials; MapArt (map). 

[End of figure] 

Five additional states reported that they performed parallel testing 
even though it was not required by statute, code, or regulation. 
Election officials from other states told us that parallel testing was 
not performed because it was not required, sufficient voting units were 
not available to perform the tests, or the cost was prohibitive. For 
example, officials in one state told us that each 
polling place had one optical scan and one DRE unit, which made it 
impossible to perform parallel testing. 

According to survey responses, either states or local jurisdictions 
were responsible for performing parallel testing when it occurred. 
Specifically, 5 of the 10 states that conducted parallel testing 
reported that the state was responsible for performing such testing, 
while 4 other states reported local election officials were 
responsible, and--in some cases--involved others in the tests (see fig. 
19). One of these states reported that while local jurisdictions 
perform the testing, the state provides guidance and oversight. 

Figure 19: Responsibilities for Performing Parallel Testing Reported by 
States and Others for the 2006 General Election: 

[Refer to PDF for image] 

This figure is a vertical bar graph depicting the following data: 

Entities responsible for performing testing: Local jurisdictions; 
Number of respondents: 4. 

Entities responsible for performing testing: State and local 
jurisdictions; 
Number of respondents: 0. 

Entities responsible for performing testing: State; 
Number of respondents: 5. 

Source: GAO 2008 survey and interviews of state, territory, and the 
District of Columbia election officials. 

Note: One state reported responsibilities for experts or consultants. 

[End of figure] 

Furthermore, 5 of 10 respondents that performed parallel testing 
reported that vendors, consultants, contractors, or other entities were 
involved in the testing and that responsibility for performing the test 
was shared with either state or local election officials in four of 
these cases. For example, in one state, representatives of the League 
of Women Voters and members of the public were recruited to participate 
in parallel testing along with state officials. State officials also 
told us that some local jurisdictions that have contracts with system 
vendors for Election Day support use the support technicians to assist 
them with parallel testing. 

For some states, parallel testing occurs at the polling place to make 
it open to public observation and possible public participation. The 
approach followed by one state is for two individuals to read aloud 
actual votes cast on a paper ballot, two people to separately record 
the votes cast on paper, and two people to cast the votes on a touch- 
screen voting machine. The teams periodically check to ensure that the 
two hand tallies match and that the number of cast ballots matches. 
the conclusion of parallel testing, the two sets of hand tallies are 
compared to the results generated by the voting unit to see if they 
match. 
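
A minimal sketch of the reconciliation this approach relies on appears 
below: the two independently kept hand tallies must agree with each 
other and with the count of cast ballots, and at the close of testing 
with the totals reported by the voting unit. The data shapes are 
assumptions made for illustration, not a description of the state's 
actual records.

def reconcile_parallel_test(hand_tally_1, hand_tally_2, hand_ballots_cast,
                            machine_tally, machine_ballots_cast):
    """Report discrepancies among the two hand tallies and the machine results.

    Each tally is a dict mapping candidate -> vote count; the ballot counts
    are the number of test ballots each record shows as cast.
    """
    discrepancies = []
    if hand_tally_1 != hand_tally_2:
        discrepancies.append("the two hand tallies disagree with each other")
    if hand_ballots_cast != machine_ballots_cast:
        discrepancies.append("ballots cast on the machine do not match the hand count")
    if hand_tally_1 == hand_tally_2 and hand_tally_1 != machine_tally:
        discrepancies.append("hand tallies disagree with the machine's vote totals")
    return discrepancies

An empty list at the end of the test would correspond to the matching 
results that the teams are checking for.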

For other states, selected machines are removed from the voting 
equipment pool prior to the election and are tested in a controlled 
environment. The approach for one state is as follows: two touch-screen 
machines of each model to be used by a county on Election Day are 
randomly selected and removed from polling places shortly before the 
election. They are then transported to the testing facility and tested 
on Election Day in a simulated election conducted at the same time and 
in the same manner as the actual election. All test votes are 
videotaped to compare the results reported by the machine against the 
votes actually entered on the machine by state testers. 

Postelection Audits: 

Postelection audits are independent, documented reviews of election 
results to reconcile them with other records to either confirm or 
correct the results. The audits can uncover problems with voting 
equipment or election processes. About one-half of survey respondents 
(21 states and the District) required postelection audits for the 2006 
general election. The territories did not report requirements for 
postelection audits in that election (see fig. 20). 

Figure 20: Postelection Audit Requirements Reported by States and 
Others for the 2006 General Election: 

[Refer to PDF for image] 

This figure is a map of the United States depicting postelection audit 
requirements reported by states and others for the 2006 general 
election as follows: 

Not required: 
Alabama: 
Alaska: 
American Samoa: 
Arkansas: 
Florida: 
Georgia: 
Guam: 
Idaho: 
Indiana: 
Iowa: 
Kansas: 
Kentucky: 
Louisiana: 
Maine: 
Maryland: 
Massachusetts: 
Mississippi: 
Montana: 
Nebraska: 
New Hampshire: 
Ohio: 
Oklahoma: 
Oregon: 
Puerto Rico: 
Rhode Island: 
South Carolina: 
South Dakota: 
U.S. Virgin Islands: 
Vermont: 
Virginia: 
Wyoming: 

Required: 
Arizona: 
California: 
Colorado: 
Connecticut: 
Delaware: 
District of Columbia: 
Hawaii: 
Illinois: 
Minnesota: 
Missouri: 
Nevada: 
New Mexico: 
New York: 
North Carolina: 
North Dakota: 
Pennsylvania: 
Tennessee: 
Texas: 
Washington: 
West Virginia: 
Wisconsin: 

Don't know/no response: 
Michigan: 
New Jersey: 
Utah: 

Sources: GAO 2008 survey of state, territory, and the District of 
Columbia election officials; MapArt (map). 

[End of figure] 

Most states reported that the specific requirements for audits were 
provided by statute. Based on our review of the statutes, many do not 
explicitly mention the term "audit," but instead use a wide variety of 
terms, such as hand or manual count, recount or manual recount, manual 
tally, statistical recount, or postelection review. Moreover, the 
elements and details of these statutes also vary widely and in some 
cases refer to other forms of testing. The following examples taken 
from various state statutes illustrate the diversity of statutory 
requirements for postelection audits. 

"During the official canvass of every election in which a voting system 
is used, the official conducting the election shall conduct a public 
manual tally of the ballots tabulated by those devices … cast in 1 
percent of the precincts chosen at random by the elections official... 
The manual tally shall be a public process... The official conducting 
the election shall include a report on the results ...[and] identify 
any discrepancies between the machine count and the manual tally and a 
description of how each of these discrepancies was resolved... the 
voter verified paper audit trail shall govern if there is a discrepancy 
between it and the electronic record." 

"After each election, the secretary of state shall order a random 
testing of the voting system programming for one precinct in each 
county of the state according to logic and accuracy testing procedures 
...as may be further defined by the secretary of state in writing." 

"Following each general election, [the Government accountability board 
shall] audit the performance of each voting system used in this state 
to determine the error rate of the system in counting ballots that are 
validly cast by electors. If the error rate exceeds the rate permitted 
under the [federal] standards, the board shall take remedial action and 
order remedial action to be taken by affected counties and 
municipalities to ensure compliance with the standards." 

"...If the secretary [of state] determines that a random audit shall be 
conducted..., the town clerk shall direct two members of the board of 
civil authority to transport the ballot bags to the office of the 
secretary of state...The secretary shall...conduct the audit...[and] 
publicly announce the results of the audit as well as the results from 
the original return of the vote. If the secretary finds that the audit 
indicates that there was possible fraud in the count or return of 
votes, the secretary shall refer the results to the attorney general 
for possible prosecution." 

"...the county auditor shall conduct an audit of results of the votes 
cast on the direct recording electronic voting devices used in the 
county. This audit must be conducted by randomly selecting by lot up to 
four percent of the direct recording electronic voting devices or one 
direct recording electronic voting device, whichever is greater, and, 
for each device, comparing the results recorded electronically with the 
results recorded on paper...On one-fourth of the devices selected for 
audit, the paper records must be tabulated manually; on the remaining 
devices, the paper records may be tabulated by a mechanical device 
determined by the secretary of state to be capable of accurately 
reading the votes cast and printed...Three races or issues, randomly 
selected by lot, must be audited on each device. This audit procedure 
must be subject to observation by political party representatives..." 

"...[the] election authority shall test the voting devices and 
equipment in 5% of the precincts within the election jurisdiction. The 
precincts to be tested shall be selected after Election Day on a random 
basis by the State Board of Elections, so that every precinct in the 
election jurisdiction has an equal mathematical chance of being 
selected." 

Within these examples are certain common elements that drive the 
conduct and consequences of the audit. These elements are: 

* precipitating condition(s) for the audit (e.g., candidate petition, 
tabulation discrepancy, decision of election official, or automatic); 

* criteria for the extent of the audit (e.g., the number or percentage 
of precincts or voting machines--typically from 1 percent to 10 
percent, or conditions for expanding the number of precincts or voting 
machines to be audited); 

* criteria for sampling votes, ballots, voting machines, and precincts 
(a simple illustration of this element follows this list); 

* instructions for examining electronic voting equipment and records 
(e.g., printed totals, voter verified paper ballots, or electronic 
disks or memory); and: 

* actions to be taken at the conclusion of the audit (e.g., resolving 
discrepancies, addressing wrongdoing, or notifying stakeholders or the 
public). 
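
As an illustration of the sampling element only--the 1 percent rate, 
seed, and precinct names below are assumptions for the example rather 
than requirements drawn from any statute--the following Python sketch 
randomly selects a fixed percentage of precincts for a manual tally, 
always choosing at least one precinct. 

# Illustrative sketch only; the sampling rate, seed, and precinct names
# are assumptions for this example, not requirements from any statute.
import math
import random

def select_precincts_for_audit(precincts, sample_rate=0.01, seed=None):
    """Randomly select at least one precinct, sampling at the given rate."""
    sample_size = max(1, math.ceil(len(precincts) * sample_rate))
    rng = random.Random(seed)  # a published seed makes the draw reproducible
    return sorted(rng.sample(precincts, sample_size))

# Example: 250 hypothetical precincts audited at a 1 percent rate.
precincts = [f"Precinct {n:03d}" for n in range(1, 251)]
print(select_precincts_for_audit(precincts, sample_rate=0.01, seed=2006))
# Selects 3 precincts (1 percent of 250, rounded up) for the manual tally.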

In all, 26 survey respondents reported that they performed postelection 
audits for the 2006 general election, which represents a small increase 
from the 2004 general election.[Footnote 39] Although officials from 
these states generally attributed their audit activities to statutory 
or regulatory requirements, 4 states reported that they performed these 
audits even though they were not required. For example, officials from 
one state described how its local jurisdictions performed pre- and 
postelection audits on both their optical scan equipment and DREs to 
ensure that the number of votes matched the number of participating 
voters. Officials from another state told us that postelection audits 
were voluntary, and that 11 of 88 counties performed audits. In another 
state, we were told that an audit was conducted by one jurisdiction for 
a close race at the request of a political party. 

Officials from one of the states that did not require or perform 
postelection audits told us that they had nevertheless begun defining 
their postelection audit process, but later decided to stop until the 
outcome of federal election legislation was clear. Officials from 
another state told us they were considering adoption of a postelection 
audit process and that their secretary of state had appointed a task 
force that included county election commissioners, voting integrity 
group members, and legislators to study postelection audit procedures. 

Most postelection audits were performed by local election officials 
with guidance and procedures provided by the state. Of the 26 
respondents that reported conducting postelection audits, 16 reported 
that local jurisdictions were responsible for performing audits without 
the assistance of state election officials. In another 5 states, local 
jurisdictions shared responsibility with state officials. State 
officials were responsible for the audits in the other 3 states, 1 
territory, and the District (see fig. 21). Even when responsibility for 
conducting the audits did not reside at the state level, officials with 
a few states told us that audit requirements or guidance were produced 
at the state level. For instance, officials for one state explained 
that state-level standards were developed to provide guidance for 
audits and voting system audit capabilities, although the state does 
not require their use. Another state issued a directive establishing 
requirements for conducting audits and made it available to local 
officials online. In another state, the elections board is responsible 
for determining acceptable ballot-counting error rates for voting 
systems, developing audit procedures, and randomly selecting voting 
units to be audited, while local election officials actually perform 
the hand count. 

Figure 21: Responsibilities for Performing Postelection Audits Reported 
by States and Others for the 2006 General Election: 

[Refer to PDF for image] 

This figure is a vertical bar graph depicting the following data: 

Entities responsible for performing testing: Local jurisdictions: 
Number of respondents: 16. 

Entities responsible for performing testing: State and local 
jurisdictions; 
Number of respondents: 5. 

Entities responsible for performing testing: State[A]; 
Number of respondents: 5. 

Source: GAO 2008 survey and interviews of state, territory, and the 
District of Columbia election officials. 

[A] Includes responses from one territory and the District. 

[End of figure] 

Among the states that performed postelection audits, 3 reported that 
vendors or consultants were involved. For example, officials with one 
state told us that vendor representatives were on hand to address 
questions from local election officials as the jurisdiction conducted 
the audit. Academic institutions also assisted in audit activities. For 
example, state and local election officials with another state told us 
that they worked with a statistician from a local university to 
determine the attributes that governed the size of the audit to ensure 
a statistically significant result. Officials with another state said 
that they utilized university staff to analyze results from their 
postelection audit. 

Voting systems with a HAVA-compliant manual audit capacity are required 
to produce a permanent paper record and to provide the voter with an 
opportunity to change the ballot or correct any error before the 
permanent paper record is produced. Most states reported that they 
require this fundamental capability for their systems, and some 
designated this paper record as the official record for recount or 
verification of election conduct. The following two examples illustrate 
this: 

* Officials from one state told us that they used a real-time printed 
audit log at the central counting station to record every event, tally, 
correction, and report produced from the tabulation system. All the 
audit logs and reports are available for public viewing. The electronic 
tabulation results can also be printed to paper after the election for 
a possible recount. 

* Officials from another state told us that state statute requires 5 
percent of all voting machines used in the election to be audited. At 
least two races per machine are to be hand-verified using paper ballots 
or voter-verified paper audit trails against the election night totals 
from the machine. The results are posted on the secretary of state's 
Web site. 

For many states, the results of postelection audits could have 
implications for subsequent voting system use. Based on our survey, 26 
respondents to this question reported that results from postelection 
audits could result in revocation of approval for their voting systems. 
[Footnote 40] In some instances, differences of a specified magnitude 
between the manual vote count and the system-reported results could 
trigger additional review of the systems as well as system reapproval. 
However, only two states reported that they actually revoked voting 
system approval based on audit results. Further, officials from some 
states told us that they randomly audited selected jurisdictions to 
determine what problems were encountered. State officials from one of 
these states used these audit results to enforce the use of particular 
voting systems and to require manufacturer improvements. Officials in 
another state said that they used their audit results to improve their 
poll-worker training. 

Security Testing: 

Security testing is used to evaluate the effectiveness of implemented 
security measures or controls and to identify, validate, and assess 
security weaknesses so that they can be addressed. Such testing should 
be one component of an overall security program that also includes 
assigned security responsibilities, risk assessment, system 
requirements, planning, policies, and procedures. 

EAC's guidance for voting system security includes software security, 
password maintenance, personnel security, and physical security during 
system storage, transport, and at the polling place or operations 
center.[Footnote 41] In addition, NIST has drafted guidance for 
planning and conducting such testing, analyzing findings, and 
developing mitigation strategies.[Footnote 42] It recommends that 
security testing be based on an explicit security testing policy that 
defines roles and responsibilities, an established methodology, 
frequency of testing, and documentation requirements. 

According to our survey, just over one-half of survey respondents (29 
states, 2 territories, and the District) reported that they required 
security testing for the 2006 general election (see fig. 22). Of these 
29 states, 24 indicated that their statutes addressed security 
management. For example, one statute states that the state board of 
elections or its designated independent expert is required to review 
the source code provided by the voting system vendor as a prerequisite 
to state voting system certification. The review is to include: 

"...security, application vulnerability, application code, wireless, 
security, security policies and processes, security/privacy program 
management, technology infrastructure and security controls, security 
organization and governance, and operational effectiveness, as 
applicable to that voting system." 

Figure 22: Security Testing Requirements Reported by States and Others 
for the 2006 General Election: 

[Refer to PDF for image] 

This figure is a map of the United States indicating security testing 
requirements reported by states and others for the 2006 general 
election, as follows: 

Not required: 
Alabama: 
Alaska: 
American Samoa: 
Arkansas: 
California: 
Hawaii: 
Iowa: 
Kentucky: 
Maine: 
Mississippi: 
Nebraska: 
New Mexico: 
North Dakota: 
Oregon: 
Puerto Rico: 
Rhode Island: 
Tennessee: 

Required: 
Arizona: 
Colorado: 
Connecticut: 
Delaware: 
District of Columbia: 
Florida: 
Georgia: 
Guam: 
Idaho: 
Illinois: 
Indiana: 
Kansas: 
Louisiana: 
Maryland: 
Massachusetts: 
Minnesota: 
Missouri: 
Montana: 
Nevada: 
New Hampshire: 
New York: 
North Carolina: 
Oklahoma: 
Pennsylvania: 
South Carolina: 
South Dakota: 
Texas: 
U.S. Virgin Islands:
Virginia: 
Washington: 
West Virginia: 
Wisconsin: 
Wyoming: 

Don't know/no response: 
Michigan: 
New Jersey: 
Ohio: 
Utah: 
Vermont: 

Sources: GAO 2008 survey of state, territory, and the District of 
Columbia election officials; MapArt (map). 

[End of figure] 

In addition, a number of state officials that we interviewed told us 
that security testing was typically combined with other types of 
testing, such as acceptance testing and readiness testing, and thus 
they viewed it as implicitly covered by statutes requiring these tests. 

In addition to the 29 states, 2 territories, and the District that 
reported having requirements, 3 other states reported that they 
performed security testing, even though it was not required by statute 
or regulation. This means that 35 of 52 respondents reported conducting 
security testing for the 2006 general election. 

With respect to the responsibility for security testing, 26 states and 
one territory reported that local jurisdictions were responsible, in 
whole or in part. Of these, 7 states reported that security testing was 
performed at both the local and state levels. According to one 
official, state officials were invited to attend and participate in the 
testing at the local jurisdictions. Officials from one state explained 
that security testing is the responsibility of the county auditor, but 
security audits are performed by the state elections office. Security 
testing was conducted primarily by state officials in only 4 states, 1 
territory, and the District (see fig. 23). 

Figure 23: Responsibilities for Performing Security Testing Reported by 
States and Others for the 2006 General Election: 

[Refer to PDF for image] 

This figure is a vertical bar graph depicting the following data: 

Entities responsible for performing testing: Local jurisdictions[B]: 
Number of respondents: 20. 

Entities responsible for performing testing: State and local 
jurisdictions; 
Number of respondents: 7. 

Entities responsible for performing testing: State[A]; 
Number of respondents: 6. 

Source: GAO 2008 survey and interviews of state, territory, and the 
District of Columbia election officials. 

Note: One state reported only responsibilities for experts or 
consultants and another state reported responsibilities for an 
independent testing authority. 

[A] Includes response from one territory. 

[B] Includes responses from one territory and the District. 

[End of figure] 

Vendors, consultants, contractors, or other entities participated in 
security testing for 10 states and 2 territories in the 2006 election. 
In these cases, these entities typically shared responsibility for 
performing the test with either state or local election officials, or 
both. For example, officials in one state told us that state and local 
election officials walked through the security test with the 
consultant. 

Based on our interviews with officials in the 32 states, 2 territories, 
and the District that reported performing security testing for the 2006 
general election, the timing, scope, and activities for security 
testing were quite diverse. In particular, several states and 
territories focused on assessing the physical security of their systems 
and the facilities in which they were stored. For example, officials 
from one territory told us that they conducted "seal testing" to 
determine whether physical seals on voting equipment had been broken. 
Other states' security testing included more technical testing. For 
example, one state conducted system penetration testing and source code 
reviews to identify vulnerabilities. Officials from other states told 
us that they conducted system risk assessments. One state had 
documented policies and procedures governing its security tests, 
including state requirements for a security plan and security risk 
assessments for its voting systems. The state assessed risks to its 
systems in storage, in transit, and at the polling place. Another state 
used a third-party contractor to 
perform a risk assessment, which included evaluations of threats, 
vulnerabilities, security controls, and risks associated with the 
state's voting systems and possible impacts to the integrity of its 
elections process. In this case, the state's board of elections used 
the assessment results to develop a formal system security plan, 
policies, and procedures; establish a formal security training program 
for all election officials and contractor personnel; and establish a 
security officer position on the state board of elections. 

States, One Territory, and the District Face Various Testing Challenges 
and Have Adopted Approaches to Address Them: 

States, territories, and the District reported experiencing all eight 
of our survey's testing-related challenges relative to their voting 
systems for the 2006 general election and described approaches for 
addressing them. These challenges were viewed by most respondents as 
minor in nature, although three were characterized as major by 5 or 
more respondents. 

The eight challenges can be grouped into three categories: (1) 
sufficiency of testing resources, (2) timeliness and thoroughness of 
testing execution, and (3) utilizing information from stakeholders (see 
fig. 24). Officials with some of the states, territories, and the 
District that we interviewed told us that these challenges remain for 
the 2008 general election. To address these challenges, state and other 
officials have begun to collaborate with one another to leverage their 
combined knowledge and skills, and thereby maximize their limited 
election resources and respective testing efforts. 

Figure 24: Testing Challenges Reported by States and Others for the 
2006 General Election: 

[Refer to PDF for image] 

This figure is a stacked vertical bar graph depicting the following 
data: 

Challenges: Resources: Having sufficient staff; 
Number of respondents, Major: 7; 
Number of respondents, Minor: 20. 

Challenges: Resources: Having sufficient training to perform testing 
activities; 
Number of respondents, Major: 3; 
Number of respondents, Minor: 19. 

Challenges: Resources: Having sufficient funding; 
Number of respondents, Major: 5; 
Number of respondents, Minor: 12. 

Challenges: Execution: Resolving problems in time for election; 
Number of respondents, Major: 4; 
Number of respondents, Minor: 19. 

Challenges: Execution: Completing testing prior to the election; 
Number of respondents, Major: 5; 
Number of respondents, Minor: 16. 

Challenges: Execution: Defining appropriate tests and data sets; 
Number of respondents, Major: 1; 
Number of respondents, Minor: 12. 

Challenges: Execution: Ensuring voters' concerns had been considered; 
Number of respondents, Major: 1; 
Number of respondents, Minor: 14. 

Challenges: Stakeholders: Getting information and/or access to testing 
information or results from other states/jurisdictions; 
Number of respondents, Major: 0; 
Number of respondents, Minor: 5. 

Source: GAO 2008 survey and interviews of state, territory, and the 
District of Columbia election officials. 

[End of figure] 

In addition to the eight challenges cited by survey respondents, 
election officials in a number of states and one territory identified 
an additional challenge that cuts across the three categories--managing 
voting system testing in a changing environment. 

Resource-Related Challenges: 

Of the resource-related challenges, the one most frequently reported 
for the 2006 general election, as well as the one most frequently 
viewed as being a major challenge, was having sufficient testing staff. 
Specifically, 25 states, a territory, and the District reported it as a 
challenge and, of these, 6 states and the territory saw it as a major 
challenge. According to election officials in some of these states, 
staffing shortfalls exist in both state and local election offices. 
They attributed the shortfalls to such factors as election officials 
being assigned collateral duties and election-related positions 
frequently being part-time. To address these shortfalls, officials from 
one state said that they have started offering flexible work schedules 
and shorter shifts to attract election workers. Others said that they 
addressed staffing shortfalls in local jurisdictions by sending state 
staff to assist the jurisdictions and ensuring that the staff they 
hired could fulfill the travel requirements. 

Another frequently cited challenge, reported by 20 states, 1 territory, 
and the District, was having sufficient training for the staff who 
perform testing activities. Moreover, 2 of these states and the territory 
considered this to be a major challenge. According to election 
officials in some of these states, some part-time election staff, 
particularly those in the local jurisdictions, do not have the 
technical expertise to conduct voting system testing. To fulfill their 
staff training needs, state officials described several approaches that 
they are taking. These include: 

* utilizing more experienced staff to fill knowledge gaps (e.g., 
recruiting former election officials with testing experience and hiring 
new staff with the requisite computer expertise); 

* establishing training requirements and programs for election workers 
that include a component on testing (e.g., one statewide training 
program for election officials included certification requirements to 
become an election official); and: 

* making technical training programs available to election workers 
throughout the year. 

The third resource-related challenge was having sufficient funding to 
conduct voting system testing (identified by 16 states and 1 
territory). Of these, almost one-third considered it a major challenge. 
Election officials that we interviewed related this challenge to the 
other two resource challenges by saying that election funding 
shortfalls have, in turn, limited the number of staff available for 
testing and the training that these staff members receive. According to 
one of these officials, they have attempted to address the challenge by 
persuading state executives to increase funding for voting system 
testing programs. Officials from one state said that they were planning 
to reduce testing costs by training state officials to program the 
voting systems, rather than relying on vendor technicians. 

Timely and Thorough Execution of Testing: 

Another category for survey responses on testing challenges is ensuring 
that testing is completed on time and is executed using appropriately 
defined tests and related data. Overall, 26 states, 1 territory, and 
the District reported facing at least one of the three challenges in 
this category for the 2006 general election. Of these, 19 states and 
the District reported two or more challenges and 9 states reported all 
three. 

The most-reported challenge for this category, and the second most 
reported challenge overall, was resolving problems discovered during 
testing in time for the election. This challenge was cited by 21 
states, 1 territory, and the District, and was considered by 4 states 
to be a major challenge. Officials from one state attributed this 
challenge to the small number of voting system vendors relative to the 
large number of states that required support, thus making it difficult 
to get vendor support to resolve problems with ballot programming that 
were discovered during readiness testing. To overcome this challenge, 
the officials told us that they are considering developing their own in-
house ballot programming capabilities to reduce vendor dependence. 

Another challenge in this category that was cited by 20 states and the 
District, and that was viewed by 5 as major, was completing testing in 
time for an election. According to state officials, the following 
factors contributed to this challenge: 

* Preparation for testing before Election Day depends on key events 
that occur very close to Election Day by statute, code, or regulation 
(e.g. two states reported a small window of time to certify candidates, 
finalize ballot information, print ballots, and conduct readiness 
testing before the election). 

* Delays with certain vendor deliverables reduced the time available to 
states and local jurisdictions for testing (e.g. a number of state 
officials reported delayed ballot definitions, test execution, or test 
results reporting). 

* The time needed to perform the work associated with test preparation 
and execution, and mismatches between this workload and available 
testing resources (e.g., officials from one state said that producing 
the test ballots for optical scan machines was time-consuming and 
labor-intensive; officials with another state described the large 
amount of time and resources required to complete acceptance testing 
for its 400 voting machines). 

To ensure that testing deadlines were met or to reduce the impacts of 
missed deadlines, state election officials that we interviewed 
identified several approaches. For example, officials from one state 
told us that they increased the staff available to support testing 
during intensive testing periods; in another of these cases, the 
secretary of state requested that local jurisdictions supply the 
additional staff for the testing. We were also told that the state 
legislature amended the law to permit testing to begin earlier, thereby 
providing sufficient time to complete all testing activities before the 
election. Officials in another state said that they were developing an 
election management plan and a continuity of operations plan to better 
manage unexpected events that may surface during the election, for 
example, should pre-election testing not be completed on time. 

Another challenge cited was defining tests and datasets. In particular, 
about one-quarter of survey respondents saw this as a challenge, with 
one state reporting it as a major challenge. According to state 
officials, they experienced difficulty in selecting an appropriate 
sample size and ballot permutations to test. For example, officials in 
one state said that they were unsure what test would be appropriate for 
their ballot marking devices, whether it was necessary to test each 
ballot face (even those that may not occur within the precinct), and 
how many test ballots should be produced. Officials in another state 
said that it was a minor challenge to get their local jurisdictions to 
define appropriate test decks[Footnote 43] without relying on the 
voting system vendor, which was the prior practice. To address this 
challenge, officials with a number of states told us that they provided 
direction or guidance for defining tests and data, including sometimes 
providing the actual test scenarios and data. In one particular case, a 
state election board had adopted detailed testing procedures and 
checklists when the system was originally installed. These procedures 
and checklists have been refined over the years, and they are to be 
used by the local election boards before each election. 

Utilizing Information from Stakeholders: 

The last category of challenges in our survey related to obtaining or 
using information either from voters or from other states, territories, 
and the District to improve elections. With respect to the voter-related 
information, 13 states, a territory, and the District reported that 
considering voter concerns related to testing systems was a challenge 
for the 2006 general election, although only one state viewed it as a 
major challenge. Election officials in one state told us that they were 
unable to address voters' concerns because testing revealed problems 
with reading the ballot coding. An official from another state told us 
that the challenge persists because a portion of the general public 
remains unsatisfied with the testing that the state has in place. 
Similarly, election officials from the territory said that the 
challenge will continue because some people will always oppose 
electronic voting. 

State election officials described various steps that have been taken 
to alleviate voter concerns. For example, one state included all 
stakeholders in the process it followed to select voting systems for 
use in elections. In addition, several other states required public 
notification of the time and place of voting system tests, and some 
states require their readiness tests to be open to the public. However, 
this approach actually introduced an additional challenge for one state 
because members of the public were not always interested in 
participating as test observers. 

With respect to the challenge of obtaining usable testing information 
from jurisdictions, other states, territories, and the District, 5 
states reported that this was a challenge and each of these states 
viewed the challenge as minor. According to officials in one state, 
their specific challenge was obtaining the information needed to verify 
whether local jurisdictions were actually performing testing as 
required. In another state, election officials said that county 
representatives were not always forthcoming with information about 
their testing activities. To address these information access 
challenges, state officials described steps taken to promote state-to- 
state and state-to-jurisdiction information sharing. Examples include 
the following: 

* A number of states sponsored various meetings (e.g. best practice 
conferences, annual or quarterly meetings, regional meetings, working 
groups, and user groups) with local jurisdictions to share testing- 
related information. 

* One state held frequent phone communications with other states and 
local jurisdictions to discuss, for example, testing protocols and 
results. 

* Several states established statewide Web sites, message boards, or 
automated e-mail lists for communication and interaction among local 
jurisdictions, and between the state and these jurisdictions. 

* One state had a board of election commissioners with representatives 
from each of five county districts to actively share and solicit 
election-related information, including voting system information. 

Changing Voting System Testing Environment: 

In addition to the challenges explicitly identified in our survey, 
officials with a number of states raised another common challenge-- 
managing voting system testing in a changing environment. In this 
regard, officials primarily pointed to changes in response to legal 
actions, changes to statutory and administrative testing requirements, 
and changes to voting system technologies and products as contributing 
to this challenge. The following examples illustrate the challenge: 

* In one case, a state official said that potential changes to existing 
election law would be a challenge to implement. In another case, a 
territory official said that development of federal requirements for 
voter-verifiable paper audit trails could, in turn, require changes to 
existing testing procedures. The official said that the extent of the 
changes is not yet known because EAC has yet to establish the 
requirements. Officials in some states did not view this as a 
challenge, however, because they had already incorporated voter- 
verified paper audit trail testing into their testing procedures when 
this capability was first added to their voting equipment. They also 
said that they have adjusted their statutory requirements to reflect 
this. 

States, Territories, and the District Generally Reported Minor Voting 
System Problems, Diverse Responses, and Challenges in Addressing Them: 

Most states, territories, and the District reported experiencing a 
range of problems with their voting systems during the 2006 general 
election. While the prevalence and impact of the problems varied, 
survey respondents generally characterized the problems as occurring to 
a little extent and with little impact. Examples of the most frequently 
reported problems are systems where paper jammed or was improperly fed 
or imprinted, systems that stopped operating during the election, 
systems that would not operate at all, systems with slow response time, 
and systems that inaccurately reported vote totals. 

The extent of the respondents' awareness of system problems is unclear 
because not all of them had statutory or administrative requirements 
for local jurisdictions or others to report problems. 
Rather, officials we interviewed told us that they relied on local 
jurisdictions, voters, and voting system vendors to voluntarily report 
problems. A majority of states and others reported that they evaluated 
the problems after the election, although their approaches varied. The 
most frequently cited approach was reviewing system logs and reports. 
Other approaches included audits, investigations, recounts, and 
retesting of voting systems. Election officials reported that 
responsibility for evaluating problems rested either with state 
officials or local jurisdictions, but that responsibility was rarely 
shared. They also reported that actions to correct problems were 
largely the responsibility of both state and local officials, although 
voting system vendors were another significant participant. Many 
respondents reported they took corrective action by developing and 
implementing new policies and procedures. 

Almost half of the states and the District reported facing multiple 
challenges related to problem assessment and the implementation of 
corrective actions in managing the voting system problems that arose 
during the 2006 general election. The two most widely reported 
challenges were determining the cause of errors or malfunctions, and 
identifying, evaluating, and selecting corrective actions. A handful of 
respondents that reported experiencing challenges indicated that all 
nine of the categories in our survey applied to them. However, about 
half of the states and the territories indicated that they either did 
not experience any of the challenges or that the categories did not 
apply to their election environments. Officials from states that did 
report experiencing challenges described steps they took to respond to 
these, including sharing information on problems among election 
officials. 

Types and Prevalence of Voting System Problems Varied among States and 
Others for the 2006 General Election: 

Of the 52 survey respondents, 38 states, 1 territory, and the District 
reported one or more types of problems for the 2006 general election. 
The three most common problems, as identified by about one-half of the 
respondents or more, were paper that jammed or was improperly fed or 
imprinted in voting equipment, systems that stopped operating during 
the election, and systems that would not operate at all. About one- 
fifth of the respondents cited slow system response time on Election 
Day as a problem. Three types of problems related to voting accuracy 
were experienced by fewer than a dozen respondents--voter ballot 
selections not recorded, votes incorrectly credited to candidates or 
measures, and votes tabulated incorrectly. Also, less than 10 
respondents reported problems with ballot displays, system interactive 
functions that assist voters in casting votes, or recording a system 
audit trail (see fig. 25). 

Figure 25: Voting System Problems Reported by States and Others for the 
2006 General Election: 

[Refer to PDF for image] 

This figure is a vertical bar graph depicting the following data: 

Type of voting system problem: Paper jammed, or was improperly fed or 
imprinted; 
Number of respondents: 35. 

Type of voting system problem: System stopped operating during the 
election; 
Number of respondents: 31. 

Type of voting system problem: System would not operate at all; 
Number of respondents: 25. 

Type of voting system problem: Slow system response time; 
Number of respondents: 12. 

Type of voting system problem: Votes not tabulated correctly; 
Number of respondents: 10. 

Type of voting system problem: Ballot not displayed correctly; 
Number of respondents: 7. 

Type of voting system problem: Vote not recorded at all; 
Number of respondents: 6. 

Type of voting system problem: Vote not recorded correctly; 
Number of respondents: 6. 

Type of voting system problem: Interactive features malfunctioned; 
Number of respondents: 5. 

Type of voting system problem: Audit trail not recorded; 
Number of respondents: 5. 

Source: GAO 2008 survey and interviews of state, territory, and the 
District of Columbia election officials. 

[End of figure] 

Most states and others that reported experiencing problems also 
reported that each type of problem was experienced to only a little 
extent, with a few exceptions.[Footnote 44] Specifically, of the ten 
types of problems, only five types were reported as being experienced 
to a moderate extent, and three to a great extent (see fig. 26). 
Furthermore, 11 respondents experienced a given problem type to either 
a moderate or great extent.[Footnote 45] The problem that was most 
frequently experienced to at least a moderate extent was also the most- 
experienced problem overall--paper-related jams, feeding, or imprinting 
problems. Three other problems were experienced to a moderate or great 
extent by more than one respondent--systems that would not operate, 
slow system response time, and systems that stopped operating. For 
example, officials in one state told us that their systems stopped 
operating in the middle of the election in high population areas. 
Officials in another state told us that their systems could not 
transmit totals from polling places to tabulation centers for 
tabulation of election results. 

Figure 26: Extent of Voting System Problems Reported by States and 
Others for the 2006 General Election by Problem Type: 

[Refer to PDF for image] 

This figure is a stacked vertical bar graph depicting the following 
data: 

Type of voting system problem: Paper jammed, or was improperly fed or 
imprinted; 
Number of respondents, problem occurred to a great extent: 0; 
Number of respondents, problem occurred to a moderate extent: 7; 
Number of respondents, problem occurred to a little extent: 28. 

Type of voting system problem: System stopped operating during the 
election; 
Number of respondents, problem occurred to a great extent: 1; 
Number of respondents, problem occurred to a moderate extent: 3; 
Number of respondents, problem occurred to a little extent: 27. 

Type of voting system problem: System would not operate at all; 
Number of respondents, problem occurred to a great extent: 1; 
Number of respondents, problem occurred to a moderate extent: 1; 
Number of respondents, problem occurred to a little extent: 23. 

Type of voting system problem: Slow system response time; 
Number of respondents, problem occurred to a great extent: 1; 
Number of respondents, problem occurred to a moderate extent: 2; 
Number of respondents, problem occurred to a little extent: 9. 

Type of voting system problem: Votes not tabulated correctly; 
Number of respondents, problem occurred to a great extent: 0; 
Number of respondents, problem occurred to a moderate extent: 1; 
Number of respondents, problem occurred to a little extent: 9. 

Type of voting system problem: Ballot not displayed correctly; 
Number of respondents, problem occurred to a great extent: 0; 
Number of respondents, problem occurred to a moderate extent: 0; 
Number of respondents, problem occurred to a little extent: 7. 

Type of voting system problem: Vote not recorded at all; 
Number of respondents, problem occurred to a great extent: 0; 
Number of respondents, problem occurred to a moderate extent: 0; 
Number of respondents, problem occurred to a little extent: 6. 

Type of voting system problem: Vote not recorded correctly; 
Number of respondents, problem occurred to a great extent: 0; 
Number of respondents, problem occurred to a moderate extent: 0; 
Number of respondents, problem occurred to a little extent: 6. 

Type of voting system problem: Interactive features malfunctioned; 
Number of respondents, problem occurred to a great extent: 0; 
Number of respondents, problem occurred to a moderate extent: 0; 
Number of respondents, problem occurred to a little extent: 5. 

Type of voting system problem: Audit trail not recorded; 
Number of respondents, problem occurred to a great extent: 0; 
Number of respondents, problem occurred to a moderate extent: 0; 
Number of respondents, problem occurred to a little extent: 5. 

Source: GAO 2008 survey and interviews of state, territory, and the 
District of Columbia election officials. 

[End of figure] 

Looking across the 40 respondents that reported experiencing any type 
of voting system problems, approximately two-thirds (28) only 
experienced problems to a little extent, while almost one-third (12) 
reported instances of problems that occurred to a moderate or great 
extent (see fig. 27). 

Figure 27: Extent of Voting System Problems Reported by States and 
Others for the 2006 General Election: 

[Refer to PDF for image] 

This figure is a pie-chart depicting the following data: 

Extent of Voting System Problems Reported by States and Others for the 
2006 General Election: 
To a little extent: 28; 
To a moderate/great extent[A]: 12; 
Not applicable, don't know or no data provided: 8; 
To no extent: 4. 

Source: GAO 2008 survey and interviews of state, territory, and the 
District of Columbia election officials. 

[A] All states that reported problems that occurred to a moderate or 
great extent also reported problems that occurred to a little extent. 
These states are only shown in the moderate/great category. 

[End of figure] 

States, Territories, and the District Largely Relied on Others to 
Report, Evaluate, and Correct Election Day Voting System Problems 
through a Variety of Approaches: 

The majority of survey respondents reported having either statutory or 
administrative requirements for local jurisdictions to report voting 
system problems during the 2006 general election. Many of these 
respondents also reported having policies and procedures in place to 
ensure that the requirements would be met. However, the problem- 
reporting requirements, policies, and procedures described by election 
officials varied as to their scope and detail. Whether required or not, 
the majority of states and one territory reported receiving information 
on voting system problems from local jurisdictions, and many also 
received reports from voting system vendors. While most states also 
reported obtaining information on voting system problems from voters, 
state and territory officials that we interviewed told us that these 
problems were not always about specific system errors or malfunctions. 
About one-half of respondents also participated in evaluations of 
problems, and many of these collaborated with local jurisdictions in 
doing so. 

States and Territories Employed a Variety of Problem-Reporting 
Requirements, Policies, and Procedures: 

Based on survey responses, 27 states and 3 territories required local 
jurisdictions to report voting system errors or malfunctions that 
occurred during the 2006 general election; 16 states, 1 territory, and 
the District did not (see fig. 28).[Footnote 46] Further, many of the 
30 respondents that did require reporting also indicated that they had 
policies and procedures to guide the reporting. 

Figure 28: Voting System Problem Reporting Requirements Reported by 
States and Others for the 2006 General Election: 

[Refer to PDF for image] 

This figure is a map of the United States depicting voting system 
problem reporting requirements reported by states and others for the 
2006 general election, as follows: 

Did not require: 
Arkansas: 
Arizona: 
Connecticut: 
District of Columbia: 
Guam: 
Iowa: 
Kansas: 
Kentucky: 
Minnesota: 
Mississippi: 
Missouri: 
Montana: 
New Hampshire: 
Oregon:
Tennessee: 
Vermont: 
Washington: 
Wisconsin: 

Required: 
Alaska: 
California: 
Colorado: 
Delaware: 
Hawaii: 
Idaho: 
Illinois: 
Indiana: 
Louisiana: 
Maine: 
Maryland: 
Massachusetts: 
Nebraska: 
Nevada: 
New Jersey: 
New Mexico: 
New York: 
North Carolina: 
North Dakota: 
Oklahoma: 
Pennsylvania: 
Puerto Rico: 
Rhode Island: 
South Carolina: 
South Dakota: 
Texas: 
U.S. Virgin Islands: 
Virginia: 
West Virginia: 
Wyoming: 

Don't know/no response: 
Alabama: 
Florida: 
Georgia: 
Michigan: 
Ohio: 
Utah: 

Sources: GAO 2008 survey of state, territory and the District of 
Columbia election officials; MapArt (map). 

[End of figure] 

Of the 30 states and territories that had reporting requirements for 
voting system problems, 8 states provided statutes detailing these 
reporting requirements. Our review of these requirements found 
variation in their scope and specificity. In general, they range from a 
basic reporting obligation for local jurisdictions to detailed 
reporting responsibilities, data items, and procedures directed at 
several levels of election administration, up to and including state 
election officials. For instance, 

"...the county auditor shall receive and handle complaints...by any 
voter or precinct official involving...irregularities of any kind in 
voting. The county auditor shall refer complaints to the secretary of 
state or the proper prosecuting authority, as the county auditor deems 
appropriate." 

"The precinct election officials shall immediately cease using any 
malfunctioning voting equipment and report the problem to the 
commissioner...The commissioner shall keep a written record of all 
known malfunctions and their resolution." 

"Each county clerk shall collect the following information regarding 
each primary and general election, on a form provided by the Secretary 
of State and made available at each polling place in the county, each 
polling place for early voting in the county, the office of the county 
clerk and any other location deemed appropriate by the Secretary of 
State: 

A report on each malfunction of any mechanical voting system, 
including, without limitation: 

(1) Any known reason for the malfunction; 

(2) The length of time during which the mechanical voting system could 
not be used; 

(3) Any remedy for the malfunction which was used at the time of the 
malfunction; and: 

(4) Any effect the malfunction had on the election process." 

In addition, the reporting requirements in statutes and directives are 
not limited to local jurisdictions. For example, 2 states require 
voting system vendors to notify state officials of defects or 
malfunctions with their systems: 

"The vendor shall promptly notify the State Board of Elections and the 
county board of elections of any county using its voting system of any 
decertification of the same system in any state, of any defect in the 
same system known to have occurred anywhere, and of any relevant defect 
known to have occurred in similar systems." 

"A vendor (or the political subdivision, if no private vendor supports 
their system) must give notice to the Secretary of State within 24 
hours of a malfunction of its voting system software or equipment in an 
election held in this state. … the Secretary of State shall determine 
whether further information on the malfunction is required. At the 
request of the Secretary of State, a vendor … must submit a report … 
detailing the reprogramming (or any other actions) necessary to redress 
a voting system malfunction...Failure to submit a report within the 
required period shall be grounds to decertify the system." 

To govern the collection and documentation of voting system problems 
for the 2006 general election, more than half of respondents (26 states 
and 2 territories) reported having policies and procedures for problem 
reporting; 18 states and 2 territories did not.[Footnote 47] Most 
states and both of the territories that reported having such policies 
and procedures also reported state-level reporting requirements (20 
states and 2 territories). In addition, 6 states had problem-reporting 
policies and procedures, even though they did not have statutory or 
administrative reporting requirements. 

State election officials that we interviewed described a range of 
policies and procedures governing how they implemented statutory and 
administrative problem-reporting requirements. For example, officials 
with 10 states and the 2 territories told us that they maintained a log 
of the calls received about election-related system malfunctions. In 
addition, officials for 7 states, 1 territory, and the District stated 
that they either maintained voting system problem reports in a file or 
were developing a database for this information. 

Local Jurisdictions, Voters, and Voting System Vendors Were the Most 
Common Sources for Reports of Voting System Problems: 

Three sources of information on voting system problems during the 2006 
general election were most frequently cited by respondents--local 
jurisdictions, voters, and vendors. Specifically, 40 states and 1 
territory reported receiving information on problems from their local 
jurisdictions. State officials told us that they received this 
information in various forms, ranging from phone calls to formal 
written reports. In addition, 37 states, 2 territories, and the 
District reported being notified of election-related problems directly 
by voters; however, several officials that we interviewed stated that 
voter-reported problems usually did not pertain to voting system errors 
or malfunctions, but rather to election processes, such as voter 
registration and polling place operations. Voting system vendors also 
reported problems, according to responses from 23 states, 2 
territories, and the District (see fig. 29). Several states used the 
term "remote monitoring" to refer to the calls that they received about 
problems from local jurisdictions, voters, and vendors. 

Figure 29: Sources of Information on Voting System Problems Reported by 
States and Others for the 2006 General Election: 

[Refer to PDF for image] 

This figure is a vertical bar graph depicting the following data: 

Sources of information on voting system problems: Local jurisdictions; 
Number of respondents: 41. 

Sources of information on voting system problems: Voters; 
Number of respondents: 40. 

Sources of information on voting system problems: Vendors; 
Number of respondents: 26. 

Sources of information on voting system problems: HAVA complaint 
procedures; 
Number of respondents: 18. 

Sources of information on voting system problems: Other states, the 
EAC, or other federal entities; 
Number of respondents: 7. 

Source: GAO 2008 survey of state, territory, and the District of 
Columbia election officials. 

[End of figure] 

Two less-frequently cited sources of voting system problems in 2006 
were HAVA administrative complaint procedures (18 respondents) and 
other government entities (7 respondents).[Footnote 48] According to 
HAVA, the complaint procedures are to be the mechanism for reporting 
deficiencies in meeting the act's voting system standards requirements 
(e.g., voter verification of ballot selections, voter changes to and 
correction of ballot selections), as well as other HAVA 
provisions.[Footnote 49] Notwithstanding the fact that 16 states, 1 
territory, and the District cited the complaint procedures as a source 
of voting system problems, several state officials that we interviewed 
said that few of the HAVA complaints that they received could actually 
be linked to a voting system malfunction or error. 

Evaluation and Correction of Voting System Problems Was Primarily 
Conducted by Either Local Jurisdictions or States: 

The majority of survey respondents reported conducting one or more 
types of evaluations of voting system problems for the 2006 general 
election to gain additional information on voting system errors or 
malfunctions. Of the five types of evaluations referred to in our 
survey--investigations, audits, recounts, system retests, and reviews 
of system logs or reports--39 respondents conducted one or more of 
them, while 13 did not. Of the 39, 8 states and 1 territory conducted 
only one type, while the remaining 30 respondents conducted two or more 
types. Overall, the most widely-performed type was reviewing system 
logs and reports (25 states, 2 territories, and the District), followed 
in order by audits, investigations, recounts, and retests (see fig. 
30). 

Figure 30: Actions Taken in Evaluating Voting System Problems as 
Reported by States and Others for the 2006 General Election: 

[Refer to PDF for image] 

This figure is a vertical bar graph depicting the following data: 

Evaluation activity: Reviewed voting system logs or reports; 
Number of respondents: 28. 

Evaluation activity: Conducted audits; 
Number of respondents: 23. 

Evaluation activity: Conducted investigations; 
Number of respondents: 21. 

Evaluation activity: Conducted recounts; 
Number of respondents: 19. 

Evaluation activity: Retested voting systems; 
Number of respondents: 17. 

Source: GAO survey of state, territory, and the District of Columbia 
election officials and analysis of responses. 

[End of figure] 

State officials also reported that for the 2006 general election these 
evaluations were more commonly conducted by either local jurisdictions 
or by the states, rather than by both. This was true across all but one 
type of evaluation. Overall, local jurisdictions were solely involved 
in conducting the evaluations in 13 states; state officials were solely 
involved in 8 states. Another 12 states reported at least one 
evaluation that involved both local and state officials.[Footnote 50] 
With respect to individual types of evaluation, most were again 
conducted either by local officials or by state officials. For 
instance, reviewing voting system logs or reports was more often the 
responsibility of state-level officials (13 respondents) than local 
jurisdictions (10 respondents); only 5 states involved both 
organizations (see fig. 31). 

Figure 31: Participation by States and Local Jurisdictions in Problem 
Evaluation Activities as Reported by States and Others for the 2006 
General Election: 

[Refer to PDF for image] 

This figure is a multiple vertical bar graph depicting the following 
data: 

Evaluation activity: Reviewed voting system logs or reports; 
Number of respondents, Local jurisdictions only: 10; 
Number of respondents, Both state and local jurisdictions: 5; 
Number of respondents, States only[A]: 13. 

Evaluation activity: Conducted audits; 
Number of respondents, Local jurisdictions only: 10; 
Number of respondents, Both state and local jurisdictions: 3; 
Number of respondents, States only[A]: 10. 

Evaluation activity: Conducted investigations; 
Number of respondents, Local jurisdictions only: 7; 
Number of respondents, Both state and local jurisdictions: 6; 
Number of respondents, States only[A]: 8. 

Evaluation activity: Conducted recounts; 
Number of respondents, Local jurisdictions only: 12; 
Number of respondents, Both state and local jurisdictions: 5; 
Number of respondents, States only[A]: 2. 

Evaluation activity: Retested voting systems; 
Number of respondents, Local jurisdictions only: 7; 
Number of respondents, Both state and local jurisdictions: 4; 
Number of respondents, States only[A]: 6. 

Source: GAO 2008 survey of state, territory, and the District of 
Columbia election officials and analysis of responses. 

[A] Includes territories and the District, where applicable. 

[End of figure] 

According to officials from 2 states, they are involved in evaluations 
of problems because they have either experienced substantial voting 
system problems in a past election or have always participated directly 
in such activities as a matter of policy and practice. For example, 
officials for one of these states told us that because of the problems 
they experienced in 2004, they now have vendor-trained government 
technicians who are deployed to polling places to directly assist in 
evaluating system malfunctions. This state has also installed an 
automated system at polling places to provide direct communications 
between state officials and poll workers about system problems. 
Officials for one territory told us that an election board composed of 
elected representatives from each of its local jurisdictions was 
responsible for establishing policies and managing activities related 
to voting system problems. In this way, local jurisdictions and 
territory officials have worked together to evaluate and address 
problems that occurred. In contrast, officials in another state told us 
that they did not see a role for the state in evaluating voting system 
performance during an election. In another state, officials told us 
that state participation in problem evaluation and resolution was not 
necessary because the problems that local jurisdictions had encountered 
in the 2006 election did not require much evaluation and were easy to 
remedy. 

For the 2006 general election, the majority of states and territories, 
and the District, reported taking action to correct reported problems. 
Many reported developing and implementing new policies and procedures 
(15 states, 1 territory, and the District). In this regard, state 
officials that we interviewed said that these policies and procedures 
related to, among other things, 

* voting system operations, 

* logic and accuracy testing, and 

* problem prevention and correction. 

Several states also reported that they addressed their problems by 
changing their voting method (7 states), and a few added a paper-based 
audit trail to their systems (2 states). A few states required their 
systems to be reapproved, while another state fined its vendor when a 
system failed to meet state requirements. 

According to survey responses, state and local election officials were 
equally involved in implementing actions to correct reported problems 
(23 respondents apiece). In addition, state executives, such as the 
secretary of state, were frequently involved (19 respondents), as were 
voting system vendors (16 respondents). Several states also included 
experts or consultants in these activities (see fig. 32). 

Figure 32: Responsibilities for Corrective Actions to Address Voting 
System Problems as Reported by States and Others for the 2006 General 
Election: 

[Refer to PDF for image] 

This figure is a vertical bar graph depicting the following data: 

Responsibility for corrective action: State election personnel; 
Number of respondents: 23. 

Responsibility for corrective action: Local counties or jurisdictions; 
Number of respondents: 23. 

Responsibility for corrective action: State executives (e.g., Secretary 
of State); 
Number of respondents: 19. 

Responsibility for corrective action: Vendors or manufacturers; 
Number of respondents: 16. 

Responsibility for corrective action: Consultants or subject matter 
experts; 
Number of respondents: 5. 

Responsibility for corrective action: State attorney general or 
equivalent; 
Number of respondents: 1. 

Source: GAO 2008 survey of state, territory, and the District of 
Columbia election officials. 

[End of figure] 

Information from election officials we interviewed about who was 
involved in correcting reported problems was consistent with these 
survey responses. According to officials from 22 states, 1 territory, 
and the District, they typically provided technical assistance to local 
jurisdictions in resolving problems, and 11 said that they also 
established resolution teams consisting of state and local election 
officials and vendor representatives. In addition, officials from 18 
states told us that vendors are the primary parties involved in 
resolving problems; officials from 4 states added that they also hired 
consultants to obtain technical expertise independent of the vendor, 
for instance, when retesting equipment that malfunctioned in an 
election. Officials for a few states told us that they have entered 
into relationships with academic institutions to support resolution of 
voting system problems, and state studies indicated similar 
collaboration. For example, officials in one state told us that they 
rely on an election center at their state university to assist in 
overseeing voting system performance activities across the state. 

Overall, information sharing between a state that experienced voting 
system problems and other voting system stakeholders occurred much more 
frequently with vendors or local jurisdictions than it did with other 
states or EAC. Specifically, of the 40 respondents that reported 
experiencing voting system problems in the 2006 general election, 24 
(23 states and 1 territory) indicated that they communicated with 
vendors about the problems, while 19 states indicated that they 
communicated with their local jurisdictions. In contrast, only 8 
respondents indicated that they communicated with other states, and 2 
communicated with EAC (see fig. 33). 

Figure 33: Recipients of Communications about Voting System Problems as 
Reported by States and Others for the 2006 General Election: 

[Refer to PDF for image] 

This figure is a vertical bar graph depicting the following data: 

Recipient of problem information sharing: Vendor; 
Number of respondents: 24. 

Recipient of problem information sharing: Local jurisdiction; 
Number of respondents: 19. 

Recipient of problem information sharing: Other states; 
Number of respondents: 8. 

Recipient of problem information sharing: EAC; 
Number of respondents: 2. 

Source: GAO 2008 survey of state, territory, and the District of 
Columbia election officials. 

[End of figure] 

Based on survey responses and interviews with state election officials, 
communication with other organizations about voting system problems has 
been influenced by a number of factors, some of which are related to the 
challenges discussed in the following section. These factors include 
competing demands on a limited number of election staff at both the 
local and state levels; limitations of the information collected about 
problems (e.g., typically anecdotal and after-the-fact or of limited 
usefulness); the diversity of voting system environments among states; 
and the focus on obtaining timely and accurate election results rather 
than real-time sharing of problems. 

States and the District Reported Many Challenges in Addressing Voting 
System Problems, As Well As Approaches to Overcoming Them: 

Almost half of the survey respondents reported facing a range of 
challenges in evaluating and correcting voting system problems for the 
2006 general election.[Footnote 51] The most frequently cited 
challenges were determining the causes of problems and identifying, 
evaluating, and selecting corrective actions. Other reported challenges 
included having sufficient human resources and funds to implement 
corrective actions. All of the respondents that 
reported facing a challenge, except for one, also reported that they 
experienced more than one challenge; a handful of respondents 
experienced virtually all nine challenges identified in our survey. To 
overcome these challenges, officials with the states and the District 
described a number of actions that they have taken. 

A Number of States and the District Experienced a Range of Challenges 
in Assessing and Correcting Voting System Problems: 

Of the 52 survey respondents, 21 states and the District reported that 
they faced at least one of the nine challenges in our survey, which 
fall into two categories. The first category is problem assessment 
challenges. These involve the identification and evaluation of problems 
and are heavily dependent on the availability and quality of 
information. The second category is corrective action challenges. These 
involve the identification of the corrective actions to be implemented 
and the resources required to implement them. The remaining 26 states 
and all 4 territories reported that they either did not experience any 
of the nine challenges, or that they did not see their applicability to 
their voting system environments. 

The two most widely reported challenges were both in the problem 
assessment category--determining the cause of errors or malfunctions 
(18 respondents) and identifying, evaluating, and selecting corrective 
actions (17 respondents). The two least reported challenges were both 
in the corrective action category--having sufficient funding (9 
respondents) and communicating problem resolution progress with 
stakeholders (10 respondents). (See fig. 34.) 

Figure 34: Challenges Reported by States and Others in Addressing 
Voting System Problems for the 2006 General Election: 

[Refer to PDF for image] 

This figure is a stacked vertical bar graph depicting the following 
data: 

Challenges: Problem assessment: Collecting data; 
Number of respondents, Major: 1; 
Number of respondents, Minor: 11. 

Challenges: Problem assessment: Gathering information on circumstances; 
Number of respondents, Major: 1; 
Number of respondents, Minor: 13. 

Challenges: Problem assessment: Determining causes; 
Number of respondents, Major: 1; 
Number of respondents, Minor: 17. 

Challenges: Problem assessment: Identifying, evaluating, and selecting 
corrective actions; 
Number of respondents, Major: 1; 
Number of respondents, Minor: 16. 

Challenges: Problem correction: Executing corrective actions; 
Number of respondents, Major: 1; 
Number of respondents, Minor: 13. 

Challenges: Problem correction: Sufficient staff; 
Number of respondents, Major: 2; 
Number of respondents, Minor: 12. 

Challenges: Problem correction: Sufficient staff training; 
Number of respondents, Major: 0; 
Number of respondents, Minor: 13. 

Challenges: Problem correction: Sufficient funding; 
Number of respondents, Major: 1; 
Number of respondents, Minor: 8. 

Challenges: Problem correction: Communicating progress to stakeholders; 
Number of respondents, Major: 1; 
Number of respondents, Minor: 9. 

Source: GAO 2008 survey of state, territory, and the District of 
Columbia election officials. 

[End of figure] 

Almost all of the 22 respondents that reported facing a challenge also 
indicated that they faced more than one of them. Specifically, 16 
reported that they faced between two and six of the challenges, while 6 
reported that they faced eight or more challenges. In addition, 17 of 
the states that reported facing challenges also reported having 
statutory or administrative requirements for reporting voting system 
problems for the 2006 general election. In contrast, the 15 states that 
reported no challenges or provided "not applicable" responses were also 
states that reported not having problem reporting requirements. No 
relationship was evident between the respondents that cited the most 
types of challenges and the respondents that reported having 
experienced voting system problems to either a moderate or great 
extent. 

During our interviews with state election officials, we obtained 
examples of specific challenges in assessing voting system problems. 
Two are provided below: 

* Collecting data on voting system errors or malfunctions. While 
officials of one state told us that they maintain an electronic county 
incident log to collect extensive information on voting system errors 
and malfunctions occurring in elections, they also said that they do 
not have sufficient staff to ensure the completeness of the logs or to 
use the data for statewide problem management. 

* Determining the cause of errors or malfunctions. State officials also 
told us that much of the information reported by poll workers and 
voters tends to be stated in terms that are not always useful to state 
technical staff in understanding and diagnosing a given problem. 

To overcome the challenges that they face, state officials described a 
number of actions that they have taken. For example: 

* Officials from 2 states said that they have established user groups 
made up of election officials and vendors to increase information 
sharing about identified problems and actions for addressing them. 

* Officials in another state told us that they are sponsoring biannual 
conferences with county election officials in which information about 
election management practices, including problem management, is shared. 

* Officials in another state said that they conduct an annual voting 
system seminar for local jurisdictions that includes separate sessions 
on different kinds of voting systems and problems that have been 
experienced with them. 

* Officials from 2 other states told us that they interact with local 
election officials by phone and e-mail to discuss voting system 
problems and management approaches. 

* Officials from several states also told us that they 
communicated with the nationwide election community by contributing 
studies and reports about their systems that include information about 
identified problems. 

States, Territories, and the District Are Largely Satisfied with 
Federal Voting System Resources and Services, but Their Use Varies: 

States, territories, and the District expressed general satisfaction 
with selected voting system resources developed and made available by 
EAC, namely federal testing and certification of voting systems, 
voluntary voting system guidelines, accredited voting system testing 
laboratories, and election administration and voting system management 
guidance. However, use of these resources varies among states, 
territories, and the District, in part due to the availability, 
applicability, or potential cost of the resources. Although several 
states see federal certification as the foundation for their respective 
voting system approval processes, several states that require federal 
certification as a condition of system approval raised concerns over 
the length of time it takes to complete the federal certification 
process. 

States, territories, and the District have mixed views on the federal 
voluntary guidelines. Some reported satisfaction with the guidelines' 
depth and comprehensiveness, while others were dissatisfied with their 
overly technical content or lack of integration with current state 
certification processes. A few states reported that they have used 
federally accredited testing laboratories to support state-level system 
approval, and those that did reported both challenges and benefits. 
Finally, most states and others reported that they are satisfied with 
guidance developed and published by EAC--such as quick start management 
guides--and they reported making use of the guidance. 

Most States and the District Rely on Some Component of Federal 
Certification, but Time to Obtain Certification Is Affecting Several 
States' 2008 Election Preparations: 

Most states and the District rely on some component of federal 
certification--standards, accredited laboratory testing, or 
certification--to augment state processes for approving new voting 
systems or upgrading existing ones. As called for under HAVA, EAC has 
established a federal certification process for voting systems that 
includes (1) publishing guidelines against which voting systems can be 
evaluated; (2) establishing policies, procedures, and criteria for 
accrediting laboratories to test voting systems; and (3) defining 
policies, procedures, and criteria for testing and certifying voting 
systems. 
According to statutes, regulations, and follow-up with election 
officials from states, territories, and the District, 38 states and the 
District require that their voting systems be either tested to federal 
standards,[Footnote 52] tested by a federally accredited laboratory, or 
federally certified.[Footnote 53] Election officials explained that 
requiring federal certification helps to raise confidence in state- 
approved voting systems because, in many cases, the level of federal 
testing exceeds that performed at the state level. As such, officials 
believe that federal certification provides important assurances about 
the systems' accuracy, security, and reliability. 

Of the 38 respondents that reported relying on some component of 
federal certification, 18 also reported plans to purchase new systems 
or make upgrades to their existing systems for use in the 2008 
election, thus requiring federal system certification.[Footnote 54] In 
several cases, states were planning to make upgrades to address known 
system shortcomings found during a previous election, such as audio 
errors, incorrect ballot positioning, and scanner workload 
inefficiencies. In other cases, states were purchasing new systems to 
comply with new legislation or to introduce a new facet to their 
election process, such as vote by mail for absentee voting. As of May 
2008, none of these systems had been certified by EAC and, as a result, 
some states expressed concerns regarding the length of time it takes 
for systems to obtain federal certification. Consequently, these states 
now face difficult decisions in fielding upgraded or new voting systems 
that meet state requirements in time for the 2008 general election. 
[Footnote 55] 

The affected states are considering several approaches to address their 
dependency on federal certification for the 2008 general election. 
According to officials for some of the affected states, they may 
implement operational procedures to temporarily address voting system 
flaws and shortcomings, or they may delay the implementation of a new 
system or system upgrade until after the 2008 election. For example, 
one state official told us about a county that is currently using an 
optical scan system, but had planned to move to vote by mail for the 
2008 general election. To accommodate the large number of ballots 
expected to result from this change and to process these ballots 
faster, the county intended to replace its optical scan system with a 
new digital scan central count system. However, since the system was 
not federally certified in time for the state to approve it, state 
officials told us that the county now expects to use the same system it 
used in 2006 and to delay the move to vote by mail until 2009. Table 8 
identifies the different approaches that state officials told us they 
may take to address their federal certification requirement for the 
2008 election. 

Table 8: States' Approaches for Addressing Federal Certification 
Requirements for the 2008 Election: 

Number of states: 13; 
Approach: Delay the implementation of a new system or system upgrade 
and use the same voting systems that were used in the 2006 election. 

Number of states: 3; 
Approach: Revise state requirements to allow for state certification of 
a voting system without prior federal certification. 

Number of states: 1; 
Approach: Not purchase a vendor voting system upgrade and instead 
revise operational procedures for the systems used during the 2006 
general election. 

Number of states: 1; 
Approach: Allow each local jurisdiction to decide. 

Source: GAO summary of information provided by states we interviewed. 

[End of table] 

Officials from other states that reported a reliance on federal 
certification of voting systems also expressed concern over the length 
of time it takes to complete the federal certification process. Most of 
these states and the District expect to continue using voting systems 
in the 2008 election that were previously qualified under the National 
Association of State Election Directors (NASED) program, and thus do 
not expect to need federal certification for these systems prior to 
2008. Nevertheless, election officials in a few of these states were 
concerned that the time needed to complete the federal certification 
process could affect future elections in which federal certification of 
their systems may be needed. For example, one election official 
reported that the state would like to purchase a new optical scan 
voting system in 2009, but said that the federal certification process 
has been extremely slow and expressed concern that such a purchase 
could be affected. 

In addition to concerns over the time needed to complete the federal 
certification process, several states reported that the future costs of 
testing systems to federal standards could affect their ability to 
purchase or maintain the systems. Federal certification costs do not 
directly affect the states, territories, or the District because these 
costs are paid by the voting system manufacturers; however, one state 
official told us that these costs are likely to be passed down to 
states and local jurisdictions in the costs of purchasing and 
maintaining the systems as manufacturers look to recoup those expenses. 
According to one state official and representatives of several voting 
system manufacturers, the cost of voting system qualification under 
NASED was roughly $500,000, whereas the cost of testing a voting system 
to the same standards under the federal certification process exceeds 
$2 million. Another state official expressed the view that testing 
systems to the 2007 voluntary guidelines will increase the cost of 
federal certification because these guidelines are more voluminous and 
demanding than the former standards. 

Notwithstanding these state concerns, several state officials told us 
that the federal certification process provides a foundation upon which 
their respective states' testing can build. For example, an official 
from one state explained that because they do not have the in-house 
expertise to conduct the testing performed at the federal level, they 
require federal certification and review available results prior to 
testing a system to state-specific standards. In addition, several 
state officials expressed appreciation for the effort EAC has made to 
ensure that voting systems are properly tested. 

Satisfaction with Voluntary Voting System Guidelines Is Mixed: 

As described earlier, the voluntary voting system guidelines are a set 
of federal standards against which voting systems can be tested to 
determine if they provide the basic functionality, accessibility, 
accuracy, reliability, and security capabilities needed for federal 
certification. The voluntary guidelines may also be used in whole, in 
part, or not at all as the basis for state and local testing and 
approval of voting systems. EAC issued the initial voluntary guidelines 
in December 2005 as an update to voting system standards developed in 
2002 by the Federal Election Commission, and they became the sole basis 
for federal certification testing in December 2007. A draft of the next 
version of the guidelines was submitted to EAC in August 2007. This 
draft contains new and expanded material in the areas of reliability 
and quality, usability and accessibility, and security. 

Over one-half of the survey respondents were generally satisfied with 
one or more aspects of the voluntary guidelines--their 
comprehensiveness, clarity, or ease of use.[Footnote 56] Of the 45 
respondents that expressed views regarding the guidelines, 22 states, 3 
territories, and the District reported being either moderately or very 
satisfied with at least one of these aspects. Further, over one-half of 
these 26 respondents were moderately or very satisfied with all three 
of these aspects. The most common reason for satisfaction was the 
guidelines' comprehensiveness (23 of 26), while satisfaction with 
clarity and ease of use was slightly less prevalent (19 of 26 each). 
Several election officials that we interviewed generally shared the 
view that the guidelines are more comprehensive than the 2002 voting 
system standards. For instance, one state official told us that the 
2007 voluntary voting system guidelines addresses security capabilities 
in greater depth than the 2002 voting system standards. Other election 
officials expressed satisfaction because the 2005 voluntary guidelines 
helped them to develop state testing without duplicating federal 
testing. 

Notwithstanding that over one-half of respondents were either satisfied 
with or neutral about the voluntary guidelines, 14 states reported 
being either moderately or very dissatisfied with the 
comprehensiveness, clarity, or ease of use of the guidelines. Of these, 
3 were moderately or very dissatisfied with all three aspects. The most 
common reason for dissatisfaction was that the voluntary guidelines 
were not easy to use (10 of 14); dissatisfaction with clarity and 
comprehensiveness was not as prevalent (8 of 14 and 7 of 14, 
respectively). States expressed a variety of reasons for their 
dissatisfaction. For example, officials from one state told us that the 
amount of time it takes to approve a version of the guidelines is 
lengthy, which affects their ability to implement a new version in 
their election management processes in a timely manner. Officials from 
another state said that the guidelines were too subjective, which made 
it difficult to perform testing against their requirements. In 
addition, a few other state officials raised concerns about the 
guidelines during our interviews. Officials from one state said that 
the voluntary guidelines need to be more integrated with current state 
certification processes and less technical. They also said that 
election workers did not always have enough technical background to 
make the best use of the guidelines, and that information technology 
staff was not always available to provide assistance. Moreover, 2 state 
officials told us that they were unfamiliar with the technical content 
of the guidelines and were therefore unable to discuss how their states 
could align state approval testing with federal testing. Officials from 
another state told us that, beyond concerns with the 2005 voluntary 
guidelines, the 2007 draft voluntary guidelines may be too demanding 
for any voting system to be certified within a reasonable time frame. 

Few States Are Using Voting System Testing Laboratories and These 
States Have Mixed Views: 

Reliable testing, systematic reporting of test results, and diligent 
problem resolution are critical to ensuring that voting systems are 
accurate, secure, and reliable. Prior to the passage of HAVA, voting 
systems were tested against the 1990 and 2002 voting system standards 
by NASED-accredited independent testing authorities. Three laboratories 
were accredited under this program.[Footnote 57] Under HAVA, EAC and 
NIST were assigned distinct but related responsibilities for developing 
a national laboratory accreditation program to replace the NASED 
program. In general, NIST is to focus on assessing laboratories' 
technical qualifications, while EAC is to augment the institute's 
assessment results and accreditation recommendations with its own 
review of related laboratory testing documentation to reach an 
accreditation decision. Voting system testing laboratories are 
accredited to develop test plans and procedures, conduct analyses and 
tests, and report results against specific versions of the voting 
system standards and voluntary guidelines. The laboratories perform 
these functions under contract with either voting system manufacturers 
for federal certification or with states or local jurisdictions for 
state approval or some type of election testing. EAC 
had accredited four testing laboratories as of July 2008, and a fifth 
was under evaluation at that time. 

Based on survey and interview responses, 2 states contracted with a 
testing laboratory to test voting systems in support of their 
respective voting system approval processes. One state contracted with 
a laboratory to verify that its voting systems were in compliance with 
state standards.[Footnote 58] This state is also using the laboratory's 
expertise to write a customized test plan for state approval testing. 
The other state contracted with a testing laboratory to help with 
state approval of a system that was undergoing federal certification 
but that was not likely to be certified in time for use in the 2008 
general election. Officials with this state told us that when they 
determined that EAC was not likely to certify a new version of their 
system in time for their 2008 primary election, they approached a 
testing laboratory to participate in and oversee the state testing of 
the system. The system was later certified by the state's secretary of 
state based on the laboratory's findings. 

Election officials from these 2 states reported various concerns and 
benefits in working with a voting system testing laboratory. For 
example, one state cited the high cost of working with the 
laboratories, the extensive level of vendor involvement with the 
laboratories, and the limited scope of the laboratory's testing as 
concerns. According to one state official, one concern is ensuring that 
the laboratory tested every requirement that the state provided. 
According to this official, the state had to work closely with one 
laboratory to ensure that all requirements were met. Despite voicing 
concerns, however, each state reported that it was satisfied with the 
laboratories' efforts and their commitment to testing to state 
standards and working closely with the states to meet requirements. 

Though very few states use a testing laboratory directly as part of a 
state approval process, several election officials told us that they 
nevertheless review laboratory test plans and reports as part of their 
respective approval processes.[Footnote 59] Several of these officials 
told us they had reviewed test plans and results produced under the 
NASED process, and viewed them as useful, but they did not yet have 
opinions on EAC-accredited laboratory test plans and results. 

States and Others Use Federal Guidance in Various Ways and Are 
Generally Satisfied: 

EAC has published guidance on a range of topics to assist state and 
local election officials in managing and administering elections. This 
guidance includes a number of quick start management guides, election 
management guidelines, best practices, and other related reports. For 
example, in October 2007, EAC released a Quick Start Management Guide 
for Acceptance Testing. This guide provides a general introduction to 
the purpose of acceptance testing at the state and local levels. It 
also provides more specific technical recommendations for physical, 
diagnostic, and functional analysis tests to be performed as part of 
acceptance testing. Table 9 lists the EAC guidance documents relative 
to voting systems that had been issued or were under development as of 
May 2008. 

Table 9: EAC's Guidance Applicable to Voting Systems: 

Subject: Election Management Guidelines; 
Release date: Ongoing. 

Subject: Best Practices in Election Administration Tool Kit; 
Release date: July 2004. 

Subject: Quick Start Management Guides: New Voting Systems; 
Release date: June 2006. 

Subject: Quick Start Management Guides: Ballot Preparation/Printing and 
Pre-Election Testing; 
Release date: September 2006. 

Subject: Quick Start Management Guides: Voting System Security; 
Release date: September 2006. 

Subject: Quick Start Management Guides: Voting System Certification; 
Release date: August 2007. 

Subject: Quick Start Management Guides: Acceptance Testing; 
Release date: October 2007. 

Subject: Quick Start Management Guides: Contingency and Disaster 
Planning; 
Release date: October 2007. 

Subject: Quick Start Management Guides: Developing an Audit Trail; 
Release date: March 2008. 

Subject: Quick Start Management Guides: Central Count Optical Scan 
Ballots; 
Release date: May 2008. 

Source: GAO analysis based on EAC Web site documentation. 

[End of table] 

Officials from almost every state, territory, and the District that we 
interviewed stated that they received EAC guidance either through the 
mail or via e-mail. Further, of the 46 survey respondents that provided 
views on EAC's quick start management guides, 33 reported that they 
were very or moderately satisfied with the guides. In addition, of the 
state officials that we interviewed who said they were satisfied with 
federal guidance, several described how they used the guidance. For 
example, one state official told us that the quick start management 
guides were sent to the counties, and the Quick Start Management Guide 
for Voting System Security was referenced in the state's security 
directive. In contrast, officials with 2 states expressed moderate 
dissatisfaction with the quick start management guides, noting that 
they were too simplistic. The remaining 11 states were neither 
satisfied nor dissatisfied. 

Our interviews with officials from states, territories, and the 
District provided additional insights on the uneven use of EAC 
guidance. Specifically, some election officials told us they used EAC 
guidance in developing both state and local Election Day policies. For 
example, officials from one territory said that they reviewed both the 
best practice documents and the quick start management guide related to 
poll workers and used this guidance when developing the territory's 
poll worker training policy. In contrast, some election officials 
responded that 
they do not yet have the resources to develop Election Day policies and 
so could not use the guides. One of these officials added that the 
guides may be used at some point in the future, but did not know how or 
when this would happen. Finally, election officials with other states 
told us they did not use EAC guidance because they already had well- 
developed policies and procedures. For example, one state began work on 
a poll worker training guide in 2004 and developed a policy on its own 
before the federal guidance was developed. Later, as state officials 
received guidance from EAC, they reviewed the poll worker training 
information but determined that the existing state plan already covered 
all of the points addressed in EAC guidance. 

Concluding Observations: 

States, territories, and the District play a pivotal role in voting 
system approval, testing, and problem management. To their credit, most 
of these entities reported that they required and established 
mechanisms for evaluating and approving voting systems prior to 
adoption by local jurisdictions. While the exact manner in which they 
execute these mechanisms has varied, many report that they have 
incorporated similar core elements into their approaches and processes. 
These elements can provide the few states that have not adopted an 
approval program with useful frameworks to learn from and possibly 
leverage. Moreover, the range of efforts is a valuable 
resource for any of these entities that are interested in making 
improvements to their existing programs or in adopting shared services. 

In this regard, most states, territories, and the District reported 
augmenting their approval of voting systems with some type of 
additional testing, which has provided opportunities to anticipate and 
address potential voting system problems before they affect election 
results, as well as a basis for others to learn from. Although some 
types of tests have been more common than others, a notable subset of 
states report requirements for testing throughout the voting system 
life cycle--from acquisition to postelection--and report the 
establishment of testing programs to accomplish this. The testing 
frameworks that these entities have in place can assist others in 
defining and refining testing activities. To the extent that effective 
testing programs are in place, they can serve to identify and correct 
voting system problems before they can affect an election. 

Nevertheless, even approved and well-tested systems can experience 
problems. Addressing those problems effectively requires objective, 
timely, and complete information to support informed decisions that 
mitigate impacts to elections and avoid repeat occurrences. Because 
local jurisdictions typically have been responsible for identifying, 
assessing, and responding to Election Day voting system problems--and 
because states report that many jurisdictions have not been required to 
report these problems--states, territories, and the District may not 
have a complete picture of the extent of election problems that stem 
from voting systems, rather than from human errors. To states' and 
territories' credit, several report that they have adopted one or more 
mechanisms for systematically recording, tracking, and informing others 
about voting system problems, and are thus better positioned to help 
jurisdictions manage problems as they arise. Furthermore, systematic 
collection and review of problems has the potential to provide added 
benefits by allowing states and territories to identify problems that 
affect multiple jurisdictions, share approaches for troubleshooting and 
problem resolution, and inform other states and territories that use 
similar systems. 

States, territories, and the District face a number of challenges 
relative to acquiring, testing, operating, and maintaining voting 
systems. In general, these challenges are not unlike those faced by any 
technology acquirer or user--adoption and consistent application of 
standards for system capabilities and performance; rigorous and 
disciplined performance of testing activities; reliable measures and 
objective data on systems' performance; and integration of the people, 
process, and technology during system acquisition and operation. These 
challenges are heightened by other conditions common to many technology 
environments: decentralized and distributed responsibilities, evolving 
system standards and requirements, and funding opportunities and 
constraints. In addition, they are compounded by conditions unique to 
the elections environment, such as the need for transparency; the level 
of technical knowledge and skills among those responsible for 
acquiring, testing, and operating voting systems; the timing of the 
election cycles; and the degree of public attention to and scrutiny of 
voting systems. 

How well states, territories, and the District implement their voting 
approval, testing, and problem management efforts both within their own 
election environments and collectively will largely determine how well 
voting systems perform on Election Day nationwide. EAC has a major role 
to play in assisting these entities in accomplishing their voting 
system performance goals by providing resources and services. Although 
EAC is still in the initial stages of delivering some of these services 
and resources, the commission's efforts are largely viewed positively 
by most states and territories, and by the District. As EAC moves 
forward in providing services and resources, it will be important for 
it to continue communicating and coordinating with states and 
territories about their critical needs. 

We are also sending copies of this report to the Ranking Member of the 
Senate Committee on Rules and Administration, the Chairman and Ranking 
Member of the House Committee on House Administration, the Chairmen and 
Ranking Members of the Subcommittees on Financial Services and General 
Government, Senate and House Committees on Appropriations, and the 
Chairman and Ranking Member of the House Committee on Oversight and 
Government Reform. We are also sending copies to the Commissioners and 
Executive Director of the Election Assistance Commission, state and 
territory election officials and the election officials for the 
District of Columbia, and other interested parties. In addition, the 
report will be made available without charge on GAO's Web site at 
[hyperlink, http://www.gao.gov]. 

If you have any questions regarding this report, please contact me at 
(202) 512-3439 or at hiter@gao.gov. Contact points for our Offices of 
Congressional Relations and Public Affairs may be found on the last 
page of this report. Key contributors to this report are listed in 
appendix II. 

Sincerely, 

Signed by: 

Randolph C. Hite: 
Director, Information Technology Architecture and Systems Issues: 

[End of section] 

Appendix I: Objectives, Scope, and Methodology: 

Relative to the 50 states, 4 U.S. territories, and the District of 
Columbia (District), our objectives were to determine (1) what voting 
methods and systems they are using in federal elections and what 
changes are underway; (2) how they certify or otherwise approve voting 
systems for use in federal elections; (3) what other steps they take to 
ensure that voting systems are accurate, reliable, and secure; (4) how 
they identify, evaluate, and respond to voting system problems; and (5) 
how they view federal voting system-related resources and services. 

Three U.S. territories and one commonwealth were selected for this 
review--American Samoa, Guam, the Commonwealth of Puerto Rico, and the 
U.S. Virgin Islands--based on their federally mandated requirement to 
comply with the provisions of the Help America Vote Act of 2002. For the 
purpose of this report, the term "territory" refers to all four 
entities. 

For all five objectives, we conducted a Web-based survey (GAO-08- 
1147SP) of the 50 states, 4 territories, and the District, which 
largely gathered data about the 2006 general election. To develop our 
survey, we reviewed existing reports about the election process, 
including previous and ongoing GAO work. The studies included those 
done by national or state organizations and state or local governments 
relative to prior elections. We also reviewed state statutes and other 
citations provided in response to a question on states' legal 
requirements for elections from our survey of state election practices 
issued on June 6, 2006.[Footnote 60] In addition, we contacted subject 
matter experts in elections and voting systems to gain views on themes 
and issues for topic areas and the applicability of these across the 
states. We designed the draft questionnaire in close collaboration with 
subject matter experts and participated in pretesting and refining 
subsequent drafts of the questionnaire. For the purpose of our survey, 
each question was asked from a state's perspective. In some instances, 
states were asked to respond about practices of local jurisdictions. 
U.S. territories were instructed to complete all survey questions that 
pertained to their territory's circumstances, including the appropriate 
equivalent for local jurisdictions. The scope of this work did not 
include verifying states' survey responses with local election 
officials. We conducted pretests with representatives of 5 states to 
help further refine our questions, develop new questions, clarify any 
ambiguous portions of the survey, and identify any potentially biased 
questions. These pretests were conducted in-person and by telephone 
with election officials from states with varying election system 
characteristics. 

Prior to fielding our state survey, we contacted the secretaries of 
state or other responsible state-level officials, as well as officials 
from the territories and the District to confirm the contact 
information for each one's director of elections or comparable 
official. We launched our Web-based survey for the states and the 
District on December 10, 2007, and for the territories on January 7, 
2008. We received all responses by April 18, 2008. Log-in information 
to the Web-based survey was e-mailed to directors of elections or 
comparable officials. We sent one follow-up e-mail message to all 
nonrespondents after the questionnaire was online for 4 weeks. After 
another 2 weeks, we contacted by telephone or e-mail all those who had 
not completed the questionnaire. We obtained responses from 47 states, 
all 4 territories, and the District (a 95 percent response rate). Three 
states (Michigan, New Jersey, and Utah) chose not to respond to our 
survey. Even so, the total number of responses to individual questions 
may be fewer than 52, depending upon how many states and territories, 
including the District, were eligible or chose to respond to a 
particular question. In particular, survey respondents who indicated 
they did not have a voting system approval requirement were given the 
option of skipping all subsequent questions related to approval. In 
this regard, one territory reported that it did not have an approval 
requirement because it did not use electronic voting systems. 

Because our survey was not a sample survey, but rather a census of 47 
states, the District, and all 4 territories, there are no sampling 
errors; however, the practical difficulties of conducting any survey 
may introduce nonsampling errors. For example, differences in how a 
particular question is interpreted, the sources of information 
available to respondents, or the types of people who do not respond can 
introduce unwanted variability into the survey results. We included 
steps in both the data collection and data analysis stages for the 
purpose of minimizing such nonsampling errors. We examined the survey 
results and performed computer analyses to identify inconsistencies and 
other indications of error. Where these occurred, survey respondents 
were contacted to provide clarification and the response was modified 
to reflect the revised information. For one survey question, which 
asked respondents to provide information on the extent to which they 
encountered errors or malfunctions with voting systems, we contacted 
all question respondents to clarify whether they encountered errors or 
malfunctions to "little extent" or "no extent," and reported responses 
to this question based on the clarified responses. Where notable 
inconsistencies or limited response rates existed for particular 
questions or topics, these responses were deemed unreliable and 
therefore not reported. A second, independent analyst checked the 
accuracy of all computer analyses. 

Statute citations were obtained from respondents to our survey. The 
statutes from these citations were reviewed to determine the 
specificity of the requirements and whether any commonalities existed 
among them. For the 3 states that did not respond to our survey, we 
obtained relevant statutes to determine their respective requirements 
for voting system approval, voting system testing, voting system 
problem management, and use of federal resources and services. Where 
appropriate, we reported on the requirements of the 3 states, based 
on our review of statutes. 

Where possible, the results of some questions in the 2001 and 2005 
surveys that GAO conducted after the 2000 and 2004 general elections 
were compared with results in the 2008 survey. For these previous 
surveys, GAO also surveyed state election officials from all 50 states 
and the District. These two surveys had a 100 percent response rate. 
The terminology of comparable questions regarding states' involvement 
in local jurisdiction selection of voting systems, states' requirements 
to certify or otherwise approve voting systems, and states' 
requirements to perform testing on voting systems prior to Election 
Day, was reviewed. Although the terminology of these questions was not 
identical, we believe the questions we asked the states are comparable 
because the structure and intent of the questions are alike. We were 
not able to make comparisons for the territories because our previous 
reports did not collect information from them. 

For all objectives, we also contacted state, territory, and District 
election officials to better understand and illustrate their approaches 
and issues, and obtained and reviewed relevant documentation from these 
officials, the Web sites they identified, and survey responses. We 
visited 9 states and the District, and interviewed officials from 20 
states and 2 territories by telephone. Although the information 
obtained from these contacts with election officials cannot be 
generalized to other states and territories, the states and territories 
that we interviewed either in person or by telephone were chosen based 
on a wide variety of characteristics. These characteristics included 
voting methods and systems used, geographic characteristics, and 
aspects of election administration. Regarding election administration, 
we sought to have a mix of states and territories where the following 
varied: approval requirements, types of testing performed prior to 
Election Day, and requirements for problem management. Visits to states 
were also determined by election officials' availability during the 
2008 primary election season. We also contacted 18 states and 2 
territories by e-mail to obtain clarifications regarding survey 
responses. We obtained and reviewed available documentation on the 
requirements, processes, and technology of election administration for 
each state, territory, and the District to provide context for survey 
and interview responses. A summary of the contact method used for each 
state, territory, and the District is shown in table 10. The scope of 
this work did not include contacting local jurisdiction election 
officials about their voting system management practices; however, 
local officials participated in a few of our interviews with state 
election officials. 

Table 10: Method Used to Contact States, Territories, and the District: 

Contact method: Visit; 
Contact list: California; Colorado; Georgia; Louisiana; Mississippi; 
Nevada; Pennsylvania; Texas; Washington, D.C.; and Wisconsin. 

Contact method: Telephone interview; 
Contact list: Alaska; Arizona; Arkansas; Delaware; Florida; Guam; 
Hawaii; Idaho; Illinois; Iowa; Kansas; Kentucky; Maryland; New Mexico; 
New York; North Carolina; North Dakota; Ohio; Rhode Island; U.S. Virgin 
Islands; Vermont; and Virginia. 

Contact method: E-mail; 
Contact list: Alabama; American Samoa; Connecticut; Indiana; Maine; 
Massachusetts; Minnesota; Missouri; Montana; Nebraska; New Hampshire; 
Oklahoma; Oregon; Puerto Rico; South Carolina; South Dakota; Tennessee; 
Washington; West Virginia; and Wyoming. 

Source: GAO. 

[End of table] 

[End of section] 

We conducted this performance audit from October 2007 to September 2008 
in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain 
sufficient, appropriate evidence to provide a reasonable basis for our 
findings and conclusions based on our audit objectives. We believe that 
the evidence obtained provides a reasonable basis for our findings and 
conclusions based on our audit objectives. 

[End of section] 

Appendix II: GAO Contact and Staff Acknowledgments: 

GAO Contact: 

Randolph C. Hite, (202) 512-3439 or hiter@gao.gov: 

Staff Acknowledgments: 

In addition to the individual named above, key contributions to this 
report were made by Paula Moore (Assistant Director), Gerard Aflague, 
Mathew Bader, Justin Booth, Scott Borre, Ashley Brooks, Neil Doherty, 
Michele Fejfar, Nancy Glover, Peggy Hegg, Dave Hinchman, Michael 
Holland, Valerie Hopkins, Ashfaq Huda, James MacAulay, Lee McCracken, 
Donald Sebers, Sushmita Srikanth, and Jeffrey Woodward. 

We gratefully acknowledge the time and cooperation of officials from 
the following governments who assisted us by pretesting our survey or 
hosting discussions at their offices: California, Colorado, the 
District of Columbia, Georgia, Louisiana, Maryland, Mississippi, 
Nevada, Oregon, Pennsylvania, Texas, Vermont, Washington, and 
Wisconsin. We also sincerely appreciate the efforts of the officials 
with all the states, territories, and the District of Columbia, who 
provided survey responses, additional information, documentation, and 
views on their voting systems and related management activities. 

[End of section] 

Glossary: 

This glossary is provided for reader convenience. It is not intended as 
a definitive, comprehensive glossary of election-related terms. 

Absentee and Early Voting: 

Programs that permit eligible persons to vote prior to Election Day. 
Absentee voting is generally conducted by mail in advance of Election 
Day and early voting is generally in-person voting in advance of 
Election Day at specific polling locations. 

Acceptance Testing: 

The examination of voting systems and their components by the 
purchasing election authority in a simulated-use environment to 
validate performance of delivered units in accordance with procurement 
activities. 

Ballot: 

The official presentation of all of the contests to be decided in a 
particular election--including candidates for specific offices, and 
measures to be decided--in printed, electronic display, audio, or 
tactile formats. 

Ballot Marking Device: 

Electronic devices used to mark an optical scan ballot, interpret and 
communicate the ballot selections to the voter for verification, and 
then print a voter-verified ballot to be processed by a precinct-based 
or central count optical scanner. Ballot marking devices do not store 
or tabulate votes electronically. 

Certification: 

Written assurance that a product, process, or service conforms to 
specified requirements. 

Federal certification. The process by which the Election Assistance 
Commission validates the compliance of a voting system with federal 
voluntary voting system standards and provides written assurance of 
conformance. 

State certification or approval. The process by which a state examines 
and possibly tests a voting system to determine its compliance with 
state requirements. The process includes activities undertaken by a 
state authority to (1) initially determine that a voting system has met 
or exceeded all minimum standards established for use in its elections, 
(2) grant reapproval after re-examination or retesting if modifications 
or enhancements are made to a system, and (3) revoke approval when a 
voting system fails to fulfill state requirements. 

Direct Recording Electronic (DRE): 

A voting method that captures votes electronically, without the use of 
paper ballots. Systems that employ this voting method use electronic 
components for ballot presentation, vote capture, vote recording, and 
tabulation. 

Election Assistance Commission (EAC): 

Commission established by the Congress in 2002 to help improve the 
administration of federal elections by--among other things-- 
administering the distribution of federal funds to the states for the 
replacement of older voting technologies, providing voluntary guidance 
to states on implementing certain provisions of the Help America Vote 
Act of 2002 (HAVA), serving as a national clearinghouse of state 
experiences in implementing such guidance and operating voting systems 
in general, conducting studies, and helping to develop voluntary 
standards and testing for election equipment. 

Election Day Parallel Testing: 

Testing to verify the accurate performance of a voting system through 
random selection and systematic evaluation of operational systems 
during an election. 

Election Jurisdictions: 

Counties, cities, townships, and villages that have responsibility for 
election administration. 

Election Management System: 

The set of processing functions and databases within a voting system 
that defines, develops, and maintains election databases; performs 
election definitions and setup functions; formats ballots; counts 
votes; consolidates and reports results; and maintains audit trails. 
Election management systems integrate the functions associated with 
readying vote-casting and tallying equipment for a given election with 
other election management functions. 

Federal Election Commission (FEC): 

Commission established by the Congress in 1975 to administer and 
enforce the Federal Election Campaign Act--the statute that governs the 
financing of federal elections. To carry out this role, FEC discloses 
campaign finance information; enforces provisions of the law, such as 
limits and prohibitions on contributions; and oversees the public 
funding of presidential elections. In 1990, it adopted the first 
federal voluntary voting system standards. 

Independent Testing Authorities: 

Independent testing organizations accredited by the National 
Association of State Election Directors (NASED) to perform voting 
system qualification testing. 

Lever Machines: 

Mechanical voting devices on which the ballot is presented as an array 
of levers. Voters cast their votes by pulling down the levers next to 
the candidates' names or ballot issues of their choice. After making 
the ballot selections, the voter moves a handle that simultaneously 
opens the privacy curtain, records the vote, and resets the levers. 

National Association of State Election Directors (NASED): 

An independent, nongovernmental organization of state election 
officials. This organization formed a national program to test and 
qualify voting systems to the federal standards. 

Optical Scan: 

Voting method that uses electronic technology to tabulate paper 
ballots. An optical scan system is made up of computer-readable paper 
ballots, appropriate marking devices (writing instruments), privacy 
booths, and a computerized device that reads and tabulates the ballots. 

Paper Ballots: 

Printed material that displays the names of candidates and information 
on ballot measures to be voted on in elections. Voters generally 
complete their paper ballots in the privacy of a voting booth and 
record their choices by placing marks in boxes corresponding to the 
candidates' names and ballot issues. After making their choices, voters 
drop the ballots into sealed ballot boxes; the ballots are later 
manually counted and tabulated. 

Postelection Audit Testing: 

Postelection testing to review and reconcile election records to 
confirm correct conduct of an election or uncover evidence of problems 
with voting equipment or election processes. The audit includes 
verifying the accuracy of voting units and reconciling voting system 
records with information provided by the poll workers. This test can 
include all election equipment and results for one or more local 
jurisdictions, or the entire state, but it may focus on a sample of 
voting units and their outputs. 
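
To illustrate the reconciliation step in general terms, the following 
minimal sketch compares machine-reported totals with poll worker 
records for a sample of voting units and flags any discrepancies; the 
precinct names and vote counts are hypothetical and are not drawn from 
any survey response. 

# Minimal sketch of a postelection audit reconciliation.
# Precinct names and counts below are hypothetical.
machine_totals = {"Precinct 1": 412, "Precinct 2": 388, "Precinct 3": 501}
poll_worker_records = {"Precinct 1": 412, "Precinct 2": 390, "Precinct 3": 501}

def reconcile(machine, records):
    """Return voting units whose machine totals do not match poll worker records."""
    discrepancies = {}
    for precinct, machine_count in machine.items():
        recorded = records.get(precinct)
        if recorded != machine_count:
            discrepancies[precinct] = (machine_count, recorded)
    return discrepancies

for precinct, (machine_count, recorded) in reconcile(machine_totals, poll_worker_records).items():
    print(f"{precinct}: machine reported {machine_count}, poll workers recorded {recorded}")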

Punch Card: 

Voting method that makes use of a ballot, a vote-recording device that 
keeps the ballot in place and allows the voter to punch holes in it, a 
privacy booth, and a computerized tabulation device. The voter inserts 
a machine-readable card with prescored numbered boxes representing 
ballot choices into the vote-recording device and uses a stylus to 
punch out the appropriate prescored boxes. The ballot must be properly 
aligned in the vote-recording device for the holes in the ballot card 
to be punched all the way through. Punch card ballots are counted by a 
computerized tabulation machine. 

Readiness Testing: 

Testing to verify that voting equipment is functioning properly, 
usually by confirming that predictable outputs are produced from 
predefined inputs. Readiness testing is typically conducted in the 
weeks leading up to Election Day. Also referred to as logic and 
accuracy testing. 
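
As a minimal illustration of the predefined-input idea, the sketch 
below tallies a known test deck and compares the result with the 
totals that deck was designed to produce; the contests, choices, and 
expected totals are hypothetical. 

# Minimal sketch of a logic and accuracy (readiness) check.
# The test deck composition and expected totals are hypothetical.
test_deck = [
    {"Contest A": "Candidate 1"},
    {"Contest A": "Candidate 2"},
    {"Contest A": "Candidate 1"},
]
expected_totals = {"Contest A": {"Candidate 1": 2, "Candidate 2": 1}}

def tabulate(ballots):
    """Tally the marked choices on each ballot, as a tabulator would."""
    totals = {}
    for ballot in ballots:
        for contest, choice in ballot.items():
            totals.setdefault(contest, {}).setdefault(choice, 0)
            totals[contest][choice] += 1
    return totals

print("Ready" if tabulate(test_deck) == expected_totals else "Investigate before Election Day")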

Recount: 

Some states authorize certain persons (e.g., defeated candidates and 
voters) to request an election recount under specified circumstances, 
such as a tie vote, a margin of victory that is within a specified 
percentage or number of votes, or alleged inaccuracies in the vote 
count. The scope and method of such recounts can vary to include, for 
example, partial recounts of certain precincts, complete recounts of 
all ballots, machine recounts, and hand recounts for the office or 
issue in question. Some states provide for mandatory (or automatic) 
recounts under certain conditions. 

Vote-By-Phone: 

Voting method that uses electronic and telecommunications technologies, 
including a standard touch-tone telephone and a printer, to mark a 
paper ballot, interpret and communicate the ballot selections to the 
voter for verification, and then print a voter-verified ballot to be 
processed. A vote-by-phone system does not store or tabulate votes 
electronically. 

Vote Tabulation: 

The counting of votes, either by hand or by electronic machine, from 
ballots cast at polling places on Election Day and those cast in 
person, by mail, or electronically prior to or on Election Day. 
Tabulation may occur at the polling place or at a central location. 
Tabulation activities also may include determining whether and how to 
count ballots that cannot be read by the vote-counting equipment; 
certifying the final vote counts; and performing recounts, if required. 

Voter-Verified Paper Audit Trail: 

A human-readable printed record of all of a voter's selections, 
presented to the voter to view and check for accuracy. 

Voting Method: 

The classes or types of machines used in a voting system. There are 
seven types of voting methods used in U.S. elections: hand-counted 
paper ballot, lever, punch card, direct recording electronic, ballot 
marking device, optical scan, and vote-by-phone. 

Voting System: 

The people, processes, and technology associated with any specific 
method of casting and counting votes. The technology component of a 
voting system is the mechanical, electromechanical, or electronic 
equipment; software; firmware; documentation; and other components 
required for election management activities. This includes ballot 
layout, vote casting, tabulation, transmission of results, and 
management of voting systems. 

Voting System Security Testing: 

Testing to verify that technical security controls embedded in voting 
equipment operate as intended and that security policies and 
procedures governing the testing, operation, and use of the systems 
are properly defined and implemented by the responsible officials 
before an election. 

Voting System Standards: 

A set of minimum functional and performance requirements for electronic 
voting systems, which may include specified test procedures to be used 
to ensure that voting equipment meets the requirements. The FEC issued 
the first voluntary voting system standards in 1990 and revised them in 
2002. In 2002, HAVA assigned responsibility for updating the federal 
voluntary voting system standards to EAC. The federal voluntary voting 
system standards issued by EAC in December 2005 were known as the 
Voluntary Voting System Guidelines. EAC has recently issued a draft of 
the 2007 guidelines for public comment. 

Voting System Testing Laboratory: 

An organization that has been evaluated and approved by EAC and by the 
National Voluntary Laboratory Accreditation Program, operated by the 
National Institute of Standards and Technology, as competent to test 
voting systems. 

[End of section] 

Related GAO Products: 

Elections: All Levels of Government Are Needed to Address Electronic 
Voting System Challenges. [hyperlink, http://www.gao.gov/cgi-
bin/getrpt?GAO-07-741T]. Washington, D.C.: April 18, 2007. 

Elections: The Nation's Evolving Election System as Reflected in the 
November 2004 General Election. [hyperlink, http://www.gao.gov/cgi-
bin/getrpt?GAO-06-450]. Washington, D.C.: June 6, 2006. 

Elections: Federal Efforts to Improve Security and Reliability of 
Electronic Voting Systems Are Under Way, but Key Activities Need to Be 
Completed. [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-05-956]. 
Washington, D.C.: September 21, 2005. 

Elections: Electronic Voting Offers Opportunities and Presents 
Challenges. [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-04-975T]. 
Washington, D.C.: July 20, 2004. 

Elections: A Framework for Evaluating Reform Proposals. [hyperlink, 
http://www.gao.gov/cgi-bin/getrpt?GAO-02-90]. Washington, D.C.: October 
15, 2001. 

Elections: Perspectives on Activities and Challenges Across the Nation. 
[hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-02-3]. Washington, 
D.C.: October 15, 2001. 

Voters with Disabilities: Access to Polling Places and Alternative 
Voting Methods. [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-02-
107]. Washington, D.C.: October 15, 2001. 

Elections: Status and Use of Federal Voting Equipment Standards. 
[hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-02-52]. Washington, 
D.C.: October 15, 2001. 

Elections: The Scope of Congressional Authority in Election 
Administration. [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-01-
470]. Washington, D.C.: March 13, 2001. 

[End of section] 

Footnotes: 

[1] See, for example, GAO, Elections: Perspectives on Activities and 
Challenges Across the Nation, [hyperlink, http://www.gao.gov/cgi-
bin/getrpt?GAO-02-3] (Washington, D.C.: Oct. 15, 2001); Elections: 
Status and Use of Federal Voting Equipment Standards, [hyperlink, 
http://www.gao.gov/cgi-bin/getrpt?GAO-02-52] (Washington, D.C.: Oct. 
15, 2001); Elections: A Framework for 
Evaluating Reform Proposals, [hyperlink, http://www.gao.gov/cgi-
bin/getrpt?GAO-02-90] (Washington, D.C.: Oct. 15, 2001); Elections: 
Federal Efforts to Improve Security and Reliability of Electronic 
Voting Systems Are Under Way, but Key Activities Need to Be Completed, 
[hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-05-956] (Washington, 
D.C.: Sept. 21, 2005); Elections: The Nation's Evolving Election System 
as Reflected in the November 2004 General Election, [hyperlink, 
http://www.gao.gov/cgi-bin/getrpt?GAO-06-450] (Washington, D.C.: June 
6, 2006); and Elections: All Levels of Government Are Needed to Address 
Electronic Voting System Challenges, [hyperlink, http://www.gao.gov/cgi-
bin/getrpt?GAO-07-741T] (Washington, D.C.: April 18, 2007). 

[2] Pub. L. No. 107-252, 116 Stat. 1666 (2002). 

[3] GAO, Elections: 2007 Survey of State Voting System Programs, 
[hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-08-1147SP] 
(Washington, D.C.: Sept. 25, 2008). 

[4] For the purpose of this report, the term "survey respondent" refers 
to all entities that responded to a survey question, and "territory" 
refers to the three territories and one commonwealth. The term "states 
and others" refers to some portion of the 50 states and at least one 
territory or the District. 

[5] For the three states that did not respond to our survey, we 
obtained and reviewed relevant statutes to determine their respective 
requirements and, where appropriate, reported on these requirements. 

[6] These stages provide vote casting opportunities through absentee 
voting, early voting, and Election Day voting at polling places. 

[7] Penetration testing is a method in which evaluators attempt to 
circumvent the security features of a system, using common tools and 
techniques and drawing on their understanding of the system design and 
implementation, in order to identify ways of gaining access to the 
system. 

[8] See the Related GAO Products page at the end of this report for a 
list of GAO reports on voting systems since 2001. These products can be 
found on our Web site at [hyperlink, http://www.gao.gov]. 

[9] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-02-3]. 

[10] Provisional voting is also generally used by states to address 
certain voter eligibility issues encountered at the polling place on 
Election Day. A provisional ballot cast by an individual with an 
eligibility issue is not typically counted until the individual's 
eligibility to vote under state law has been verified. 

[11] Two older voting methods--lever machine and punch card--are no 
longer widely used. 

[12] Precinct count optical scan equipment sits on a ballot box with 
two compartments for scanned ballots--one for accepted ballots (i.e., 
those that are properly filled out) and one for rejected ballots (i.e., 
blank ballots, ballots with write-ins, or those accepted because of a 
forced override). In addition, an auxiliary compartment in the ballot 
box is used for storing ballots if an emergency arises (e.g., loss of 
power or machine failure) that prevents the ballots from being scanned. 

[13] Prior to HAVA, no federal agency was assigned or assumed 
responsibility for testing and certifying voting systems against the 
federal standards. Instead, the National Association of State Election 
Directors (NASED), through its Voting Systems Committee, assumed this 
responsibility by accrediting independent test authorities, which in 
turn tested equipment against the standards. This program was 
discontinued in July 2006. 

[14] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-07-741T]. 

[15] See the Related GAO Products page at the end of the report for a 
list of these reports. 

[16] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-02-3]. 

[17] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-02-52]. 

[18] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-02-90]. 

[19] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-02-107]. 

[20] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-05-956]. 

[21] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-06-450]. 

[22] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-07-741T]. 

[23] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-07-741T]. 

[24] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-06-450]. 

[25] Qualifying approval means either approving a voting system 
because of special circumstances or imposing additional conditions or 
procedures that must be met for the system to fully comply with state 
requirements and be permitted for use. 

[26] We previously reported that 45 states and the District had 
certification programs for the 2000 general election (see [hyperlink, 
http://www.gao.gov/cgi-bin/getrpt?GAO-02-3]) and that 42 states and the 
District had such programs for the 2004 general election (see 
[hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-06-450]). Our 
previous reports did not collect information on territory requirements 
for voting system certification. 

[27] Respondents that did not report a requirement for voting system 
approval were excluded from this survey question. 

[28] Although survey respondents that did not report a requirement for 
system approval were excluded from this survey question, one state and 
one territory that did not have approval requirements nevertheless 
reported performing testing as part of approval. 

[29] Such testing was not applicable for the majority of these states 
because they do not use an electronic poll book as part of their voting 
system. An electronic poll book is an electronic mechanism, including 
stand-alone software, by which an election official at a polling place, 
at the time an individual seeks to vote, may obtain information on the 
individual's eligibility to vote, whether the mechanism is operated by 
integration with a voting system or independently. 

[30] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-05-956]. 

[31] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-06-450]. 

[32] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-06-450]. For the 
2004 general election, states and the District reported performing each 
of the following tests: acceptance (26 states and the District); 
readiness (49 states and the District); Election Day parallel (13 
states); postelection audit (22 states and the District); and security 
(24 states and the District). Our results included responses from all 
50 states and the District. We did not include the territories in that 
survey. 

[33] The specifications may reference a system configuration previously 
approved or certified by the state and may include capabilities or 
configurations required for a particular election. 

[34] U.S. Election Assistance Commission, Acceptance Testing, undated. 
[hyperlink, http://www.eac.gov/election/quick-start-management-
guides/]. 

[35] This included one state that identified acceptance testing 
responsibilities for the county auditor, rather than local election 
officials. 

[36] U.S. Election Assistance Commission, Quick Start Management Guide 
for Ballot Preparation/Printing and Pre-Election Testing, September 
2006. [hyperlink, http://www.eac.gov/election/quick-start-management-
guides]. 

[37] An election definition specifies the contests and questions that 
will appear on the ballot for a particular election. The electronic 
definition in a voting system may generate the ballot display (DREs), 
translate voter selections into ballot marks (ballot marking devices, 
vote-by-phone), or correctly match voter selections to ballot choices 
for vote tabulation (DREs, optical scan machines). 

[38] U.S. Election Assistance Commission, Pre-election and Parallel 
Testing, undated. [hyperlink, http://www.eac.gov/election/quick-start-
management-guides]. 

[39] In 2006, we reported that 22 states and the District performed 
postelection audits for the 2004 general election. 

[40] Respondents that did not report a requirement for voting system 
approval were excluded from this survey question. 

[41] U.S. Election Assistance Commission, Quick Start Management Guide 
for Voting System Security, September 2006. [hyperlink, 
http://www.eac.gov/election/quick-start-management-guides]. 

[42] National Institute of Standards and Technology, Technical Guide to 
Information Security Testing (Draft), Special Publication 800-115 
(Draft), November 2007. [hyperlink, 
http://csrc.nist.gov/publications/drafts/sp800-115/Draft-SP800-
115.pdf]. 

[43] Test decks are used to determine whether the voting equipment 
(hardware and software) reads and tabulates the marks on a ballot, or 
the touches on a screen, with 100 percent accuracy. 

[44] Survey response choices were "great extent," "moderate extent," 
"little or no extent," "not applicable," and "don't know." We contacted 
all respondents to clarify whether problems were encountered to a 
"little extent" or "no extent." To determine the extent to which each 
type of problem was experienced, state officials told us they 
considered such factors as the number of machines that malfunctioned, 
the number of voters affected, and the difficulty they had in 
identifying and resolving the problem. 

[45] One state identified a problem that occurred to a great extent in 
the category of "Other"--voter assistance terminals that often failed 
to read ballots. 

[46] The remaining respondents either checked "Don't know" or did not 
respond to this question. 

[47] The remaining respondents either checked "Don't know" or did not 
respond to this question. 

[48] An additional survey choice was "on-site monitoring" of voting 
system problems, where state election officials traveled to local 
jurisdictions to observe system operations and problems first-hand. 

[49] Section 402 of HAVA requires that states establish complaint 
procedures to address deficiencies in the voting system requirements of 
HAVA Title III. 

[50] The remaining states reported that they either did not conduct 
any of the evaluations or did not know whether a particular action was 
taken. 

[51] Some of the states we surveyed reported that the challenges were 
not applicable to their election environment. 

[52] Statutes or regulations require testing voting systems to the 1990 
or 2002 voting system standards or the 2005 voluntary voting system 
guidelines. 

[53] Prior to July 2006, NASED reviewed testing results from 
independent testing laboratories and granted qualification to systems 
that met federal standards, either the 1990 or 2002 voting system 
standards. 

[54] Vendors began submitting voting systems to federally accredited 
laboratories for review and testing against either the 2005 voluntary 
guidelines or the 2002 voting system standards in February 2007. As of 
May 2008, EAC had registered 12 manufacturers and accepted 
certification applications for 9 different voting systems; none of 
these systems had received full EAC certification. 

[55] EAC has taken steps to inform states and others of the status of 
the voting systems that are undergoing federal certification and, in 
May 2008, notified election officials nationwide that it does not 
expect to expedite the certification process because doing so might 
lower the quality of testing and jeopardize confidence in the program. 

[56] Our survey did not differentiate between the 2005 and draft 2007 
voluntary voting system guidelines. As such, survey responses could 
express satisfaction or dissatisfaction with either set of standards. 

[57] These laboratories were CIBER, Inc., SysTest Labs, and Wyle Labs. 

[58] This state has no statutory requirement for federal certification 
prior to state approval of a voting system. 

[59] Generally, test plans outline the approach a testing laboratory 
expects to take in testing a system to the federal guidelines; test 
reports describe the system being tested (including hardware and 
software specifications) and summarize the testing activities performed 
and the results, including any deficiencies with the system or its 
documentation. 

[60] [hyperlink, http://www.gao.gov/cgi-bin/getrpt?GAO-06-450]. 

[End of section] 

GAO's Mission: 

The Government Accountability Office, the audit, evaluation, and 
investigative arm of Congress, exists to support Congress in meeting 
its constitutional responsibilities and to help improve the performance 
and accountability of the federal government for the American people. 
GAO examines the use of public funds; evaluates federal programs and 
policies; and provides analyses, recommendations, and other assistance 
to help Congress make informed oversight, policy, and funding 
decisions. GAO's commitment to good government is reflected in its core 
values of accountability, integrity, and reliability. 

Obtaining Copies of GAO Reports and Testimony: 

The fastest and easiest way to obtain copies of GAO documents at no 
cost is through GAO's Web site [hyperlink, http://www.gao.gov]. Each 
weekday, GAO posts newly released reports, testimony, and 
correspondence on its Web site. To have GAO e-mail you a list of newly 
posted products every afternoon, go to [hyperlink, http://www.gao.gov] 
and select "E-mail Updates." 

Order by Mail or Phone: 

The first copy of each printed report is free. Additional copies are $2 
each. A check or money order should be made out to the Superintendent 
of Documents. GAO also accepts VISA and Mastercard. Orders for 100 or 
more copies mailed to a single address are discounted 25 percent. 
Orders should be sent to: 

U.S. Government Accountability Office: 
441 G Street NW, Room LM: 
Washington, D.C. 20548: 

To order by Phone: 
Voice: (202) 512-6000: 
TDD: (202) 512-2537: 
Fax: (202) 512-6061: 

To Report Fraud, Waste, and Abuse in Federal Programs: 

Contact: 

Web site: [hyperlink, http://www.gao.gov/fraudnet/fraudnet.htm]: 
E-mail: fraudnet@gao.gov: 
Automated answering system: (800) 424-5454 or (202) 512-7470: 

Congressional Relations: 

Ralph Dawn, Managing Director, dawnr@gao.gov: 
(202) 512-4400: 
U.S. Government Accountability Office: 
441 G Street NW, Room 7125: 
Washington, D.C. 20548: 

Public Affairs: 

Chuck Young, Managing Director, youngc1@gao.gov: 
(202) 512-4800: 
U.S. Government Accountability Office: 
441 G Street NW, Room 7149: 
Washington, D.C. 20548: