Studying Elections: Data Quality and Pitfalls in Measuring the Effects of Voting Technologies
Working Paper No.: 21
Date Published: 2008-11-30

Author(s):

R. Michael Alvarez, California Institute of Technology

Stephen Ansolabehere, Massachusetts Institute of Technology

Charles Stewart III, Massachusetts Institute of Technology

Abstract:

Professor Geralyn Miller reminds us of the range of voting administration practices across the United States. We use this variability to study the average performance of various types of voting equipment throughout the country (Ansolabehere and Stewart n.d.). Professor Miller suggests that the performance of equipment is, in fact, quite variable across states. A particular technology that performs poorly nationwide might perform well in a particular setting, either because the technology is well suited to the peculiarities of the setting or because a locality has been proficient in overcoming shortcomings that vex other jurisdictions. In making this point, Professor Miller examines two states, Wyoming and Pennsylvania, in the 2000 election.

While we are sensitive to the general point Miller makes, her article does not in fact demonstrate it. Instead, careful consideration of this paper raises a separate but equally important matter: the content and quality of local and state election reports. The data she employs run up against problems that face all researchers doing this type of analysis. Rather than mount a full-scale critique of Miller's findings, we think it more constructive to focus on the two major data problems in her article, as an illustration of precisely how difficult it is to conduct this type of research. The most serious errors in Miller's article would not be readily apparent to most researchers doing voting technology research. Indeed, as we will show, Miller stumbled upon one error that we ourselves committed and for which we were publicly chastised.

The two states that Miller studies illustrate separate and important data problems. Pennsylvania illustrates that states do not report all the data necessary to study the performance of voting technologies. Wyoming illustrates that states do not always report what they appear to be reporting.

Beyond data collection concerns, there is also a basic issue of research design. Single cross-sectional studies of individual states have little statistical power: the number of counties is simply too small to arrive at meaningful estimates of the average effects of technologies, let alone the interactive or varying effects of the technologies used. Research on election administration needs to go beyond single elections in order to establish the point that voting technology performance varies across states. That lesson is most clearly borne out in our prior research on this subject, in which many puzzling results that emerge in cross-sections are resolved in panel studies.
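To make the statistical power concern concrete, the following Python sketch runs a simple Monte Carlo simulation; the effect size, noise level, and even/odd county splits are purely illustrative assumptions and are not figures from the paper. It asks how precisely an ordinary least squares regression on county-level residual vote rates can recover a known technology effect when the sample is one small state versus the whole country.

import numpy as np

rng = np.random.default_rng(0)

TRUE_EFFECT = -0.5   # illustrative assumption: technology lowers residual votes by 0.5 points
NOISE_SD = 1.5       # illustrative assumption: county-to-county noise unrelated to technology
N_SIMS = 5_000       # number of simulated elections per sample size

def average_se(n_counties):
    """Average OLS standard error of the technology effect when roughly half
    of n_counties use the technology, across N_SIMS simulated elections."""
    ses = []
    for _ in range(N_SIMS):
        tech = np.zeros(n_counties)
        tech[: n_counties // 2] = 1.0
        tech = rng.permutation(tech)                # randomly assign the technology
        y = 2.0 + TRUE_EFFECT * tech + rng.normal(0.0, NOISE_SD, n_counties)
        x = tech - tech.mean()                      # centered binary regressor
        beta = (x @ y) / (x @ x)                    # OLS slope estimate
        resid = y - y.mean() - beta * x             # regression residuals
        se = np.sqrt((resid @ resid) / (n_counties - 2) / (x @ x))
        ses.append(se)
    return float(np.mean(ses))

for n in (23, 67, 3000):   # Wyoming, Pennsylvania, roughly the whole country
    print(f"{n:>5} counties: average standard error ~ {average_se(n):.2f} points")

Under these assumed values, the standard error with 23 counties (about 0.6 points) is larger than the assumed true effect (0.5 points), so even a genuine technology effect could not be distinguished from zero in a single small-state cross-section, whereas pooling roughly 3,000 counties nationwide pins the estimate down to within a few hundredths of a point.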
