Editor’s note: Election seasons are crunch time for election administrators, people campaigning for candidates or electoral reforms, and voter protection groups. But the 2024 cycle was also crunch time for teams of academics conducting research on the election, funded by Public Agenda’s Democracy Renewal Project (DRP).
As a non-academic, I was curious to better understand how research on elections works, and what some of these researchers were investigating. So, Democracy Notes partnered with Public Agenda to bring answers to those questions to you!
We hope you enjoy this first conversation about the methods researchers can use to study an election.
What’s all this about?
Public Agenda connects research to action in service of a healthier democracy. We built the Democracy Renewal Project (DRP) to answer questions that matter to people doing the practical work of strengthening democratic processes, institutions, and cultures.
In the first DRP cycle, we funded ten research teams who are using rigorous methods to produce practical evidence on how we can increase access to electoral participation while strengthening trust in elections. We timed our grantmaking so researchers could study the 2024 presidential election, and right now they are wrapping up data analysis. We eagerly await their findings. But even before we have answers to specific research questions, we can learn from their experience conducting research during an election cycle.
Now that the 2024 election is several months behind us, most Americans are naturally focused on the actions of the people who won those elections, not least because we are experiencing rapid and dramatic change. That makes sense. But the pace and magnitude of change should remind us that the primary way Americans can act on their satisfaction or dissatisfaction with elected officials is through elections. So now is a great time to dig into the latest high-quality election research.
In the coming weeks, we’ll be sharing three interviews we conducted with DRP researchers and advisors. We hope you find their methods and insights useful! Here’s the first one.
The interview
Don Green, J.W. Burgess Professor of Political Science at Columbia University, literally wrote the book (or, in his case, the books) on field experiments in political science (see, among others, Field Experiments: Design, Analysis, and Interpretation, published in 2012, and Get Out the Vote: How to Increase Voter Turnout, co-authored with Alan S. Gerber, now in its fifth edition).
Don is a brilliant and generous advisor to Public Agenda’s Democracy Renewal Project. I have condensed and revised our conversation.
Don Green: How about we start with the question, “What is an experiment?”
In an experiment, the units of observation — in other words, the things we are studying — are randomly assigned to either a treatment or a control condition.
Emily Sandusky, Director, Public Agenda: Tell me more about treatment conditions in the social sciences.
DG: Let's consider experiments in the world of politics. Social scientists often partner with organizations that want to accomplish something like increasing voter turnout, recruiting volunteers, or raising money.
Imagine a fundraising effort. A group might send out one of several versions of a fundraising letter. Each of these messages is a treatment. The researcher’s job is to randomly assign treatments to potential donors, then to tally up and compare the amount of money that comes in.
Because the treatments are randomly assigned, we know that random chance — rather than shared preferences or demographic characteristics, for example — is the only thing that differentiates individuals who receive fundraising message A from those who receive fundraising message B. The larger the study, the more confident the researcher can be that the messages are the cause of any differences in money raised across the experimental groups.
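[Editor’s aside: to make the logic of random assignment concrete, here is a minimal, purely illustrative Python sketch. The donors, donation amounts, and 50/50 message split are invented for illustration; this is not code or data from any DRP study.]

```python
import random

# Illustrative only: a hypothetical list of potential donors.
donors = [f"donor_{i}" for i in range(1000)]

# Randomly assign each donor to fundraising message A or B.
random.seed(42)
assignments = {donor: random.choice(["A", "B"]) for donor in donors}

# After the mailing goes out, suppose we record how much each donor gave
# (zero for most people). Here the amounts are simulated placeholders.
donations = {donor: random.choice([0, 0, 0, 10, 25, 50]) for donor in donors}

# Compare average donations between the two randomly formed groups.
def group_mean(message):
    amounts = [donations[d] for d in donors if assignments[d] == message]
    return sum(amounts) / len(amounts)

effect = group_mean("A") - group_mean("B")
print(f"Message A raised {effect:+.2f} more per donor than message B, on average.")
```

Because chance alone decides who gets which message, any sizable gap in average donations between the two groups can be attributed to the messages themselves rather than to pre-existing differences between the donors.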
ES: It sounds like you're describing a field experiment.
DG: Yes, that one is a field experiment because it takes place in a naturalistic setting. Researchers also embed experiments in surveys. They might randomly assign different groups of respondents to different fundraising messages and observe which messages elicit interest in making a donation.
Field experiments and survey experiments vary in how closely they approximate the kind of setting, subjects, interventions, and outcomes that a group or researcher wants to understand.
If you’re a survey researcher and you’re interested in a survey response, then there’s no reason to go into the field. On the other hand, if you’re interested in the effects of an intervention on voter turnout, a survey may not capture the outcome you really care about, which is voting. Instead, you’d have to rely on responses to a survey question about intention to vote or other expressions of interest in voting.
ES: How do researchers know whether someone actually voted?
DG: Good question. Imagine that a labor union is trying to mobilize its members to vote. The union has a list of members. The first step is to match names and addresses on the members list to an official voter file, a public record that indicates who is registered to vote.
The experimenter’s job is to randomly assign some registered union members to receive the mailing and others to a control group that won’t get the mailing. After the election, the researcher would examine the updated voter file, which also indicates who voted (but not the candidate they voted for) to compare voting rates among union members who did or did not get the get-out-the-vote mailing.
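[Editor’s aside: here is a similarly hypothetical Python sketch of the union example: matching a membership list to a voter file, randomly assigning the mailing, and comparing turnout after the election. The names, addresses, match rate, and turnout figures are all simulated placeholders, not real records.]

```python
import random

# Illustrative only: a hypothetical union membership list and voter file.
members = [{"name": f"member_{i}", "address": f"{i} Main St"} for i in range(500)]
voter_file = {
    (m["name"], m["address"]): {"registered": True, "voted": None}
    for m in members[:400]  # suppose 400 of 500 members match a registration record
}

# Step 1: keep only members who match the voter file (i.e., registered voters).
matched = [m for m in members if (m["name"], m["address"]) in voter_file]

# Step 2: randomly assign matched members to receive the mailing or not.
random.seed(7)
treatment, control = [], []
for m in matched:
    (treatment if random.random() < 0.5 else control).append(m)

# Step 3: after the election, the updated voter file shows who voted
# (but not whom they voted for). Simulated here as a coin flip.
for record in voter_file.values():
    record["voted"] = random.random() < 0.5

def turnout(group):
    votes = [voter_file[(m["name"], m["address"])]["voted"] for m in group]
    return sum(votes) / len(votes)

print(f"Turnout with mailing: {turnout(treatment):.1%}, without: {turnout(control):.1%}")
```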
ES: Around election time, we hear a ton about polling. What can we learn from experiments that we don't learn from the polls and vice versa?
DG: A poll gives a snapshot of public opinion. How reliably the snapshot reflects what is actually going on depends on the methods used to select the sample, the response rate, and other features of the poll. One of the challenges polls have faced in the last 20 or 30 years is rapidly declining response rates, which makes it tricky to draw inferences about the state of a race. That’s a nagging problem. But polls can also play a useful role in gauging opinion over time. You might say, well, response rates are low, but polls can still track over-time trends if the nonresponse problem is similar over time.
Polls answer descriptive questions, but they rarely speak convincingly about cause and effect.
ES: What is it about the design of social science experiments that allows researchers to identify causality?
DG: First, the drawback to observational data like opinion polls is that we very often don’t know how to form a fair comparison between the treated and the untreated. When a treatment, like seeing a political ad, is not randomly assigned, we cannot rule out that the treated and untreated groups differ from each other in ways that might generate misleading conclusions. Members of one group might be more interested in politics. We can use clever statistical techniques to approximate an experiment with random assignment, but they’re always going to require some strong assumptions. The fact that those assumptions are lurking in the background means that a determined skeptic might not be completely convinced that the treatment led to the observed outcome, rather than some pre-existing difference between treated and untreated groups. In fact, they may not be convinced at all.
A strength of experimental studies is that random assignment allows for a fair comparison, which gives us an unbiased reading of the effect of a treatment, at least in the experimental context, which may be quite narrow. As an example, an experiment might test whether receiving a political message that is distributed by mail in a single state affects voter turnout.
Critics of field experiments may not be satisfied with an answer that focuses on the specific treatment or experimental context. These critics, in essence, are calling for many, many more experiments. They are arguing that we should cast our net much more broadly and look at different kinds of people in different circumstances and get a fuller picture of cause and effect. Most experimenters view this kind of healthy skepticism as inspiration for an ever more ambitious experimental research agenda.
Emily Sandusky is a Director at Public Agenda.
Have thoughts on this piece? You can submit a pitch to Democracy Takes here.