Omar Al-Ubaydli, Faith Fatchen, John A List

Field experiments are a useful empirical tool that can be deployed in any sub-discipline, including institutional economics, to enhance its empirical insights. However, we argue that there are fundamental barriers to using field experiments to understand the impact of institutions on economic growth. Despite these obstacles, we present some significant scholarly contributions that merit exposition, while also proposing future methods for using field experiments within institutional economics. While field experiments may be limited in answering questions in institutional economics with macroeconomic outcomes, there is great potential in employing them to answer micro-founded questions.
John A List, Lina Ramirez, Julia Seither, Jaime Unda, Beatriz Vallejo

Misinformation represents a vital threat to the societal fabric of modern economies. While the supply side of the misinformation market has begun to receive increased scrutiny, the demand side has received scant attention. We explore the demand for misinformation through the lens of augmenting critical thinking skills in a field experiment during the 2022 Presidential election in Colombia. Data from roughly 2,000 individuals suggest that our treatments enhance critical thinking, causing subjects to more carefully consider the truthfulness of potential misinformation. We furthermore provide evidence that reducing the demand for fake news can also reduce its spread by encouraging the reporting of misinformation.
John A List

In 2019, I put together a summary of data from my field experiments website that pertained to natural field experiments (Harrison and List, 2024). Several people have asked me for updates. In this document I update all figures and numbers to show the details for 2023. I also include the description from the original paper below.
John A List

In 2019, I put together a summary of data from my field experiments website that pertained to framed field experiments (see List 2024). Several people have asked me if I have an update. In this document I update all figures and numbers to show the details for 2023. I also include the description from the 2019 paper below.
John A List

Social scientists have increasingly turned to the experimental method to understand human behavior. One critical issue that makes solving social problems difficult is "scaling" an idea from a small group to a larger group in more diverse situations. The urgency of scaling policies impacts us every day, whether it is protecting the health and safety of a community or enhancing the opportunities of future generations. Yet, a common result is that when we scale ideas, most experience a "voltage drop": upon scaling, the benefit-cost profile depreciates considerably. To combat voltage drops, we must optimally generate policy-based evidence. Optimality requires answering two crucial questions: what information to generate and in what sequence. The economics underlying the science of scaling provides insights into these questions, which are in some cases at odds with conventional approaches. For example, there are important situations wherein I advocate flipping the traditional social science research model to an approach that, from the beginning, produces the type of policy-based evidence that the science of scaling demands. To do so, I propose augmenting efficacy trials by including relevant tests of scale in the original discovery process, which forces the scientist to naturally start with a recognition of the big picture: what information do I need to have scaling confidence?
John A List

In 2019, I put together a summary of data from my field experiments website that pertained to artefactual field experiments. Several people have asked me if I have an update. In this document I update all figures and numbers to show the details for the year 2023. I also include the description from the 2019 paper below. The definition of artefactual field experiments comes originally from Harrison and List (2004) and is advanced in List (2024).
Amanda Agan, Bo Cowgill, Laura Gee

Correspondence audit studies have sent almost one-hundred-thousand resumes without informing subjects they are in a study, increasing realism at the cost of full transparency. We study the potential trade-offs of this lack of transparency by running a hiring field experiment with recruiters in a natural setting. One group of recruiters is told they are screening for an employer, and another is told they are part of an academic study. Job applicants' gender is randomly assigned. When subjects are told they are in an experiment, callback rates and willingness-to-pay for male candidates decline relative to female candidates (with no detectable change for female candidates). This suggests that telling subjects they are in an experiment would lead to underestimates of gender inequality.
John A List

ASSA 2023 presentation
John A List

"Putting Economic Research into Practice at Businesses (to Inform Science and Major Social Challenges)" Slides from ASSA NABE
Gary Charness, Brian Jabarian, John A List

We investigate the potential for Large Language Models (LLMs) to enhance scientific practice within experimentation by identifying key areas, directions, and implications. First, we discuss how these models can improve experimental design, including refining elicitation wording, coding experiments, and producing documentation. Second, we discuss the implementation of experiments using LLMs, focusing on enhancing causal inference by creating consistent experiences, improving comprehension of instructions, and monitoring participant engagement in real time. Third, we highlight how LLMs can help analyze experimental data, including pre-processing, data cleaning, and other analytical tasks, while helping reviewers and replicators investigate studies. Each of these tasks improves the probability of reporting accurate findings. Finally, we recommend a scientific governance blueprint that manages the potential risks of using LLMs for experimental research while promoting their benefits. This could pave the way for open science opportunities and foster a culture of policy and industry experimentation at scale.