Eszter Czibor, David Jimenez-Gomez, John A List
Cited by*: Downloads*:

What was once broadly viewed as an impossibility - learning from experimental data in economics - has now become commonplace. Governmental bodies, think tanks, and corporations around the world employ teams of experimental researchers to answer their most pressing questions. For their part, in the past two decades academics have begun to more actively partner with organizations to generate data via field experimentation. While this revolution in evidence-based approaches has served to deepen the economic science, recently a credibility crisis has caused even the most ardent experimental proponents to pause. This study takes a step back from the burgeoning experimental literature and introduces 12 actions that might help to alleviate this credibility crisis and raise experimental economics to an even higher level. In this way, we view our "12 action wish list" as discussion points to enrich the field.
Erwin Bulte, John A List, Daan van Soest

Incomplete contracts are the rule rather than the exception, and any incentive scheme faces the risk of improving performance on incented aspects of a task to the detriment of performance on non-incented aspects. Recent research documents the effect of loss-framed versus gain-framed incentives on incentivized behavior, but how do such incentives affect overall performance? We explore potential trade-offs by conducting field experiments in an artificial "workplace". We explore two types of incentive spillovers: those contemporaneous to the incented task and those subsequent to the incented task. We report three main results. First, consonant with the extant literature, a loss aversion incentive induces greater effort on the incented task. Second, offsetting this productivity gain, we find that the quality of work decreases if quality is not specified in the incentive contract. Third, we find no evidence of harmful spillover effects to subsequent tasks; if anything, the loss aversion incentive induces more effort in subsequent tasks. Taken together, our results highlight that measuring and accounting for incentive spillovers are important when considering their overall impact.
Michal Krawczyk

Several studies have identified the "better than average" effect - the tendency of most people to think they are better than most other people on most dimensions. The effect would have profound consequences (see e.g. Barber and Odean (2001)). These findings are predominantly based on non-incentivized, non-verifiable self-reports. The current study looks at the impact of incentives to judge one's abilities accurately in a framed field experiment. Nearly 400 students were asked to predict whether they would do better or worse than average in an exam. The most important findings are that subjects tend to show more confidence when incentivized and when asked before the exam rather than afterwards. The first effect is particularly pronounced among females.
John A List, Dana L Suskind

Op-ed
John A List

While empirical economics has made important strides over the past half century, there is a recent attack that threatens the foundations of the empirical approach in economics: external validity. Certain dogmatic arguments are not new, yet in some circles the generalizability question is beyond dispute, rendering empirical work as a passive enterprise based on frivolity. Such arguments serve to caution even the staunchest empirical advocates from even starting an empirical inquiry in a novel setting. In its simplest form, questions of external validity revolve around whether the results of the received study can be generalized to different people, situations, stimuli, and time periods. This study clarifies and places the external validity crisis into perspective by taking a unique glimpse into the grandest of trials: The External Validity Trial. A key outcome of the proceedings is an Author Onus Probandi, which outlines four key areas that every study should report to address external validity. Such an evaluative approach properly rewards empirical advances and justly recognizes inherent empirical limitations.
Matthew A. Kraft, John A List, Jeffrey A Livingston, Sally Sadoff

In-person tutoring programs can have large impacts on K-12 student achievement, but high program costs and limited local supply of tutors have hampered scale-up. Online tutoring provided by volunteers can potentially reach more students in need. We implemented a randomized pilot program of online tutoring that paired college volunteers with middle school students. We estimate consistently positive but statistically insignificant effects on student achievement, 0.07σ for math and 0.04σ for reading. While our estimated effects are smaller than those for many higher-dosage in-person programs, they are from a significantly lower-cost program delivered within the challenging context of the COVID-19 pandemic.
John A List

In 2019 I put together a summary of data from my field experiments website that pertained to framed field experiments (see List 2024). Several people have asked me if I have an update. In this document I update all figures and numbers to show the details for 2023. I also include the description from the 2019 paper below.
John A List

In 2019 I put together a summary of data from my field experiments website that pertained to framed field experiments. Several people have asked me if I have an update. In this document I update all figures and numbers to show the details for 2021. I also include the description from the 2019 paper below.
Greg Allenby, Russell Belk, Catherine Eckel, Robert Fisher, Ernan Haruvy, John A List, Yu Ma, Peter Popkowski Leszczyc, Yu Wang, Sherry Xin Li

We offer a unified conceptual, behavioral, and econometric framework for optimal fundraising that deals with both synergies and discrepancies between approaches from economics, consumer behavior, and sociology. The purpose is to offer a framework that can bridge differences and open a dialogue between disciplines in order to facilitate optimal fundraising design. The literature is extensive, and our purpose is to offer a brief background and perspective on each of the approaches, provide an integrated framework leading to new insights, and discuss areas of future research.
Andrea Morone, Paola Tiranzoni

This study presents an analysis of hypothetical bias in WTA valuation connected with a bargaining game setting, in a field experiment context. The field of the experiment is the History Channel television show "Pawn Stars". We collected a unique dataset that allowed us to analyze not only the gap between real and hypothetical WTA but also how the two affect the bargaining game and vice versa. The general aim of this paper is to study the hypothetical bias related to subjects' WTA and the factors that most affect it. The main results of our paper show that the hypothetical bias is positive and depends mainly on the price range and the type of good.