Greg Allenby, Russell Belk, Catherine Eckel, Robert Fisher, Ernan Haruvy, John A List, Yu Ma, Peter Popkowski Leszczyc, Yu Wang, Sherry Xin Li

We offer a unified conceptual, behavioral, and econometric framework for optimal fundraising that addresses both synergies and discrepancies between approaches from economics, consumer behavior, and sociology. Our aim is to bridge differences and open a dialogue between disciplines in order to facilitate optimal fundraising design. The literature is extensive, so we offer a brief background and perspective on each approach, provide an integrated framework that leads to new insights, and discuss areas for future research.
Matthew A. Kraft, John A List, Jeffrey A Livingston, Sally Sadoff

In-person tutoring programs can have large impacts on K-12 student achievement, but high program costs and limited local supply of tutors have hampered scale-up. Online tutoring provided by volunteers can potentially reach more students in need. We implemented a randomized pilot program of online tutoring that paired college volunteers with middle school students. We estimate consistently positive but statistically insignificant effects on student achievement, 0.07σ for math and 0.04σ for reading. While our estimated effects are smaller than those for many higher-dosage in-person programs, they are from a significantly lower-cost program delivered within the challenging context of the COVID-19 pandemic.
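As a rough illustration of the metric behind estimates such as 0.07σ, the sketch below simulates a two-arm trial and computes an intent-to-treat effect in standard-deviation units with a conventional confidence interval. The sample size, assignment, and effect size are assumed for illustration only; this is not the study's data or code.

```python
# Minimal sketch (simulated data, not the paper's): an intent-to-treat estimate
# in standard-deviation units, the metric behind figures like 0.07 sigma.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
treated = rng.integers(0, 2, size=n)              # 1 = offered online tutoring (assumed)
true_effect = 0.07                                # assumed effect in SD units
score = rng.normal(0.0, 1.0, size=n) + true_effect * treated  # standardized test score

# With pre-standardized scores, the difference in means is the ITT effect in SD units
diff = score[treated == 1].mean() - score[treated == 0].mean()
se = np.sqrt(score[treated == 1].var(ddof=1) / (treated == 1).sum()
             + score[treated == 0].var(ddof=1) / (treated == 0).sum())
print(f"ITT estimate: {diff:.3f} SD, 95% CI [{diff - 1.96*se:.3f}, {diff + 1.96*se:.3f}]")
```

With effects this small, the confidence interval typically straddles zero at pilot-scale samples, which is why a positive point estimate can remain statistically insignificant.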
Christopher Cotton, Brent R Hickman, John A List, Joseph Price, Sutanuka Roy

We conduct a field experiment across three diverse school districts to structurally identify student motivation and study productivity parameters in a model of adolescent human capital development. By observing study time, homework task completion, and test results, we can identify individual and demographic variations in motivation and study time effectiveness. Struggling students typically do not lack motivation but rather struggle to convert study time into completed assignments and proficiency improvements. The study also finds that attending a higher-performing school is associated with both higher productivity and higher motivation relative to peers with similar observables in lower-performing schools. Counterfactual analyses estimate that school quality differences account for a substantial share of the racial differences in test scores, and consider the impact of alternative policies aimed at reducing racial performance gaps.
John A List

In 2019 I put together a summary of data from my field experiments website that pertained to framed field experiments. Several people have asked me if I have an update. In this document I update all figures and numbers to show the details for 2021. I also include the description from the 2019 paper below.
Omar Al-Ubaydli, John A List, Dana L Suskind

Policymakers are increasingly turning to insights gained from the experimental method as a means of informing public policies. Whether, and to what extent, insights from a research study scale to the level of the broader public is, in many situations, based on blind faith. This scale-up problem can lead to a vast waste of resources, a missed opportunity to improve people's lives, and a diminution in the public's trust in the scientific method's ability to contribute to policymaking. This study provides a theoretical lens to deepen our understanding of the science of how to use science. Through a simple model, we highlight three elements of the scale-up problem: (1) when evidence becomes actionable (appropriate statistical inference); (2) properties of the population; and (3) properties of the situation. We argue that until these three areas are fully understood and recognized by researchers and policymakers, the threats to scalability will render any scaling exercise particularly vulnerable. In this way, our work represents a challenge to empiricists to estimate how important the various threats to scalability are in practice, and to address those threats in their original research.
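One way to make element (1) concrete is the post-study probability that a statistically significant finding reflects a true effect, which depends on statistical power, the significance threshold, and the prior probability that the tested idea is true. The sketch below is illustrative only; the priors and power values are assumptions, not figures from the paper.

```python
# Illustrative sketch (assumed numbers): post-study probability that a
# significant finding reflects a true effect, one lens on "when does
# evidence become actionable."
def post_study_probability(prior, power, alpha):
    """P(effect is true | result is significant), single study, via Bayes' rule."""
    true_pos = power * prior          # significant results from true effects
    false_pos = alpha * (1 - prior)   # significant results from false positives
    return true_pos / (true_pos + false_pos)

for prior in (0.05, 0.25, 0.50):      # assumed prior probability the tested idea is true
    psp = post_study_probability(prior, power=0.80, alpha=0.05)
    print(f"prior={prior:.2f} -> post-study probability={psp:.2f}")
```

Under these assumed inputs, a single significant result from a long-shot idea carries far less actionable weight than the same result from a well-grounded hypothesis, which is the intuition behind treating inference as the first threat to scalability.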
Rocco Caferra, Roberto Dell'Anno, Andrea Morone

This paper aims to unmask the inadequacy of the review process at a sample of fee-charging journals in economics. We submitted a bait manuscript to 104 academic economics journals to test whether the peer-review process differs between Article Processing Charge (APC) journals and traditional journals that do not require a publication fee. The bait article was based on completely made-up data, with evident errors in methodology, literature, reporting of results, and quality of language. Nevertheless, about half of the APC journals fell into the trap: their editors accepted the article and requested payment of the publication fee. We conclude that the traditional model has a more effective incentive mechanism for selecting articles on quality standards. We also confirm that so-called "predatory journals", i.e., academic journals that accept papers without a quality check, exploit the APC scheme to increase their profits, and that they are able to enter whitelists (e.g., Scopus, COPE). Accordingly, poor-quality articles published in APC journals shed light on the weakness of methodologies based on the mechanical inclusion of academic journals in scientific database indexes, since such articles end up counting toward bibliometric evaluations of research institutions and scholars' productivity.
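For readers who want to see how such an acceptance-rate gap might be tested formally, the sketch below runs a two-proportion z-test. The journal counts are hypothetical placeholders, since the abstract does not report the exact split of the 104 journals or of the acceptances.

```python
# Hedged sketch: comparing acceptance rates of a bait article between APC and
# traditional journals with a two-sided two-proportion z-test. The counts are
# hypothetical, not the paper's actual numbers.
from math import sqrt, erf

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided z-test for equality of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal-approximation p-value
    return z, p_value

# Hypothetical split: 25 of 50 APC journals accept vs. 2 of 54 traditional journals
z, p = two_prop_ztest(25, 50, 2, 54)
print(f"z = {z:.2f}, p = {p:.4f}")
```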
John A List

While empirical economics has made important strides over the past half century, a recent attack threatens the foundations of the empirical approach in economics: external validity. Certain dogmatic arguments are not new, yet in some circles the generalizability question is treated as beyond dispute, rendering empirical work a passive enterprise based on frivolity. Such arguments serve to caution even the staunchest empirical advocates against starting an empirical inquiry in a novel setting. In its simplest form, the question of external validity revolves around whether the results of the received study can be generalized to different people, situations, stimuli, and time periods. This study clarifies and places the external validity crisis into perspective by taking a unique glimpse into the grandest of trials: The External Validity Trial. A key outcome of the proceedings is an Author Onus Probandi, which outlines four key areas that every study should report to address external validity. Such an evaluative approach properly rewards empirical advances and justly recognizes inherent empirical limitations.
Omar Al-Ubaydli, John A List, Claire Mackevicius, Min Sok Lee, Dana L Suskind

Policymakers are increasingly turning to insights gained from the experimental method as a means to inform large-scale public policies. Critics view this increased usage as premature, pointing to the fact that many experimentally tested programs fail to deliver on their promise at scale. Under this view, the experimental approach drives too much public policy. Yet, if policymakers could be more confident that the original research findings would be delivered at scale, even the staunchest critics would carve out a larger role for experiments to inform policy. Leveraging the economic framework of Al-Ubaydli et al. (2019), we put forward 12 simple proposals, spanning researchers, policymakers, funders, and stakeholders, which together tackle the most vexing scalability threats. The framework highlights that only after we deepen our understanding of the scale-up problem will we be on solid ground to argue that scientific experiments should hold a more prominent place in the policymaker's quiver.
Eszter Czibor, David Jimenez-Gomez, John A List

What was once broadly viewed as an impossibility, learning from experimental data in economics, has now become commonplace. Governmental bodies, think tanks, and corporations around the world employ teams of experimental researchers to answer their most pressing questions. For their part, over the past two decades academics have begun to partner more actively with organizations to generate data via field experimentation. While this revolution in evidence-based approaches has served to deepen the economic science, recently a credibility crisis has caused even the most ardent experimental proponents to pause. This study takes a step back from the burgeoning experimental literature and introduces 12 actions that might help to alleviate this credibility crisis and raise experimental economics to an even higher level. In this way, we view our "12 action wish list" as discussion points to enrich the field.
Greer K Gosnell, John A List, Robert D Metcalfe

Increasing evidence indicates the importance of management in determining firms' productivity. Yet causal evidence on the effectiveness of management practices is scarce, especially for high-skilled workers in the developed world. In an eight-month field experiment measuring the productivity of captains in the commercial aviation sector, we test four distinct management practices: (i) performance monitoring; (ii) performance feedback; (iii) target setting; and (iv) prosocial incentives. We find that these management practices, particularly performance monitoring and target setting, significantly increase captains' productivity on the targeted fuel-saving dimensions. We identify positive spillovers of the tested management practices on job satisfaction and carbon dioxide emissions, and captains overwhelmingly express a desire for deeper managerial engagement. Both the implementation and the results of the study reveal an uncharted opportunity for management researchers to delve into the black box of firms and rigorously examine the determinants of productivity amongst skilled labor.