Steven D Levitt, John A List, Susanne Neckermann, Sally Sadoff
Cited by*: 0 Downloads*: 161

Research in behavioral economics has established the importance of factors such as reference-dependent preferences, hyperbolic preferences, and the value placed on non-financial rewards. To date, these insights have had little impact on the way the educational system operates. Through a series of field experiments involving thousands of primary and secondary school students, we demonstrate the power of behavioral economics to influence educational performance. Several insights emerge. First, we find that incentives framed as losses have more robust effects than comparable incentives framed as gains. Second, we find that non-financial incentives are considerably more cost-effective than financial incentives for younger students but are not effective with older students. Finally, and perhaps most importantly, consistent with hyperbolic discounting, the motivating power of the incentives vanishes entirely when rewards are handed out with a delay. Since the rewards to educational investment virtually always come with a delay, our results suggest that the current set of incentives may lead to under-investment. For policymakers, our findings imply that in the absence of immediate incentives, many students put forth low effort on standardized tests, which may create biases in measures of student ability, teacher value added, school quality, and achievement gaps.
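The delay finding can be read through a standard discounting lens. As an illustrative formalization (not taken from the paper itself), the quasi-hyperbolic (beta-delta) model values a reward x delivered t periods from now as

  V(x, t) = x             if t = 0
  V(x, t) = β · δ^t · x   if t ≥ 1,   with 0 < β < 1 and 0 < δ ≤ 1,

so any delay, however short, scales the reward down by the present-bias factor β; with a small β, even a sizable delayed prize carries little immediate motivating power.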
Steven D Levitt, John A List, Sally Sadoff
Cited by*: 8 Downloads*: 46

We test the effect of performance-based incentives on educational achievement in a low-performing school district using a randomized field experiment. High school freshmen were provided monthly financial incentives for meeting an achievement standard based on multiple measures of performance, including attendance, behavior, grades, and standardized test scores. Within the design, we compare the effectiveness of varying the recipient of the reward (students or parents) and the incentive structure (fixed rate or lottery). While the overall effects of the incentives are modest, the program has a large and significant impact among students on the threshold of meeting the achievement standard. These students continue to outperform their control group peers a year after the financial incentives end. However, the program effects fade in longer-term follow-up, highlighting the importance of longer-term tracking of incentive programs.
Ufuk Akcigit, Fernando Alvarez, Stephane Bonhomme, George M Constantinides, Douglas W Diamond, Eugene F Fama, David W Galenson, Michael Greenstone, Lars Peter Hansen, Harald Uhlig, James J Heckman, Ali Hortacsu, Emir Kamenica, Greg Kaplan, Anil K Kashyap, Steven D Levitt, John A List, Robert E Lucas Jr., Magne Mogstad, Roger Myerson, Derek Neal, Canice Prendergast, Raghuram G Rajan, Philip J Reny, Azeem M Shaikh, Robert Shimer, Hugo F Sonnenschein, Nancy L Stokey, Richard H Thaler, Robert H Topel, Robert Vishny, Luigi Zingales
Cited by*: 0 Downloads*: 207

No abstract available
Steven D Levitt, John A List, Chad Syverson
Cited by*: 7 Downloads*: 12

Productivity improvements within establishments (e.g., factories, mines, or retail stores) are an important source of aggregate productivity growth. Past research has documented that learning by doing (productivity improvements that occur in concert with production increases) is one source of such improvements. Yet little is known about the specific mechanisms through which such learning occurs. We investigate these mechanisms using extremely detailed data from an assembly plant of a major auto producer. Beyond showing that there is rapid learning by doing at the plant, we are able to pinpoint the processes by which these improvements have occurred.
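One common way to formalize learning by doing (an illustration only; it is not the paper's specification) is a log-linear learning curve in which the cost of producing the n-th unit falls with cumulative output:

  c(n) = c(1) · n^(−b),   b > 0,

so each doubling of cumulative output multiplies unit cost by the progress ratio 2^(−b). The abstract's point is that the paper opens up the mechanisms behind such a curve rather than simply estimating its slope.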
Steven D Levitt, John A List
Cited by*: 51 Downloads*: 30

We can think of no question more fundamental to experimental economics than understanding whether, and under what circumstances, laboratory results generalize to naturally occurring environments. In this paper, we extend Levitt and List (2006) to the class of games in which financial payoffs and doing the right thing are not necessarily in conflict. We argue that behavior is crucially linked not only to the preferences of people but also to the properties of the situation. In doing so, we provide a road map of the psychological and economic properties of people and situations that might interfere with the generalizability of laboratory results for a broad class of games.
Steven D Levitt, John A List
Cited by*: 321 Downloads*: 96

A critical question facing experimental economists is whether behavior inside the laboratory is a good indicator of behavior outside the laboratory. To address that question, we build a model in which the choices that individuals make depend not just on financial implications, but also on the nature and extent of scrutiny by others, the particular context in which a decision is embedded, and the manner in which participants and tasks are selected. We present empirical evidence demonstrating the importance of these various factors. To the extent that lab and naturally occurring environments systematically differ on any of these dimensions, the results obtained inside and outside the lab need not correspond. Focusing on experiments designed to measure social preferences, we discuss the extent to which the existing laboratory results generalize to naturally occurring markets. We summarize cases where the lab may understate the importance of social preferences as well as instances in which the lab might exaggerate their importance. We conclude by emphasizing the importance of interpreting laboratory and field data through the lens of theory.
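A stylized way to write down the kind of model the abstract describes (the notation here is illustrative, not quoted from the paper) is to let individual i's utility from action a combine a wealth component and a moral or social component that depends on the decision environment:

  U_i(a, v, n, s) = W_i(a, v) + M_i(a, v, n, s),

where v denotes the financial stakes, n the nature and extent of scrutiny by others, and s the context in which the decision is embedded. When lab and field settings differ systematically in n or s, the moral component, and hence measured social preferences, can differ as well, which is the sense in which lab results need not carry over.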
Steven D Levitt, John A List, David H Reiley
Cited by*: 12 Downloads*: 11

The minimax argument represents game theory in its most elegant form: simple but with stark predictions. Although some of these predictions have been met with reasonable success in the field, experimental data have generally not provided results close to the theoretical predictions. In a striking study, Palacios-Huerta and Volij (2007) present evidence that potentially resolves this puzzle: both amateur and professional soccer players play nearly exact minimax strategies in laboratory experiments. In this paper, we establish important bounds on these results by examining the behavior of four distinct subject pools: college students; bridge professionals; world-class poker players, who have vast experience with high-stakes randomization in card games; and American professional soccer players. In contrast to Palacios-Huerta and Volij's results, we find little evidence that real-world experience transfers to the lab in these games. Indeed, similar to previous experimental results, all four subject pools make choices that are generally not close to minimax predictions. We use two additional pieces of evidence to explore why professionals do not perform well in the lab: (1) complementary experimental treatments that pit professionals against preprogrammed computers, and (2) post-experiment questionnaires. The most likely explanation is that these professionals are unable to transfer their skills at randomization from the familiar context of the field to the unfamiliar context of the lab.
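For reference, the minimax benchmark against which subjects are judged can be stated in a textbook 2×2 example (this illustration is not drawn from the paper). In a zero-sum game where the row player's payoffs are a and b in the first row and c and d in the second, and no pure-strategy saddle point exists, the row player's equilibrium probability of playing the first row is

  p* = (d − c) / (a − b − c + d),

chosen to leave the opponent indifferent between columns. In matching pennies (a = d = 1, b = c = −1) this gives p* = 1/2, i.e., serially independent 50/50 randomization, which is the pattern the subject pools generally fail to reproduce in the lab.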