Basil Halperin, Benjamin Ho, John A List, Ian Muir

We use a theory of apologies to analyze a nationwide field experiment involving 1.5 million Uber ridesharing consumers who experienced late rides. Several insights emerge. First, apologies are not a panacea: the efficacy of an apology, and whether it may backfire, depend on how the apology is made. Second, across treatments, money speaks louder than words: the best form of apology is to include a coupon for a future trip. Third, in some cases sending an apology is worse than sending nothing at all, particularly for repeated apologies. For firms, caveat venditor should be the rule when considering apologies.
Bharat Chandar, Ali Hortacsu, John A List, Ian Muir, Jeffrey M Wooldridge

Field experiments conducted with the village, city, state, region, or even country as the unit of randomization are becoming commonplace in the social sciences. While convenient, subsequent data analysis may be complicated by the constraint on the number of clusters in treatment and control. Through a battery of Monte Carlo simulations, we examine best practices for estimating unit-level treatment effects in cluster-randomized field experiments, particularly in settings that generate short panel data. In most settings we consider, unit-level estimation with unit fixed effects and cluster-level estimation weighted by the number of units per cluster tend to be robust to potentially problematic features in the data while giving greater statistical power. Using insights from our analysis, we evaluate the effect of a unique field experiment: a nationwide tipping field experiment across markets on the Uber app. Beyond the import of showing how tipping affects aggregate outcomes, we provide several insights on aspects of generating and analyzing cluster-randomized experimental data when there are constraints on the number of experimental units in treatment and control.
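The weighted cluster-level estimator described above can be sketched with a minimal Monte Carlo simulation. This is an illustrative example, not the paper's code: all numbers, names, and the data-generating process are assumptions chosen to show how averaging outcomes within clusters and weighting by cluster size recovers the unit-level treatment effect.

```python
import numpy as np

# Illustrative Monte Carlo sketch (not the paper's code) of cluster-level
# estimation weighted by the number of units per cluster.
rng = np.random.default_rng(0)

n_clusters, tau = 200, 2.0                     # tau = true treatment effect
sizes = rng.integers(5, 50, n_clusters)        # units per cluster
treated = np.arange(n_clusters) % 2 == 0       # cluster-level assignment

cluster_means = np.empty(n_clusters)
for c in range(n_clusters):
    shock = rng.normal()                       # common within-cluster shock
    y = shock + tau * treated[c] + rng.normal(size=sizes[c])
    cluster_means[c] = y.mean()

def wmean(x, w):
    """Weighted mean of x with weights w."""
    return np.sum(x * w) / np.sum(w)

# Size-weighted difference in cluster means targets the unit-level effect
w = sizes.astype(float)
ate_hat = wmean(cluster_means[treated], w[treated]) - \
          wmean(cluster_means[~treated], w[~treated])
```

With a modest number of clusters and noisy cluster-level shocks, the size-weighted estimate lands near the true effect of 2.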
Bharat Chandar, Uri Gneezy, John A List, Ian Muir

Even though social preferences affect nearly every facet of life, there exist many open questions on the economics of social preferences in markets. We leverage a unique opportunity to generate a large data set to inform the who's, what's, where's, and when's of social preferences through the lens of a nationwide tipping field experiment on the Uber platform. Our field experiment generates data from more than 40 million trips, allowing an exploration of social preferences in the ridesharing market using big data. Combining experimental and natural variation in the data, we are able to establish tipping facts as well as provide insights into the underlying motives for tipping. Interestingly, even though tips are made privately, and without external social benefits or pressure, more than 15% of trips are tipped. Yet, nearly 60% of people never tip, and only 1% of people always tip. Overall, the demand side explains much more of the observed tipping variation than the supply side.
Ariel Goldszmidt, John A List, Robert D Metcalfe, Ian Muir, Jenny Wang

The value of time determines relative prices of goods and services, investments, productivity, economic growth, and measures of income inequality. Economists in the 1960s began to focus on the value of non-work time, pioneering a deep literature exploring the optimal allocation and value of time. By leveraging key features of these classic time allocation theories, we use a novel approach to estimate the value of time (VOT) via two large-scale natural field experiments with the ridesharing company Lyft. We use random variation in both wait times and prices to estimate a consumer's VOT with a data set of more than 14 million observations across consumers in US cities. We find that the VOT is roughly $19 per hour, or 75% of the after-tax mean wage rate (100% of the median), and varies predictably with choice circumstances correlated with the opportunity cost of wait time. Our VOT estimate is larger than what is currently used by the US Government, suggesting that society is under-valuing time improvements and subsequently under-investing public resources in time-saving infrastructure projects and technologies.
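The core identification logic, using random variation in both prices and wait times, can be sketched in a toy simulation. This is not the paper's estimator; the data-generating process, coefficients, and linear choice model are illustrative assumptions showing how the ratio of a wait coefficient to a price coefficient yields a dollar value of time.

```python
import numpy as np

# Toy sketch (not the paper's estimator): with independent random variation
# in price and wait, the ratio of a choice model's wait coefficient to its
# price coefficient recovers the value of time. All numbers are illustrative.
rng = np.random.default_rng(3)

n = 100_000
price = rng.uniform(5, 25, n)              # dollars
wait = rng.uniform(1, 15, n)               # minutes
vot_true = 19.0 / 60.0                     # dollars per minute (~$19/hour)

# Acceptance probability declines linearly in generalized cost
p_accept = 0.95 - 0.03 * (price + vot_true * wait)
accept = rng.random(n) < p_accept

# Linear probability model: regress acceptance on price and wait
X = np.column_stack([np.ones(n), price, wait])
beta, *_ = np.linalg.lstsq(X, accept.astype(float), rcond=None)
vot_per_hour = 60 * beta[2] / beta[1]      # wait coef / price coef, in $/hr
```

Because one minute of wait enters the generalized cost as `vot_true` dollars, the coefficient ratio recovers roughly $19 per hour in this simulation.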
John A List, Ian Muir, Gregory Sun

This study investigates how to use regression adjustment to reduce variance in experimental data. We show that the estimators recommended in the literature satisfy an orthogonality property with respect to the parameters of the adjustment. This observation greatly simplifies the derivation of the asymptotic variance of these estimators and allows us to solve for the efficient regression adjustment in a large class of adjustments. Our efficiency results generalize a number of previous results known in the literature. We then discuss how this efficient regression adjustment can be feasibly implemented. We show the practical relevance of our theory in two ways. First, we use our efficiency results to improve common practices currently employed in field experiments. Second, we show how our theory allows researchers to robustly incorporate machine learning techniques into their experimental estimators to minimize variance.
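The variance-reduction idea above can be illustrated with a standard covariate-adjusted regression on simulated experimental data. This is a hedged sketch, not the paper's procedure: the interacted specification with a centered covariate is one common adjustment consistent with the orthogonality property described, and all names and numbers are assumptions.

```python
import numpy as np

# Hedged sketch of regression adjustment in a randomized experiment; the
# interacted, centered-covariate design mirrors the orthogonality idea
# described above. All numbers are illustrative.
rng = np.random.default_rng(1)

n = 10_000
x = rng.normal(size=n)                   # pre-treatment covariate
d = rng.integers(0, 2, n)                # random assignment
y = 1.0 + 0.5 * d + 2.0 * x + rng.normal(size=n)

# Unadjusted difference in means
tau_raw = y[d == 1].mean() - y[d == 0].mean()

# Adjusted: OLS of y on treatment, centered covariate, and their interaction;
# centering keeps the treatment coefficient orthogonal to the adjustment terms
xc = x - x.mean()
X = np.column_stack([np.ones(n), d, xc, d * xc])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
tau_adj = beta[1]
```

Both estimators are centered on the true effect of 0.5, but the adjusted estimator removes the covariate-driven component of the outcome variance and is therefore much more precise.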
Pradhi Aggarwal, Alec Brandon, Ariel Goldszmidt, Justin Holz, John A List, Ian Muir, Gregory Sun, Thomas Yu

Prior research finds that, conditional on an encounter, minority civilians are more likely to be punished by police than white civilians. An open question is whether the actual encounter is related to race. Using high-frequency location data of rideshare drivers operating on the Lyft platform in Florida, we estimate the effect of driver race on traffic stops and fines for speeding. Estimates obtained across traditional and machine learning approaches show that, relative to a white driver traveling the same speed, minorities are 24 to 33 percent more likely to be stopped for speeding and pay 23 to 34 percent more in fines. We find no evidence that these estimates can be explained by racial differences in accident and re-offense rates. Our study provides key insights into the total effect of civilian race on outcomes of interest and highlights the potential value of private sector data to help inform major social challenges.
John A List, Ian Muir, Devin Pope, Gregory Sun

Left-digit bias (or 99-cent pricing) has been discussed extensively in economics, psychology, and marketing. Despite this, we show that the rideshare company Lyft was not using a 99-cent pricing strategy prior to our study. Based on observational data from over 600 million Lyft sessions followed by a field experiment conducted with 21 million Lyft passengers, we provide evidence of large discontinuities in demand at dollar values. Approximately half of the downward slope of the demand curve occurs discontinuously as the price of a ride drops below a dollar value (e.g., from $14.00 to $13.99). If our short-run estimates persist in the longer run, we calculate that Lyft could increase its profits by roughly $160M per year by employing a left-digit bias pricing strategy. Our results showcase the robustness of an important behavioral bias for a large, modern company and its persistence in a highly competitive market.
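The discontinuity described above can be illustrated with simulated demand data. This is not Lyft data: the purchase-probability function, the size of the dollar-threshold penalty, and the comparison windows are assumptions chosen to show how conversion just below a dollar value is compared with conversion just above it.

```python
import numpy as np

# Simulated illustration (not Lyft data) of a left-digit demand
# discontinuity: conversion drops discretely each time the price's
# leading dollar digit increases.
rng = np.random.default_rng(2)

n = 1_000_000
prices = rng.uniform(10, 20, n).round(2)
# Smooth decline in price plus a discrete penalty per whole-dollar threshold
p_buy = 0.9 - 0.02 * (prices - 10) - 0.03 * (np.floor(prices) - 10)
buy = rng.random(n) < p_buy

# Conversion just below vs. just above the $14.00 threshold
below = buy[(prices >= 13.90) & (prices <= 13.99)].mean()
above = buy[(prices >= 14.00) & (prices <= 14.09)].mean()
jump = below - above
```

In this simulation the measured jump at $14.00 is dominated by the discrete penalty rather than the smooth slope, which is the signature of left-digit bias in the demand curve.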
Aaron Bodoh-Creed, Brent R Hickman, John A List, Ian Muir, Gregory Sun

In this paper, we provide a suite of tools for empirical market design, including optimal nonlinear pricing in intensive-margin consumer demand, as well as a broad class of related adverse selection models. Despite significant data limitations, we are able to derive informative bounds on demand under counterfactual price changes. These bounds arise because empirically plausible data generating processes (DGPs) must respect the Law of Demand and the observed shift(s) in aggregate demand resulting from known exogenous price change(s). These bounds facilitate robust policy prescriptions using rich, internal data sources similar to those available in many real-world applications. Our partial identification approach enables viable nonlinear pricing design while achieving robustness against worst-case deviations from baseline model assumptions. As a side benefit, our identification results also provide useful, novel insights into optimal experimental design for pricing RCTs.
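The Law-of-Demand portion of the bounding logic can be sketched in a few lines. This is a minimal illustration with hypothetical numbers, not the paper's identification strategy, and it omits the additional restrictions from observed exogenous price changes: monotonicity alone traps demand at a counterfactual price between the quantities observed at neighboring prices.

```python
# Minimal sketch (hypothetical numbers, not from the paper) of how the
# Law of Demand alone bounds demand at a counterfactual price between
# quantities observed at neighboring prices.
observed = {10.0: 0.80, 14.0: 0.55}        # price -> purchase share

def demand_bounds(p, observed):
    """Bound demand at price p using only monotonicity
    (demand weakly falls in price)."""
    at_or_below = [q for price, q in observed.items() if price <= p]
    at_or_above = [q for price, q in observed.items() if price >= p]
    upper = min(at_or_below) if at_or_below else 1.0
    lower = max(at_or_above) if at_or_above else 0.0
    return lower, upper

lower, upper = demand_bounds(12.0, observed)
```

Here demand at the unobserved price of $12 must lie between the shares observed at $14 and $10; additional observed price variation would tighten the bounds further.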