Asa Palley is an Assistant Professor of Operations and Decision Technologies at the Kelley School of Business at Indiana University. He received a Ph.D. in Decision Sciences at the Fuqua School of Business at Duke University, where he also completed a Certificate in College Teaching. Previously, he earned an A.B. from Bowdoin College, an M.S. in Applied Mathematics and Scientific Computation from the University of Maryland at College Park, and an M.S. in Mathematics from Carnegie Mellon University.
My research uses methods from decision analysis, operations research, and behavioral economics to model and provide insights into managerial decision problems. I aim to develop simple and effective prescriptive solutions to equip managers with better information and improve decision-making. My work has been published in the journals Management Science, Experimental Economics, and Risk Analysis.
Areas of Expertise
Wisdom of Crowds
Judgment and Decision Making
Duke University: Ph.D. 2016
Carnegie Mellon University: M.S. 2010
University of Maryland: M.S. 2009
Extracting the Wisdom of Crowds When Information is Shared
Management Science, 2018

Using the wisdom of crowds -- combining many individual judgments to obtain an aggregate estimate -- can be an effective technique for improving judgment accuracy. In practice, however, accuracy is limited by the presence of correlated judgment errors, which often emerge because information is shared. To address this problem, we propose an elicitation procedure in which respondents are asked to provide both their own best judgment and an estimate of the average judgment that will be given by all other respondents. We develop an aggregation method, called pivoting, which separates individual judgments into shared and private information and then recombines these results in the optimal manner. In several studies, we investigate the method and examine the accuracy of the aggregate estimates. Overall, the empirical data suggest that the pivoting method provides an effective judgment aggregation procedure that can significantly outperform the simple crowd average.
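The two-question elicitation above lends itself to a small numerical sketch. The function below illustrates the simplest form of the pivoting idea, moving the crowd average away from the average meta-prediction (an estimate of the shared information); the function name, the specific "double and subtract" form, and all numbers are illustrative assumptions, not the paper's full optimal recombination:

```python
from statistics import mean

def minimal_pivot(judgments, meta_predictions):
    """Pivot the crowd average away from its shared-information component.

    judgments: each respondent's own best estimate.
    meta_predictions: each respondent's prediction of the average judgment
        that the other respondents will give.

    The crowd average over-weights shared information; the average
    meta-prediction approximates that shared component, so pivoting away
    from it recovers more of the private information:
        pivot = x_bar + (x_bar - z_bar) = 2 * x_bar - z_bar
    """
    x_bar = mean(judgments)        # crowd average of own judgments
    z_bar = mean(meta_predictions)  # average prediction of others' average
    return 2 * x_bar - z_bar

# Hypothetical responses from three judges
estimates = [70.0, 80.0, 90.0]
meta = [75.0, 78.0, 72.0]
print(minimal_pivot(estimates, meta))  # 2*80 - 75 = 85.0
```

When the meta-predictions sit below the judgments themselves, the pivot pushes the aggregate above the simple crowd average, reflecting the private information the average alone would dilute.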
Extending the Wisdom of Crowds: Quantifying Uncertainty Using the Mean and Variance of a Collection of Point Estimates
Indiana University, 2017

The wisdom of crowds (combining information from a collection of individual judgments) offers a useful technique to quantify an unknown variable. Averaging point estimates has proven to be effective in reducing error in the consensus estimate. However, in many managerial problems, the decision maker requires an assessment of the full distribution of uncertainty rather than just a single number. In practice, some managers have used dispersion in point estimates as a cue to uncertainty in the variable of interest, but a characterization of the exact relationship between the two has not yet been established. Using a stylized Bayesian model of overlapping information spread across a collection of judges, I show that the variance of the variable of interest can be estimated with a multiple of the variance of the individual judgments, and establish an analytical expression for the consensus predictive distribution. I then present a procedure that can be used to learn about this variance inflation factor using past judgments and realizations, and derive the resulting distribution for the new variable of interest. This aggregation method is easy to implement in practice to estimate a predictive distribution from a collection of individual judgments. Application of the procedure to forecasts available through the Survey of Professional Forecasters suggests that the resulting predictions are well-calibrated and outperform natural existing alternatives.
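The variance-inflation idea can be sketched in a few lines. The function below centers a predictive interval at the crowd mean and scales the sample variance of the judgments by an inflation factor; the function name, the normal-distribution assumption, and the factor value of 2.0 are all illustrative assumptions (in the paper the factor is learned from past judgments and realizations):

```python
from math import sqrt
from statistics import mean, variance

def predictive_interval(estimates, inflation_factor, z=1.96):
    """Approximate predictive interval for the unknown quantity from a
    collection of point estimates.

    The sample variance of the judgments is scaled by an inflation factor
    to approximate the variance of the quantity itself; a normal
    predictive distribution then yields a central interval.
    """
    m = mean(estimates)
    s2 = inflation_factor * variance(estimates)  # inflated variance
    half = z * sqrt(s2)
    return (m - half, m + half)

# Hypothetical forecasts with an assumed, pre-calibrated factor of 2.0
lo, hi = predictive_interval([10.0, 12.0, 11.0, 13.0], inflation_factor=2.0)
print((round(lo, 2), round(hi, 2)))
```

The interval is wider than the raw dispersion of the judgments alone would suggest, which is the point: shared information makes judgments cluster more tightly than the underlying uncertainty warrants.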
Lossed in Translation: An Off-the-Shelf Method to Recover Probabilistic Beliefs from Loss-Averse Agents
Experimental Economics, 2015

Strictly proper scoring rules are designed to truthfully elicit subjective probabilistic beliefs from risk neutral agents. Previous experimental studies have identified two problems with this method: (i) risk aversion causes agents to bias their reports towards the probability of 1/2, and (ii) for moderate beliefs agents simply report 1/2. Applying a prospect theory model of risk preferences, we show that loss aversion can explain both of these behavioral phenomena. Using the insights of this model, we develop a simple off-the-shelf probability assessment mechanism that encourages loss-averse agents to report true beliefs. In an experiment, we demonstrate the effectiveness of this modification in both eliminating uninformative reports and eliciting true probabilistic beliefs.
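The premise the paper starts from, that a strictly proper scoring rule makes truthful reporting optimal for a risk-neutral agent, can be checked numerically with the classic quadratic (Brier-style) rule. This sketch does not reproduce the paper's loss-aversion correction; it only illustrates the baseline property that loss aversion then distorts:

```python
def quadratic_score(report, outcome):
    """Strictly proper quadratic score, framed as a reward: higher is
    better, and expected score is maximized by reporting the true belief."""
    return 1 - (outcome - report) ** 2

def expected_score(report, belief):
    # Expected reward when the event occurs with probability `belief`.
    return (belief * quadratic_score(report, 1)
            + (1 - belief) * quadratic_score(report, 0))

# A risk-neutral agent with belief 0.7 searches over a grid of reports.
belief = 0.7
reports = [i / 100 for i in range(101)]
best = max(reports, key=lambda r: expected_score(r, belief))
print(best)  # 0.7: truthful reporting is optimal
```

Replacing the linear valuation of the score with a prospect-theory value function (steeper for losses than gains) shifts the optimal report toward 1/2, which is exactly the bias the paper documents and corrects for.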
Sequential Search and Learning from Rank Feedback: Theory and Experimental Evidence
Management Science, 2013

This paper studies the effect of limited information in a sequential search setting where a single selection is to be made from a set of random potential options. We consider both a full-information problem, where the decision maker observes the exact value of each option as she searches, and a partial-information problem, in which the decision maker only learns the rank of the current option relative to the options that have already been observed. We develop a model which allows for a sharp contrast between search behavior in the two information settings, both theoretically and empirically. We present the results of an experiment that tests, and supports, the key prediction of our model analysis: limited information induces longer search. Our data further suggest systematic deviations from the theoretical benchmarks in both informational settings. Importantly, subjects in our partial-information conditions are prone to stop prematurely during early stages of the search process and to sub-optimally continue the search during late stages. We propose a simple model that succinctly captures the interplay of two symmetric choice and judgment biases that have asymmetric (but opposing) effects on the length of search.
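The contrast between the two information settings can be illustrated with a small simulation. The stopping rules below are standard textbook heuristics chosen for illustration (a fixed value threshold for full information, a secretary-style "beat everything seen so far" rule for rank-only feedback); the threshold, sample sizes, and rule details are assumptions, not the paper's model:

```python
import random

def full_info_search(values, threshold):
    # Full information: stop at the first option whose observed value
    # clears the threshold (or at the last option).
    for t, v in enumerate(values, start=1):
        if v >= threshold or t == len(values):
            return t

def rank_only_search(values, skip):
    # Rank feedback only: skip an initial sample, then stop at the first
    # option that ranks above everything seen so far.
    best_seen = max(values[:skip])
    for t in range(skip, len(values)):
        if values[t] > best_seen:
            return t + 1
        best_seen = values[t] if values[t] > best_seen else best_seen
    return len(values)

random.seed(0)
n, trials = 20, 2000
full = sum(full_info_search([random.random() for _ in range(n)], 0.9)
           for _ in range(trials)) / trials
rank = sum(rank_only_search([random.random() for _ in range(n)], n // 3)
           for _ in range(trials)) / trials
print(full < rank)  # rank-only feedback tends to induce longer search
```

Averaged over many simulated searches, the rank-only rule stops later than the full-information rule, consistent with the key prediction that limited information induces longer search.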