Gareth James

John H. Harland Dean and Professor of Information Systems & Operations Management, Emory University, Goizueta Business School

  • Atlanta, GA

Data is the sword of the 21st century, those who wield it well, the Samurai. -Jonathan Rosenberg, adviser to Larry Page & former Google SVP

Contact

Emory University, Goizueta Business School



Biography

Gareth James became the John H. Harland Dean of Goizueta Business School in July 2022. Renowned for his visionary leadership, statistical mastery, and commitment to the future of business education, James brings vast and versatile experience to the role. His collaborative nature and data-driven scholarship offer fresh energy and focus aimed at furthering Goizueta’s mission: to prepare principled leaders to have a positive influence on business and society.

James is a dynamic scholar and leader. His extensive published works include numerous articles, conference proceedings, and book chapters focused on statistical and machine learning methodologies. James is also co-author of the highly successful textbook An Introduction to Statistical Learning.

James brings a powerful optimism and contagious enthusiasm to further the work Goizueta is doing, not only through the school’s stellar scholarship, but also by continuing to build strong bridges to the business community. He believes in the central role that business plays in society and the impact that Goizueta has in preparing the thinkers and innovators of tomorrow. His ambition to drive excellence and strengthen Goizueta’s future is fueled by his experience in data-informed decision making, strategy, and support.

James joins the Goizueta family from the USC Marshall School of Business where he served in a multitude of pivotal roles. While interim dean (2019-2020), he led the school’s COVID-19 response. He served as vice dean of faculty, as well as deputy dean (2020-2022), a position created to retain him at school-level leadership.

A noted researcher, his work has been cited more than 30,000 times. James has led multiple National Science Foundation research grants and has served as an associate editor for five top research journals. The recipient of two Dean’s Research Awards from the Marshall School of Business, he is an elected Fellow of both the American Statistical Association and the Institute of Mathematical Statistics.

James is also a superb teacher and mentor. In addition to the Evan C. Thompson Faculty Teaching and Learning Innovation Award, he is a three-time winner of the Marshall School of Business’ Golden Apple Award for best instructor in the full-time MBA program. He has also been awarded Marshall and USC’s highest honors for mentoring junior colleagues and graduate students, including the Dean’s Ph.D. Advising, USC Mellon, Evan C. Thompson and Provost’s Mentoring awards.

Education

Stanford University

PhD

Statistics

The University of Auckland

BComm

Finance

The University of Auckland

BS

Statistics

Areas of Expertise

Statistical Problems in Marketing
Functional Data Analysis
Statistical Methodology
High Dimensional Regression

Publications

Asymmetric error control under imperfect supervision: a label-noise-adjusted Neyman-Pearson umbrella algorithm

Journal of the American Statistical Association

S Yao, B Rava, X Tong, G James

2023-12-09

Label noise in data has long been an important problem in supervised learning applications as it affects the effectiveness of many widely used classification methods. Recently, important real-world applications, such as medical diagnosis and cybersecurity, have generated renewed interest in the Neyman–Pearson (NP) classification paradigm, which constrains the more severe type of error (e.g., the Type I error) under a preferred level while minimizing the other (e.g., the Type II error).
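As a simplified illustration of the NP paradigm itself (not the label-noise-adjusted umbrella algorithm the paper develops), one can choose a classification threshold from class-0 scores so that the more severe Type I error is held near a target level alpha, and then accept whatever Type II error results. All data and numbers below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated classifier scores for the two classes (higher score = predict class 1).
scores_0 = rng.normal(0.0, 1.0, 2000)   # class 0: the class whose error is "severe"
scores_1 = rng.normal(1.5, 1.0, 2000)

# Neyman-Pearson idea: set the threshold at the empirical (1 - alpha) quantile of
# class-0 scores so the Type I error (class 0 predicted as class 1) is about alpha.
alpha = 0.05
threshold = np.quantile(scores_0, 1 - alpha)

type1 = np.mean(scores_0 > threshold)   # near alpha by construction on this sample
type2 = np.mean(scores_1 <= threshold)  # the Type II error is then whatever it is
```

A real NP method would calibrate the threshold on a held-out sample with a high-probability guarantee; the plug-in quantile above is only the core intuition.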


Heteroscedasticity-Adjusted Ranking and Thresholding for Large-Scale Multiple Testing

Journal of the American Statistical Association

L Fu, B Gang, GM James, W Sun

2022-12-04

Standardization has been a widely adopted practice in multiple testing, for it takes into account the variability in sampling and makes the test statistics comparable across different study units. However, despite conventional wisdom to the contrary, we show that there can be a significant loss in information from basing hypothesis tests on standardized statistics rather than the full data.


Irrational exuberance: Correcting bias in probability estimates

Journal of the American Statistical Association

GM James, P Radchenko, B Rava

2022-08-17

We consider the common setting where one observes probability estimates for a large number of events, such as default risks for numerous bonds. Unfortunately, even with unbiased estimates, selecting events corresponding to the most extreme probabilities can result in systematically underestimating the true level of uncertainty. We develop an empirical Bayes approach “excess certainty adjusted probabilities” (ECAP), using a variant of Tweedie’s formula, which updates probability estimates to correct for selection bias.



Working Papers/Projects

Functional Data Analysis

The key tenet of Functional Data Analysis (FDA) is to treat the measurements of a function or curve not as multiple data points but as a single observation of the function as a whole. This approach allows one to more fully exploit the structure of the data. FDA is an inherently multidisciplinary area and is becoming increasingly important as technological changes make it more common to observe functional data. A common FDA situation involves a functional regression problem where one observes a response Y and a functional predictor X(t) measured over time or some other domain. The goal is then to build a model to predict Y based on X(t). Fitting such a model is more challenging than standard linear regression because the predictor is now an infinite-dimensional object.

High Dimensional Regression

Traditionally, statistics has involved extracting information from relatively small data sets, on the order of perhaps a hundred observations and ten predictors or independent variables. However, recent technological advances in areas as diverse as web-based advertising, finance, supermarket bar-code readers (linked to customer cards), and even microarrays in genetics have led to an entirely new type of data called high-dimensional data (HDD). Such data typically has anywhere from ten to a few hundred observations but possibly up to tens of thousands of variables. Dealing with it poses very significant statistical and computational challenges. Trying to find the one or two important variables among thousands with only, say, 10 observations is roughly analogous to the traditional "finding a needle in a haystack," with the added challenge that you only get 10 guesses before your time is up. HDD has become one of the most important areas of research in statistics.
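A standard tool for this setting (an illustration, not specific to the source) is the lasso, which can pick out a handful of truly relevant variables among thousands by shrinking most coefficients exactly to zero. Below is a minimal coordinate-descent sketch on simulated data, with far more variables than observations:

```python
import numpy as np

rng = np.random.default_rng(1)

# High-dimensional setting: far more variables (p) than observations (n).
n, p = 50, 1000
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[[3, 17]] = [4.0, -3.0]            # only two variables actually matter
y = X @ beta_true + 0.5 * rng.standard_normal(n)

def lasso_cd(X, y, lam, n_iter=100):
    """Lasso via cyclic coordinate descent with soft-thresholding."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    resid = y - X @ beta
    for _ in range(n_iter):
        for j in range(p):
            resid += X[:, j] * beta[j]        # remove variable j's contribution
            rho = X[:, j] @ resid
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            resid -= X[:, j] * beta[j]        # add back the updated contribution
    return beta

beta_hat = lasso_cd(X, y, lam=25.0)
selected = np.flatnonzero(np.abs(beta_hat) > 0.1)  # the "needles" it found
```

The soft-threshold step is what zeroes out the thousands of irrelevant coefficients, leaving a short list of candidate variables.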

Statistical Problems in Marketing

There are many interesting statistical problems in the marketing field. My major goal here is to incorporate new ideas from the statistical literature to provide solutions to practical marketing problems. For example, in one paper I used methods from the functional data analysis literature to accurately predict market penetration of 21 new products across 70 different countries. In another paper my coauthors and I proposed a new statistical methodology for predicting the trajectory of new technologies over time. We collected data over time for an extensive set of technologies and showed that our approach was overall more accurate than well-known laws such as Moore's Law.
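For readers unfamiliar with diffusion modeling, the sketch below fits the classic Bass model, a standard baseline for market-penetration curves. It is an illustration only, not the methodology from the papers described above; the parameter values and the simple grid-search fit are invented for the example:

```python
import numpy as np

def bass_cdf(t, p, q):
    """Bass diffusion model: cumulative fraction of adopters at time t."""
    e = np.exp(-(p + q) * t)
    return (1 - e) / (1 + (q / p) * e)

# Simulated penetration data: p = innovation and q = imitation coefficients.
t = np.arange(1, 16)                      # 15 time periods
true_p, true_q = 0.03, 0.38
rng = np.random.default_rng(3)
observed = bass_cdf(t, true_p, true_q) + 0.01 * rng.standard_normal(t.size)

# Fit (p, q) by grid search minimizing squared error -- a crude stand-in for
# the specialized estimators used in the diffusion literature.
ps = np.linspace(0.005, 0.1, 40)
qs = np.linspace(0.1, 0.6, 40)
errs = [(np.sum((bass_cdf(t, p, q) - observed) ** 2), p, q) for p in ps for q in qs]
best_err, best_p, best_q = min(errs)
```

Fitted this way, the S-shaped Bass curve can then be extrapolated to forecast penetration in later periods.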

Research Spotlight


Expert Perspective: Mitigating Bias in AI: Sharing the Burden of Bias When it Counts Most

Whether getting directions from Google Maps, personalized job recommendations from LinkedIn, or nudges from a bank for new products based on our data-rich profiles, we have grown accustomed to having artificial intelligence (AI) systems in our lives. But are AI systems fair? The answer, in short: not completely. Further complicating the matter is the fact that today's AI systems are far from transparent. The uncomfortable truth is that generative AI tools like ChatGPT, built on sophisticated architectures such as deep learning and large language models, are fed vast amounts of training data which then interact in unpredictable ways. And while the principles of how these methods operate are well understood (at least by those who created them), ChatGPT's decisions are likened to an airplane's black box: they are not easy to penetrate.

So how can we determine if "black box AI" is fair? Some dedicated data scientists are working around the clock to tackle this issue. One of them is Gareth James, whose day job is serving as Dean of Goizueta Business School. In a recent paper titled "A Burden Shared is a Burden Halved: A Fairness-Adjusted Approach to Classification," Dean James, along with coauthors Bradley Rava, Wenguang Sun, and Xin Tong, proposes a new framework to help ensure AI decision-making is as fair as possible in high-stakes decisions where certain individuals, for example racial minority groups and other protected groups, may be more prone to AI bias, even without our realizing it. In other words, their approach to fairness makes adjustments that work out better when some are getting the short shrift of AI.

Unpacking Bias in High-Stakes Scenarios

Dean James and his coauthors set their sights on high-stakes decisions. What counts as high stakes? Examples include hospitals' medical diagnoses, banks' creditworthiness assessments, and state justice systems' bail and sentencing decisions. On the one hand, these areas are ripe for AI interventions, with ample data available. On the other hand, biased decision-making here has the potential to negatively impact a person's life in a significant way.

In the United States justice system, a data-driven decision-support tool known as COMPAS (which stands for Correctional Offender Management Profiling for Alternative Sanctions) is in active use. The idea behind COMPAS is to crunch available data (including age, sex, and criminal history) to help determine a criminal-court defendant's likelihood of committing a crime while awaiting trial. Supporters of COMPAS note that statistical predictions help courts make better bail decisions than humans did on their own. At the same time, detractors have argued that COMPAS is better at predicting recidivism for some racial groups than for others. And since we can't control which group we belong to, that bias needs to be corrected. It's high time for guardrails.

A Step Toward Fairer AI Decisions

Enter the algorithm from Dean James and colleagues. Designed to make the outputs of AI decisions fairer, even without knowing the AI model's inner workings, it is called "fairness-adjusted selective inference" (FASI). It flags specific decisions that would be better handled by a human being in order to avoid systemic bias. That is to say, if the AI cannot yield an acceptably clear (1/0, or binary) answer, a human review is recommended.

To test their framework, the researchers turned to both simulated and real data. For the real data, the COMPAS dataset enabled a look at predicted and actual recidivism rates for two minority groups. The researchers set an "acceptable level of mistakes" at 0.25 (25%) and compared "minority group 1" and "minority group 2" results before and after applying their FASI framework. Especially for those born into "minority group 2," the FASI results look fairer. Professional ethicists will note a slight dip in overall accuracy, as seen in the "all groups" category, and yet the treatment of the two groups is fairer. That is why the researchers titled their paper "a burden shared is a burden halved."

Practical Applications for the Greater Social Good

"To be honest, I was surprised by how well our framework worked without sacrificing much overall accuracy," Dean James notes. By selecting cases where human beings should review a criminal history, a credit history, or medical charts, AI discrimination that would have significant quality-of-life consequences can be reduced.

Reducing protected groups' burden of bias is also a matter of following the law. In the financial industry, for example, the United States' Equal Credit Opportunity Act (ECOA) makes it "illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance," as the Federal Trade Commission explains on its website. If AI-powered programs fail to correct for AI bias, the company using them can run into trouble with the law. In these cases, human reviews are well worth the extra effort for all stakeholders.

The paper grew from Dean James's ongoing work as a data scientist when time allows. "Many of us data scientists are worried about bias in AI and we're trying to improve the output," he notes. And as new versions of ChatGPT continue to roll out, "new guardrails are being added – some better than others." "I'm optimistic about AI," Dean James says. "And one thing that makes me optimistic is the fact that AI will learn and learn – there's no going back. In education, we think a lot about formal training and lifelong learning. But then that learning journey has to end," Dean James notes. "With AI, it never ends."
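The flag-for-human-review idea can be illustrated with a toy sketch. The code below is not the FASI algorithm from the paper, and the data are simulated; it simply shows how abstaining on ambiguous scores lowers the error rate borne by a group the model serves less reliably:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data only -- NOT the COMPAS dataset or the authors' FASI method.
# Each case has a model score in [0, 1]; higher means "predict recidivism".
n = 5000
group = rng.integers(0, 2, n)                  # two demographic groups
label = rng.random(n) < 0.4                    # true outcomes
noise_sd = np.where(group == 0, 0.10, 0.25)    # the model is noisier for group 1
score = np.clip(np.where(label, 0.8, 0.2)
                + noise_sd * rng.standard_normal(n), 0, 1)

# Core idea sketched: automate the clear-cut cases, flag ambiguous scores (near
# the 0.5 decision boundary) for human review.  A genuine fairness-adjusted
# method would choose the band per group to control each group's error rate;
# a fixed band is used here for brevity.
low, high = 0.35, 0.65
flagged = (score > low) & (score < high)
auto = score >= 0.5                            # the automated decision

# Error rate for the noisier group: all cases auto-decided vs. unflagged only.
g1 = group == 1
err_all = np.mean(auto[g1] != label[g1])
err_auto = np.mean(auto[g1 & ~flagged] != label[g1 & ~flagged])
```

Because the flagged cases are exactly those the model is least sure about, routing them to humans cuts the automated error rate most for the group with the noisier scores, which is the burden-sharing intuition of the paper.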


In the News

The Avatar as Instructor

AACSB, online

2024-04-01

Emory University’s Goizueta Business School offers insights into why and how artificial intelligence can be used to deliver educational content.


Emory Goizueta’s New Dean Wants To Take The B-School From ‘Strong’ To ‘Premier’

Poets & Quants, online

2022-02-28

Gareth James will officially take the helm at the Goizueta School in July after 24 years at USC Marshall. He takes over a program with “impressive ambitions,” one “whose very name represents an important legacy for both Emory and the Atlanta region.”


Emory names new dean of Goizueta Business School

Atlanta Business Chronicle

2022-02-14

Gareth James has been named the new dean of Emory University's Goizueta Business School.
