Don Zhang

Associate Professor, Louisiana State University

  • Baton Rouge, LA

Dr. Zhang studies risk and decision making.

Contact

Louisiana State University


Areas of Expertise

Workplace Risk-Taking
Judgment and Decision-Making
Job Interviews
Psychometrics
Employee Recruitment and Selection
Personality and Individual Differences

Biography

Dr. Zhang has three major areas of research. First, he is interested in the antecedents of risk-taking at work, examining both stable individual differences and situational factors that may encourage or inhibit risky behaviors on the job. He is particularly interested in the double-edged effects of workplace risk taking. Second, Dr. Zhang conducts research on the employment interview process, focusing on the perspectives and behaviors of both applicants and interviewers. His work has examined dubious interviewing tactics used by hiring managers and explored the implications of these practices for recruitment success. Finally, Dr. Zhang leverages insights from judgment and decision-making (JDM) to inform organizational research on work behaviors, striving to build translational links between foundational decision research and applied topics in industrial-organizational psychology.

He has received over $2.8 million in NSF grants and is a recipient of the NSF CAREER Award. His work has been featured in news outlets such as The Guardian, New York Magazine, and The Wall Street Journal.

To learn more about Dr. Zhang's research program, contact him at zhang1@lsu.edu. His lab website, the Risk and Decision Making Lab, provides additional details on his projects and on opportunities to join the lab as a research assistant or graduate student.

Research Focus

Workplace Risk-Taking & Employee Selection

Dr. Zhang’s research focuses on workplace risk-taking, judgment and decision-making, and employee recruitment and selection. He uses organizational surveys, experimental interview studies, and psychometric modeling to reveal how personality and situational factors drive risky behavior, and to improve hiring practices and decision quality at work.

Answers

How can understanding decision-making help organizations build healthier, more successful workplaces?
Don Zhang

Decision-making happens at every level of an organization, from C-suite strategy to an employee's daily tasks. Decades of research show that human judgment is prone to biases due to our reliance on mental shortcuts, or heuristics. Understanding and improving decision quality benefits all areas of organizational functioning, leading to sounder strategies, better hires, and more effective policies. Decision science offers a large toolbox of strategies that can enhance choices at both the micro-level (e.g., hiring) and macro-level (e.g., firm strategy). By using these tools, everyone can improve their own work and the overall effectiveness of the organization. Ultimately, applying the science of decision-making helps an organization shift from a reactive, "gut-feel" culture to a more deliberate, evidence-based one.

What are the biggest mistakes hiring managers make in interviews—and how do they affect who gets the job?
Don Zhang

The single biggest mistake is treating an interview like an informal chat rather than a structured assessment. When managers “go with their gut,” they open the door to unconscious biases that have a profound effect on who gets hired. This lack of structure allows biases like the halo effect (where one positive trait overshadows everything else) and affinity bias (our tendency to favor people like us) to take over. As a result, managers often hire the person they like the most, not the person who is most qualified. The solution is a structured interview, which involves two simple but powerful steps. First, define the criteria: identify the essential knowledge, skills, and abilities (KSAs) for the role before looking at a single resume. Second, use a consistent rubric: ask all candidates the same job-related questions and score their answers on a pre-defined rating scale. This process transforms hiring from a subjective guessing game into an evidence-based decision, leading to fairer outcomes and a more talented workforce.
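
To make the rubric idea concrete, here is a minimal, hypothetical Python sketch; the questions, candidate names, and ratings are invented for illustration and are not from Dr. Zhang's research. Every candidate answers the same job-related questions, each answer is scored on a pre-defined 1-5 scale, and candidates are compared on their average rubric score rather than on an overall impression.

```python
# Hypothetical structured-interview scoring sketch (illustrative only).
# Every candidate is asked the same job-related questions, and each answer
# is rated on a pre-defined 1-5 scale; candidates are compared on the rubric,
# not on an overall gut impression.

QUESTIONS = [
    "Describe a time you had to weigh a risky decision at work.",
    "Walk me through how you would prioritize competing deadlines.",
    "Tell me about a time you used data to support a recommendation.",
]

# Ratings per candidate, in the same question order (1 = poor, 5 = excellent).
ratings = {
    "Candidate A": [4, 3, 5],
    "Candidate B": [5, 4, 4],
    "Candidate C": [3, 3, 3],
}

def rubric_score(scores):
    """Average rating across the standardized questions."""
    return sum(scores) / len(scores)

for name, scores in sorted(ratings.items(), key=lambda kv: rubric_score(kv[1]), reverse=True):
    print(f"{name}: mean rubric score = {rubric_score(scores):.2f}")
```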

Education

Michigan State University

B.S.

Psychology

Bowling Green State University

M.A.

Industrial and Organizational Psychology

Bowling Green State University

Ph.D.

Industrial and Organizational Psychology

Accomplishments

Reviewer of the Year, Journal of Business and Psychology

2021

Media Appearances

4 LSU professors awarded National Science Foundation's most prestigious early-career grant

The Advocate (online)

2022-07-24

Zhang will use $430,000 to study risk-taking behavior in the workplace.

The impetus behind his work, Zhang said, comes from research he and his colleagues have conducted over the past five to 10 years profiling people who take risks and trying to measure risk preferences.

"What I'm aiming to do, with the generous support of NSF, is to take a lot of the good work that we've done so far and apply it to the work context," he said. "Trying to understand, if you're an employee who is a risk-taker, what does that mean in terms of your performance and behavior at work."


Articles

The illusion of validity: how effort inflates the perceived validity of interview questions

European Journal of Work and Organizational Psychology

2023

Interviewers are often confident in the validity of their interview questions. What drives this confidence, and is it justified? In three studies, we found that question creators judged their own interview questions as more valid than evaluators judged the same questions. We also found that effort expenditure inflated the perceived validity of interview questions but not question quality. Question creators’ perceptions of validity were primarily driven by their self-confidence, not by question quality. As an intervention, we nudged participants into holding more favourable attitudes towards better questions (i.e., structured questions) by allowing them to choose a subset of questions from a pre-written list.


Eliciting risk preferences: is a single item enough?

Journal of Risk Research

2023

Economists and psychologists frequently use single-item measures of risk preferences despite potential limitations in reliability and criterion validity compared to their multi-item counterparts. This can be particularly problematic when individual differences in risk preferences are used to predict real-world economic, health, and financial outcomes. In this paper, we compare a popular single-item measure of risk preference, the General Risk Question (GRQ), to multi-item measures of domain-general and domain-specific risk preferences.
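
As an illustrative aside (not drawn from the article itself), one practical difference is that internal-consistency reliability, such as Cronbach's alpha, can only be estimated when a scale has multiple items. The sketch below uses invented data for a hypothetical five-item risk scale to show the standard alpha computation.

```python
# Illustrative sketch: Cronbach's alpha for a hypothetical multi-item risk scale.
# A single-item measure (e.g., one general risk question) offers no way to
# estimate internal consistency; a multi-item scale does. Data are invented.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
true_risk = rng.normal(size=200)                       # latent risk preference
scale = np.column_stack(                               # five noisy indicators
    [true_risk + rng.normal(scale=0.8, size=200) for _ in range(5)]
)
print(f"alpha for the 5-item scale: {cronbach_alpha(scale):.2f}")
```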


Improving the statistical performance of oblique bifactor measurement and predictive models: An augmentation approach

Structural Equation Modeling: A Multidisciplinary Journal

2024

Oblique bifactor models, where group factors are allowed to correlate with one another, are commonly used. However, the lack of research on the statistical properties of oblique bifactor models renders the statistical validity of empirical findings questionable. Therefore, the present study took the first step to examine the statistical properties of oblique bifactor models through Monte Carlo simulations. Study 1 showed that the classic oblique bifactor measurement models had severe convergence issues in many conditions. Even for converged replications, both factor loading and group factor correlation estimates were severely biased.
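
For readers unfamiliar with the model, here is a hypothetical sketch, not the article's actual simulation code, of how a single Monte Carlo replication might generate data from an oblique bifactor structure: every item loads on a general factor and on one of several group factors, and the group factors are allowed to correlate. All loadings, sample sizes, and correlations below are invented.

```python
# Hypothetical sketch of generating data from an oblique bifactor model,
# as one replication of a Monte Carlo study. All values are invented.
import numpy as np

rng = np.random.default_rng(42)
n = 500                     # respondents per replication
n_group = 3                 # group factors
items_per_group = 4         # items per group factor

# Correlated group factors (the "oblique" part) plus an orthogonal general factor.
group_corr = np.full((n_group, n_group), 0.3) + np.eye(n_group) * 0.7
group_scores = rng.multivariate_normal(np.zeros(n_group), group_corr, size=n)
general_scores = rng.normal(size=n)

data = []
for g in range(n_group):
    for _ in range(items_per_group):
        item = (0.6 * general_scores                 # general-factor loading
                + 0.4 * group_scores[:, g]           # group-factor loading
                + rng.normal(scale=0.7, size=n))     # unique error
        data.append(item)

data = np.column_stack(data)   # n x 12 simulated item responses
print("simulated data shape:", data.shape)
```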



Affiliations

  • Society for Judgment and Decision Making (SJDM)
  • Society of Industrial and Organizational Psychology (SIOP)

Research Grants

Center for Promotion of Academic Careers through Motivational Opportunities to Develop Emerging Leaders in STEM (LS-PAC MODELS).

Louis Stokes Regional Center of Excellence

2018-2023
