Secondary Titles
- Associate Editor, Journal of Applied Psychology
- GRE Technical Advisory Committee
Industry Expertise
- Writing and Editing
Areas of Expertise
- Human Resource Management
- Counterproductive Work Behavior
- Organizational Citizenship Behavior
- Fairness/Validity of Pre-Employment Tests
Distinguished Early Career Contributions – Science Award (professional)
Awarded by the Society for Industrial and Organizational Psychology
Kelley School of Business Research Award – Associate Professor (professional)
Awarded by the Kelley School of Business at Indiana University
Finalist for the Faculty Distinguished Teaching Award from the Kelley School of Business Doctoral Student Association (professional)
Awarded by the Kelley School of Business at Indiana University
Early Career Achievement Award (professional)
Awarded by the Academy of Management Human Resources Division
University of Minnesota: Ph.D., Industrial/Organizational Psychology, 2007
Whitworth College: B.A., Psychology, 2000
A common form of missing data is caused by selection on an observed variable (e.g., Z). If the selection variable was measured and is available, the data are regarded as missing at random (MAR). Selection biases correlation, reliability, and effect size estimates when these estimates are computed on listwise deleted (LD) data sets. On the other hand, maximum likelihood (ML) estimates are generally unbiased and outperform LD in most situations, at least when the data are MAR. The exception is when we estimate the partial correlation. In this situation, LD estimates are unbiased when the cause of missingness is partialled out. In other words, there is no advantage of ML estimates over LD estimates in this situation. We demonstrate that under a MAR condition, even ML estimates may become biased, depending on how partial correlations are computed. Finally, we conclude with recommendations about how future researchers might estimate partial correlations even when the cause of missingness is unknown and, perhaps, unknowable.
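The phenomenon described above can be illustrated with a small simulation (the data-generating values below are hypothetical choices for illustration, not the article's analysis): when Y is missing as a function of an observed selection variable Z, listwise deletion biases the zero-order X–Y correlation, but the partial correlation that partials out Z is essentially unaffected.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Hypothetical data-generating model: Z relates to both X and Y.
z = rng.normal(size=n)
x = 0.5 * z + rng.normal(scale=np.sqrt(0.75), size=n)
y = 0.5 * z + 0.3 * x + rng.normal(size=n)

# MAR mechanism: cases are retained only when observed Z exceeds its median,
# so missingness depends solely on the measured selection variable.
observed = z > np.median(z)

def partial_corr(a, b, c):
    """Correlation of a and b after partialling c out of both (residual correlation)."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

full_r = np.corrcoef(x, y)[0, 1]                     # zero-order r, complete data
ld_r = np.corrcoef(x[observed], y[observed])[0, 1]   # zero-order r, listwise deleted: biased
full_pr = partial_corr(x, y, z)                      # partial r, complete data
ld_pr = partial_corr(x[observed], y[observed], z[observed])  # partial r, listwise deleted

print(f"zero-order r: full = {full_r:.3f}  listwise = {ld_r:.3f}")
print(f"partial r:    full = {full_pr:.3f}  listwise = {ld_pr:.3f}")
```

The listwise-deleted zero-order correlation is noticeably attenuated (selection restricts the range of Z, which both variables share), while the two partial correlations agree to within sampling error, matching the abstract's point that LD partial-correlation estimates are unbiased when the cause of missingness is partialled out.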
Although the majority of empirical commitment research has adopted a variable-centered approach, the person-centered or profiles approach is gaining traction. One challenge in the commitment profiles literature is that names are attached to profiles based on the within-study comparison among profiles and their relative levels and shapes. Thus, it is possible that different studies name the same profiles differently or different profiles similarly because of the context of the other profiles in the study. A meta-analytic approach, combined with multilevel latent profile analysis (LPA) that accounts for both within- and between-sample variability, is used in this study to examine the antecedents and outcomes of commitment profiles. This approach helps solve the naming problem by examining multiple data sets (K = 40) with a large combined sample (N = 16,052), obtained by contacting commitment researchers who voluntarily supplied primary data, thereby bringing further consensus about the phenomenology of profiles. LPA results revealed 5 profiles (Low, Moderate, AC-dominant, AC/NC-dominant, and High). Meta-analytic results revealed that high levels of bases of commitment were associated with value-based profiles whereas low levels were associated with weak commitment profiles. Additionally, value-based profiles were associated with older, married, and less educated participants than the weak commitment profiles. Regarding outcomes of commitment, profiles were found to significantly relate to focal behaviors (e.g., performance, tenure, and turnover) and discretionary behaviors (e.g., organizational citizenship behaviors). Value-based profiles were found to have higher levels of both focal and discretionary behaviors for all analyses. Implications for the commitment and profile literature are discussed.
Substantial mean score differences and significant adverse impact have long motivated the question of whether cognitive ability tests are biased against certain non-White subgroups. This article presents a framework for understanding the interrelated issues of adverse impact and test bias, with particular focus on two forms of test bias especially relevant for personnel selection: differential validity and differential prediction. Ethical and legal reasons that organizations should be concerned about differential validity/prediction are discussed. This article also serves as a critical review of the research literature on differential validity/prediction. The general conclusion is that available evidence supports the existence of differential validity/prediction in the form of correlation/slope and intercept differences between White and non-White subgroups. Implications for individuals and organizations are outlined, and a future research agenda is proposed highlighting the need for new, better data; new, better methods of testing for differential validity/prediction; and investigation of substantive factors causing differential validity/prediction.
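Differential prediction is conventionally assessed with the Cleary moderated-regression framework: regress the criterion on the test score, a group indicator, and their interaction, where the interaction coefficient tests slope differences and the group coefficient tests intercept differences. A minimal simulated sketch (all parameter values here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
group = rng.integers(0, 2, n)   # 0 = reference group, 1 = focal group (hypothetical)
x = rng.normal(size=n)          # standardized test score
# Simulate an intercept difference (-0.2 for the focal group) with identical slopes.
y = 0.5 * x - 0.2 * group + rng.normal(scale=0.8, size=n)

# Cleary-style test: y on x, group, and x*group.
X = np.column_stack([np.ones(n), x, group, x * group])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"slope diff (x*group): {b[3]:+.3f}   intercept diff (group): {b[2]:+.3f}")
```

With these simulated data, the interaction coefficient is near zero (no slope difference) while the group coefficient recovers the built-in intercept difference, which is the pattern the differential prediction literature distinguishes.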
Predictive bias studies have generally suggested that cognitive ability test scores overpredict job performance of African Americans, meaning these tests are not predictively biased against African Americans. However, at least 2 issues call into question existing over-/underprediction evidence: (a) a bias identified by Aguinis, Culpepper, and Pierce (2010) in the intercept test typically used to assess over-/underprediction and (b) a focus on the level of observed validity instead of operational validity. The present study developed and utilized a method of assessing over-/underprediction that draws on the math of subgroup regression intercept differences, does not rely on the biased intercept test, allows for analysis at the level of operational validity, and can use meta-analytic estimates as input values. Therefore, existing meta-analytic estimates of key parameters, corrected for relevant statistical artifacts, were used to determine whether African American job performance remains overpredicted at the level of operational validity. African American job performance was typically overpredicted by cognitive ability tests across levels of job complexity and across conditions wherein African American and White regression slopes did and did not differ. Because the present study does not rely on the biased intercept test and because appropriate statistical artifact corrections were carried out, the present study’s results are not affected by the 2 issues mentioned above. The present study represents strong evidence that cognitive ability tests generally overpredict job performance of African Americans.
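The logic of comparing predicted and actual subgroup criterion means can be sketched in standardized units: with subgroup gaps d_x (predictor) and d_y (criterion), both expressed as reference-minus-focal differences in pooled SD units, a common regression line with standardized slope r predicts a criterion gap of r·d_x, and the focal group is overpredicted when the actual gap d_y exceeds that prediction. The input values below are purely illustrative, not the article's meta-analytic estimates:

```python
def overprediction(d_x, d_y, r):
    """Overprediction of the focal group's criterion mean (in SD units) when a
    common regression line with standardized slope r is applied to both groups.
    d_x, d_y: reference-minus-focal standardized mean differences on the
    predictor and the criterion. Positive values indicate overprediction."""
    return d_y - r * d_x

# Hypothetical inputs: 1.0 SD predictor gap, 0.45 SD criterion gap, validity .30.
print(f"overprediction = {overprediction(1.0, 0.45, 0.30):+.2f} SD")
```

Because the function takes only summary gaps and a validity coefficient, meta-analytic estimates (corrected for statistical artifacts) can be plugged in directly, which is the spirit of the approach the abstract describes.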
This study investigates the personality processes involved in the debate surrounding the use of cognitive ability tests in college admissions. In Study 1, 108 undergraduates (mean age = 18.88 years, 60 women, 80 Whites) completed measures of social dominance orientation (SDO), testing self-efficacy, and attitudes regarding the use of cognitive ability tests in college admissions; SAT/ACT scores were collected from the registrar. Sixty-seven undergraduates (mean age = 19.06 years, 39 women, 49 Whites) completed the same measures in Study 2, along with measures of endorsement of commonly presented arguments about test use. In Study 3, 321 American adults (mean age = 35.58 years, 180 women, 251 Whites) completed the same measures used in Study 2; half were provided with facts about race and validity issues surrounding cognitive ability tests. Individual differences in SDO significantly predicted support for the use of cognitive ability tests in all samples, after controlling for SAT/ACT scores and testing self-efficacy, and also among participants who read facts about cognitive ability tests. Moreover, arguments for and against test use mediated this effect. The present study sheds new light on an old debate by demonstrating that individual differences in beliefs about hierarchy play a key role in attitudes toward cognitive ability test use.
Much of the recent research on counterproductive work behaviors (CWBs) has used multi-item self-report measures of CWB. Because of concerns over self-report measurement, there have been recent calls to collect ratings of employees' CWB from their supervisors or coworkers (i.e., other-raters) as alternatives or supplements to self-ratings. However, little is known about the degree to which other-ratings of CWB capture unique and valid incremental variance beyond self-report CWB. The present meta-analysis investigates a number of key issues regarding the incremental contribution of other-reports of CWB. First, self- and other-ratings of CWB were moderately to strongly correlated with each other. Second, with some notable exceptions, self- and other-report CWB exhibited very similar patterns and magnitudes of relationships with a set of common correlates. Third, self-raters reported engaging in more CWB than other-raters reported them engaging in, suggesting other-ratings capture a narrower subset of CWBs. Fourth, other-report CWB generally accounted for little incremental variance in the common correlates beyond self-report CWB. Although many have viewed self-reports of CWB with skepticism, the results of this meta-analysis support their use in most CWB research as a viable alternative to other-reports.
We meta-analyzed the correlations between voluntary employee lateness, absenteeism, and turnover to (i) provide the most comprehensive estimates to date of the interrelationships between these withdrawal behaviors; (ii) test the viability of a withdrawal construct; and (iii) evaluate the evidence for competing models of the relationships between withdrawal behaviors (i.e., alternate forms, compensatory forms, independent forms, progression of withdrawal, and spillover model). Corrected correlations were .26 between lateness and absenteeism, .25 between absenteeism and turnover, and .01 between lateness and turnover. These correlations were even smaller in recent studies that had been carried out since the previous meta-analyses of these relationships 15–20 years ago. The small-to-moderate intercorrelations are not supportive of a withdrawal construct that includes lateness, absenteeism, and turnover. These intercorrelations also rule out many of the competing models of the relationships between withdrawal behaviors, as many of the models assume all relationships will be positive, null, or negative. On the basis of path analyses using meta-analytic data, the progression of withdrawal model garnered the most support. This suggests that lateness may moderately predict absenteeism and absenteeism may moderately predict turnover.
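As a quick back-of-the-envelope consistency check on the progression-of-withdrawal reading (a tracing of the mediational chain using the reported correlations, not the article's actual path analysis), a full-mediation chain lateness → absenteeism → turnover implies a lateness–turnover correlation equal to the product of the two adjacent correlations:

```python
# Corrected meta-analytic correlations reported in the abstract.
r_la = 0.26  # lateness–absenteeism
r_at = 0.25  # absenteeism–turnover
r_lt = 0.01  # lateness–turnover (observed)

# Under a full-mediation progression model, the implied lateness–turnover
# correlation is the product of the standardized path coefficients.
implied_r_lt = r_la * r_at
print(f"implied r(lateness, turnover) = {implied_r_lt:.3f} vs. observed {r_lt:.2f}")
```

The implied value (.065) is small, as the progression model requires the distal lateness–turnover link to be the weakest of the three, consistent with the near-zero observed estimate.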
This study investigated the usefulness of the five-factor model (FFM) of personality in predicting two aspects of managerial performance (task vs. contextual) assessed by utilizing the 360-degree performance rating system. The authors speculated that one reason for the low validity of the FFM might be the failure of single-source (e.g., supervisor) ratings to comprehensively capture the construct of managerial performance. The operational validity of personality was found to increase substantially (50%–74%) across all of the FFM personality traits when both peer and subordinate ratings were added to supervisor ratings according to the multitrait–multimethod approach. Furthermore, the authors responded to the recent calls to validate tests via a multivariate (e.g., multitrait–multimethod) approach by decomposing overall managerial performance into task and contextual performance criteria and by using multiple rating perspectives (sources). Overall, this study contributes to the evidence that personality may be even more useful in predicting managerial performance if the performance criteria are less deficient.