A Quantitative Review and Analysis of the Constructs Underlying Assessment Center Ratings: What are we Measuring?
Date of Award: 2008
Degree: Doctor of Philosophy
Major: Industrial and Organizational Psychology
Major Professor: David J. Woehr
Committee Members: Michael C. Rush, T. Russell Crook, Eric D. Sundstrom
The overarching goal of this study was to clarify what constructs are measured by assessment centers (ACs). Although ACs have been used and studied for decades, they continue to exhibit measurement problems that generally center on the use of dimension-level information. However, a necessary step in examining this issue has been neglected: a proper delineation of what constructs ACs actually measure. To address this issue, the study's primary purpose was to explore the factor structure of AC dimensions. Several a priori models from both the AC and job performance literatures were examined as frameworks for explicating the constructs underlying dimensions. Data from two sources were used to address this question: intercorrelations from primary studies were synthesized meta-analytically (k = 57) and used as input for a series of confirmatory factor analysis models. In addition, the extent to which subject matter experts perceived these broader categories to operate as a summary framework was evaluated by asking experienced AC raters to categorize primary dimensions into the categories of each model.
The results showed that Arthur et al.'s (2003) framework provided a good fit to the data, offering additional evidence in support of this model. When compared against several alternative frameworks, Arthur et al.'s (2003) model also fit the data better than the alternatives. Hence, these seven categories provide a viable framework for explaining what constructs underlie AC dimension ratings. Subject matter experts also showed the highest level of agreement when classifying primary dimensions into this framework. Finally, several hierarchical models were tested based on the a priori models examined in the study. Of these, a hierarchical three-factor model fit the data well, indicating that a set of higher-order summary categories may also explain variance in the seven factors of Arthur et al.'s (2003) framework.
Overall, this study provides some clarity on what constructs underlie AC dimension ratings. These findings are expected to contribute to both AC research and practice. Implications of these results, as well as limitations of the study and directions for future research, are discussed.
Meriac, John P., "A Quantitative Review and Analysis of the Constructs Underlying Assessment Center Ratings: What are we Measuring?" PhD diss., University of Tennessee, 2008.