Assessment Center

The Experts below are selected from a list of 106,320 Experts worldwide, ranked by the ideXlab platform.

Filip Lievens - One of the best experts on this subject based on the ideXlab platform.

  • Current Theory and Practice of Assessment Centers: The Importance of Trait Activation
    Oxford Handbooks Online, 2009
    Co-Authors: Filip Lievens, Liesbet De Koster, Eveline Schollaert
    Abstract:

    Assessment Centers have always had a strong link with practice. This link is so strong that the theoretical basis of the workings of an Assessment Center is sometimes questioned. This article posits that trait activation theory might be fruitfully used to explain how job-relevant candidate behavior is elicited and rated in Assessment Centers. Trait activation theory is a recent theory that focuses on the person–situation interaction to explain behavior based on responses to trait-relevant cues found in situations. These observable responses serve as the basis for behavioral ratings on dimensions used in a variety of Assessments, such as performance appraisals and interviews, but also in Assessment Centers. The article starts by explaining the basic tenets behind the Assessment Center method and trait activation theory. It then shows how trait activation theory might have key implications for current and future Assessment Center research, and it provides various directions for future Assessment Center studies.

  • Predicting cross-cultural training performance: The validity of personality, cognitive ability, and dimensions measured by an Assessment Center and a behavior description interview
    Journal of Applied Psychology, 2003
    Co-Authors: Filip Lievens, Michael M Harris, Etienne Van Keer, Claire Bisqueret
    Abstract:

    This study examined the validity of a broad set of predictors for selecting European managers for a cross-cultural training program in Japan. The selection procedure assessed cognitive ability, personality, and dimensions measured by Assessment Center exercises and a behavior description interview. Results show that the factor Openness was significantly related to cross-cultural training performance, whereas cognitive ability was significantly correlated with language acquisition. The dimensions of adaptability, teamwork, and communication as measured by a group discussion exercise provided incremental variance in both criteria, beyond cognitive ability and personality. In general, these results are consistent with the literature on domestic selection, although there are some important differences.
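
    A note on method: the incremental-variance claim above is the classic hierarchical-regression test, i.e. enter cognitive ability and personality first, then check how much the exercise dimensions raise R-squared. The Python sketch below illustrates that logic with simulated data; the variable names, sample size, and effect sizes are assumptions for illustration, not values from the study.

```python
# Hypothetical sketch: incremental validity of a group-discussion
# dimension beyond cognitive ability and personality. Simulated data;
# names, sample size, and effect sizes are not the study's.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
cognitive = rng.normal(size=n)   # cognitive ability score
openness = rng.normal(size=n)    # Big Five Openness
teamwork = rng.normal(size=n)    # group-discussion dimension rating
# Criterion: cross-cultural training performance (simulated).
performance = (0.3 * cognitive + 0.2 * openness
               + 0.25 * teamwork + rng.normal(size=n))

def r2(X, y):
    return LinearRegression().fit(X, y).score(X, y)

step1 = r2(np.column_stack([cognitive, openness]), performance)
step2 = r2(np.column_stack([cognitive, openness, teamwork]), performance)
print(f"R^2, ability + personality:  {step1:.3f}")
print(f"R^2, + exercise dimension:   {step2:.3f}")
print(f"incremental variance (dR^2): {step2 - step1:.3f}")
```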

  • Dimension and exercise variance in Assessment Center scores: A large-scale evaluation of multitrait-multimethod studies
    Journal of Applied Psychology, 2001
    Co-Authors: Filip Lievens, James M Conway
    Abstract:

    This study addresses 3 questions regarding Assessment Center construct validity: (a) Are Assessment Center ratings best thought of as reflecting dimension constructs (dimension model), exercises (exercise model), or a combination? (b) To what extent do dimensions or exercises account for variance? (c) Which design characteristics increase dimension variance? To this end, a large set of multitrait-multimethod studies (N = 34) was analyzed, showing that Assessment Center ratings were best represented (i.e., in terms of fit and admissible solutions) by a model with correlated dimensions and with exercises specified as correlated uniquenesses. In this model, dimension variance equals exercise variance. Significantly more dimension variance was found when fewer dimensions were used and when assessors were psychologists. Use of behavioral checklists, a lower dimension-exercise ratio, and similar exercises also increased dimension variance.
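
    The multitrait-multimethod (MTMM) logic behind this study can be illustrated without a full confirmatory factor analysis: correlations between the same dimension rated in different exercises index dimension variance, while correlations between different dimensions rated within the same exercise index exercise variance. A minimal Python sketch with simulated ratings follows; the generating variances are assumptions, chosen to be equal so the output echoes the equal-variance finding above.

```python
# Hypothetical MTMM sketch: same-dimension/cross-exercise correlations
# index dimension variance; cross-dimension/same-exercise correlations
# index exercise variance. All generating values are illustrative.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n_cand, n_dim, n_ex = 1000, 3, 3
dim_true = rng.normal(size=(n_cand, n_dim, 1))  # stable standing on each dimension
ex_true = rng.normal(size=(n_cand, 1, n_ex))    # exercise-specific performance
noise = rng.normal(size=(n_cand, n_dim, n_ex))
ratings = dim_true + ex_true + noise            # (candidate, dimension, exercise)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Monotrait-heteromethod: one dimension rated in two different exercises.
mthm = [corr(ratings[:, d, e1], ratings[:, d, e2])
        for d in range(n_dim) for e1, e2 in combinations(range(n_ex), 2)]
# Heterotrait-monomethod: two dimensions rated within one exercise.
htmm = [corr(ratings[:, d1, e], ratings[:, d2, e])
        for e in range(n_ex) for d1, d2 in combinations(range(n_dim), 2)]

print(f"mean same-dimension, cross-exercise r: {np.mean(mthm):.2f}")
print(f"mean cross-dimension, same-exercise r: {np.mean(htmm):.2f}")
```

    With equal generating variances, both mean correlations come out near .33, the pattern in which dimension variance equals exercise variance.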

  • A different look at Assessment Centers: Views of Assessment Center users
    International Journal of Selection and Assessment, 1999
    Co-Authors: Filip Lievens, Hans Goemaere
    Abstract:

    This study aims to shed light on possible problems of Assessment Center users and designers when developing and implementing Assessment Centers. Semi-structured interviews with a representative sample of Assessment Center users in Flanders revealed that, besides a large variability in Assessment Center practice, practitioners experience problems with dimension selection and definition, exercise design, the use of line/staff managers as assessors, distinguishing between observation and evaluation, and the content of assessor training programs. Solutions for these problems are suggested.

Matthew S Fleisher - One of the best experts on this subject based on the ideXlab platform.

Pamela S. Edens - One of the best experts on this subject based on the ideXlab platform.

  • A meta-analysis of the criterion-related validity of Assessment Center dimensions
    Personnel Psychology, 2003
    Co-Authors: Winfred Arthur Jr., Eric Anthony Day, Theresa L. McNelly, Pamela S. Edens
    Abstract:

    We used meta-analytic procedures to investigate the criterion-related validity of Assessment Center dimension ratings. By focusing on dimension-level information, we were able to assess the extent to which specific constructs account for the criterion-related validity of Assessment Centers. From a total of 34 articles that reported dimension-level validities, we collapsed 168 Assessment Center dimension labels into an overriding set of 6 dimensions: (a) consideration/awareness of others, (b) communication, (c) drive, (d) influencing others, (e) organizing and planning, and (f) problem solving. Based on this set of 6 dimensions, we extracted 258 independent data points. Results showed a range of estimated true criterion-related validities from .25 to .39. A regression-based composite consisting of 4 out of the 6 dimensions accounted for the criterion-related validity of Assessment Center ratings and explained more variance in performance (20%) than Gaugler, Rosenthal, Thornton, and Bentson (1987) were able to explain using the overall Assessment Center rating (14%).
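
    The core arithmetic of a meta-analysis like this is a sample-size-weighted mean validity, with the observed variance across studies compared against the variance expected from sampling error alone (the Hunter-Schmidt "bare-bones" approach). A minimal Python sketch follows; the (r, N) pairs are invented for illustration and are not the 258 data points analyzed above.

```python
# Hypothetical bare-bones meta-analysis sketch (Hunter-Schmidt style).
# The (r, N) pairs are invented; they are not this study's data.
import numpy as np

r = np.array([0.21, 0.35, 0.28, 0.40, 0.19])  # study validity coefficients
n = np.array([120, 85, 200, 60, 150])         # study sample sizes

r_bar = np.sum(n * r) / np.sum(n)             # weighted mean validity
var_obs = np.sum(n * (r - r_bar) ** 2) / np.sum(n)
# Sampling-error variance expected across studies of this average size.
var_err = (1 - r_bar ** 2) ** 2 / (np.mean(n) - 1)
var_true = max(var_obs - var_err, 0.0)        # residual "true" variance

print(f"weighted mean r: {r_bar:.3f}")
print(f"observed variance: {var_obs:.4f}, sampling-error variance: {var_err:.4f}")
print(f"estimated true variance: {var_true:.4f}")
```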

Katja Pohley - One of the best experts on this subject based on the ideXlab platform.

  • A survey of Assessment Center practices in organizations in the United States
    Personnel Psychology, 1997
    Co-Authors: Annette C Spychalski, Miguel A Quinones, Barbara B Gaugler, Katja Pohley
    Abstract:

    Two hundred fifteen organizations in the United States provided information about multiple aspects of their Assessment Centers, including design, usage, and their adherence to professional guidelines and research-based suggestions for the use of this method. Results reveal that Centers are usually conducted for selection, promotion, and development purposes. Supervisor recommendation plays a sizable role in choosing Center participants. Most often, line managers act as assessors; they typically arrive at participant ratings through a consensus process. In general, respondents indicate close adherence to recommendations for Center design and assessor training. Recommendations involving other practices (e.g., informing participants, evaluating assessors, validating Center results) are frequently not followed. Furthermore, methods thought to improve predictive validity of Center ratings are underutilized. Variability in Center practices according to industry and Center purpose was revealed. We encourage practitioners to follow recommendations for Center usage, and researchers to work to better understand moderators of Center validity.

Phillip E. Lowry - One of the best experts on this subject based on the ideXlab platform.

  • A Survey of the Assessment Center Process in the Public Sector
    Public Personnel Management, 1996
    Co-Authors: Phillip E. Lowry
    Abstract:

    This survey of public sector police and fire chiefs and human resources professionals disclosed increasing use of the Assessment Center method. It also disclosed several serious flaws in the Assessment Centers used in the public sector. Job analyses were not always required, validation was reported to be lacking or inappropriate, assessors were not always properly trained, and feedback to and from participants was not always provided.

  • Selection Methods: Comparison of Assessment Centers with Personnel Records Evaluations
    Public Personnel Management, 1994
    Co-Authors: Phillip E. Lowry
    Abstract:

    Personnel records evaluations were compared with results from seven Assessment Centers to determine if these evaluations improved the predictive power of the Assessment Center. Fifty-five candidate...

  • The Structured Interview: An Alternative to the Assessment Center?
    Public Personnel Management, 1994
    Co-Authors: Phillip E. Lowry
    Abstract:

    This article discusses how to improve the validity and reliability of structured interviews. A framework for the structured interview is suggested. The framework is based on the foundations laid by various researchers, as well as the guidelines for Assessment Centers. The proposed framework was used to structure an interview used in a selection test. The results suggest that this kind of structured interview may be a valid and less costly alternative to the Assessment Center. Additional research to refine and build on the framework is suggested.

    Personnel selection for managerial and supervisory positions is important for the efficient and effective conduct of business in both the private and public sectors. The most widely used personnel selection process today is the interview. Dipboye reports that over 70% of organizations in the United States use the unstructured interview in promotion decisions. In Europe the percentages are even higher, with over 90% of British and 94% of French employers reporting the use of interviews for managerial selection.(1) The validity and reliability of the unstructured interview have been shown to be relatively low. Several procedures, such as adding structure to the process and establishing standards, have been suggested for improving the interview process. These preliminary efforts have markedly improved the reliability and validity of the interview process.(2)

    Purpose

    The purpose of this article is to build a framework of suggested procedures for the structured interview based on both the foundation laid by previous researchers and the guidelines in use for Assessment Centers. This framework provides practitioners and researchers with a new starting point for the design and conduct of structured interviews and also defines nascent standards for the structured interview.

    The Assessment Center and the Structured Interview

    The Assessment Center method, while not used as extensively as the interview, has been receiving increasing attention. It is particularly important in the managerial selection process for fire and police departments. In 1982, over 44 percent of 156 United States federal, state, and local governments used the Assessment Center.(3) As of 1984, 32 of 73 metropolitan United States fire departments used the Assessment Center, especially for promotion to supervisory and managerial positions.(4)

    Meta-analytic studies of both the Assessment Center and interview methods reveal that structured interviews and Assessment Centers have similar validities. Wiesner & Cronshaw reported that the 95% confidence interval for the validity coefficient of structured employment interviews was .34 - .86, using the criterion of potential job success. By comparison, Gaugler, Rosenthal, Thornton III, & Bentson reported that the Assessment Center 95% confidence interval was .15 - .91 for the similar criterion of management potential.(5) While the validities for Assessment Centers and structured interviews are similar, the direct and indirect costs are not. Typically, Assessment Centers use three or more situational simulations requiring direct observation. A structured interview, on the other hand, may be conducted with only one situation requiring direct observation. Thus the time required to test using a structured interview could be reduced by as much as 25% - 50% of that required for an Assessment Center, with a concomitant reduction in cost.

    One of the most important strengths of the Assessment Center process is the defined set of standards for the design and conduct of an Assessment Center.(6) While these standards are not complete and are still evolving, they do at least provide a fairly definitive set of suggested ways to design and conduct Assessment Centers. Conversely, one of the most striking deficiencies in the interview method today is the lack of such procedures and standards. The need for definitive standards or guidelines for interviews has long been recognized. …
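
    For readers wanting to reproduce the kind of interval quoted above for a single study, the standard tool is the Fisher z-transformation of a correlation. A minimal Python sketch follows; the r and N values are assumptions for illustration (the intervals cited above come from meta-analytic distributions of studies, not a single sample).

```python
# Hypothetical sketch: a 95% confidence interval for one observed
# validity coefficient via the Fisher z-transformation. The r and N
# below are assumed for illustration only.
import math

def fisher_ci(r: float, n: int, z_crit: float = 1.96) -> tuple[float, float]:
    """Confidence interval for a correlation via Fisher's z."""
    z = math.atanh(r)              # transform r to z
    se = 1.0 / math.sqrt(n - 3)    # standard error of z
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to r

lo, hi = fisher_ci(r=0.45, n=100)
print(f"95% CI for r = .45, N = 100: [{lo:.2f}, {hi:.2f}]")
```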

  • The Assessment Center: Effects of Varying Consensus Procedures
    Public Personnel Management, 1992
    Co-Authors: Phillip E. Lowry
    Abstract:

    The impact of using two different consensus procedures in an Assessment Center was investigated using a field experiment. Two groups of assessors observed participants in the same exercises. The experimental group used a consensus procedure that did not allow evaluative discussions of behaviors or attribution of scores to assessors. The control group allowed both activities. The results showed significant and important differences between the two groups of assessors in both scores and rankings of participants. Scores from the experimental group showed no significant difference from independent ratings by supervisors on the same performance dimensions. This contrasted with the strongly significant difference shown by the control group. The results support previous findings that there is a need to standardize the consensus procedures. The process described for the experimental group is suggested as a model.

    The Assessment Center continues to play an increasingly important role in the selection and development of managers in the public sector. As early as 1982, over 44 percent of 156 federal, state, and local governments used the Assessment Center (Fitzgerald and Quaintance, 1982). Yeager (1986) reported that 44% of 73 metropolitan fire departments used the Assessment Center.

    The key issue in any selection process, including the Assessment Center, is the validity of the results. Questions have been raised concerning the Assessment Center process and its potential impact on validity. For example, Sackett (1982) suggested that "the main cause of concern is the lack of standardization among Centers. Assessment is a complex process and variations exist from organization to organization on countless factors, including...the method of reaching consensus among assessors...." (p. 144). While there are published standards for Assessment Centers (Task Force, 1980), they are not specific concerning the method used for evaluating participants and reaching consensus on scores (Fitzgerald & Quaintance, 1982). The previous standards (Task Force, 1980) required that assessor judgments be based on observations of behaviors and that these judgments be integrated by pooling observations at an evaluation meeting. The integration process, or consensus discussion, was never standardized. The newest guidelines (Task Force, 1989) require that the integration be obtained either through the use of a pooling process or "through a statistical integration process validated in accord with professionally accepted standards" (p. 463). Again, the guidelines are silent on the specific way the pooling or consensus process should be conducted.

    A major concern in consensus discussions is the possibility that one or more assessors might take control of the group and improperly influence the ratings (Schmitt, 1977). Klimoski et al. (1980) addressed the question of the influence exerted by the chair during consensus discussions. They found that it was possible for the chair to influence decisions to the extent that the decisions reached may have been erroneous. Sackett & Wilson (1982) reported that there were apparent differences in assessor influence in one Assessment Center. The consensus procedure they used included evaluative discussion of behaviors and attribution of scores to assessors. This seems to be the most common procedure reported in the literature; for example, Dugan (1988), Klimoski et al. (1980), Russell (1985), and Silverman et al. (1986).

    Lowry (in press) studied four Assessment Centers. He used a markedly different consensus procedure patterned after the Nominal Group Technique (NGT) process pioneered by Delbecq et al. (1975). In Lowry's Assessment Centers the assessors were not permitted to make evaluative comments during the consensus discussions, nor were they allowed to announce their ratings. Lowry found that there was no significant interassessor influence. He concluded that the way the consensus discussions were conducted may have contributed to these results. …
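
    The comparison described above, consensus scores versus independent supervisor ratings on the same dimensions, is the kind of question a paired-samples test addresses. The Python sketch below illustrates one plausible form of that comparison with simulated data; the test choice, sample size, and score distributions are assumptions, not the study's actual analysis.

```python
# Hypothetical sketch: do consensus Assessment Center scores differ
# from independent supervisor ratings on the same dimensions?
# Simulated data; not the study's actual test or effect sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_participants = 40
supervisor = rng.normal(loc=3.5, scale=0.6, size=n_participants)
# Simulated consensus scores that track the supervisor ratings.
consensus = supervisor + rng.normal(scale=0.4, size=n_participants)

t, p = stats.ttest_rel(consensus, supervisor)
print(f"paired t = {t:.2f}, p = {p:.3f}")
# A non-significant p is consistent with the experimental group's
# pattern: consensus scores not reliably different from supervisors'.
```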

  • The Assessment Center: Reducing Interassessor Influence
    Public Personnel Management, 1991
    Co-Authors: Phillip E. Lowry
    Abstract:

    Four Assessment Centers were conducted using scoring and evaluation procedures patterned after the Nominal Group Technique. Assessors determined scores on performance dimensions after each exercise and just prior to and during the consensus discussions. These scores were never attributed to the assessor. No significant interassessor influence was found in any of the Centers.

    There is no published standard or otherwise agreed-upon way to conduct the consensus discussions within an Assessment Center. Researchers have found that assessors can influence one another during the consensus discussions (Sackett and Wilson, 1982). The purpose of this paper is to report findings concerning this interassessor influence in four Assessment Centers.

    The Assessment Center Process

    The Assessment Center process for selecting personnel for hire or promotion, or for evaluating managerial knowledge, skills, and abilities, is becoming more widely used, especially in the public sector. A typical Assessment Center requires participants to complete several simulations that test two or more performance dimensions. Job analysis is used to develop both the simulations and the performance dimensions to ensure their job-relatedness. Assessors observe the behaviors of the participants, and ultimately pool their observations, evaluate the behaviors, and provide a score for the related performance dimensions. The research evidence suggests that properly conducted Assessment Centers are job related and predictive of managerial success (Thornton and Byham, 1982). The operative phrase in the preceding is "properly conducted." What constitutes a properly conducted Assessment Center? There are published standards (Task Force, 1980), but these standards are relatively general in scope. While they do, for example, require that assessors "pool" judgments to arrive at scores, they do not specify how this pooling procedure should be accomplished. Cohen (1978) suggested that the pooling of judgments in the consensus discussion "is the most central aspect of Assessment Center technology."

    Interassessor Influence

    Are there any problems that might arise from the pooling process? If one or more assessors can influence other assessors during the pooling and evaluation process, the variation in Assessment Center ratings would be affected. Sackett and Wilson (1982) found that there was interassessor influence in one of two Assessment Centers they evaluated. Participant scores were affected by this influence. They suggested that the consensus judgment process includes the opportunity for some assessors to exert more influence on the outcome than others. They concluded that "differences in (assessor) influence are a phenomenon worthy of further consideration." They based their finding on the observed differences in influence in two Assessment Centers.

    Sackett and Wilson (1982) measured assessor influence on the consensus decision by determining the frequency with which an assessor changed a rating during the consensus discussion. Having an assessor's rating adopted by the group was evidence of that assessor's influence. Hence, the smaller the relative number of scoring changes, the greater the influence of the assessor. While there may be other factors that would explain why assessors change their scores, the influence factor suggested by Sackett and Wilson (1982) was accepted as the basic premise for the research reported in this paper.

    Consensus Procedures to Reduce Interassessor Influence

    The primary issue addressed in this paper is whether the use of a specific procedure for scoring and reaching consensus on evaluations can reduce interassessor influence. The basic research question is whether, when using this procedure, there are any significant differences in the number of scoring changes made by the assessors between the beginning and the end of the consensus discussion. …
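
    Sackett and Wilson's influence measure, as summarized above, reduces to counting how often each assessor changes a rating between the start and end of the consensus discussion. The Python sketch below shows one way to tabulate such counts and test whether change rates differ across assessors; the counts and the chi-square test are illustrative assumptions, not the procedure reported in this paper.

```python
# Hypothetical sketch of the influence measure: per-assessor counts of
# ratings changed vs. kept during consensus, with a chi-square test of
# whether change rates differ across assessors. Counts are invented.
import numpy as np
from scipy.stats import chi2_contingency

assessors = ["A", "B", "C", "D"]
changed = np.array([4, 15, 12, 14])     # ratings changed in discussion
unchanged = np.array([36, 25, 28, 26])  # ratings kept as initially scored

# Rows: assessors; columns: changed vs. unchanged ratings.
table = np.column_stack([changed, unchanged])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
# Under Sackett and Wilson's premise, an assessor with markedly fewer
# changes (here, assessor A) would be the more influential one.
```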