Aptitude Tests

14,000,000 Leading Edge Experts on the ideXlab platform

The Experts below are selected from a list of 276 Experts worldwide, ranked by the ideXlab platform.

Manuela Schroeder - One of the best experts on this subject based on the ideXlab platform.

  • Analysis of User Acceptance for Web-Based Aptitude Tests with DART
    Americas Conference on Information Systems, 2005
    Co-Authors: Michael Amberg, Sonja Fischer, Manuela Schroeder
    Abstract:

    Web-based Aptitude Tests, a special category of Aptitude Tests, can be used for largely standardized test methods and for a large number of users. The characteristics of web-based Aptitude Tests can have an impact on the test result. The aim of our research is to develop a method for evaluating the user acceptance of web-based Aptitude Tests. Negative influences arising from the use of a Human-Computer Interface should be identified and minimized. After an analysis of existing acceptance models, the DART approach was chosen as the basis for adaptation to web-based Aptitude Tests. Based on a literature review and expert discussions, we identified twelve aggregated acceptance indicators. The DART approach helps to define a balanced set of measurable acceptance indicators for evaluating user acceptance.
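
As a rough illustration of the kind of structure the paper describes, the sketch below groups acceptance indicators under DART's four dimensions and aggregates ratings per dimension. The indicator names are hypothetical placeholders, not the twelve indicators identified in the study:

```python
# Minimal sketch: DART dimensions with hypothetical indicators.
# Per-dimension means are what a DART chart would visualise.
from statistics import mean

dart_dimensions = {
    "Perceived Usefulness": ["relevance_of_tasks", "feedback_quality", "result_transparency"],
    "Perceived Ease of Use": ["navigation", "instruction_clarity", "error_tolerance"],
    "Perceived Network Effects": ["comparability_of_results", "institutional_acceptance", "community_support"],
    "Perceived Costs": ["time_effort", "technical_requirements", "privacy_concerns"],
}

def dimension_scores(ratings: dict[str, float]) -> dict[str, float]:
    """Aggregate per-indicator ratings (e.g., 1-5 Likert) into one mean per dimension."""
    return {
        dim: mean(ratings[ind] for ind in indicators)
        for dim, indicators in dart_dimensions.items()
    }

# Example: one respondent rating every indicator 4 on a 1-5 scale.
ratings = {ind: 4.0 for inds in dart_dimensions.values() for ind in inds}
print(dimension_scores(ratings))
```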

Michael Amberg - One of the best experts on this subject based on the ideXlab platform.

  • Evaluation of User Acceptance for Web-Based Aptitude Tests
    Communications of the IIMA, 2006
    Co-Authors: Michael Amberg, Sonja Fischer, Manuela Schroeder
    Abstract:

    Web-based Aptitude Tests, a special category of Aptitude Tests, can be used for largely standardized test methods and for a large number of users. The characteristics of web-based Aptitude Tests can have an impact on the test results and on user acceptance. The aim of our research is to develop a method for evaluating the user acceptance of web-based Aptitude Tests. We therefore used the DART approach, with the dimensions (Perceived) Usefulness, (Perceived) Ease of Use, (Perceived) Network Effects, and (Perceived) Costs, as the theoretical basis; identified important acceptance indicators; developed a questionnaire; and conducted a survey. Afterwards, we assessed the reliability and conducted a factor analysis. The results indicate that some of the defined acceptance indicators should be revised. Additionally, the factor analysis shows that combining the two dimensions (Perceived) Usefulness and (Perceived) Network Effects is useful, especially with regard to web-based Aptitude Tests. Finally, we conducted a univariate analysis to evaluate the user acceptance of a web-based Aptitude test. The result, visualised as a DART chart, clearly shows that the interviewees evaluated the indicators very differently. There are areas where the Aptitude test meets expectations and areas that can be improved.
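
The abstract mentions a reliability check before the factor analysis; a common choice for multi-item questionnaire scales is Cronbach's alpha, sketched below on synthetic Likert-style data (a respondents-by-items matrix; not data from the study):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic example: 50 respondents answering 4 items of one
# acceptance indicator on a 1-5 scale.
rng = np.random.default_rng(0)
latent = rng.normal(3, 0.8, size=(50, 1))
responses = np.clip(np.round(latent + rng.normal(0, 0.5, size=(50, 4))), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```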

  • Web-Based Aptitude Tests at Universities in German-Speaking Countries
    Communications of the IIMA, 2005
    Co-Authors: Michael Amberg, Sonja Fischer, Manuela Schroeder
    Abstract:

    Universities can increasingly select their students independently. To support the selection process, web-based Aptitude Tests offer a way to balance the benefits and effort of this task. In this paper, we point out how current web-based Aptitude Tests are designed, which competences they cover, and which development methods are used. For this purpose, we developed a classification of how web-based Aptitude Tests are implemented. Furthermore, competences as the basis of web-based Aptitude Tests are appraised. Four competence categories (professional, methodological, personal, and social competences) are selected as the most appropriate pattern. Thereafter, we analyse methods for developing competence specifications. Finally, we state lessons learned for the development of web-based Aptitude Tests at universities. These results provide important preparatory work and a basis for the systematic development of a web-based Aptitude test at the University of Erlangen-Nuremberg.

  • Analysis of User Acceptance for Web-Based Aptitude Tests with DART
    Americas Conference on Information Systems, 2005
    Co-Authors: Michael Amberg, Sonja Fischer, Manuela Schroeder
    Abstract:

    Web-based Aptitude Tests, a special category of Aptitude Tests, can be used for largely standardized test methods and for a large number of users. The characteristics of web-based Aptitude Tests can have an impact on the test result. The aim of our research is to develop a method for evaluating the user acceptance of web-based Aptitude Tests. Negative influences arising from the use of a Human-Computer Interface should be identified and minimized. After an analysis of existing acceptance models, the DART approach was chosen as the basis for adaptation to web-based Aptitude Tests. Based on a literature review and expert discussions, we identified twelve aggregated acceptance indicators. The DART approach helps to define a balanced set of measurable acceptance indicators for evaluating user acceptance.

Sonja Fischer - One of the best experts on this subject based on the ideXlab platform.

  • Evaluation of User Acceptance for Web-Based Aptitude Tests
    Communications of the IIMA, 2006
    Co-Authors: Michael Amberg, Sonja Fischer, Manuela Schroeder
    Abstract:

    Web-based Aptitude Tests, a special category of Aptitude Tests, can be used for largely standardized test methods and for a large number of users. The characteristics of web-based Aptitude Tests can have an impact on the test results and on user acceptance. The aim of our research is to develop a method for evaluating the user acceptance of web-based Aptitude Tests. We therefore used the DART approach, with the dimensions (Perceived) Usefulness, (Perceived) Ease of Use, (Perceived) Network Effects, and (Perceived) Costs, as the theoretical basis; identified important acceptance indicators; developed a questionnaire; and conducted a survey. Afterwards, we assessed the reliability and conducted a factor analysis. The results indicate that some of the defined acceptance indicators should be revised. Additionally, the factor analysis shows that combining the two dimensions (Perceived) Usefulness and (Perceived) Network Effects is useful, especially with regard to web-based Aptitude Tests. Finally, we conducted a univariate analysis to evaluate the user acceptance of a web-based Aptitude test. The result, visualised as a DART chart, clearly shows that the interviewees evaluated the indicators very differently. There are areas where the Aptitude test meets expectations and areas that can be improved.

  • Web-Based Aptitude Tests at Universities in German-Speaking Countries
    Communications of the IIMA, 2005
    Co-Authors: Michael Amberg, Sonja Fischer, Manuela Schroeder
    Abstract:

    Universities can increasingly select their students independently. To support the selection process, web-based Aptitude Tests offer a way to balance the benefits and effort of this task. In this paper, we point out how current web-based Aptitude Tests are designed, which competences they cover, and which development methods are used. For this purpose, we developed a classification of how web-based Aptitude Tests are implemented. Furthermore, competences as the basis of web-based Aptitude Tests are appraised. Four competence categories (professional, methodological, personal, and social competences) are selected as the most appropriate pattern. Thereafter, we analyse methods for developing competence specifications. Finally, we state lessons learned for the development of web-based Aptitude Tests at universities. These results provide important preparatory work and a basis for the systematic development of a web-based Aptitude test at the University of Erlangen-Nuremberg.

  • Analysis of User Acceptance for Web-Based Aptitude Tests with DART
    Americas Conference on Information Systems, 2005
    Co-Authors: Michael Amberg, Sonja Fischer, Manuela Schroeder
    Abstract:

    Web-based Aptitude Tests, a special category of Aptitude Tests, can be used for largely standardized test methods and for a large number of users. The characteristics of web-based Aptitude Tests can have an impact on the test result. The aim of our research is to develop a method for evaluating the user acceptance of web-based Aptitude Tests. Negative influences arising from the use of a Human-Computer Interface should be identified and minimized. After an analysis of existing acceptance models, the DART approach was chosen as the basis for adaptation to web-based Aptitude Tests. Based on a literature review and expert discussions, we identified twelve aggregated acceptance indicators. The DART approach helps to define a balanced set of measurable acceptance indicators for evaluating user acceptance.

Thomas R Coyle - One of the best experts on this subject based on the ideXlab platform.

  • Relations Among General Intelligence (g), Aptitude Tests, and GPA: Linear Effects Dominate
    Intelligence, 2015
    Co-Authors: Thomas R Coyle
    Abstract:

    This research examined linear and nonlinear (quadratic) relations among general intelligence (g), Aptitude Tests (SAT, ACT, PSAT), and college GPAs. Test scores and GPAs were obtained from the National Longitudinal Survey of Youth (N = 1,950) and the College Board Validity Study (N = 160,670). Regressions estimated linear and quadratic relations among g (based on the Armed Services Vocational Aptitude Battery), composite and subtest scores of Aptitude Tests, and college GPAs. Linear effects explained almost all the variance in relations among variables. In contrast, quadratic effects explained trivial additional variance (less than 1%, on average). The results do not support theories of intelligence (threshold theories or Spearman's Law of Diminishing Returns) that predict test scores lose predictive power with increases in ability level or beyond a certain threshold.
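
The comparison in the abstract amounts to a hierarchical regression: fit a linear model, add a quadratic term, and check how much R² improves. A minimal sketch on synthetic data (variable names and effect sizes are illustrative, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
sat = rng.normal(0, 1, n)                 # standardized aptitude score
gpa = 0.5 * sat + rng.normal(0, 1, n)     # purely linear relation by construction

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

r2_linear = r_squared(sat[:, None], gpa)
r2_quadratic = r_squared(np.column_stack([sat, sat**2]), gpa)
# A trivial gain means the quadratic term adds nothing (Coyle's pattern).
print(f"linear R^2 = {r2_linear:.4f}, quadratic gain = {r2_quadratic - r2_linear:.4f}")
```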

  • Test–Retest Changes on Scholastic Aptitude Tests Are Not Related to g
    Intelligence, 2006
    Co-Authors: Thomas R Coyle
    Abstract:

    This research examined the relation between test–retest changes on scholastic Aptitude Tests and g-loaded cognitive measures (viz., college grade-point average, Wonderlic Personnel Test, and word recall). University students who had twice taken a scholastic Aptitude test (viz., Scholastic Assessment Test or American College Testing Program Assessment) during high school were recruited. The Aptitude test raw scores and change scores were correlated with the g-loaded cognitive measures in two studies. The Aptitude test change scores (which were mostly gains) were not significantly related to the cognitive measures, whereas the Aptitude test raw scores were significantly related to those measures. Principal components analysis indicated that the Aptitude test change scores had the lowest loading on the g factor, whereas the Aptitude test raw scores and the cognitive measures had relatively high loadings on the g factor. These findings support the position that test–retest changes on scholastic Aptitude Tests do not represent changes in g. Further research is needed to determine the non-g variance components that contributed to the observed test–retest changes.
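
The core analysis pairs simple change scores with component loadings: compute retest minus test, correlate everything with the g-loaded measures, and inspect loadings on the first principal component. A minimal sketch with synthetic data built so that raw scores load on g while change scores do not (names and structure assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
g = rng.normal(0, 1, n)                  # latent general ability

# Raw scores load on g; the retest change is mostly g-free noise.
raw = 0.8 * g + rng.normal(0, 0.6, n)
change = rng.normal(0.3, 0.5, n)         # mostly gains, unrelated to g
gpa = 0.6 * g + rng.normal(0, 0.8, n)
recall = 0.5 * g + rng.normal(0, 0.9, n)

data = np.column_stack([raw, change, gpa, recall])
corr = np.corrcoef(data, rowvar=False)

# Loadings on the first principal component of the correlation matrix
# (a stand-in for the g factor): eigenvector scaled by sqrt(eigenvalue).
eigvals, eigvecs = np.linalg.eigh(corr)
loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])
for name, ld in zip(["raw score", "change score", "GPA", "recall"], loadings):
    print(f"{name:>12}: loading {abs(ld):.2f}")   # change score loads lowest
```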

Thomas R Carretta - One of the best experts on this subject based on the ideXlab platform.

  • Validity of Spatial Ability Tests for Selection into STEM (Science, Technology, Engineering, and Math) Career Fields: The Example of Military Aviation
    2017
    Co-Authors: James F Johnson, Laura G Barron, Mark R Rose, Thomas R Carretta
    Abstract:

    Quantitative and verbal Aptitude Tests are widely used in the context of student admissions and pre-employment screening. In contrast, there has been “contemporary neglect” of the potential for organizations to use spatial abilities testing to make informed decisions on candidates’ success in educational settings (Wai J, Lubinski D, Benbow CP, J Educ Psychol 101:817–835, 2009) and the workplace. We begin with a review of the research literature on the validity of spatial ability Tests for predicting performance in STEM fields (e.g., engineering, surgery, mathematics, aviation). We address the controversy regarding the extent to which spatial abilities provide incremental validity beyond traditional measures of academic Aptitude. We then present results from over a decade of U.S. Air Force research that has examined the validity of spatial ability Tests relative to verbal and quantitative measures for predicting aircrew and pilot training outcomes. Finally, consistent with meta-analyses examining pilot training outcomes across several countries (e.g., Martinussen M, Int J Aviat Psychol 6:1–20, 1996), we present results showing spatial ability Tests add substantive incremental validity to measures of numerical and verbal ability for predicting pilot training outcomes. Hence, organizations that fail to include spatial testing in screening may be overlooking many individuals most likely to excel in STEM fields. We conclude with a discussion of potential challenges associated with spatial ability testing and provide practical recommendations for organizations considering implementing spatial ability testing in student admissions or personnel selection.
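
Incremental validity here is the R² gained when spatial scores enter a regression that already contains verbal and quantitative scores. A minimal sketch on synthetic data (predictor names and effect sizes are illustrative, not the Air Force results):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000
verbal = rng.normal(0, 1, n)
quant = rng.normal(0, 1, n)
spatial = rng.normal(0, 1, n)
# Training outcome depends on all three abilities by construction.
outcome = 0.3 * verbal + 0.3 * quant + 0.25 * spatial + rng.normal(0, 1, n)

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

base = r_squared(np.column_stack([verbal, quant]), outcome)
full = r_squared(np.column_stack([verbal, quant, spatial]), outcome)
print(f"incremental validity of spatial tests: delta R^2 = {full - base:.3f}")
```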

  • Standard Cognitive Psychological Tests Predict Military Pilot Training Outcomes
    Aviation Psychology and Applied Human Factors, 2013
    Co-Authors: Raymond E King, Thomas R Carretta, Paul D Retzlaff, Erica Barto, Mark S Teachout
    Abstract:

    The predictive validity of scores from two cognitive functioning Tests, the Multidimensional Aptitude Battery (MAB) and the MicroCog, was examined for initial pilot training performance. In addition to training completion, several training performance criteria were available for graduates: academic grades, daily flying grades, check ride grades, and class rank. Mean score comparisons and correlations in samples of between 5,582 and 12,924 trainees across the two Tests showed small but statistically significant relationships with training performance. For example, after correction for range restriction and dichotomization of the criterion, the MAB full-scale IQ score and the MicroCog General Cognitive Functioning score were correlated .29 and .26 respectively with initial pilot training completion. The results pointed to general cognitive ability as the main predictor of training performance. Comparisons with results from studies involving US Air Force pilot Aptitude Tests showed lower validities for these...
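
The reported validities of .29 and .26 are corrected for range restriction (only selected applicants enter training) and for dichotomization of the pass/fail criterion. A minimal sketch of two standard corrections, assuming Thorndike's Case II formula for direct range restriction and the point-biserial-to-biserial conversion for dichotomization (illustrative inputs, not the study's data):

```python
import math
from statistics import NormalDist

def correct_range_restriction(r: float, sd_ratio: float) -> float:
    """Thorndike Case II correction for direct range restriction.
    sd_ratio = predictor SD in the applicant pool / SD in the selected sample."""
    u = sd_ratio
    return r * u / math.sqrt(1 + r * r * (u * u - 1))

def point_biserial_to_biserial(r_pb: float, p: float) -> float:
    """Undo the attenuation from dichotomizing the criterion:
    r_bis = r_pb * sqrt(p*q) / phi(z), with z the normal cut for pass rate p."""
    q = 1 - p
    z = NormalDist().inv_cdf(p)
    phi = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    return r_pb * math.sqrt(p * q) / phi

# Illustrative: observed r = .18 in a selected sample whose predictor SD
# is half that of the applicant pool, with an 80% pass rate.
r_corrected = point_biserial_to_biserial(
    correct_range_restriction(0.18, sd_ratio=2.0), p=0.80)
print(f"corrected r = {r_corrected:.2f}")
```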

  • Predictive Validity of Pilot Selection Instruments for Remotely Piloted Aircraft Training Outcome
    Aviation Space and Environmental Medicine, 2013
    Co-Authors: Thomas R Carretta
    Abstract:

    INTRODUCTION: Demand for remotely-piloted aircraft (RPA) support has increased dramatically over the last decade. Initial efforts to meet the demand focused on cross-training experienced manned aircraft pilots and funneling recent Specialized Undergraduate Pilot Training (SUPT) graduates to RPA pilot training. This approach reduced the number of personnel available for manned airframes and is no longer sustainable. In 2009, the USAF established an RPA career field and the Undergraduate RPA Training (URT) course to train officers with no prior flying experience to be RPA pilots. URT selection methods are very similar to those for SUPT. Some important factors for URT applicants are medical flight screening and Aptitude Tests [Air Force Officer Qualifying Test (AFOQT) and Pilot Candidate Selection Method (PCSM)]. The current study examined the predictive validity of the AFOQT pilot and PCSM composites for URT completion. METHOD: Subjects were 139 URT students with AFOQT and PCSM scores. The training criterion was URT pass/fail and the pass rate was 74.8%. RESULTS: Both the AFOQT pilot (r = 0.378) and PCSM (r = 0.480) composites demonstrated good predictive validity. DISCUSSION: No minimum qualifying PCSM score exists for URT. Had a minimum PCSM score of 25 been used, the pass rate would have been 80.2%; 12 more eliminees would have been screened out compared with the current AFOQT pilot minimum qualifying score of 25. Although current selection methods are effective, based on results of several RPA job/task analyses, the Air Force is examining the utility of other measures to supplement current methods.
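
The PCSM cut-score analysis in the abstract is a simple what-if: apply a minimum qualifying score retroactively and recompute the pass rate among those who would still have been admitted. A minimal sketch (synthetic scores and outcomes, not the 139-student sample):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 139
pcsm = rng.uniform(0, 100, n)
# Pass probability rises with PCSM score (synthetic association).
passed = rng.random(n) < 0.5 + 0.004 * pcsm

def pass_rate_with_cutoff(scores, outcomes, cutoff):
    """Pass rate among students at or above the minimum qualifying score."""
    admitted = scores >= cutoff
    return outcomes[admitted].mean(), admitted.sum()

rate, n_admitted = pass_rate_with_cutoff(pcsm, passed, cutoff=25)
print(f"with PCSM >= 25: {n_admitted} admitted, pass rate {rate:.1%}")
```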

  • A Comparison of Two U.S. Air Force Pilot Aptitude Tests
    Aviation Space and Environmental Medicine, 1998
    Co-Authors: Thomas R Carretta, Paul D Retzlaff, Joseph D Callister, Raymond E King
    Abstract:

    BACKGROUND: The Air Force Officer Qualifying Test (AFOQT) and Multidimensional Aptitude Battery (MAB) were administered to 2233 U.S. Air Force pilot candidates to investigate the common sources of variance in those batteries. The AFOQT was operationally administered as part of the officer commissioning and aircrew selection testing requirement. The MAB is a clinical test battery and was administered to provide an intellectual baseline to assist clinicians when it becomes necessary to evaluate pilots with cognitive referral questions. RESULTS: A joint factor analysis of the AFOQT and MAB revealed that each battery had a hierarchical structure. The higher-order factor in the AFOQT previously had been identified as general cognitive ability (g). The intercorrelation between the higher-order factors from the batteries was 0.981, indicating that both measured g. Although both batteries measured g and included verbal, spatial, and perceptual speed Tests, the AFOQT also included Tests of aviation knowledge not found in the MAB. CONCLUSION: Additional studies are required to evaluate the utility of the AFOQT for clinical assessment and the MAB for officer and aircrew selection.
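
A rough way to reproduce the flavor of that result: extract a general factor from each battery (here, the first principal component as a crude stand-in for the higher-order factor of the hierarchical model) and correlate the factor scores across batteries. Synthetic data, illustrative only:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 2233
g = rng.normal(0, 1, n)  # shared general ability

# Two batteries whose subtests all load on the same g (synthetic).
battery_a = np.column_stack([0.7 * g + rng.normal(0, 0.7, n) for _ in range(8)])
battery_b = np.column_stack([0.7 * g + rng.normal(0, 0.7, n) for _ in range(10)])

def first_component_scores(X: np.ndarray) -> np.ndarray:
    """Scores on the first principal component of standardized subtests."""
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[0]

ga = first_component_scores(battery_a)
gb = first_component_scores(battery_b)
print(f"general-factor correlation: {abs(np.corrcoef(ga, gb)[0, 1]):.3f}")
```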

  • Cognitive-Components Tests Are Not Much More than g: An Extension of Kyllonen's Analyses
    Journal of General Psychology, 1996
    Co-Authors: Joseph M. Stauffer, Thomas R Carretta
    Abstract:

    A battery of 10 traditional paper-and-pencil Aptitude Tests and a battery of 25 cognitive-components-based Tests were administered to 298 men and women to investigate the common sources of variance in those batteries. Earlier confirmatory factor analyses showed each battery to have a hierarchical structure, each with a single higher order factor. The higher order factor in the paper-and-pencil battery had previously been identified as general cognitive ability, or g. The higher order factor from the cognitive-components battery had been identified as working memory. The intercorrelation of the higher order factors from the two batteries was .994, indicating that both measured g. The proportion of common variance attributable to g was greater in the cognitive-components battery than in the paper-and-pencil battery. The correlations between each factor based on cognitive components and g averaged .946. Despite theoretical foundations and arguments, cognitive components Tests appear to measure much the s...