Psychological Measurement

The Experts below are selected from a list of 306 Experts worldwide ranked by ideXlab platform

Andreas Frey - One of the best experts on this subject based on the ideXlab platform.

  • Special Topic: Current Issues in Educational and Psychological Measurement: Design, Calibration, and Adaptive Testing (Part 2): Guest Editorial
    Psychological test and assessment modeling, 2020
    Co-Authors: Ulf Kröhne, Andreas Frey
    Abstract:

    Part 2 of the special topic "Current issues in educational and Psychological Measurement: Design, calibration, and adaptive testing" of Psychological Test and Assessment Modeling continues the series of research papers dealing with empirical research questions related to calibration designs and computerized adaptive testing. This part includes three papers that add to the foregoing publications.

    The first paper, "Effect of item order on item calibration and item bank construction for computer adaptive tests" by Walter and Rose (2013), focuses on the central independence assumption and its relation to item calibration designs, bridging the gap to the papers by Yousfi and Böhme (2012), Kubinger, Steinfeld, Reif and Yanagida (2012), and Frey and Bernhardt (2012) published in Part 1 of this special issue. Walter and Rose (2013) present an experimental comparison of two calibration designs, investigating their effect on the estimated item parameters and on the ability estimates of simulated adaptive tests using the resulting item banks.

    The second paper, "Too hard, too easy, or just right? The relationship between effort or boredom and ability-difficulty fit" by Asseburg and Frey (2013), turns attention to motivational and emotional aspects of achievement tests and their relation to test performance. As in Hartig and Buchholz (2012), individual differences in measures derived from Item Response Theory are investigated. Analyzing data from a second testing day of the PISA 2006 assessment in Germany, Asseburg and Frey (2013) relate the individual difference between estimated ability and the mean difficulty of the administered items to self-reported effort and boredom.

    The final paper, "The sequential probability ratio test for multidimensional adaptive testing with between-item multidimensionality" by Seitz and Frey (2013), analyzes the sequential probability ratio test (SPRT), which was also addressed by Patton, Cheng, Yuan and Diao (2012) in Part 1. Whereas Patton et al. (2012) use the SPRT in combination with unidimensional adaptive testing, Seitz and Frey (2013) examine this method for classifying individuals into one of several ability categories within multidimensional adaptive testing with between-item multidimensionality.

    As guest editors of both parts of this special topic, we would again like to thank the contributing authors for their carefully elaborated and highly interesting articles, which provide new and important insights in the field of educational and Psychological Measurement. …
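    Since the SPRT is the core device of the final paper, a compact illustration may be useful. The following is a minimal sketch of Wald's SPRT applied to classifying an examinee as below or above an ability cutoff under a unidimensional 2PL model (the simpler setting of Patton et al., 2012); all item parameters, the cutoff, and the indifference region are illustrative assumptions, not values from the papers discussed above.

```python
# Minimal SPRT sketch for ability classification under a 2PL IRT model.
# All parameter values below are illustrative assumptions.
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def sprt_classify(responses, a, b, cutoff=0.0, delta=0.5, alpha=0.05, beta=0.05):
    """Update the log likelihood ratio after each response; decide when a bound is crossed."""
    theta_low, theta_high = cutoff - delta, cutoff + delta  # indifference region
    A = np.log((1 - beta) / alpha)   # upper decision bound -> classify "above"
    B = np.log(beta / (1 - alpha))   # lower decision bound -> classify "below"
    llr = 0.0
    for u, ai, bi in zip(responses, a, b):
        p1 = p_correct(theta_high, ai, bi)
        p0 = p_correct(theta_low, ai, bi)
        llr += np.log(p1 if u else 1 - p1) - np.log(p0 if u else 1 - p0)
        if llr >= A:
            return "above"
        if llr <= B:
            return "below"
    return "continue"  # undecided: administer more items

rng = np.random.default_rng(1)
a = rng.uniform(0.8, 2.0, 40)    # item discriminations
b = rng.normal(0.0, 1.0, 40)     # item difficulties
true_theta = 0.8
responses = rng.random(40) < p_correct(true_theta, a, b)
print(sprt_classify(responses, a, b))
```

    Seitz and Frey (2013) study this decision rule in the multidimensional between-item case; the sketch above shows only the unidimensional core of the procedure.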

  • Special Topic: Current Issues in Educational and Psychological Measurement: Design, Calibration, and Adaptive Testing (Part 1): Guest Editorial
    Psychological test and assessment modeling, 2012
    Co-Authors: Andreas Frey, Ulf Kröhne
    Abstract:

    In the last decades, Educational and Psychological Measurement has been a very active field of academic research. Numerous new methods and procedures have been developed, and many of them are now used on a regular basis and/or are implemented in statistical software packages. Even though a solid state of knowledge has been established in many areas of Educational and Psychological Measurement, new demands and requirements call for new methodological answers and specific analysis procedures. Several of these demands stem from Large-Scale Assessments (LSAs). In LSAs, very large samples are examined, often with the objectives of deriving sound comparisons between quite different populations, such as countries, and of drawing far-reaching inferences. The general objective of the Programme for International Student Assessment (PISA), for example, is to answer the rather general question of how well prepared students are to participate in society. The combination of examining very large samples, the desire to compare rather different populations, and the aim to draw far-reaching interpretations creates several demanding methodological challenges. Important challenges that have not yet been answered sufficiently concern complex test designs used to distribute test items to participants, the handling of unwanted item context effects on both item parameter estimates and test performance, the calibration of data sets assessed with complex study designs, and the application of computerized adaptive testing (CAT) to meet specific diagnostic needs.

    The special topic "Current issues in Educational and Psychological Measurement: Design, calibration, and adaptive testing" of Psychological Test and Assessment Modeling assembles a series of research papers addressing current issues in these areas. The general methodological approach used in all papers is Item Response Theory (IRT). The special topic is spread over two issues of Psychological Test and Assessment Modeling. This issue is the first part and includes five papers.

    With the first paper, "Principles and procedures of considering item sequence effects in the development of calibrated item pools: Conceptual analysis and empirical illustration", Yousfi and Böhme (2012) concentrate on item context effects due to the position and sequence in which items are presented in test booklets. After introducing a taxonomy of booklet designs, different booklet designs are compared with regard to the bias and efficiency of item parameter estimates for CAT within two simulation studies.

    The second paper, "On the importance of using balanced booklet designs in PISA" by Frey and Bernhardt (2012), focuses on the balanced booklet design used in PISA from 2003 on. The effects of a systematic distortion of the balanced booklet design structure on estimates of reading performance in different sub-populations are examined. Additionally, the paper analyzes whether students with special characteristics are more likely to be advantaged or disadvantaged by a balanced booklet design than by an unbalanced one.

    The third paper, "A multilevel item response model for item position effects and individual persistence" by Hartig and Buchholz (2012), explicitly examines item position effects using student responses from different countries assessed in PISA 2006. In contrast to Yousfi and Böhme (2012), who compare different booklet designs with regard to item parameter estimates within simulation studies, Hartig and Buchholz investigate individual differences in item position effects and their relationship with student performance in science. …
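    All papers in the special topic rest on IRT, and several concern item position effects and booklet designs, so a small simulation may make the modeled effect concrete. The following is a minimal sketch, assuming a 2PL response function extended with a linear position effect and a person-specific persistence slope; all parameter values are illustrative, not estimates from Hartig and Buchholz (2012).

```python
# Minimal sketch: 2PL responses with a linear item-position effect, compared
# across two booklet orders. All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
n_persons, n_items = 2000, 30
theta = rng.normal(0, 1, n_persons)            # person abilities
persistence = rng.normal(0, 0.01, n_persons)   # person-specific position slopes
a = rng.uniform(0.8, 1.8, n_items)             # item discriminations
b = rng.normal(0, 1, n_items)                  # item difficulties
gamma = -0.03                                  # average logit change per position

def simulate(item_order):
    """Simulate one booklet: responses to items presented in the given order."""
    pos = np.empty(n_items)
    pos[item_order] = np.arange(n_items)       # position at which each item appears
    logit = (a * (theta[:, None] - b)
             + (gamma + persistence[:, None]) * pos)
    return rng.random((n_persons, n_items)) < 1 / (1 + np.exp(-logit))

forward = simulate(np.arange(n_items))         # booklet 1: items 0..29
reverse = simulate(np.arange(n_items)[::-1])   # booklet 2: items 29..0

# Items placed late in one booklet are placed early in the other, so the
# per-item difference in proportion correct reveals the position effect.
print(np.round(forward.mean(axis=0) - reverse.mean(axis=0), 2))
```

    Comparing the two booklet orders isolates the position effect from item difficulty, which is the basic rationale for the balanced booklet designs discussed by Frey and Bernhardt (2012).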

  • Multidimensional adaptive testing in educational and Psychological Measurement: Current state and future challenges
    Studies in Educational Evaluation, 2009
    Co-Authors: Andreas Frey, Nicki-Nils Seitz
    Abstract:

    The paper gives an overview of multidimensional adaptive testing (MAT) and evaluates its applicability in educational and Psychological testing. The approach of Segall (1996) is described as a general framework for MAT. The main advantage of MAT is its capability to increase Measurement efficiency. In simulation studies modeling situations typical of large-scale assessments, MAT reduced the number of presented items by about 30–50% compared to unidimensional adaptive testing and by about 70% compared to fixed item testing, holding Measurement precision constant. Empirical results underline these findings. Before MAT can be used routinely, however, several open questions need to be answered. Once they are, MAT represents a very promising approach to highly efficient simultaneous testing of multiple competencies.
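    To make Segall's (1996) framework concrete, the following is a minimal sketch of its determinant-based item selection rule: the next item is the one whose Fisher information adds most to the determinant of the accumulated information matrix. The item pool, the prior, and the interim ability estimate are illustrative assumptions, not the setup of the reported simulation studies.

```python
# Minimal sketch of determinant-based item selection for multidimensional
# adaptive testing (MAT). Item pool and prior are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_items, n_dims = 200, 2
A = np.abs(rng.normal(1.0, 0.3, (n_items, n_dims)))  # discrimination vectors
d = rng.normal(0, 1, n_items)                         # item intercepts

def item_information(theta, a, d_j):
    """Fisher information matrix of one M2PL item at ability vector theta."""
    p = 1 / (1 + np.exp(-(a @ theta + d_j)))
    return p * (1 - p) * np.outer(a, a)

def select_next_item(theta_hat, administered, prior_precision):
    """Return the index maximizing det(accumulated information + item information)."""
    W = prior_precision + sum(item_information(theta_hat, A[j], d[j])
                              for j in administered)
    best, best_det = None, -np.inf
    for j in range(n_items):
        if j in administered:
            continue
        det = np.linalg.det(W + item_information(theta_hat, A[j], d[j]))
        if det > best_det:
            best, best_det = j, det
    return best

theta_hat = np.zeros(n_dims)                               # interim ability estimate
prior = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))  # correlated-abilities prior
print(select_next_item(theta_hat, administered={0, 5}, prior_precision=prior))
```

    The correlated-abilities prior is where MAT gains much of its efficiency: responses that are informative about one dimension also sharpen the estimates of correlated dimensions.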

Joemon M. Jose - One of the best experts on this subject based on the ideXlab platform.

  • Temporal attention graph
    2009 17th European Signal Processing Conference, 2009
    Co-Authors: Joemon M. Jose
    Abstract:

    Temporal attention is a Psychological Measurement of human focus during a long perceptual process, such as watching a sports video. This Measurement facilitates the identification of the most attractive components in media documents, especially in videos. In this paper, we propose a graphical representation that visualizes attention-related temporal sequences at multiple resolutions. Efficient image operations are used to analyze perceptual attention, resulting in an effective fusion approach for temporal attention estimation. We evaluate the effectiveness through the application of general highlight detection in sports videos, as sports highlights are temporally attended areas. The experimental collection includes six full football games from the FIFA World Cup and the European Championship.
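    As a rough illustration of the underlying idea, the following is a minimal sketch that fuses several per-frame attention-related feature curves into one temporal attention curve and thresholds it to locate highlight segments. The feature names, weights, and threshold are illustrative assumptions and do not reproduce the graph representation or image operations proposed in the paper.

```python
# Minimal sketch: fuse per-frame feature curves into a temporal attention curve
# and threshold it to find highlight segments. All inputs are illustrative.
import numpy as np

def fuse_attention(curves, weights):
    """Min-max normalize each per-frame curve, then take a weighted average."""
    fused = np.zeros_like(curves[0], dtype=float)
    for curve, w in zip(curves, weights):
        c = (curve - curve.min()) / (curve.max() - curve.min() + 1e-9)
        fused += w * c
    return fused / sum(weights)

def highlight_segments(attention, threshold=0.7):
    """Return (start, end) frame index pairs where attention exceeds threshold."""
    above = attention > threshold
    edges = np.diff(above.astype(int))
    starts = list(np.where(edges == 1)[0] + 1)
    ends = list(np.where(edges == -1)[0] + 1)
    if above[0]:
        starts.insert(0, 0)
    if above[-1]:
        ends.append(len(above))
    return list(zip(starts, ends))

rng = np.random.default_rng(0)
n_frames = 500
motion = rng.random(n_frames)   # stand-in for motion intensity per frame
audio = rng.random(n_frames)    # stand-in for audio energy per frame
attention = fuse_attention([motion, audio], weights=[0.6, 0.4])
print(highlight_segments(attention, threshold=0.85))
```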

Ulf Kröhne - One of the best experts on this subject based on the ideXlab platform.

  • Special Topic: Current Issues in Educational and Psychological Measurement: Design, Calibration, and Adaptive Testing (Part 2): Guest Editorial
    Psychological test and assessment modeling, 2020
    Co-Authors: Ulf Kröhne, Andreas Frey
    Abstract: see the identical entry under Andreas Frey above.

  • Special Topic: Current Issues in Educational and Psychological Measurement: Design, Calibration, and Adaptive Testing (Part 1): Guest Editorial
    Psychological test and assessment modeling, 2012
    Co-Authors: Andreas Frey, Ulf Kröhne
    Abstract: see the identical entry under Andreas Frey above.

David Alexander Ellis - One of the best experts on this subject based on the ideXlab platform.

  • Are smartphones really that bad? Improving the Psychological Measurement of technology-related behaviors
    Computers in Human Behavior, 2019
    Co-Authors: David Alexander Ellis
    Abstract:

    Understanding how people use technology remains important, particularly when measuring the impact it might have on individuals and society. To date, research within Psychological science has often framed new technology as problematic, with overwhelmingly negative consequences. However, this paper argues that the latest generation of psychometric tools, which aim to assess smartphone usage, are unable to capture technology-related experiences or behaviors. As a result, many conclusions concerning the Psychological impact of technology use remain unsound. Current assessments have also failed to keep pace with new methodological developments, and these data-intensive approaches challenge the notion that smartphones and related technologies are inherently problematic. The field should now consider how it might re-position itself conceptually and methodologically, given that many ‘addictive’ technologies have long since become intertwined with daily life.

Alexandra Noelle Fisher - One of the best experts on this subject based on the ideXlab platform.

  • Measurement Memo I: Updated Practices in Psychological Measurement for Sexual Scientists
    Canadian Journal of Human Sexuality, 2019
    Co-Authors: John Kitchener Sakaluk, Alexandra Noelle Fisher
    Abstract:

    The validity of Psychological Measurement is a crucial auxiliary theory underlying many sexual science studies. Although many sexuality researchers are familiar with certain elements of Psychological Measurement, the field of Psychological Measurement is a developing and evolving literature, with concepts, applications, and techniques that do not always trickle down quickly into interdisciplinary fields like sexual science. The purpose of this Measurement Memo, therefore, is to connect sexual scientists to Measurement-related issues, explanations, and resources that they may not otherwise encounter in their scholarly reading. Our review focuses on those carrying out Psychological Measurement using theories and methods of latent variable modeling, and we identify and summarize key ideas and references that serve as good launching points for sexual scientists to begin to improve their Psychological Measurement practices, for beginners and seasoned users alike.
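    Because the memo is aimed at readers working with latent variable models, a toy example may clarify the measurement model involved. The following is a minimal sketch, assuming a single-factor congeneric model with simulated data; it contrasts Cronbach's alpha, computed from the observed covariances, with McDonald's omega, computed here from the true simulation parameters (in practice, loadings would be estimated with a confirmatory factor analysis).

```python
# Minimal sketch of the latent variable view of measurement: y = lambda*eta + error.
# Loadings and error variances are simulated assumptions, not estimates.
import numpy as np

rng = np.random.default_rng(42)
n = 5000
loadings = np.array([0.9, 0.7, 0.6, 0.5])   # congeneric (unequal) loadings
err_sd = np.array([0.4, 0.6, 0.7, 0.8])     # item error standard deviations

eta = rng.normal(0, 1, n)                   # latent trait scores
y = eta[:, None] * loadings + rng.normal(0, err_sd, (n, len(loadings)))

# Cronbach's alpha from the observed covariance matrix.
S = np.cov(y, rowvar=False)
k = S.shape[0]
alpha = k / (k - 1) * (1 - np.trace(S) / S.sum())

# McDonald's omega from the (here: known) loadings and error variances.
omega = loadings.sum() ** 2 / (loadings.sum() ** 2 + (err_sd ** 2).sum())

print(f"alpha = {alpha:.3f}, omega = {omega:.3f}")  # alpha <= omega here
```

    Reporting omega alongside (or instead of) alpha is one of the updated practices such memos typically recommend, since alpha equals the model-based reliability only when all loadings are equal.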
