Human Performance Model

The experts below are selected from a list of 153 experts worldwide, ranked by the ideXlab platform.

David C. Foyle - One of the best experts on this subject based on the ideXlab platform.

  • Evaluating NextGen Closely Spaced Parallel Operations Concepts with Validated Human Performance Models: Flight Deck Guidelines
    2013
    Co-Authors: Becky L. Hooey, Brian F. Gore, Eric Mahlstedt, David C. Foyle
    Abstract:

    The objectives of the current research were to develop valid human performance models (HPMs) of approach and landing operations, use these models to evaluate the impact of NextGen Closely Spaced Parallel Operations (CSPO) on pilot performance, and draw conclusions regarding flight deck displays and pilot roles and responsibilities for NextGen CSPO concepts. These objectives were accomplished in two phases: in Phase 1, CSPO scenarios were developed and validated; in Phase 2 (reported here), flight deck guidelines were developed. Phase 1 (model development and validation): using NASA’s Man-machine Integration Design and Analysis System v5 (MIDAS v5), a high-fidelity HPM of a two-pilot commercial crew flying current-day area navigation (RNAV) approach and landing operations was developed. The model contained over 970 individual pilot tasks, based on cognitive task analyses and cognitive walkthroughs conducted with commercial pilots and air traffic controllers. The model was validated by statistically comparing model results to existing human-in-the-loop (HITL) data; workload output correlated with that of a comparable HITL study with r…
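
    The validation step described above amounts to correlating model-predicted workload with HITL observations across matched conditions. A minimal sketch of that comparison in Python, with invented phase labels and workload values purely for illustration (the actual data and the r value, truncated in this source, are in the paper):

    ```python
    # Hypothetical validation sketch: correlate model-predicted workload with
    # human-in-the-loop (HITL) workload means. All numbers are invented.
    from scipy.stats import pearsonr

    phases = ["descent", "approach", "final", "landing", "rollout"]
    model_workload = [3.1, 4.6, 5.8, 6.4, 4.0]   # HPM predictions per phase
    hitl_workload = [2.9, 4.9, 5.5, 6.7, 3.8]    # HITL study means per phase

    r, p = pearsonr(model_workload, hitl_workload)
    print(f"Pearson r = {r:.2f} (p = {p:.3f})")
    # A high correlation across phases is one line of evidence that the
    # model's workload output tracks observed human performance.
    ```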

  • NASA's Use of Human Performance Models for NextGen Concept Development and Evaluations
    2011
    Co-Authors: Brian F. Gore, Becky L. Hooey, David C. Foyle
    Abstract:

    Integrated human performance model (HPM) validity is a paramount concern when HPM predictions are used for next-generation aviation system development. HPM validity is a challenge because of the integrated nature of the HPM and because many of the embedded behaviors may not be readily observed. A rigorous validation process is required to arrive at valid integrated HPMs and to improve the credibility of the models being developed. This credibility affects the subsequent use of the model to explore concepts proposed for future systems. The current paper highlights a recent methodical validation approach that was developed and applied to a Federal Aviation Administration (FAA)-National Aeronautics and Space Administration (NASA) HPM of a candidate NextGen concept of operations using the Man-machine Integration Design and Analysis System (MIDAS v5). The resulting HPM was deemed valid at multiple levels using multiple input and output parameters.

  • Meeting the Challenge of Cognitive Human Performance Model Interpretability Through Transparency: MIDAS v5.x
    2008
    Co-Authors: Brian F. Gore, Becky L. Hooey, David C. Foyle, Shelly Scott-Nash
    Abstract:

    Transparency in integrated human performance models (HPMs) is needed to support model verification, validation, and credibility. However, model transparency can be difficult to attain because of the complex interactions that can exist among the cognitive, physical, environment, and crewstation models, and because the cognitive models embedded within integrated HPMs produce behaviors that are not directly observable. This paper illustrates several techniques adopted by the Man-machine Integration Design and Analysis System (MIDAS) to increase three forms of transparency: input transparency, model architecture transparency, and output transparency.
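
    One way to make these three forms of transparency concrete is to have every simulation run emit an inspectable record of its inputs, of which embedded rule produced each behavior, and of its outputs. The following is a hypothetical Python sketch of that idea; it is not the MIDAS implementation, and all names, rules, and values are invented:

    ```python
    # Hypothetical sketch (not MIDAS) of input, architecture, and output
    # transparency: every run leaves a plain-data trace a reviewer can inspect.
    import json, time

    class TransparentTaskModel:
        def __init__(self, params):
            self.params = params
            self.trace = []

        def _log(self, kind, detail):
            # Each record is timestamped, typed, and serializable.
            self.trace.append({"t": time.time(), "kind": kind, "detail": detail})

        def run(self, stimulus):
            self._log("input", {"stimulus": stimulus, "params": self.params})
            # Architecture transparency: record WHICH embedded rule fired,
            # not just the behavior it produced.
            rule = "high_salience_first" if stimulus["salience"] > 0.5 else "scan_pattern"
            self._log("architecture", {"rule_fired": rule})
            response_ms = 350 if rule == "high_salience_first" else 600  # illustrative
            self._log("output", {"response_ms": response_ms})
            return response_ms

    model = TransparentTaskModel(params={"fatigue": 0.2})
    model.run({"salience": 0.8})
    print(json.dumps(model.trace, indent=2))
    ```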

Yili Liu - One of the best experts on this subject based on the ideXlab platform.

  • Development and Evaluation of a Computational Human Performance Model of In-vehicle Manual and Speech Interactions
    Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2018
    Co-Authors: Heejin Jeong, Yili Liu
    Abstract:

    Usability evaluation traditionally relies on costly and time-consuming human-subject experiments, which typically involve developing physical prototypes, designing usability experiments, and recruiting human subjects. To minimize the limitations of human-subject experiments, computational human performance models can be used as an alternative. Human performance models generate digital simulations of human performance and examine the underlying psychological and physiological mechanisms to help understand and predict human performance. A variety of in-vehicle information systems (IVISs) using advanced automotive technologies have been developed to improve driver interactions with in-vehicle systems. Numerous studies have used human subjects to evaluate in-vehicle human-system interactions; however, there are few modeling studies that estimate and simulate human performance, especially for in-vehicle manual and speech interactions. This paper presents a computational human performance modeling study for a usability test of IVISs using manual and speech interactions. Specifically, the model was designed to generate digital simulations of human performance for a driver seat adjustment task, decreasing the comfort level of part of the driver seat (i.e., the lower lumbar), using three different IVIS controls: direct-manual, indirect-manual, and voice. The direct-manual control presses buttons on the touchscreen display located on the center stack of the vehicle. The indirect-manual control presses physical buttons mounted on the steering wheel to control a small display in the dashboard cluster, which requires confirming visual feedback on the cluster display located on the dashboard. The voice control issues a voice command, “deflate lower lumbar”, through an in-vehicle speaker. The model was developed to estimate task completion time and workload for the driver seat adjustment task, using the Queueing Network cognitive architecture (Liu, Feyen, & Tsimhoni, 2006). Processing times in the model were recorded every 50 msec and used as the estimates of task completion time. The estimated workload was measured as the percentage utilization of servers used in the architecture. After the model was developed, it was evaluated using an empirical data set of thirty-five human subjects from Chen, Tonshal, Rankin, & Feng (2016), in which task completion times for the driver seat adjustment task using commercial in-vehicle systems (i.e., SYNC with MyFord Touch) were recorded. Driver workload was measured by the NASA Task Load Index (NASA-TLX); the average of the values from its six categories was compared to the model’s estimated workload. The model produced results similar to actual human performance (i.e., task completion time and workload). The real-world engineering example presented in this study contributes to the literature on computational human performance modeling research.
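
    The two model outputs described above can be illustrated with a toy, tick-based simulation: task completion time accumulates in 50 msec ticks, and workload per server is the fraction of ticks that server is busy. This is only an illustrative sketch with invented service times and serial (non-overlapping) stages, not the Queueing Network architecture of Liu, Feyen, & Tsimhoni (2006):

    ```python
    # Toy sketch of the two measures: completion time from a 50 ms tick
    # simulation, and workload as percentage utilization of each server.
    TICK_MS = 50

    # Each stage is a "server" with an invented service time (ms) per step.
    service_ms = {"perceptual": 100, "cognitive": 150, "motor": 200}
    steps = ["perceptual", "cognitive", "motor"] * 4  # a 12-step task, illustrative

    busy_ticks = {s: 0 for s in service_ms}
    total_ticks = 0
    for server in steps:
        ticks = service_ms[server] // TICK_MS
        busy_ticks[server] += ticks
        total_ticks += ticks  # stages run serially in this simplified sketch

    print(f"task completion time ~ {total_ticks * TICK_MS} ms")
    for server, busy in busy_ticks.items():
        print(f"{server} utilization ~ {100 * busy / total_ticks:.0f}%")
    ```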

Samuel Seljan - One of the best experts on this subject based on the ideXlab platform.

  • Classifying sensitive content in online advertisements with deep learning
    International Journal of Data Science and Analytics, 2020
    Co-Authors: Daniel Austin, Ashutosh Sanzgiri, Kannan Sankaran, Ryan Woodard, Amit Lissack, Samuel Seljan
    Abstract:

    In online advertising, an important quality control step is to audit advertising images (“creatives”) before they appear on publishers’ Web pages. This ensures that advertisements only appear on Web pages where the ad is appropriate. If a creative with sensitive content such as gambling or pornography is displayed on the wrong Web page, it can ruin the user’s experience, harm the publisher’s reputation, and may have legal implications. To protect against this, humans must audit every creative before it is displayed through our ad exchange; this process is costly and time-consuming. To detect sensitive content, we use a pre-trained deep convolutional neural network (Xception; Chollet, in: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017) to process the creative image, and merge its features with the historical distribution of categories associated with the creative’s landing page (the Web page that loads when the ad is clicked, which may also contain sensitive content). This representation is then passed through a series of fully connected layers to predict the sensitive category. The trained model achieves slightly better than human performance (model accuracy 99.92%; human accuracy 99.88%) on a large fraction of creatives (61%), while making 3.5 times fewer mistakes in very sensitive categories. The main challenges we faced were detecting, with high accuracy, creatives from 10 “very sensitive” categories as determined by our Creative Audit team, and coping with a highly imbalanced data set in which 95% of creatives have no sensitive categories. This paper extends the work we described in Austin et al. (in: Proceedings of the 2018 IEEE International Conference on Data Science and Advanced Analytics (DSAA), DSAA’18, 2018). It demonstrates the successful use of deep learning in production as a method for detecting sensitive creatives, while respecting the constraints set by the business.
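
    The architecture described above (pre-trained Xception features merged with a landing-page category distribution, then fully connected layers) can be sketched in Keras as follows. Layer sizes, the number of page categories, and the number of output classes are assumptions for illustration, not the paper’s actual configuration:

    ```python
    # Hypothetical sketch of the described architecture: frozen Xception image
    # features concatenated with a landing-page category histogram, then dense
    # layers. Dimensions and class counts below are invented.
    import tensorflow as tf
    from tensorflow.keras import layers

    NUM_PAGE_CATEGORIES = 50    # assumed size of the landing-page histogram
    NUM_CLASSES = 11            # assumed: 10 "very sensitive" + not-sensitive

    backbone = tf.keras.applications.Xception(
        weights="imagenet", include_top=False, pooling="avg")
    backbone.trainable = False  # use as a fixed feature extractor

    image_in = layers.Input(shape=(299, 299, 3), name="creative_image")
    page_in = layers.Input(shape=(NUM_PAGE_CATEGORIES,), name="page_histogram")

    x = layers.Rescaling(scale=1.0 / 127.5, offset=-1.0)(image_in)  # Xception range
    x = backbone(x)                            # 2048-dim image features
    x = layers.Concatenate()([x, page_in])     # merge with landing-page signal
    x = layers.Dense(512, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(NUM_CLASSES, activation="softmax")(x)

    model = tf.keras.Model([image_in, page_in], out)
    # The 95%-negative class imbalance mentioned above would call for class
    # weighting or resampling; that detail is not specified in this source.
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()
    ```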

Brian F. Gore - One of the best experts on this subject based on the ideXlab platform.

  • Evaluating NextGen Closely Spaced Parallel Operations Concepts with Validated Human Performance Models: Flight Deck Guidelines
    2013
    Co-Authors: Becky L. Hooey, Brian F. Gore, Eric Mahlstedt, David C. Foyle
    Abstract:

    The objectives of the current research were to develop valid human performance models (HPMs) of approach and landing operations, use these models to evaluate the impact of NextGen Closely Spaced Parallel Operations (CSPO) on pilot performance, and draw conclusions regarding flight deck displays and pilot roles and responsibilities for NextGen CSPO concepts. These objectives were accomplished in two phases: in Phase 1, CSPO scenarios were developed and validated; in Phase 2 (reported here), flight deck guidelines were developed. Phase 1 (model development and validation): using NASA’s Man-machine Integration Design and Analysis System v5 (MIDAS v5), a high-fidelity HPM of a two-pilot commercial crew flying current-day area navigation (RNAV) approach and landing operations was developed. The model contained over 970 individual pilot tasks, based on cognitive task analyses and cognitive walkthroughs conducted with commercial pilots and air traffic controllers. The model was validated by statistically comparing model results to existing human-in-the-loop (HITL) data; workload output correlated with that of a comparable HITL study with r…

  • NASA's Use of Human Performance Models for NextGen Concept Development and Evaluations
    2011
    Co-Authors: Brian F. Gore, Becky L. Hooey, David C. Foyle
    Abstract:

    Integrated human performance model (HPM) validity is a paramount concern when HPM predictions are used for next-generation aviation system development. HPM validity is a challenge because of the integrated nature of the HPM and because many of the embedded behaviors may not be readily observed. A rigorous validation process is required to arrive at valid integrated HPMs and to improve the credibility of the models being developed. This credibility affects the subsequent use of the model to explore concepts proposed for future systems. The current paper highlights a recent methodical validation approach that was developed and applied to a Federal Aviation Administration (FAA)-National Aeronautics and Space Administration (NASA) HPM of a candidate NextGen concept of operations using the Man-machine Integration Design and Analysis System (MIDAS v5). The resulting HPM was deemed valid at multiple levels using multiple input and output parameters.

  • Meeting the Challenge of Cognitive Human Performance Model Interpretability Through Transparency: MIDAS v5.x
    2008
    Co-Authors: Brian F. Gore, Becky L. Hooey, David C. Foyle, Shelly Scott-Nash
    Abstract:

    Transparency in integrated human performance models (HPMs) is needed to support model verification, validation, and credibility. However, model transparency can be difficult to attain because of the complex interactions that can exist among the cognitive, physical, environment, and crewstation models, and because the cognitive models embedded within integrated HPMs produce behaviors that are not directly observable. This paper illustrates several techniques adopted by the Man-machine Integration Design and Analysis System (MIDAS) to increase three forms of transparency: input transparency, model architecture transparency, and output transparency.

Becky L. Hooey - One of the best experts on this subject based on the ideXlab platform.

  • Evaluating NextGen Closely Spaced Parallel Operations Concepts with Validated Human Performance Models: Flight Deck Guidelines
    2013
    Co-Authors: Becky L. Hooey, Brian F. Gore, Eric Mahlstedt, David C. Foyle
    Abstract:

    The objectives of the current research were to develop valid human performance models (HPMs) of approach and landing operations, use these models to evaluate the impact of NextGen Closely Spaced Parallel Operations (CSPO) on pilot performance, and draw conclusions regarding flight deck displays and pilot roles and responsibilities for NextGen CSPO concepts. These objectives were accomplished in two phases: in Phase 1, CSPO scenarios were developed and validated; in Phase 2 (reported here), flight deck guidelines were developed. Phase 1 (model development and validation): using NASA’s Man-machine Integration Design and Analysis System v5 (MIDAS v5), a high-fidelity HPM of a two-pilot commercial crew flying current-day area navigation (RNAV) approach and landing operations was developed. The model contained over 970 individual pilot tasks, based on cognitive task analyses and cognitive walkthroughs conducted with commercial pilots and air traffic controllers. The model was validated by statistically comparing model results to existing human-in-the-loop (HITL) data; workload output correlated with that of a comparable HITL study with r…

  • NASA's Use of Human Performance Models for NextGen Concept Development and Evaluations
    2011
    Co-Authors: Brian F. Gore, Becky L. Hooey, David C. Foyle
    Abstract:

    Integrated human performance model (HPM) validity is a paramount concern when HPM predictions are used for next-generation aviation system development. HPM validity is a challenge because of the integrated nature of the HPM and because many of the embedded behaviors may not be readily observed. A rigorous validation process is required to arrive at valid integrated HPMs and to improve the credibility of the models being developed. This credibility affects the subsequent use of the model to explore concepts proposed for future systems. The current paper highlights a recent methodical validation approach that was developed and applied to a Federal Aviation Administration (FAA)-National Aeronautics and Space Administration (NASA) HPM of a candidate NextGen concept of operations using the Man-machine Integration Design and Analysis System (MIDAS v5). The resulting HPM was deemed valid at multiple levels using multiple input and output parameters.