Keystroke Level


The experts below are selected from a list of 1,164 experts worldwide, ranked by the ideXlab platform.

Hend S. Al-khalifa - One of the best experts on this subject based on the ideXlab platform.

  • A Systematic Review of Modifications and Validation Methods for the Extension of the Keystroke-Level Model
    Advances in Human-Computer Interaction, 2018
    Co-Authors: Shiroq Al-megren, Joharah Khabti, Hend S. Al-khalifa
    Abstract:

    The Keystroke-Level Model (KLM) is the simplest model of the goals, operators, methods, and selection rules (GOMS) family. The KLM computes formative quantitative predictions of task execution time. This paper provides a systematic literature review of KLM extensions across various applications and setups. The objective of this review is to address research questions concerning the development and validation of extensions. A total of 54 KLM extensions have been exhaustively reviewed. The results show that the original keystroke and mental act operators were continuously preserved or adapted and that the drawing operator was used the least. Excluding the original operators, almost 45 operators were collated from the primary studies. Only half of the studies validated their model’s efficiency through experiments. The results also identify several research gaps, such as the shortage of KLM extensions for post-GUI/WIMP interfaces. Based on the results obtained in this work, this review finally provides guidelines for researchers and practitioners.
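A KLM prediction of this kind is simply a sum of per-operator durations over an encoded task sequence. A minimal sketch in Python, using the commonly cited operator averages from the original KLM literature (these values are illustrative defaults, not figures taken from this review):

```python
# Minimal Keystroke-Level Model (KLM) calculator sketch.
# Durations are the widely quoted averages from the original KLM
# (Card, Moran, and Newell); treat them as illustrative defaults.
KLM_OPERATORS = {
    "K": 0.28,  # keystroke or button press (average skilled typist)
    "P": 1.10,  # pointing at a target with a mouse
    "H": 0.40,  # homing the hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_time(sequence):
    """Sum operator durations for an encoded task, e.g. 'MHPK'."""
    return sum(KLM_OPERATORS[op] for op in sequence)

# Example: think (M), home to mouse (H), point (P), click (K)
print(round(predict_time("MHPK"), 2))  # → 3.13
```

Extensions of the kind surveyed here typically add new entries to the operator table (for touch gestures, speech, and so on) while keeping this additive structure.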

  • ICCHP (2) - Blind FLM Web-Based Tools for Keystroke-Level Predictive Assessment of Visually Impaired Smartphone Interaction
    Lecture Notes in Computer Science, 2018
    Co-Authors: Shiroq Al-megren, Wejdan Altamimi, Hend S. Al-khalifa
    Abstract:

    The Keystroke-Level Model (KLM) is a predictive model used to evaluate motor behaviour in skilled error-free user interaction involving conventional techniques, i.e. mouse and keyboard. A blind fingerstroke-level model (Blind FLM) was recently introduced as an extension of KLM to assess visually impaired interaction on smartphones. The model comprises six operators that are used to calculate the time required for a visually impaired expert user to accomplish a task on a smartphone. In this paper, we present two Blind FLM tools: a calculator and an editor. These tools enable designers to create behavioural models of user tasks from which reliable estimates of skilled user task times can be computed. Each tool was used to model a sample task on YouTube to assess its performance against previously recorded values. Both tools accurately predicted user performance with an average error of 1.27%.

  • INTERACT (1) - Blind FLM: An Enhanced Keystroke-Level Model for Visually Impaired Smartphone Interaction
    Human-Computer Interaction - INTERACT 2017, 2017
    Co-Authors: Shiroq Al-megren, Wejdan Altamimi, Hend S. Al-khalifa
    Abstract:

    The Keystroke-Level Model (KLM) is a predictive model used to numerically predict how long it takes an expert user to accomplish a task. KLM has been successfully used to model conventional interactions; however, it does not thoroughly render smartphone touch interactions or accessible interfaces (e.g. screen readers). On the other hand, the Fingerstroke-Level Model (FLM) extends KLM to describe and assess mobile-based game applications, which marks it as a candidate model for predicting smartphone touch interactions.

Shiroq Al-megren - One of the best experts on this subject based on the ideXlab platform.

  • A Systematic Review of Modifications and Validation Methods for the Extension of the Keystroke-Level Model
    Advances in Human-Computer Interaction, 2018
    Co-Authors: Shiroq Al-megren, Joharah Khabti, Hend S. Al-khalifa
    Abstract:

    The Keystroke-Level Model (KLM) is the simplest model of the goals, operators, methods, and selection rules (GOMS) family. The KLM computes formative quantitative predictions of task execution time. This paper provides a systematic literature review of KLM extensions across various applications and setups. The objective of this review is to address research questions concerning the development and validation of extensions. A total of 54 KLM extensions have been exhaustively reviewed. The results show that the original keystroke and mental act operators were continuously preserved or adapted and that the drawing operator was used the least. Excluding the original operators, almost 45 operators were collated from the primary studies. Only half of the studies validated their model’s efficiency through experiments. The results also identify several research gaps, such as the shortage of KLM extensions for post-GUI/WIMP interfaces. Based on the results obtained in this work, this review finally provides guidelines for researchers and practitioners.

  • ICCHP (2) - Blind FLM Web-Based Tools for Keystroke-Level Predictive Assessment of Visually Impaired Smartphone Interaction
    Lecture Notes in Computer Science, 2018
    Co-Authors: Shiroq Al-megren, Wejdan Altamimi, Hend S. Al-khalifa
    Abstract:

    The Keystroke-Level Model (KLM) is a predictive model used to evaluate motor behaviour in skilled error-free user interaction involving conventional techniques, i.e. mouse and keyboard. A blind fingerstroke-level model (Blind FLM) was recently introduced as an extension of KLM to assess visually impaired interaction on smartphones. The model comprises six operators that are used to calculate the time required for a visually impaired expert user to accomplish a task on a smartphone. In this paper, we present two Blind FLM tools: a calculator and an editor. These tools enable designers to create behavioural models of user tasks from which reliable estimates of skilled user task times can be computed. Each tool was used to model a sample task on YouTube to assess its performance against previously recorded values. Both tools accurately predicted user performance with an average error of 1.27%.

  • INTERACT (1) - Blind FLM: An Enhanced Keystroke-Level Model for Visually Impaired Smartphone Interaction
    Human-Computer Interaction - INTERACT 2017, 2017
    Co-Authors: Shiroq Al-megren, Wejdan Altamimi, Hend S. Al-khalifa
    Abstract:

    The Keystroke-Level Model (KLM) is a predictive model used to numerically predict how long it takes an expert user to accomplish a task. KLM has been successfully used to model conventional interactions; however, it does not thoroughly render smartphone touch interactions or accessible interfaces (e.g. screen readers). On the other hand, the Fingerstroke-Level Model (FLM) extends KLM to describe and assess mobile-based game applications, which marks it as a candidate model for predicting smartphone touch interactions. This paper aims to further extend FLM for visually impaired smartphone users. An initial user study identified basic elements of blind users’ interactions that were used to extend FLM; the new model is called “Blind FLM”. An additional user study was then conducted to determine the applicability of the new model for describing blind users’ touch interactions with a smartphone, and to compute the accuracy of the new model. Evaluation showed that Blind FLM can predict blind users’ performance with an average error of 2.36%.
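Accuracy figures such as the 2.36% average error quoted in this line of work come from comparing model-predicted task times against observed ones. A minimal sketch of that comparison (the task times below are invented for illustration):

```python
def pct_error(predicted, observed):
    """Absolute prediction error as a percentage of the observed time."""
    return abs(predicted - observed) / observed * 100.0

def avg_pct_error(pairs):
    """Mean percentage error over (predicted, observed) pairs."""
    return sum(pct_error(p, o) for p, o in pairs) / len(pairs)

# Hypothetical (predicted, observed) task times in seconds:
tasks = [(12.4, 12.1), (8.0, 8.3), (20.5, 20.0)]
print(round(avg_pct_error(tasks), 2))  # → 2.86
```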

Bonnie E. John - One of the best experts on this subject based on the ideXlab platform.

  • ASSETS - Towards a Tool for Keystroke-Level Modeling of Skilled Screen Reading
    Proceedings of the 12th international ACM SIGACCESS conference on Computers and accessibility - ASSETS '10, 2010
    Co-Authors: Shari Trewin, Bonnie E. John, John T Richards, Cal Swart, Jonathan Brezin, Rachel K E Bellamy, John C Thomas
    Abstract:

    Designers often have no access to individuals who use screen reading software, and may have little understanding of how their design choices impact these users. We explore here whether cognitive models of auditory interaction could provide insight into screen reader usability. By comparing human data with a tool-generated model of a practiced task performed using a screen reader, we identify several requirements for such models and tools. Most important is the need to represent parallel execution of hearing with thinking and acting. Rules for placement of cognitive operators that were developed for visual user interfaces may not be applicable in the auditory domain. Other mismatches between the data and the model were attributed to the extremely fast listening rate and differences between the typing patterns of screen reader usage and the model's assumptions. This work informs the development of more accurate models of auditory interaction. Tools incorporating such models could help designers create user interfaces that are well tuned for screen reader users, without the need for modeling expertise.
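The central requirement identified above is that listening can overlap with thinking and acting, so a strictly serial KLM-style sum overestimates a screen-reader user's task time. A hypothetical sketch of the difference (step durations and the three-way split are invented for illustration, not taken from the paper's model):

```python
# Contrast a serial KLM-style sum with a simple parallel variant in
# which listening overlaps thinking and acting: each step contributes
# max(listen, think + act) instead of listen + think + act.

def serial_time(steps):
    """Serial model: every operator duration is simply added."""
    return sum(listen + think + act for listen, think, act in steps)

def parallel_time(steps):
    """Parallel model: hearing overlaps with thinking and acting."""
    return sum(max(listen, think + act) for listen, think, act in steps)

# (listen, think, act) durations in seconds for three hypothetical steps
steps = [(1.2, 0.3, 0.4), (0.8, 1.0, 0.2), (2.0, 0.3, 0.3)]
print(round(serial_time(steps), 2), round(parallel_time(steps), 2))
```

The parallel total is never larger than the serial one, which matches the paper's observation that serial models overestimate skilled screen-reader task times.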

  • CHI Extended Abstracts - Comparisons of Keystroke-Level Model Predictions to Observed Data
    CHI '06 extended abstracts on Human factors in computing systems - CHI EA '06, 2006
    Co-Authors: Leonghwee Teo, Bonnie E. John
    Abstract:

    Comparison of model prediction against observed data is an investigative step used in cognitive modeling research for human-computer interaction. In this paper we describe comparisons between Keystroke-Level Model (KLM) predictions and user behavior by total duration, aggregated events and Cohen's Kappa. Our preliminary investigations support the validity of KLM mental preparation duration and placement rules in modeling interaction with handheld devices but suggest changing a previously-published parameter.
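One of the comparison measures named above, Cohen's Kappa, corrects the raw agreement between two aligned category sequences (here, predicted versus observed events) for agreement expected by chance. A self-contained sketch (the operator sequences below are invented):

```python
# Cohen's Kappa for two aligned category sequences.
from collections import Counter

def cohens_kappa(a, b):
    """Kappa = (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is the agreement expected by chance given
    each rater's marginal category frequencies."""
    assert len(a) == len(b)
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n     # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

pred = ["K", "M", "K", "P", "K", "M"]  # model-predicted events
obs  = ["K", "M", "K", "K", "K", "M"]  # observed events
print(round(cohens_kappa(pred, obs), 3))  # → 0.7
```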

  • Predicting Task Execution Time on Handheld Devices Using the Keystroke-Level Model
    Human Factors in Computing Systems, 2005
    Co-Authors: Lu Luo, Bonnie E. John
    Abstract:

    The Keystroke-Level Model (KLM) has been shown to predict skilled use of desktop systems, but has not been validated on a handheld device that uses a stylus instead of a keyboard. This paper investigates the accuracy of KLM predictions for user interface tasks running on a Palm OS-based handheld device. The models were produced using a recently developed tool for KLM construction, CogTool, and were compared to data obtained from a user study of 10 participants. Our results show that the KLM can accurately predict task execution time on handheld user interfaces with less than 8% prediction error.

Wejdan Altamimi - One of the best experts on this subject based on the ideXlab platform.

  • ICCHP (2) - Blind FLM Web-Based Tools for Keystroke-Level Predictive Assessment of Visually Impaired Smartphone Interaction
    Lecture Notes in Computer Science, 2018
    Co-Authors: Shiroq Al-megren, Wejdan Altamimi, Hend S. Al-khalifa
    Abstract:

    The Keystroke-Level Model (KLM) is a predictive model used to evaluate motor behaviour in skilled error-free user interaction involving conventional techniques, i.e. mouse and keyboard. A blind fingerstroke-level model (Blind FLM) was recently introduced as an extension of KLM to assess visually impaired interaction on smartphones. The model comprises six operators that are used to calculate the time required for a visually impaired expert user to accomplish a task on a smartphone. In this paper, we present two Blind FLM tools: a calculator and an editor. These tools enable designers to create behavioural models of user tasks from which reliable estimates of skilled user task times can be computed. Each tool was used to model a sample task on YouTube to assess its performance against previously recorded values. Both tools accurately predicted user performance with an average error of 1.27%.

  • INTERACT (1) - Blind FLM: An Enhanced Keystroke-Level Model for Visually Impaired Smartphone Interaction
    Human-Computer Interaction - INTERACT 2017, 2017
    Co-Authors: Shiroq Al-megren, Wejdan Altamimi, Hend S. Al-khalifa
    Abstract:

    The Keystroke-Level Model (KLM) is a predictive model used to numerically predict how long it takes an expert user to accomplish a task. KLM has been successfully used to model conventional interactions; however, it does not thoroughly render smartphone touch interactions or accessible interfaces (e.g. screen readers). On the other hand, the Fingerstroke-Level Model (FLM) extends KLM to describe and assess mobile-based game applications, which marks it as a candidate model for predicting smartphone touch interactions. This paper aims to further extend FLM for visually impaired smartphone users. An initial user study identified basic elements of blind users’ interactions that were used to extend FLM; the new model is called “Blind FLM”. An additional user study was then conducted to determine the applicability of the new model for describing blind users’ touch interactions with a smartphone, and to compute the accuracy of the new model. Evaluation showed that Blind FLM can predict blind users’ performance with an average error of 2.36%.

Peter Tarasewich - One of the best experts on this subject based on the ideXlab platform.

  • CHI - A New Error Metric for Text Entry Method Evaluation
    Proceedings of the SIGCHI conference on Human Factors in computing systems - CHI '06, 2006
    Co-Authors: Jun Gong, Peter Tarasewich
    Abstract:

    On devices such as mobile phones, text is often entered using keypads and predictive text entry techniques. Current metrics used for measuring text entry error rates have limitations in terms of the types of errors they account for, and cannot easily distinguish between different types of errors. This research proposes a new text entry error metric that addresses some of the outstanding issues that exist with current metrics. Specifically, the metric accounts in detail for the way the user handles corrections during text entry, moving beyond current keystroke-level error measurement. The feasibility and usefulness of this new metric is shown through the analysis of an experiment that tests an alphabetically constrained keypad design that includes upper and lower case letters, numbers, and punctuation marks.
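For context on what existing keystroke-level error measurement looks like, a sketch of one widely used prior scheme that classifies keystrokes as correct, incorrect-and-fixed, or incorrect-and-not-fixed (this reflects Soukoreff and MacKenzie's keystroke taxonomy, not the new metric this paper proposes, and the counts are invented):

```python
# Error rates from keystroke classification counts:
#   c    = correct keystrokes
#   inf  = incorrect keystrokes that were not fixed
#   if_  = incorrect keystrokes that were later fixed
def error_rates(c, inf, if_):
    total = c + inf + if_
    return {
        "uncorrected": inf / total,   # errors remaining in final text
        "corrected": if_ / total,     # errors the user repaired
        "total": (inf + if_) / total, # all errors committed
    }

rates = error_rates(c=90, inf=2, if_=8)
print(rates)  # total error rate: 0.1
```

Separating corrected from uncorrected errors is precisely the distinction that a single final-text comparison cannot make, which motivates metrics based on the full keystroke log.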