Task Success Rate

The experts below are selected from a list of 411 experts worldwide, ranked by the ideXlab platform

Bruce N. Walker - One of the best experts on this subject based on the ideXlab platform.

  • Menu Navigation With In-Vehicle Technologies: Auditory Menu Cues Improve Dual Task Performance, Preference, and Workload.
    International Journal of Human-Computer Interaction, 2015
    Co-Authors: Myounghoon Jeon, Jeff Wilson, Thomas M. Gable, B. K. Davison, Michael A. Nees, Bruce N. Walker
    Abstract:

    Auditory display research for driving has mainly examined a limited range of tasks (e.g., collision warnings, cell phone tasks). In contrast, the goal of this project was to evaluate the effectiveness of enhanced auditory menu cues in a simulated driving context. The advanced auditory cues of “spearcons” (compressed speech cues) and “spindex” (a speech-based index cue) were predicted to improve both menu navigation and driving. Two experiments used a dual task paradigm in which users selected songs on the vehicle’s infotainment system. In Experiment 1, 24 undergraduates played a simple, perceptual-motor ball-catching game (the primary task; a surrogate for driving), and navigated through an alphabetized list of 150 song titles—rendered as an auditory menu—as a secondary task. The menu was presented either in the typical visual-only manner, or enhanced with text-to-speech (TTS), or TTS plus one of three types of additional auditory cues. In Experiment 2, 34 undergraduates conducted the same secondary task while driving in a simulator. In both experiments, performance on both the primary task (success rate of the game or driving performance) and the secondary task (menu search time) was better with the auditory menus than with no sound. Perceived workload scores as well as user preferences favored the enhanced auditory cue types. These results show that adding audio, and enhanced auditory cues in particular, can allow a driver to operate the menus of in-vehicle technologies more efficiently while driving more safely. Results are discussed in terms of multiple resources theory.
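
    The abstract above defines spearcons (time-compressed TTS of a menu item) and a spindex (a brief cue, typically the item's leading letter, signalling position in an alphabetized list). As a rough illustration only, the Python sketch below attaches such cue metadata to a song list; the field names and the assumption that audio synthesis and compression happen elsewhere are illustrative choices, not details taken from the studies.

      # Hypothetical sketch: attach spindex and spearcon sources to an alphabetized auditory menu.
      # Audio synthesis/time-compression is assumed to happen elsewhere; this is not the
      # authors' implementation.
      from dataclasses import dataclass

      @dataclass
      class MenuCue:
          title: str           # menu item (e.g., a song title)
          spindex: str         # brief index cue: the item's leading letter, spoken quickly
          spearcon_text: str   # text whose time-compressed TTS rendering would be the spearcon

      def build_auditory_menu(titles):
          """Return cue metadata for an alphabetized auditory menu."""
          cues = []
          for title in sorted(titles, key=str.lower):
              cues.append(MenuCue(title=title,
                                  spindex=title[0].upper(),
                                  spearcon_text=title))
          return cues

      for cue in build_auditory_menu(["Yellow Submarine", "Bohemian Rhapsody", "Back in Black"]):
          print(cue.spindex, "->", cue.title)

    One plausible interaction (an assumption, not stated in the abstract) is to play only the spindex cue while the user scrolls quickly, and the spearcon or full TTS once scrolling slows.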

  • AutomotiveUI - Enhanced auditory menu cues improve dual task performance and are preferred with in-vehicle technologies
    Proceedings of the 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications - AutomotiveUI '09, 2009
    Co-Authors: Myounghoon Jeon, Jeff Wilson, B. K. Davison, Michael A. Nees, Bruce N. Walker
    Abstract:

    Auditory display research for driving has mainly focused on collision warning signals, and recent studies on auditory in-vehicle information presentation have examined only a limited range of tasks (e.g., cell phone operation tasks or verbal tasks such as reading digit strings). The present study used a dual task paradigm to evaluate a plausible scenario in which users navigated a song list. We applied enhanced auditory menu navigation cues, including spearcons (i.e., compressed speech) and a spindex (i.e., a speech index that used brief audio cues to communicate the user's position in a long menu list). Twenty-four undergraduates navigated through an alphabetized song list of 150 song titles---rendered as an auditory menu---while they concurrently played a simple, perceptual-motor, ball-catching game. The menu was presented with text-to-speech (TTS) alone, TTS plus one of three types of enhanced auditory cues, or no sound at all. Both performance of the primary task (success rate of the game) and the secondary task (menu search time) were better with the auditory menus than with no sound. Subjective workload scores (NASA TLX) and user preferences favored the enhanced auditory cue types. Results are discussed in terms of multiple resources theory and practical IVT design applications.

  • Enhanced auditory menu cues improve dual task performance and are preferred with in-vehicle technologies
    Proceedings of the 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications - AutomotiveUI '09, 2009
    Co-Authors: Myounghoon Jeon, Jeff Wilson, B. K. Davison, Michael A. Nees, Bruce N. Walker
    Abstract:

    Auditory display research for driving has mainly focused on collision warning signals, and recent studies on auditory in-vehicle information presentation have examined only a limited range of tasks (e.g., cell phone operation tasks or verbal tasks such as reading digit strings). The present study used a dual task paradigm to evaluate a plausible scenario in which users navigated a song list. We applied enhanced auditory menu navigation cues, including spearcons (i.e., compressed speech) and a spindex (i.e., a speech index that used brief audio cues to communicate the user's position in a long menu list). Twenty-four undergraduates navigated through an alphabetized song list of 150 song titles---rendered as an auditory menu---while they concurrently played a simple, perceptual-motor, ball-catching game. The menu was presented with text-to-speech (TTS) alone, TTS plus one of three types of enhanced auditory cues, or no sound at all. Both performance of the primary task (success rate of the game) and the secondary task (menu search time) were better with the auditory menus than with no sound. Subjective workload scores (NASA TLX) and user preferences favored the enhanced auditory cue types. Results are discussed in terms of multiple resources theory and practical IVT design applications.

Adam Emfield - One of the best experts on this subject based on the ideXlab platform.

  • SLT - An end-to-end dialog system for TV program discovery
    2014 IEEE Spoken Language Technology Workshop (SLT), 2014
    Co-Authors: Deepak Ramachandran, Benjamin Douglas, Ronald Provine, Adwait Ratnaparkhi, Jeremy Mendel, William Jarrold, Adam Emfield
    Abstract:

    In this paper, we present an end-to-end dialog system for TV program discovery that uniquely combines several technologies such as trainable relation extraction, belief tracking over relational structures, mixed-initiative dialog management, and inference over large-scale knowledge graphs. We present an evaluation of our end-to-end system with real users, and found that our system performed well along several dimensions such as usability and task success rate. These results demonstrate the effectiveness of our system in the target domain.
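
    Since the evaluation above reports a task success rate, here is a minimal sketch of how that metric is typically computed from logged user sessions; the session structure and the per-session goal flag are illustrative assumptions, not the paper's evaluation protocol.

      # Minimal sketch: task success rate as the fraction of sessions whose goal was met.
      # The session dictionaries and the 'goal_met' field are assumptions for illustration.
      def task_success_rate(sessions):
          if not sessions:
              return 0.0
          return sum(1 for s in sessions if s["goal_met"]) / len(sessions)

      sessions = [
          {"goal": "find tonight's football game", "goal_met": True},
          {"goal": "record a cooking show",        "goal_met": True},
          {"goal": "find a specific movie",        "goal_met": False},
      ]
      print(f"task success rate = {task_success_rate(sessions):.0%}")   # -> 67%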

  • An end-to-end dialog system for TV program discovery
    2014 IEEE Workshop on Spoken Language Technology SLT 2014 - Proceedings, 2014
    Co-Authors: Deepak Ramachandran, Benjamin Douglas, Ronald Provine, Adwait Ratnaparkhi, Jeremy Mendel, William Jarrold, Peter Z. Yeh, Adam Emfield
    Abstract:

    In this paper, we present an end-to-end dialog system for TV program discovery that uniquely combines several technologies such as trainable relation extraction, belief tracking over relational structures, mixed-initiative dialog management, and inference over large-scale knowledge graphs. We present an evaluation of our end-to-end system with real users, and found that our system performed well along several dimensions such as usability and task success rate. These results demonstrate the effectiveness of our system in the target domain.

Zhen Xu - One of the best experts on this subject based on the ideXlab platform.

  • Task scheduling of satellite ground station systems based on the neighbor-area search algorithm
    2013 Ninth International Conference on Natural Computation (ICNC), 2013
    Co-Authors: Zhen Xu, Chong Wang
    Abstract:

    This paper describes a neighbor-area search algorithm based on a tree structure for task scheduling of satellite ground station systems. Compared with traditional algorithms, it obtains the optimal schedule much faster while keeping the delay and the task success rate almost the same. Thus the real-time planning capability of satellite systems can be greatly enhanced, particularly for large numbers of tasks.
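
    The abstract does not detail the neighbor-area search itself, so the sketch below only makes the metrics concrete: a plain greedy assignment of tasks to ground-station visibility windows, with the task success rate reported as scheduled tasks over all tasks. The task and window formats are assumptions, not the paper's model.

      # Illustrative sketch only: greedy assignment of tasks to visibility windows,
      # reporting the task success rate. This is NOT the neighbor-area search algorithm.
      def greedy_schedule(tasks, windows):
          """tasks: list of (task_id, duration); windows: list of (station, start, end)."""
          free = sorted(windows, key=lambda w: w[1])          # earliest window first
          assignments = []
          for task_id, duration in sorted(tasks, key=lambda t: t[1]):
              for i, (station, start, end) in enumerate(free):
                  if end - start >= duration:
                      assignments.append((task_id, station, start, start + duration))
                      free[i] = (station, start + duration, end)   # shrink the used window
                      break
          rate = len(assignments) / len(tasks) if tasks else 0.0
          return assignments, rate

      tasks = [("T1", 10), ("T2", 25), ("T3", 40)]
      windows = [("GS-A", 0, 30), ("GS-B", 20, 50)]
      plan, rate = greedy_schedule(tasks, windows)
      print(plan, f"task success rate = {rate:.0%}")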

  • The integrated routing algorithm for multi-satellites, multi-ground-stations and multi-processing-centers
    2013 Ninth International Conference on Natural Computation (ICNC), 2013
    Co-Authors: Zhen Xu, Xinhui Meng, Zijing Cheng
    Abstract:

    Research on routing planning for satellites and ground stations has usually treated the whole process as two independent parts: transmission from satellites to ground stations, and transmission within the ground networks. Treating them separately causes two problems: the two parts do not connect tightly, and the delay is large. To solve these problems, this paper designs an integrated routing algorithm that treats the transmission as a whole. The algorithm considers the real-time link cost, calculates the normalization matrix, and searches for the best real-time link for transmission. Simulation results show that, compared with the separate optimal solutions, the integrated solution reduces the delay of the task sequences while keeping the sum of successful task priorities and the task success rate almost the same. The integrated routing algorithm thus improves the performance of routing planning for satellites and ground stations.
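
    As a rough illustration of the "normalize real-time link costs, then pick the best link" step mentioned above, the sketch below min-max normalizes each cost metric and selects the end-to-end link with the lowest weighted score; the specific metrics, weights, and normalization are assumptions, since the abstract does not specify them.

      # Hedged sketch: column-wise min-max normalization of link cost metrics, then
      # selection of the cheapest end-to-end link. Metrics and weights are invented.
      def normalize(matrix):
          """Min-max normalize each column (one metric, e.g. delay, load, hops) to [0, 1]."""
          cols = list(zip(*matrix))
          normed_cols = []
          for col in cols:
              lo, hi = min(col), max(col)
              normed_cols.append([(v - lo) / (hi - lo) if hi > lo else 0.0 for v in col])
          return [list(row) for row in zip(*normed_cols)]

      def best_link(links, metrics, weights):
          """links: link names; metrics[i]: per-link metric vector; weights: metric weights."""
          normed = normalize(metrics)
          scores = [sum(w * v for w, v in zip(weights, row)) for row in normed]
          return min(zip(scores, links))[1]

      links   = ["sat1->GS-A->center", "sat1->GS-B->center", "sat2->GS-B->center"]
      metrics = [[120, 0.7, 2], [80, 0.9, 1], [95, 0.4, 3]]   # delay (ms), load, hop count
      print(best_link(links, metrics, weights=[0.5, 0.3, 0.2]))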

  • ICNC - Task scheduling of satellite ground station systems based on the neighbor-area search algorithm
    2013 Ninth International Conference on Natural Computation (ICNC), 2013
    Co-Authors: Zhen Xu, Chong Wang
    Abstract:

    This paper describes a neighbor-area search algorithm based on a tree structure for task scheduling of satellite ground station systems. Compared with traditional algorithms, it obtains the optimal schedule much faster while keeping the delay and the task success rate almost the same. Thus the real-time planning capability of satellite systems can be greatly enhanced, particularly for large numbers of tasks.

  • ICNC - The integrated routing algorithm for multi-satellites, multi-ground-stations and multi-processing-centers
    2013 Ninth International Conference on Natural Computation (ICNC), 2013
    Co-Authors: Zhen Xu, Xinhui Meng, Zijing Cheng
    Abstract:

    Research on routing planning for satellites and ground stations has usually treated the whole process as two independent parts: transmission from satellites to ground stations, and transmission within the ground networks. Treating them separately causes two problems: the two parts do not connect tightly, and the delay is large. To solve these problems, this paper designs an integrated routing algorithm that treats the transmission as a whole. The algorithm considers the real-time link cost, calculates the normalization matrix, and searches for the best real-time link for transmission. Simulation results show that, compared with the separate optimal solutions, the integrated solution reduces the delay of the task sequences while keeping the sum of successful task priorities and the task success rate almost the same. The integrated routing algorithm thus improves the performance of routing planning for satellites and ground stations.

Nancy S. Pollard - One of the best experts on this subject based on the ideXlab platform.

  • Planning pre-grasp manipulation for transport tasks
    2010 IEEE International Conference on Robotics and Automation, 2010
    Co-Authors: Lillian Y. Chang, Siddhartha S. Srinivasa, Nancy S. Pollard
    Abstract:

    Studies of human manipulation strategies suggest that pre-grasp object manipulation, such as rotation or sliding of the object to be grasped, can improve task performance by increasing both the task success rate and the quality of load-supporting postures. In previous demonstrations, pre-grasp object rotation by a robot manipulator was limited to manually-programmed actions. We present a method for automating the planning of pre-grasp rotation for object transport tasks. Our technique optimizes the grasp acquisition point by selecting a target object pose that can be grasped by high-payload manipulator configurations. Careful selection of the transition states leads to successful transport plans for tasks that are otherwise infeasible. In addition, optimization of the grasp acquisition posture also indirectly improves the transport plan quality, as measured by the safety margin of the manipulator payload limits.
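
    The selection step described above (choose a target object pose that high-payload manipulator configurations can grasp) might look roughly like the following; the candidate representation and payload-margin numbers are invented for illustration and do not reproduce the authors' planner.

      # Hypothetical sketch: among candidate pre-grasp object poses, keep the reachable
      # ones and pick the pose with the largest payload safety margin.
      from dataclasses import dataclass

      @dataclass
      class PoseCandidate:
          yaw_deg: float         # pre-grasp rotation applied to the object
          reachable: bool        # can the arm acquire a grasp at this pose?
          payload_margin: float  # payload limit minus expected load at the grasp posture (kg)

      def select_pregrasp_pose(candidates):
          """Return the reachable candidate with the largest payload safety margin, or None."""
          feasible = [c for c in candidates if c.reachable and c.payload_margin > 0]
          if not feasible:
              return None        # the transport task is infeasible without another strategy
          return max(feasible, key=lambda c: c.payload_margin)

      candidates = [
          PoseCandidate(yaw_deg=0,  reachable=True,  payload_margin=-0.2),  # infeasible as-is
          PoseCandidate(yaw_deg=45, reachable=True,  payload_margin=0.6),
          PoseCandidate(yaw_deg=90, reachable=False, payload_margin=1.1),
      ]
      print(select_pregrasp_pose(candidates))   # -> the 45-degree rotation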

  • ICRA - Planning pre-grasp manipulation for transport tasks
    2010 IEEE International Conference on Robotics and Automation, 2010
    Co-Authors: Lillian Y. Chang, Siddhartha S. Srinivasa, Nancy S. Pollard
    Abstract:

    Studies of human manipulation strategies suggest that pre-grasp object manipulation, such as rotation or sliding of the object to be grasped, can improve task performance by increasing both the task success rate and the quality of load-supporting postures. In previous demonstrations, pre-grasp object rotation by a robot manipulator was limited to manually-programmed actions. We present a method for automating the planning of pre-grasp rotation for object transport tasks. Our technique optimizes the grasp acquisition point by selecting a target object pose that can be grasped by high-payload manipulator configurations. Careful selection of the transition states leads to successful transport plans for tasks that are otherwise infeasible. In addition, optimization of the grasp acquisition posture also indirectly improves the transport plan quality, as measured by the safety margin of the manipulator payload limits.

Myounghoon Jeon - One of the best experts on this subject based on the ideXlab platform.

  • Menu Navigation With In-Vehicle Technologies: Auditory Menu Cues Improve Dual Task Performance, Preference, and Workload.
    International Journal of Human-Computer Interaction, 2015
    Co-Authors: Myounghoon Jeon, Jeff Wilson, Thomas M. Gable, B. K. Davison, Michael A. Nees, Bruce N. Walker
    Abstract:

    Auditory display research for driving has mainly examined a limited range of tasks (e.g., collision warnings, cell phone tasks). In contrast, the goal of this project was to evaluate the effectiveness of enhanced auditory menu cues in a simulated driving context. The advanced auditory cues of “spearcons” (compressed speech cues) and “spindex” (a speech-based index cue) were predicted to improve both menu navigation and driving. Two experiments used a dual task paradigm in which users selected songs on the vehicle’s infotainment system. In Experiment 1, 24 undergraduates played a simple, perceptual-motor ball-catching game (the primary task; a surrogate for driving), and navigated through an alphabetized list of 150 song titles—rendered as an auditory menu—as a secondary task. The menu was presented either in the typical visual-only manner, or enhanced with text-to-speech (TTS), or TTS plus one of three types of additional auditory cues. In Experiment 2, 34 undergraduates conducted the same secondary task while driving in a simulator. In both experiments, performance on both the primary task (success rate of the game or driving performance) and the secondary task (menu search time) was better with the auditory menus than with no sound. Perceived workload scores as well as user preferences favored the enhanced auditory cue types. These results show that adding audio, and enhanced auditory cues in particular, can allow a driver to operate the menus of in-vehicle technologies more efficiently while driving more safely. Results are discussed in terms of multiple resources theory.

  • AutomotiveUI - Enhanced auditory menu cues improve dual task performance and are preferred with in-vehicle technologies
    Proceedings of the 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications - AutomotiveUI '09, 2009
    Co-Authors: Myounghoon Jeon, Jeff Wilson, B. K. Davison, Michael A. Nees, Bruce N. Walker
    Abstract:

    Auditory display research for driving has mainly focused on collision warning signals, and recent studies on auditory in-vehicle information presentation have examined only a limited range of tasks (e.g., cell phone operation tasks or verbal tasks such as reading digit strings). The present study used a dual task paradigm to evaluate a plausible scenario in which users navigated a song list. We applied enhanced auditory menu navigation cues, including spearcons (i.e., compressed speech) and a spindex (i.e., a speech index that used brief audio cues to communicate the user's position in a long menu list). Twenty-four undergraduates navigated through an alphabetized song list of 150 song titles---rendered as an auditory menu---while they concurrently played a simple, perceptual-motor, ball-catching game. The menu was presented with text-to-speech (TTS) alone, TTS plus one of three types of enhanced auditory cues, or no sound at all. Both performance of the primary task (success rate of the game) and the secondary task (menu search time) were better with the auditory menus than with no sound. Subjective workload scores (NASA TLX) and user preferences favored the enhanced auditory cue types. Results are discussed in terms of multiple resources theory and practical IVT design applications.

  • Enhanced auditory menu cues improve dual task performance and are preferred with in-vehicle technologies
    Proceedings of the 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications - AutomotiveUI '09, 2009
    Co-Authors: Myounghoon Jeon, Jeff Wilson, B. K. Davison, Michael A. Nees, Bruce N. Walker
    Abstract:

    Auditory display research for driving has mainly focused on collision warning signals, and recent studies on auditory in-vehicle information presentation have examined only a limited range of tasks (e.g., cell phone operation tasks or verbal tasks such as reading digit strings). The present study used a dual task paradigm to evaluate a plausible scenario in which users navigated a song list. We applied enhanced auditory menu navigation cues, including spearcons (i.e., compressed speech) and a spindex (i.e., a speech index that used brief audio cues to communicate the user's position in a long menu list). Twenty-four undergraduates navigated through an alphabetized song list of 150 song titles---rendered as an auditory menu---while they concurrently played a simple, perceptual-motor, ball-catching game. The menu was presented with text-to-speech (TTS) alone, TTS plus one of three types of enhanced auditory cues, or no sound at all. Both performance of the primary task (success rate of the game) and the secondary task (menu search time) were better with the auditory menus than with no sound. Subjective workload scores (NASA TLX) and user preferences favored the enhanced auditory cue types. Results are discussed in terms of multiple resources theory and practical IVT design applications.