Natural Interaction

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies


The Experts below are selected from a list of 268,632 Experts worldwide ranked by the ideXlab platform

Mark Billinghurst - One of the best experts on this subject based on the ideXlab platform.

  • Assessing the Relationship Between Cognitive Load and the Usability of a Mobile Augmented Reality Tutorial System: A Study of Gender Effects
    International Journal of Assessment Tools in Education, 2019
    Co-Authors: Emin Ibili, Mark Billinghurst
    Abstract:

    In this study, the relationship between the usability of a mobile Augmented Reality (AR) tutorial system and cognitive load was examined. In particular, the relationships between perceived usefulness, perceived ease of use, and perceived natural interaction on the one hand, and intrinsic, extraneous, and germane cognitive load on the other, were investigated, along with the effect of gender on these relationships. The results show a strong relationship between perceived ease of use and extraneous load in males, and between perceived usefulness and intrinsic load in females. Both perceived usefulness and perceived ease of use had a strong relationship with germane cognitive load. Moreover, perceived natural interaction had a strong relationship with perceived usefulness in females and with perceived ease of use in males. These findings offer AR software developers and researchers concrete guidance for reducing or controlling cognitive load in AR-based instructional software.
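The per-group relationships reported above are correlational. As a minimal sketch of that kind of analysis (the data, variable names, and group sizes here are illustrative, not the study's dataset), Pearson correlations can be computed separately for each gender group:

```python
# Hedged sketch: Pearson correlation between a usability factor and a
# cognitive-load score, computed per gender group. All values are toy data.
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient for two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 7-point Likert scores, split by group:
groups = {
    "male":   {"ease_of_use": [2, 3, 4, 5, 6], "extraneous_load": [6, 5, 4, 3, 2]},
    "female": {"ease_of_use": [2, 3, 4, 5, 6], "extraneous_load": [5, 4, 5, 3, 4]},
}
for gender, scores in groups.items():
    r = pearson(scores["ease_of_use"], scores["extraneous_load"])
    print(f"{gender}: r = {r:+.2f}")
```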

  • Designing an Augmented Reality Multimodal Interface for 6DOF Manipulation Techniques
    Intelligent Systems and Applications, 2019
    Co-Authors: Ajune Wanis Ismail, Mohd Shahrizal Sunar, Mark Billinghurst, Cik Suhaimi Yusof
    Abstract:

    Augmented Reality (AR) supports natural interaction in physical and virtual worlds, and has recently given rise to a number of novel interaction modalities. This paper presents a method for combining hand gestures with speech input for multimodal interaction in AR. It focuses on providing an intuitive AR environment that supports natural interaction with virtual objects while keeping real-world tasks and interaction mechanisms accessible. The paper reviews previous multimodal interfaces, describes recent AR studies that employ gesture and speech for multimodal input, and presents an implementation of gesture interaction with speech input for virtual object manipulation in AR. Finally, a user evaluation shows that the technique can improve interaction between virtual and physical elements in an AR environment.
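A common building block in gesture-plus-speech interfaces of this kind is temporal fusion: pairing a deictic gesture with a spoken command that occurs close in time. The sketch below illustrates that idea only; the event types, field names, and the fusion window are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of speech + gesture fusion for AR object manipulation.
# Event names, fields, and the time window are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class GestureEvent:
    target: str      # object id resolved from the hand's pointing ray
    t: float         # timestamp in seconds

@dataclass
class SpeechEvent:
    command: str     # recognized verb, e.g. "move", "rotate", "scale"
    t: float

FUSION_WINDOW = 1.5  # seconds: pair events that co-occur within this window

def fuse(gesture: GestureEvent, speech: SpeechEvent):
    """Pair a deictic gesture with a spoken command if they co-occur."""
    if abs(gesture.t - speech.t) <= FUSION_WINDOW:
        return {"action": speech.command, "target": gesture.target}
    return None  # unimodal event; wait for its counterpart

print(fuse(GestureEvent("cube_01", 10.2), SpeechEvent("scale", 10.9)))
```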

  • Exploring Enhancements for Remote Mixed Reality Collaboration
    International Conference on Computer Graphics and Interactive Techniques, 2017
    Co-Authors: Thammathip Piumsomboon, Gun A Lee, Barrett Ens, Arindam Day, Youngho Lee, Mark Billinghurst
    Abstract:

    In this paper, we explore techniques for enhancing remote Mixed Reality (MR) collaboration in terms of communication and interaction. We created CoVAR, an MR system for remote collaboration between Augmented Reality (AR) and Augmented Virtuality (AV) users. Awareness cues and an AV-Snap-to-AR interface were proposed for enhancing communication, while collaborative natural interaction and AV-User-Body-Scaling were implemented for enhancing interaction. An exploratory study examining the awareness cues and collaborative gaze showed the benefits of the proposed techniques for both communication and interaction.

  • Grasp-Shell vs. Gesture-Speech: A Comparison of Direct and Indirect Natural Interaction Techniques in Augmented Reality
    International Symposium on Mixed and Augmented Reality, 2014
    Co-Authors: Thammathip Piumsomboon, David Altimira, Hyungon Kim, Adrian Clark, Gun A Lee, Mark Billinghurst
    Abstract:

    For natural interaction in Augmented Reality (AR) to become widely adopted, the techniques used need to be shown to support precise interaction, and the gestures used must be easy to understand and perform. Recent research has explored free-hand gesture interaction with AR interfaces, but few formal evaluations of such systems have been conducted. In this paper we introduce and evaluate two natural interaction techniques: the free-hand gesture-based Grasp-Shell, which provides direct physical manipulation of virtual content, and the multimodal Gesture-Speech, which combines speech and gesture for indirect natural interaction. These techniques support object selection, six-degree-of-freedom (6DOF) movement, uniform scaling, and physics-based interaction such as pushing and flinging. A study comparing Grasp-Shell and Gesture-Speech on fundamental manipulation tasks shows that Grasp-Shell outperforms Gesture-Speech in both efficiency and user preference for translation and rotation, while Gesture-Speech is better for uniform scaling. The two could be complementary interaction methods in a physics-enabled AR environment, as their combination potentially provides both control and interactivity in one interface. We conclude by discussing implications and future directions of this research.
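The core of a direct-manipulation technique like the one described is that a grasped object moves rigidly with the hand. The sketch below shows only that translation idea under stated assumptions (rotation omitted, function names invented for illustration; this is not the paper's API):

```python
# Hedged sketch of grasp-based direct manipulation: while grasped, the
# object keeps its initial offset from the hand, so it follows the hand
# rigidly. Rotation is omitted for brevity; all names are illustrative.
def grab_offset(obj_pos, hand_pos):
    """Record the object's offset from the hand at grasp time."""
    return tuple(o - h for o, h in zip(obj_pos, hand_pos))

def follow_hand(hand_pos, offset):
    """New object position: current hand position plus the stored offset."""
    return tuple(h + d for h, d in zip(hand_pos, offset))

offset = grab_offset((1.0, 0.0, 2.0), (0.5, 0.0, 2.0))  # grasp begins
new_pos = follow_hand((0.8, 0.1, 2.1), offset)          # hand has moved
print(new_pos)  # object tracks the hand, preserving the grasp offset
```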

  • Tangible Tiles: Design and Evaluation of a Tangible User Interface in a Collaborative Tabletop Setup
    Australasian Computer-Human Interaction Conference, 2006
    Co-Authors: Manuela Waldner, Jorg Hauber, Jurgen Zauner, Michael Haller, Mark Billinghurst
    Abstract:

    In this paper we describe "Tangible Tiles", a tangible user interface that uses optically tracked transparent Plexiglas tiles for interaction with, and display of, projected imagery on a table or whiteboard. We designed and implemented a number of interaction techniques based on two sets of tiles, which either directly represent digital objects or function as tools for data manipulation. To discover the strengths and weaknesses of our current prototype, we conducted a user study comparing simple interaction with digital imagery in three conditions: 1) our Tangible Tiles system, 2) a commercial touch screen, and 3) a control condition using real paper prints. Although we discovered some conceptual problems, the results show the potential benefits of Tangible Tiles for supporting collaboration and natural interaction.

Gernot A Fink - One of the best experts on this subject based on the ideXlab platform.

  • Focusing Computational Visual Attention in Multi-Modal Human-Robot Interaction
    International Conference on Multimodal Interfaces, 2010
    Co-Authors: Boris Schauerte, Gernot A Fink
    Abstract:

    Identifying verbally and non-verbally referred-to objects is an important aspect of human-robot interaction, as it is essential for achieving a joint focus of attention and, thus, natural interaction behavior. In this contribution, we introduce a saliency-based model that reflects how multi-modal referring acts influence visual search, i.e. the task of finding a specific object in a scene. To this end, we combine positional information obtained from pointing gestures with contextual knowledge about the visual appearance of the referred-to object obtained from language. The available information is integrated into a biologically motivated saliency model that forms the basis for visual search. We demonstrate the feasibility of the proposed approach with an experimental evaluation.
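The fusion idea described above can be sketched on a coarse grid: a bottom-up saliency map is modulated by a spatial prior from the pointing gesture and an appearance prior from language, and the maximum of the combined map drives the search. The grid size, the Gaussian prior, and the multiplicative fusion below are assumptions for illustration, not the paper's exact model.

```python
# Hedged sketch: modulate bottom-up saliency with (a) a spatial prior from
# a pointing gesture and (b) a top-down appearance prior from language.
import math

W = H = 8  # coarse saliency grid for illustration

def gaussian_prior(cx, cy, sigma=1.5):
    """Spatial prior centered where the pointing ray hits the scene."""
    return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
             for x in range(W)] for y in range(H)]

def fuse(saliency, spatial, appearance):
    """Multiplicative fusion: a cell wins only if all cues support it."""
    return [[saliency[y][x] * spatial[y][x] * appearance[y][x]
             for x in range(W)] for y in range(H)]

def argmax2d(m):
    """Grid cell (x, y) with the highest fused saliency."""
    return max(((m[y][x], (x, y)) for y in range(H) for x in range(W)))[1]

saliency = [[1.0] * W for _ in range(H)]     # uniform bottom-up cue
saliency[2][6] = saliency[5][1] = 2.0        # two conspicuous objects
appearance = [[0.1] * W for _ in range(H)]
appearance[5][1] = 1.0                       # matches "the red one"
# Pointing toward (2, 5) plus the appearance cue picks the object at (1, 5):
print(argmax2d(fuse(saliency, gaussian_prior(2, 5), appearance)))
```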

  • Providing the Basis for Human-Robot Interaction: A Multi-Modal Attention System for a Mobile Robot
    International Conference on Multimodal Interfaces, 2003
    Co-Authors: Sebastian Lang, Gernot A Fink, Marcus Kleinehagenbrock, Sascha Hohenner, Jannik Fritsch, Gerhard Sagerer
    Abstract:

    In order to enable the widespread use of robots in home and office environments, systems with natural interaction capabilities have to be developed. A prerequisite for natural interaction is the robot's ability to automatically recognize when, and for how long, a person's attention is directed towards it for communication. As several persons can be present simultaneously in open environments, detecting the communication partner is particularly important. In this paper we present an attention system for a mobile robot that enables it to shift its attention to the person of interest and to maintain attention during interaction. Our approach is based on multi-modal person tracking, which uses a pan-tilt camera for face recognition, two microphones for sound-source localization, and a laser range finder for leg detection. Attention is shifted by turning the camera towards the person who is currently speaking, and the orientation of that person's head is used to decide whether the speaker is addressing the robot. The performance of the proposed approach is demonstrated with an evaluation; in addition, qualitative results from the robot's performance at the exhibition part of ICVS'03 are provided.
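The attention-shifting step can be sketched as follows: among the tracked persons, pick the one who is both speaking and facing the robot, and compute the pan angle toward them. The `Person` fields and the selection rule below are illustrative assumptions distilled from the description above, not the system's actual data structures.

```python
# Hedged sketch of the attention mechanism: turn the camera toward the
# tracked person who is speaking and addressing the robot.
import math
from dataclasses import dataclass

@dataclass
class Person:
    x: float            # position relative to the robot (metres)
    y: float
    speaking: bool      # from sound-source localization
    facing_robot: bool  # from head orientation / face detection

def attend(persons):
    """Pan angle (degrees) toward the speaking person who addresses the
    robot, or None if nobody currently does."""
    for p in persons:
        if p.speaking and p.facing_robot:
            return math.degrees(math.atan2(p.y, p.x))
    return None

persons = [Person(2.0, 0.0, False, True),   # silent bystander
           Person(1.0, 1.0, True, True)]    # active speaker
print(attend(persons))  # pans toward the speaker, at roughly 45 degrees
```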

Doug A Bowman - One of the best experts on this subject based on the ideXlab platform.

  • Evaluating Natural Interaction Techniques in Video Games
    Symposium on 3D User Interfaces, 2010
    Co-Authors: Ryan P Mcmahan, Alexander Joel D Alon, Shaimaa Lazem, Robert J Beaton, David Machaj, Michael Schaefer, Mara G Silva, Anamary Leal, Robert Hagan, Doug A Bowman
    Abstract:

    Despite the gaming industry's recent trend towards “natural” interaction techniques, which mimic real-world actions with a high level of fidelity, it is not clear how natural interaction techniques affect the player experience. To obtain a better understanding, we designed and conducted a study using Mario Kart Wii, a commercial racing game for the Nintendo Wii, chosen for its seemingly balanced design of both natural and non-natural interaction techniques. Our empirical study found that the non-natural interaction techniques significantly outperform their more natural counterparts. We offer three hypotheses to explain this finding and suggest them as important interaction design considerations.

Thammathip Piumsomboon - One of the best experts on this subject based on the ideXlab platform.

  • Exploring Enhancements for Remote Mixed Reality Collaboration
    International Conference on Computer Graphics and Interactive Techniques, 2017
    Co-Authors: Thammathip Piumsomboon, Gun A Lee, Barrett Ens, Arindam Day, Youngho Lee, Mark Billinghurst
    Abstract:

    In this paper, we explore techniques for enhancing remote Mixed Reality (MR) collaboration in terms of communication and interaction. We created CoVAR, an MR system for remote collaboration between Augmented Reality (AR) and Augmented Virtuality (AV) users. Awareness cues and an AV-Snap-to-AR interface were proposed for enhancing communication, while collaborative natural interaction and AV-User-Body-Scaling were implemented for enhancing interaction. An exploratory study examining the awareness cues and collaborative gaze showed the benefits of the proposed techniques for both communication and interaction.

  • Grasp-Shell vs. Gesture-Speech: A Comparison of Direct and Indirect Natural Interaction Techniques in Augmented Reality
    International Symposium on Mixed and Augmented Reality, 2014
    Co-Authors: Thammathip Piumsomboon, David Altimira, Hyungon Kim, Adrian Clark, Gun A Lee, Mark Billinghurst
    Abstract:

    For natural interaction in Augmented Reality (AR) to become widely adopted, the techniques used need to be shown to support precise interaction, and the gestures used must be easy to understand and perform. Recent research has explored free-hand gesture interaction with AR interfaces, but few formal evaluations of such systems have been conducted. In this paper we introduce and evaluate two natural interaction techniques: the free-hand gesture-based Grasp-Shell, which provides direct physical manipulation of virtual content, and the multimodal Gesture-Speech, which combines speech and gesture for indirect natural interaction. These techniques support object selection, six-degree-of-freedom (6DOF) movement, uniform scaling, and physics-based interaction such as pushing and flinging. A study comparing Grasp-Shell and Gesture-Speech on fundamental manipulation tasks shows that Grasp-Shell outperforms Gesture-Speech in both efficiency and user preference for translation and rotation, while Gesture-Speech is better for uniform scaling. The two could be complementary interaction methods in a physics-enabled AR environment, as their combination potentially provides both control and interactivity in one interface. We conclude by discussing implications and future directions of this research.

Gerhard Sagerer - One of the best experts on this subject based on the ideXlab platform.

  • Providing the Basis for Human-Robot Interaction: A Multi-Modal Attention System for a Mobile Robot
    International Conference on Multimodal Interfaces, 2003
    Co-Authors: Sebastian Lang, Gernot A Fink, Marcus Kleinehagenbrock, Sascha Hohenner, Jannik Fritsch, Gerhard Sagerer
    Abstract:

    In order to enable the widespread use of robots in home and office environments, systems with natural interaction capabilities have to be developed. A prerequisite for natural interaction is the robot's ability to automatically recognize when, and for how long, a person's attention is directed towards it for communication. As several persons can be present simultaneously in open environments, detecting the communication partner is particularly important. In this paper we present an attention system for a mobile robot that enables it to shift its attention to the person of interest and to maintain attention during interaction. Our approach is based on multi-modal person tracking, which uses a pan-tilt camera for face recognition, two microphones for sound-source localization, and a laser range finder for leg detection. Attention is shifted by turning the camera towards the person who is currently speaking, and the orientation of that person's head is used to decide whether the speaker is addressing the robot. The performance of the proposed approach is demonstrated with an evaluation; in addition, qualitative results from the robot's performance at the exhibition part of ICVS'03 are provided.