Virtual Interaction

The Experts below are selected from a list of 120,852 Experts worldwide ranked by the ideXlab platform

Haruo Takemura - One of the best experts on this subject based on the ideXlab platform.

  • Virtual Interaction surface: decoupling of Interaction and view dimensions for flexible indirect 3D Interaction
    2012 IEEE Symposium on 3D User Interfaces (3DUI), 2012
    Co-Authors: Takayuki Ohnishi, Nicholas Katzakis, Kiyoshi Kiyokawa, Haruo Takemura
    Abstract:

    2D pointing devices such as mice and touchpads are widely used for 3D Interaction. Most existing mapping schemes from 2D input to 3D spaces rely on a ray-casting-based technique, where a 3D point of action is specified by the intersection of a 3D `ray' from the viewpoint and a 3D object in space. While this view-dependent 2D-3D mapping scheme provides an intuitive Interaction experience, we explore the possibility of mapping in different ways depending on the task. In this paper, we propose a Virtual Interaction surface (VIS) technique to decouple Interaction dimensions from view dimensions. A VIS is an arbitrarily shaped 2D surface in a 3D space onto which 2D input is mapped, thereby allowing flexible task-oriented 3D Interactions. We present the basic concept of the VIS technique, some Interaction examples, and a prototype system, as well as observations from an informal evaluation.
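
The core idea contrasts with ray casting: instead of intersecting a view ray with scene geometry, 2D input coordinates are mapped directly onto a parameterized surface embedded in 3D. The paper does not publish code; the following Python sketch is purely illustrative, using a hypothetical cylindrical surface as the VIS (function names and parameters are our own, not the authors'):

```python
import math

def vis_map_cylinder(u, v, radius=1.0, height=1.0):
    """Map normalized 2D input (u, v) in [0, 1]^2 onto a cylindrical
    virtual interaction surface embedded in 3D space.

    u wraps around the cylinder's circumference; v runs along its axis.
    The result depends only on the surface, not on the camera."""
    theta = 2.0 * math.pi * u          # angle around the cylinder
    x = radius * math.cos(theta)
    y = radius * math.sin(theta)
    z = height * v                     # position along the axis
    return (x, y, z)

def vis_map_plane(u, v, origin=(0.0, 0.0, 0.0), su=2.0, sv=2.0):
    """A flat VIS degenerates to an ordinary plane in 3D."""
    ox, oy, oz = origin
    return (ox + su * u, oy + sv * v, oz)
```

Because the mapping depends only on the surface parameterization, the resulting 3D point is independent of the viewpoint, which is precisely the decoupling of Interaction and view dimensions the abstract describes.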

Anthony Savidis - One of the best experts on this subject based on the ideXlab platform.

  • Supporting Virtual Interaction objects with polymorphic platform bindings in a user interface programming language
    Rapid Integration of Software Engineering Techniques (RISE), Lecture Notes in Computer Science, 2005
    Co-Authors: Anthony Savidis
    Abstract:

    Today, there are numerous software patterns for engineering User Interfaces through Interaction object classes that can be automatically retargeted to different graphical environments. Such methods are usually deployed in implementing multi-platform User Interface libraries, delivering Application Programming Interfaces (APIs) typically split into two layers: (a) the top layer, encompassing the platform-independent programming elements available to client programmers; and (b) the bottom layer, delivering the platform-specific bindings, implemented differently for each distinct graphical environment. While multi-platform Interaction objects primarily constitute programming generalizations of graphical Interaction elements, Virtual Interaction objects play the role of abstractions defined above any particular physical realization or dialogue metaphor. In this context, a subset of a User Interface programming language is presented, providing programming facilities for: (a) the definition of Virtual Interaction object classes; and (b) the specification of the mapping logic that physically binds Virtual object classes across different target platforms.
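
To illustrate the two-layer split the abstract describes (a platform-independent virtual class on top, per-platform physical bindings below), here is a minimal Python sketch. It is not Savidis's User Interface language; the class names and binding table are invented for illustration only:

```python
from abc import ABC, abstractmethod

class VirtualSelector(ABC):
    """A virtual interaction object: an abstraction defined above any
    particular physical realization (listbox, combo box, voice menu...)."""
    @abstractmethod
    def render(self, options):
        """Realize the selector for a list of options."""

# Bottom layer: platform-specific physical bindings of the virtual class.
class GtkListboxSelector(VirtualSelector):
    def render(self, options):
        return "gtk-listbox: " + ", ".join(options)

class Win32ComboSelector(VirtualSelector):
    def render(self, options):
        return "win32-combo: " + ", ".join(options)

# Mapping logic: which physical class realizes the virtual class
# on each target platform.
BINDINGS = {"gtk": GtkListboxSelector, "win32": Win32ComboSelector}

def instantiate(platform):
    """Client code programs against VirtualSelector only; the binding
    table supplies the polymorphic platform-specific realization."""
    return BINDINGS[platform]()
```

Client programmers see only the top-layer `VirtualSelector` API; retargeting the interface to another environment means adding one entry to the binding table, not touching client code.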

Sanna Järvelä - One of the best experts on this subject based on the ideXlab platform.

  • Sharing and constructing perspectives in web-based conferencing
    Computers & Education, 2006
    Co-Authors: Päivi Häkkinen, Sanna Järvelä
    Abstract:

    This study investigates the quality and nature of Virtual Interaction in a higher education context. The study aims to identify variables that mediate Virtual Interaction, particularly the emerging processes of sharing and constructing perspectives in web-based conferencing. The purpose of this paper is to report results on different levels of web-based discussions, with parallel findings on the amount of perspective sharing. The findings of two empirical studies are compared, and thereby the impact of the pedagogical model designed between these two studies is also evaluated. Possible explanations for why some discussions reach higher levels and include more perspective sharing than others are also sought. Particular attention is paid to the qualitatively distinct ways in which individual students interpret their participation in Virtual Interaction and the impact of group work on their own learning. These findings lead us to discuss specific processes by which participants could better understand each other, create joint goals, and construct meanings in Virtual Interaction.

  • What is reciprocal understanding in Virtual Interaction?
    Instructional Science, 2005
    Co-Authors: Arja Byman, Sanna Järvelä, Päivi Häkkinen
    Abstract:

    The aim of this study is to investigate what reciprocal understanding in Virtual web-based Interaction is and what it consists of. The context of this study was an international web-based pre-service teacher education (N = 116) course. The study is based on the idea of shared cognition and reciprocal understanding in particular. It is assumed that reciprocity is an essential component not only of social Interaction, but also of successful Virtual Interaction. Since many kinds of Virtual Interaction, such as e-learning, web-based courses, and Virtual universities, are rapidly becoming more common in education, there is a need to analyse the issue of reciprocal understanding in web-based learning in order to develop more profound pedagogical models. The results broadly show that during this particular web-based course there was reciprocal discussion between the participants. Participants had mutual negotiations and discussed issues from a variety of viewpoints. Seven different mechanisms of reciprocal understanding were identified. It is concluded that this information is useful for developing a new pedagogical model for web-based learning and enhancing the quality of Virtual Interaction.

  • CSCL - Sharing Perspectives in Virtual Interaction: Review of Methods of Analysis
    Designing for Change in Networked Learning Environments, 2003
    Co-Authors: Päivi Häkkinen, Sanna Järvelä, Kati Mäkitalo
    Abstract:

    The aim of this paper is to describe the methodological solutions made in the studies that are part of the SHAPE research project. The SHAPE project investigates the quality and nature of Virtual Interaction in a higher education context. The studies aim to identify variables that mediate the process of collaboration, particularly the emerging processes of sharing and constructing perspectives in Virtual Interaction. For these studies we have developed various methods and models of analysis in order to gain a better understanding of the process of collaboration in Virtual Interaction. In this paper, we review some of the SHAPE analysis methods used in our series of studies.

Takayuki Ohnishi - One of the best experts on this subject based on the ideXlab platform.

  • Virtual Interaction surface: decoupling of Interaction and view dimensions for flexible indirect 3D Interaction
    2012 IEEE Symposium on 3D User Interfaces (3DUI), 2012
    Co-Authors: Takayuki Ohnishi, Nicholas Katzakis, Kiyoshi Kiyokawa, Haruo Takemura
    Abstract:

    2D pointing devices such as mice and touchpads are widely used for 3D Interaction. Most existing mapping schemes from 2D input to 3D spaces rely on a ray-casting-based technique, where a 3D point of action is specified by the intersection of a 3D `ray' from the viewpoint and a 3D object in space. While this view-dependent 2D-3D mapping scheme provides an intuitive Interaction experience, we explore the possibility of mapping in different ways depending on the task. In this paper, we propose a Virtual Interaction surface (VIS) technique to decouple Interaction dimensions from view dimensions. A VIS is an arbitrarily shaped 2D surface in a 3D space onto which 2D input is mapped, thereby allowing flexible task-oriented 3D Interactions. We present the basic concept of the VIS technique, some Interaction examples, and a prototype system, as well as observations from an informal evaluation.

Andrew Y. C. Nee - One of the best experts on this subject based on the ideXlab platform.

  • Augmented reality for assembly guidance using a Virtual interactive tool
    International Journal of Production Research, 2008
    Co-Authors: M. L. Yuan, Soh-khim Ong, Andrew Y. C. Nee
    Abstract:

    The application of augmented reality (AR) technology for assembly guidance is a novel approach in the traditional manufacturing domain. In this paper, we propose an AR approach for assembly guidance using a Virtual interactive tool that is intuitive and easy to use. The Virtual interactive tool, termed the Virtual Interaction panel (VirIP), can be used to interactively control AR systems. The VirIP is composed of Virtual buttons, which carry meaningful assembly information and can be activated by an Interaction pen during the assembly process. The Interaction pen can be any general pen-like object with a certain colour distribution. It is tracked using a restricted Coulomb energy (RCE) network in real time and used to trigger the relevant buttons in the VirIPs for assembly guidance. Meanwhile, a visual assembly tree structure (VATS) is used for information management and assembly-instruction retrieval in this AR environment. VATS is a hierarchical tree structure that can be ea...

  • The Virtual Interaction panel: an easy control tool in augmented reality systems
    Computer Animation and Virtual Worlds, 2004
    Co-Authors: M. L. Yuan, Soh-khim Ong, Andrew Y. C. Nee
    Abstract:

    In this paper, we propose and develop an easy control tool, called the Virtual Interaction Panel (VirIP), for controlling Augmented Reality (AR) systems. This tool is composed of two parts: the design of the VirIPs and the tracking of an Interaction pen using a Restricted Coulomb Energy (RCE) neural network. The VirIP is composed of Virtual buttons, which carry meaningful information and can be activated by an Interaction pen during the augmentation process. The Interaction pen is a general pen-like object with a certain color distribution. It is tracked using an RCE network in real time and used to trigger the VirIPs. In our system, only one camera is used to capture the real world; therefore, 2D information is used to trigger the Virtual buttons that control the AR systems. The proposed method runs in real time because RCE-based image segmentation over a small region is fast. It can be used to control AR systems quite easily, without annoying sensors attached to entangling cables. The proposed method has good potential in many AR applications in manufacturing, such as assembly without the need for object recognition, collaborative product design, system control, etc. Copyright © 2004 John Wiley & Sons, Ltd.
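
As a rough illustration of the VirIP mechanism (virtual buttons triggered by the tracked 2D position of the pen tip), the following Python sketch performs only the hit test; the RCE-based pen tracking and image segmentation are outside its scope, and all names are hypothetical, not from the paper:

```python
class VirtualButton:
    """An axis-aligned rectangular virtual button on the panel,
    carrying a name (standing in for its assembly/control payload)."""
    def __init__(self, name, x, y, w, h):
        self.name, self.x, self.y, self.w, self.h = name, x, y, w, h

    def contains(self, px, py):
        """True if the 2D pen-tip position falls inside this button."""
        return (self.x <= px < self.x + self.w
                and self.y <= py < self.y + self.h)

class VirIP:
    """A panel of virtual buttons triggered by 2D pen coordinates,
    as delivered per frame by an external tracker."""
    def __init__(self, buttons):
        self.buttons = buttons

    def trigger(self, pen_xy):
        """Return the name of the button under the pen, or None."""
        px, py = pen_xy
        for button in self.buttons:
            if button.contains(px, py):
                return button.name
        return None
```

Because only one camera is used, the 2D pen coordinates alone suffice: each frame, the tracker's (x, y) estimate is passed to `trigger`, and a hit fires the corresponding control action without any tethered sensors.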
