Computer Interaction

The Experts below are selected from a list of 214,437 Experts worldwide, ranked by the ideXlab platform.

Thomas S. Huang - One of the best experts on this subject based on the ideXlab platform.

  • Real-time vision for human-computer interaction
    2010
    Co-Authors: Branislav Kisacanin, Vladimir Pavlovic, Thomas S. Huang
    Abstract:

    The need for natural and effective human-computer interaction (HCI) is increasingly important due to the prevalence of computers in human activities. Computer vision and pattern recognition continue to play a dominant role in the HCI realm. However, computer vision methods often fail to become pervasive in the field due to the lack of real-time, robust algorithms and of novel and convincing applications. This state-of-the-art contributed volume comprises articles by prominent experts in computer vision, pattern recognition, and HCI. It is the first published text to capture the latest research in this rapidly advancing field, with an exclusive focus on real-time algorithms and practical applications in numerous and diverse industries, and it outlines further challenges in these areas. Real-Time Vision for Human-Computer Interaction is an invaluable reference for HCI researchers in both academia and industry, and a useful supplement for advanced-level courses in HCI and computer vision.

  • Semisupervised learning of classifiers: theory, algorithms, and their application to human-computer interaction
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004
    Co-Authors: Ira Cohen, Nicu Sebe, Marcelo Cesar Cirelo, Fabio Gagliardi Cozman, Thomas S. Huang
    Abstract:

    Automatic classification is one of the basic tasks required in any pattern recognition and human-computer interaction application. In this paper, we discuss training probabilistic classifiers with labeled and unlabeled data. We provide a new analysis that shows under what conditions unlabeled data can be used in learning to improve classification performance. We also show that, if the conditions are violated, using unlabeled data can be detrimental to classification performance. We discuss the implications of this analysis for a specific type of probabilistic classifier, Bayesian networks, and propose a new structure learning algorithm that can utilize unlabeled data to improve classification. Finally, we show how the resulting algorithms are successfully employed in two applications related to human-computer interaction and pattern recognition: facial expression recognition and face detection.
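
The analysis above concerns when unlabeled data helps a probabilistic classifier. As a rough illustration of the underlying mechanism (not the paper's Bayesian-network structure learning algorithm), here is a minimal semi-supervised EM loop for a Gaussian naive Bayes model, in which labeled points keep fixed one-hot responsibilities while unlabeled points are re-weighted each iteration; all function and variable names are illustrative assumptions.

```python
# Hedged sketch: semi-supervised EM for a diagonal-Gaussian naive Bayes model.
import numpy as np

def semisupervised_nb(X_lab, y_lab, X_unl, n_classes, n_iter=20):
    """Fit class priors/means/variances by EM over labeled + unlabeled data."""
    X = np.vstack([X_lab, X_unl])
    n_lab = len(X_lab)
    # Responsibilities: labeled rows are fixed one-hot; unlabeled start uniform.
    R = np.full((len(X), n_classes), 1.0 / n_classes)
    R[:n_lab] = np.eye(n_classes)[y_lab]
    for _ in range(n_iter):
        # M-step: re-estimate parameters from current responsibilities.
        Nk = R.sum(axis=0)                                  # soft class counts
        pi = Nk / Nk.sum()                                  # class priors
        mu = (R.T @ X) / Nk[:, None]                        # class means
        var = (R.T @ X**2) / Nk[:, None] - mu**2 + 1e-6     # class variances
        # E-step: class posteriors under the current model.
        log_p = np.log(pi) - 0.5 * (
            (X[:, None, :] - mu) ** 2 / var + np.log(2 * np.pi * var)
        ).sum(axis=2)
        post = np.exp(log_p - log_p.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
        R[n_lab:] = post[n_lab:]            # only unlabeled rows are updated
    return pi, mu, var

# Usage with toy data: 10 labeled and 200 unlabeled 2-D points, 2 classes.
rng = np.random.default_rng(0)
X_lab = np.vstack([rng.normal(0, 1, (5, 2)), rng.normal(4, 1, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
X_unl = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
pi, mu, var = semisupervised_nb(X_lab, y_lab, X_unl, n_classes=2)
```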

  • Nonstationary color tracking for vision-based human-computer interaction
    IEEE Transactions on Neural Networks, 2002
    Co-Authors: Thomas S. Huang
    Abstract:

    Skin color offers a strong cue for efficient localization and tracking of human body parts in video sequences for vision-based human-computer interaction. Color-based target localization could be achieved by analyzing segmented skin color regions. However, one of the challenges of color-based target tracking is that color distributions change under different lighting conditions, so that fixed color models are inadequate to capture nonstationary color distributions over time. Meanwhile, a fixed skin color model trained on data from a specific person would probably not work well for other people. Although some work has been done on adaptive color models, this problem still needs further study. We present our investigation of color-based image segmentation and nonstationary color-based target tracking by studying two different representations for color distributions. We propose the structure adaptive self-organizing map (SASOM) neural network, which serves as a new color model. Our experiments show that such a representation is powerful for efficient image segmentation. We then formulate the nonstationary color tracking problem as a model transduction problem, the solution of which offers a way to adapt and transduce color classifiers under nonstationary color distributions. To fulfill model transduction, we propose two algorithms, SASOM transduction and discriminant expectation-maximization (EM), based on the SASOM color model and the Gaussian mixture color model, respectively. Our extensive experiments on the task of real-time face/hand localization show that these two algorithms can successfully handle some of the difficulties of nonstationary color tracking. We also implemented a real-time face/hand localization system based on these algorithms for vision-based human-computer interaction.
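
As a concrete, much simplified sketch of the adaptation idea: instead of the SASOM or discriminant-EM machinery, the snippet below keeps a single Gaussian skin-color model in normalized color space and blends in each frame's detected skin statistics, so the model follows gradual lighting changes. The class name, adaptation rate alpha, and likelihood threshold are assumptions for illustration.

```python
# Hedged sketch: a frame-by-frame adaptive Gaussian skin-color model.
import numpy as np

class AdaptiveSkinModel:
    def __init__(self, mean, cov, alpha=0.1):
        self.mean, self.cov, self.alpha = mean, cov, alpha  # alpha = adaptation rate

    def likelihood(self, pixels):
        """Gaussian likelihood of (N, 2) normalized-rg pixels under the model."""
        d = pixels - self.mean
        inv = np.linalg.inv(self.cov)
        norm = 2 * np.pi * np.sqrt(np.linalg.det(self.cov))
        return np.exp(-0.5 * np.einsum('ni,ij,nj->n', d, inv, d)) / norm

    def segment(self, pixels, thresh=1.0):
        """Boolean skin mask from a likelihood threshold."""
        return self.likelihood(pixels) > thresh

    def adapt(self, skin_pixels):
        """Blend this frame's detected skin statistics into the model."""
        m = skin_pixels.mean(axis=0)
        c = np.cov(skin_pixels.T) + 1e-6 * np.eye(2)
        self.mean = (1 - self.alpha) * self.mean + self.alpha * m
        self.cov = (1 - self.alpha) * self.cov + self.alpha * c

# Usage: segment synthetic rg pixels, then adapt on the detected skin region.
rng = np.random.default_rng(1)
model = AdaptiveSkinModel(mean=np.array([0.45, 0.30]), cov=0.01 * np.eye(2))
pixels = rng.normal([0.45, 0.30], 0.05, size=(500, 2))
mask = model.segment(pixels)
if mask.any():
    model.adapt(pixels[mask])
```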

  • Visual interpretation of hand gestures for human-computer interaction: a review
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997
    Co-Authors: Vladimir I. Pavlovic, Rajeev Sharma, Thomas S. Huang
    Abstract:

    The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). In particular, visual interpretation of hand gestures can help in achieving the ease and naturalness desired for HCI. This has motivated a very active research area concerned with computer vision-based analysis and interpretation of hand gestures. We survey the literature on visual interpretation of hand gestures in the context of its role in HCI. This discussion is organized on the basis of the method used for modeling, analyzing, and recognizing gestures. Important differences in the gesture interpretation approaches arise depending on whether a 3D model of the human hand or an image appearance model of the human hand is used. 3D hand models offer a way of more elaborate modeling of hand gestures but lead to computational hurdles that have not been overcome given the real-time requirements of HCI. Appearance-based models lead to computationally efficient “purposive” approaches that work well under constrained situations but seem to lack the generality desirable for HCI. We also discuss implemented gestural systems as well as other potential applications of vision-based gesture recognition. Although the current progress is encouraging, further theoretical as well as computational advances are needed before gestures can be widely used for HCI. We discuss directions of future research in gesture recognition, including its integration with other natural modes of human-computer interaction.

Anupam Agrawal - One of the best experts on this subject based on the ideXlab platform.

  • Hand data glove: a new generation real-time mouse for human-computer interaction
    International Conference on Recent Advances in Information Technology, 2012
    Co-Authors: Piyush Kumar, Siddharth Swarup Rautaray, Anupam Agrawal
    Abstract:

    Human-computer interaction (HCI) is a field in which developers build user-friendly systems. In this paper, a real-time human-computer interaction system based on a hand data glove and a k-NN classifier for gesture recognition is proposed. HCI is becoming more and more natural and intuitive to use. The hand is the part of the body most frequently used for interaction in digital environments, so the complexity and flexibility of hand motion is an active research topic. To recognize hand gestures accurately, a data glove is used: it captures the current position and angles of the hand and fingers, and the readings are then classified using a k-NN classifier. The gestures classified are clicking, dragging, rotating, pointing, and the idle position. When these gestures are recognized, the corresponding actions are taken, such as air writing and 3D sketching by tracking the hand's path; the glove can also be used to control an image browser tool. The results show that the glove is better for interaction than the conventional static keyboard and mouse, as the interaction process is more accurate and natural, and it enhances the user's sense of interaction and immersion.
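
A minimal sketch of the classification step described here: k-NN over glove feature vectors. The 8-dimensional feature layout (five finger-bend angles plus three palm-orientation values) and the synthetic training data are assumptions for illustration; the abstract does not pin down an exact feature vector.

```python
# Hedged sketch: k-NN gesture classification from data-glove readings.
import numpy as np
from collections import Counter

GESTURES = ["click", "drag", "rotate", "point", "idle"]

def knn_classify(sample, train_X, train_y, k=5):
    """Label a glove reading by majority vote among its k nearest neighbors."""
    dists = np.linalg.norm(train_X - sample, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]                    # indices of k closest
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# Usage with synthetic readings: 8 features per sample (5 bend + 3 orientation).
rng = np.random.default_rng(0)
train_X = rng.normal(size=(100, 8))
train_y = [GESTURES[i % 5] for i in range(100)]
print(knn_classify(rng.normal(size=8), train_X, train_y, k=5))
```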

  • Vision-based hand gesture recognition for human-computer interaction: a survey
    Artificial Intelligence Review, 2012
    Co-Authors: Siddharth Swarup Rautaray, Anupam Agrawal
    Abstract:

    As computers become more pervasive in society, facilitating natural human–computer interaction (HCI) will have a positive impact on their use. Hence, there has been growing interest in the development of new approaches and technologies for bridging the human–computer barrier. The ultimate aim is to bring HCI to a regime where interactions with computers will be as natural as interactions between humans, and to this end, incorporating gestures in HCI is an important research area. Gestures have long been considered an interaction technique that can potentially deliver more natural, creative, and intuitive methods for communicating with our computers. This paper provides an analysis of comparative surveys done in this area. The use of hand gestures as a natural interface serves as a motivating force for research in gesture taxonomies, representations, recognition techniques, and software platforms and frameworks, all of which are discussed briefly in this paper. The survey focuses on the three main phases of hand gesture recognition, i.e., detection, tracking, and recognition. Applications that employ hand gestures for efficient interaction are discussed under core and advanced application domains. The paper also provides an analysis of the existing literature on gesture recognition systems for human-computer interaction by categorizing it under different key parameters, and it discusses the advances needed to improve present hand gesture recognition systems so that they can be widely used for efficient human-computer interaction. The main goal of this survey is to provide researchers in the field of gesture-based HCI with a summary of the progress achieved to date and to help identify areas where further research is needed.
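
To make the three-phase structure concrete, here is a deliberately minimal, runnable pipeline skeleton in which detection, tracking, and recognition are crude stand-ins (a red-channel threshold, a centroid tracker, and a left/right swipe rule) for the families of techniques the survey covers; every threshold and name is an illustrative assumption.

```python
# Hedged sketch: detection -> tracking -> recognition as one toy pipeline.
import numpy as np

def detect(frame):
    """Phase 1: crude skin detection -- threshold the red channel."""
    mask = frame[:, :, 0] > 150
    return mask if mask.any() else None

def track(mask):
    """Phase 2: track the hand as the centroid of the detected region."""
    ys, xs = np.nonzero(mask)
    return (xs.mean(), ys.mean())

def recognize(trajectory):
    """Phase 3: classify the trajectory -- here, left/right swipes only."""
    dx = trajectory[-1][0] - trajectory[0][0]
    if dx > 20:
        return "swipe_right"
    if dx < -20:
        return "swipe_left"
    return None

def pipeline(frames):
    trajectory = []
    for frame in frames:
        mask = detect(frame)
        if mask is None:
            trajectory = []              # lost the hand: reset history
            continue
        trajectory.append(track(mask))
        if len(trajectory) >= 10:        # enough history to classify
            gesture = recognize(trajectory)
            if gesture:
                yield gesture
                trajectory = []

# Usage: a synthetic bright-red "hand" patch moving right across 12 frames.
frames = []
for t in range(12):
    f = np.zeros((60, 80, 3), dtype=np.uint8)
    f[20:30, 5 + 5 * t: 15 + 5 * t, 0] = 255
    frames.append(f)
print(list(pipeline(frames)))            # expected: ['swipe_right']
```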

Siddharth Swarup Rautaray - One of the best experts on this subject based on the ideXlab platform.

  • Hand data glove: a new generation real-time mouse for human-computer interaction
    International Conference on Recent Advances in Information Technology, 2012
    Co-Authors: Piyush Kumar, Siddharth Swarup Rautaray, Anupam Agrawal
    Abstract:

    Human-computer interaction (HCI) is a field in which developers build user-friendly systems. In this paper, a real-time human-computer interaction system based on a hand data glove and a k-NN classifier for gesture recognition is proposed. HCI is becoming more and more natural and intuitive to use. The hand is the part of the body most frequently used for interaction in digital environments, so the complexity and flexibility of hand motion is an active research topic. To recognize hand gestures accurately, a data glove is used: it captures the current position and angles of the hand and fingers, and the readings are then classified using a k-NN classifier. The gestures classified are clicking, dragging, rotating, pointing, and the idle position. When these gestures are recognized, the corresponding actions are taken, such as air writing and 3D sketching by tracking the hand's path; the glove can also be used to control an image browser tool. The results show that the glove is better for interaction than the conventional static keyboard and mouse, as the interaction process is more accurate and natural, and it enhances the user's sense of interaction and immersion.

  • Vision-based hand gesture recognition for human-computer interaction: a survey
    Artificial Intelligence Review, 2012
    Co-Authors: Siddharth Swarup Rautaray, Anupam Agrawal
    Abstract:

    As computers become more pervasive in society, facilitating natural human–computer interaction (HCI) will have a positive impact on their use. Hence, there has been growing interest in the development of new approaches and technologies for bridging the human–computer barrier. The ultimate aim is to bring HCI to a regime where interactions with computers will be as natural as interactions between humans, and to this end, incorporating gestures in HCI is an important research area. Gestures have long been considered an interaction technique that can potentially deliver more natural, creative, and intuitive methods for communicating with our computers. This paper provides an analysis of comparative surveys done in this area. The use of hand gestures as a natural interface serves as a motivating force for research in gesture taxonomies, representations, recognition techniques, and software platforms and frameworks, all of which are discussed briefly in this paper. The survey focuses on the three main phases of hand gesture recognition, i.e., detection, tracking, and recognition. Applications that employ hand gestures for efficient interaction are discussed under core and advanced application domains. The paper also provides an analysis of the existing literature on gesture recognition systems for human-computer interaction by categorizing it under different key parameters, and it discusses the advances needed to improve present hand gesture recognition systems so that they can be widely used for efficient human-computer interaction. The main goal of this survey is to provide researchers in the field of gesture-based HCI with a summary of the progress achieved to date and to help identify areas where further research is needed.

Nicu Sebe - One of the best experts on this subject based on the ideXlab platform.

  • Multimodal human-computer interaction: a survey
    Computer Vision and Image Understanding, 2007
    Co-Authors: Alejandro Jaimes, Nicu Sebe
    Abstract:

    In this paper, we review the major approaches to multimodal human-computer interaction, giving an overview of the field from a computer vision perspective. In particular, we focus on body, gesture, gaze, and affective interaction (facial expression recognition and emotion in audio). We discuss user and task modeling and multimodal fusion, highlighting challenges, open issues, and emerging applications for multimodal human-computer interaction (MMHCI) research.
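
As one small, hedged example of the multimodal fusion the survey discusses: decision-level (late) fusion, where per-modality classifiers emit class posteriors that are combined by a weighted log-linear pool. The modality names, weights, and emotion classes below are illustrative assumptions, not the survey's prescription.

```python
# Hedged sketch: late (decision-level) fusion of per-modality posteriors.
import numpy as np

def late_fusion(posteriors, weights):
    """Combine per-modality class posteriors with a weighted log-linear pool."""
    log_mix = sum(w * np.log(p + 1e-12) for p, w in zip(posteriors, weights))
    fused = np.exp(log_mix - log_mix.max())    # stabilize before normalizing
    return fused / fused.sum()

# Usage: three modalities scoring the classes (neutral, happy, angry).
face  = np.array([0.2, 0.7, 0.1])   # facial-expression classifier
audio = np.array([0.3, 0.4, 0.3])   # emotion-in-audio classifier
gaze  = np.array([0.5, 0.4, 0.1])   # gaze-based classifier
print(late_fusion([face, audio, gaze], weights=[0.5, 0.3, 0.2]))
```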

  • Multimodal human-computer interaction: a survey
    International Conference on Computer Vision, 2005
    Co-Authors: Alejandro Jaimes, Nicu Sebe
    Abstract:

    In this paper, we review the major approaches to multimodal human-computer interaction from a computer vision perspective. In particular, we focus on body, gesture, gaze, and affective interaction (facial expression recognition and emotion in audio). We discuss user and task modeling and multimodal fusion, highlighting challenges, open issues, and emerging applications for multimodal human-computer interaction (MMHCI) research.

  • Semisupervised learning of classifiers: theory, algorithms, and their application to human-computer interaction
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004
    Co-Authors: Ira Cohen, Nicu Sebe, Marcelo Cesar Cirelo, Fabio Gagliardi Cozman, Thomas S. Huang
    Abstract:

    Automatic classification is one of the basic tasks required in any pattern recognition and human-computer interaction application. In this paper, we discuss training probabilistic classifiers with labeled and unlabeled data. We provide a new analysis that shows under what conditions unlabeled data can be used in learning to improve classification performance. We also show that, if the conditions are violated, using unlabeled data can be detrimental to classification performance. We discuss the implications of this analysis for a specific type of probabilistic classifier, Bayesian networks, and propose a new structure learning algorithm that can utilize unlabeled data to improve classification. Finally, we show how the resulting algorithms are successfully employed in two applications related to human-computer interaction and pattern recognition: facial expression recognition and face detection.

Anton Nijholt - One of the best experts on this subject based on the ideXlab platform.

  • Brain-computer interfaces: applying our minds to human-computer interaction
    Human-Computer Interaction Series, 2012
    Co-Authors: Anton Nijholt
    Abstract:

    For generations, humans have fantasized about the ability to create devices that can see into a person's mind and thoughts, or to communicate and interact with machines through thought alone. Such ideas have long captured the imagination of humankind in the form of ancient myths and modern science fiction stories. Recent advances in cognitive neuroscience and brain imaging technologies have started to turn these myths into reality, providing us with the ability to interface directly with the human brain. This ability is made possible through the use of sensors that monitor physical processes within the brain which correspond to certain forms of thought. Brain-Computer Interfaces: Applying our Minds to Human-Computer Interaction broadly surveys research in the brain-computer interface domain. More specifically, each chapter articulates some of the challenges and opportunities for using brain sensing in human-computer interaction work, as well as for applying human-computer interaction solutions to brain sensing work. For researchers with little or no expertise in neuroscience or brain sensing, the book provides background information that equips them not only to appreciate the state of the art but, ideally, also to engage in novel research. For expert brain-computer interface researchers, the book introduces ideas that can help in the quest to interpret intentional brain control and develop the ultimate input device. It challenges researchers to further explore passive brain sensing to evaluate interfaces and feed into adaptive computing systems. Most importantly, the book connects multiple communities, allowing researchers to leverage each other's work and expertise and blaze into the future.

  • Brain-computer interaction: can multimodality help?
    International Conference on Multimodal Interfaces, 2011
    Co-Authors: Anton Nijholt, Brendan Z Allison, Robert J K Jacob
    Abstract:

    This paper is a short introduction to a special ICMI session on brain-computer interaction. We first discuss problems, solutions, and a five-year outlook for brain-computer interaction. We then discuss unique issues with multimodal and hybrid brain-computer interfaces, which could help address many current challenges. The paper presents some potentially controversial views, which will hopefully inspire discussion about the different views on brain-computer interfacing, how to embed brain-computer interfacing in a multimodal and multi-party context, and, more generally, how to look at brain-computer interfacing from an ambient intelligence point of view.