Human Computer Interaction

The experts below are selected from a list of 86,373 experts worldwide, ranked by the ideXlab platform.

Philippe Palanque - One of the best experts on this subject based on the ideXlab platform.

  • The Handbook of Formal Methods in Human-Computer Interaction
    Human–Computer Interaction Series, 2017
    Co-Authors: Benjamin Weyers, Judy Bowen, Alan Dix, Philippe Palanque
    Abstract:

    This book provides a comprehensive collection of methods and approaches for using formal methods within Human-Computer Interaction (HCI) research, whose use is a prerequisite for usability and user experience (UX) when engineering interactive systems. World-leading researchers present methods, tools and techniques for designing and developing reliable interactive systems, offer an extensive discussion of the current state of the art through case studies that highlight relevant scenarios and topics in HCI, and outline current trends, gaps in research, and future opportunities and developments in this emerging field. The Handbook of Formal Methods in Human-Computer Interaction is intended for HCI researchers and engineers of interactive systems interested in integrating formal methods into their research or practical work.
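
    As a concrete illustration of the kind of formal model such methods build on, the sketch below describes a hypothetical login dialog as a finite-state machine and exhaustively checks a simple usability property (no reachable state traps the user). This is a minimal Python sketch under assumed states, events, and property; it is not an example taken from the book.

    ```python
    # Minimal sketch (not from the handbook): a hypothetical login dialog modeled as a
    # finite-state machine, plus an exhaustive check that every reachable state can still
    # reach "done", i.e. the user is never trapped. All states and events are assumptions.
    from collections import deque

    # transitions[state][event] -> next state
    transitions = {
        "idle":       {"focus": "editing"},
        "editing":    {"submit": "validating", "cancel": "idle"},
        "validating": {"ok": "done", "error": "editing"},
        "done":       {},
    }

    def reachable(start):
        """Breadth-first search over the transition graph, starting from `start`."""
        seen, queue = {start}, deque([start])
        while queue:
            state = queue.popleft()
            for nxt in transitions[state].values():
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    # Property: from every state reachable from "idle", "done" remains reachable.
    for state in reachable("idle"):
        assert "done" in reachable(state), f"user can get trapped in {state}"
    print("property holds: no reachable state traps the user")
    ```

    The same style of check scales to richer properties (for example, that undo is always available), which is where dedicated formal-methods tools take over from hand-written searches.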

Nicu Sebe - One of the best experts on this subject based on the ideXlab platform.

  • Multimodal Human-Computer Interaction: A Survey
    Computer Vision and Image Understanding, 2007
    Co-Authors: Alejandro Jaimes, Nicu Sebe
    Abstract:

    In this paper, we review the major approaches to multimodal Human-Computer Interaction, giving an overview of the field from a computer vision perspective. In particular, we focus on body, gesture, gaze, and affective interaction (facial expression recognition and emotion in audio). We discuss user and task modeling and multimodal fusion, highlighting challenges, open issues, and emerging applications for multimodal Human-Computer Interaction (MMHCI) research.
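
    One recurring building block in this area is multimodal fusion. The sketch below illustrates decision-level ("late") fusion, where per-modality classifier scores are combined by a weighted sum; the modalities, labels, scores, and weights are invented for illustration and are not taken from the paper.

    ```python
    # Generic illustration of decision-level ("late") multimodal fusion: each modality's
    # classifier emits a probability per affect label, and a weighted sum combines them.
    # All modalities, labels, scores, and weights below are invented for illustration.
    def late_fusion(scores_per_modality, weights):
        """Return (winning label, fused scores) from a weighted sum across modalities."""
        labels = next(iter(scores_per_modality.values())).keys()
        fused = {
            label: sum(weights[m] * scores[label] for m, scores in scores_per_modality.items())
            for label in labels
        }
        return max(fused, key=fused.get), fused

    # Hypothetical per-modality outputs for an affect-recognition task.
    scores = {
        "face":    {"happy": 0.7, "neutral": 0.2, "sad": 0.1},
        "audio":   {"happy": 0.4, "neutral": 0.5, "sad": 0.1},
        "gesture": {"happy": 0.6, "neutral": 0.3, "sad": 0.1},
    }
    weights = {"face": 0.5, "audio": 0.3, "gesture": 0.2}  # e.g. trust the vision channel most

    label, fused = late_fusion(scores, weights)
    print(label, fused)  # -> 'happy' plus the fused per-label scores
    ```

    Feature-level ("early") fusion would instead combine modality features before a single classifier; both strategies appear throughout the MMHCI literature.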

  • Multimodal Human-Computer Interaction: A Survey
    International Conference on Computer Vision, 2005
    Co-Authors: Alejandro Jaimes, Nicu Sebe
    Abstract:

    In this paper, we review the major approaches to multimodal Human-Computer Interaction from a computer vision perspective. In particular, we focus on body, gesture, gaze, and affective interaction (facial expression recognition and emotion in audio). We discuss user and task modeling and multimodal fusion, highlighting challenges, open issues, and emerging applications for multimodal Human-Computer Interaction (MMHCI) research.

Benjamin Weyers - One of the best experts on this subject based on the ideXlab platform.

  • The Handbook of Formal Methods in Human-Computer Interaction
    Human–Computer Interaction Series, 2017
    Co-Authors: Benjamin Weyers, Judy Bowen, Alan Dix, Philippe Palanque
    Abstract:

    This book provides a comprehensive collection of methods and approaches for using formal methods within Human-Computer Interaction (HCI) research, whose use is a prerequisite for usability and user experience (UX) when engineering interactive systems. World-leading researchers present methods, tools and techniques for designing and developing reliable interactive systems, offer an extensive discussion of the current state of the art through case studies that highlight relevant scenarios and topics in HCI, and outline current trends, gaps in research, and future opportunities and developments in this emerging field. The Handbook of Formal Methods in Human-Computer Interaction is intended for HCI researchers and engineers of interactive systems interested in integrating formal methods into their research or practical work.

Dusan Starčević - One of the best experts on this subject based on the ideXlab platform.

  • Modeling multimodal Human-Computer Interaction
    Computer, 2004
    Co-Authors: Zeljko Obrenovic, Dusan Starčević
    Abstract:

    Incorporating the well-known Unified Modeling Language into a generic modeling framework makes research on multimodal Human-Computer Interaction accessible to a wide range of software engineers. Multimodal interaction is part of everyday human discourse: we speak, move, gesture, and shift our gaze in an effective flow of communication. Recent initiatives such as perceptual and attentive user interfaces put these natural human behaviors at the center of Human-Computer Interaction (HCI). We've designed a generic modeling framework for specifying multimodal HCI using the Object Management Group's Unified Modeling Language. Because it's a well-known and widely supported standard - computer science departments typically cover it in undergraduate courses, and many books, training courses, and tools support it - UML makes it easier for software engineers unfamiliar with multimodal research to apply HCI knowledge, resulting in broader and more practical effects. Standardization provides a significant driving force for further progress because it codifies best practices, enables and encourages reuse, and facilitates interworking between complementary tools.
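
    To make the idea concrete, the sketch below gives a rough Python analogue of the kind of class structure a UML model of multimodal HCI could express: an abstract modality, concrete input modalities, and the events they emit toward a dialogue or fusion component. It is an assumed illustration, not the authors' actual framework or notation.

    ```python
    # Rough Python analogue (not the authors' framework) of a class structure that a UML
    # model of multimodal HCI could capture: an abstract modality, concrete input modalities,
    # and the interaction events they emit. All names and fields are illustrative assumptions.
    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass
    class InteractionEvent:
        modality: str      # which modality produced the event, e.g. "speech"
        payload: str       # recognized content, e.g. a spoken command or gesture label
        confidence: float  # recognizer confidence in [0, 1]

    class Modality(ABC):
        """Abstract base class; in UML this would be an abstract class or interface."""
        @abstractmethod
        def capture(self, raw: str) -> InteractionEvent:
            ...

    class SpeechModality(Modality):
        def capture(self, raw: str) -> InteractionEvent:
            return InteractionEvent("speech", raw.lower(), confidence=0.9)

    class GestureModality(Modality):
        def capture(self, raw: str) -> InteractionEvent:
            return InteractionEvent("gesture", raw, confidence=0.8)

    # A dialogue manager would aggregate events from all modalities (a UML association).
    events = [SpeechModality().capture("OPEN FILE"), GestureModality().capture("point-at-icon")]
    print(events)
    ```

    Expressed as a UML class diagram, this kind of structure is what lets software engineers unfamiliar with multimodal research read and extend the model with standard tooling.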

Alejandro Jaimes - One of the best experts on this subject based on the ideXlab platform.

  • Multimodal Human-Computer Interaction: A Survey
    Computer Vision and Image Understanding, 2007
    Co-Authors: Alejandro Jaimes, Nicu Sebe
    Abstract:

    In this paper, we review the major approaches to multimodal Human-Computer Interaction, giving an overview of the field from a computer vision perspective. In particular, we focus on body, gesture, gaze, and affective interaction (facial expression recognition and emotion in audio). We discuss user and task modeling and multimodal fusion, highlighting challenges, open issues, and emerging applications for multimodal Human-Computer Interaction (MMHCI) research.

  • Multimodal Human-Computer Interaction: A Survey
    International Conference on Computer Vision, 2005
    Co-Authors: Alejandro Jaimes, Nicu Sebe
    Abstract:

    In this paper, we review the major approaches to multimodal Human-Computer Interaction from a computer vision perspective. In particular, we focus on body, gesture, gaze, and affective interaction (facial expression recognition and emotion in audio). We discuss user and task modeling and multimodal fusion, highlighting challenges, open issues, and emerging applications for multimodal Human-Computer Interaction (MMHCI) research.