Virtual Humans

The Experts below are selected from a list of 22,626 Experts worldwide, ranked by the ideXlab platform.

Jonathan Gratch - One of the best experts on this subject based on the ideXlab platform.

  • the benefits of Virtual Humans for teaching negotiation
    Intelligent Virtual Agents, 2016
    Co-Authors: Jonathan Gratch, David Devault, Gale M Lucas
    Abstract:

    This article examines the potential for teaching negotiation with Virtual Humans. Many people find negotiations to be aversive. We conjecture that students may be more comfortable practicing negotiation skills with an agent than with another person. We test this using the Conflict Resolution Agent, a semi-automated Virtual human that negotiates with people via natural language. In a between-participants design, we independently manipulated two pedagogically-relevant factors while participants engaged in repeated negotiations with the agent: perceived agency (participants either believed they were negotiating with a computer program or another person) and pedagogical feedback (participants received instructional advice or no advice between negotiations). Findings indicate that novice negotiators were more comfortable negotiating with a computer program (they self-reported more comfort and punished their opponent less often) and expended more effort on the exercise following instructional feedback (both in time spent and in self-reported effort). These findings lend support to the notion of using Virtual Humans to teach interpersonal skills.

  • negotiation as a challenge problem for Virtual Humans
    Intelligent Virtual Agents, 2015
    Co-Authors: Jonathan Gratch, David Devault, Gale M Lucas, Stacy Marsella
    Abstract:

    We argue for the importance of negotiation as a challenge problem for Virtual human research, and introduce a Virtual conversational agent that allows people to practice a wide range of negotiation skills. We describe the multi-issue bargaining task, which has become a de facto standard for teaching and research on negotiation in both the social and computer sciences. This task is popular as it allows scientists or instructors to create a variety of distinct situations that arise in real-life negotiations, simply by manipulating a small number of mathematical parameters. We describe the development of a Virtual human that will allow students to practice the interpersonal skills they need to recognize and navigate these situations. An evaluation of an early wizard-controlled version of the system demonstrates the promise of this technology for teaching negotiation and supporting scientific research on social intelligence.
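
    A minimal sketch of the kind of parameterization this task family uses (illustrative only: the item names, point values, and split below are hypothetical, not taken from the paper). Each side assigns points to each issue, and an offer's value to a side is the weighted sum of the items that side receives; changing the weights makes the same scenario more distributive or more integrative.

      # Hypothetical multi-issue bargaining payoffs (Python sketch).
      WEIGHTS = {
          "buyer":  {"paintings": 10, "lamps": 2,  "records": 5},   # points per item
          "seller": {"paintings": 2,  "lamps": 10, "records": 5},
      }

      def utility(side, allocation):
          """Points `side` earns from the items allocated to it."""
          return sum(WEIGHTS[side][item] * n for item, n in allocation.items())

      # Example offer: buyer takes the paintings, seller takes the lamps, records are split.
      buyer_share  = {"paintings": 3, "lamps": 0, "records": 2}
      seller_share = {"paintings": 0, "lamps": 2, "records": 2}
      print(utility("buyer", buyer_share), utility("seller", seller_share))  # 40 30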

  • beyond believability: quantifying the differences between real and Virtual Humans
    Intelligent Virtual Agents, 2015
    Co-Authors: Celso M De Melo, Jonathan Gratch
    Abstract:

    “Believable” agents are supposed to “suspend the audience’s disbelief” and provide the “illusion of life”. However, beyond such high-level definitions, which are prone to subjective interpretation, there is not much more to help researchers systematically create or assess whether their agents are believable. In this paper we propose a more pragmatic and useful benchmark than believability for designing Virtual agents. This benchmark requires people, in a specific social situation, to act with the Virtual agent in the same manner as they would with a real human. We propose that perceptions of mind in Virtual agents, especially pertaining to agency – the ability to act and plan – and experience – the ability to sense and feel emotion – are critical for achieving this new benchmark. We also review current computational systems that fail, pass, and even surpass this benchmark and show how a theoretical framework based on perceptions of mind can shed light into these systems. We also discuss a few important cases where it is better if Virtual Humans do not pass the benchmark. We discuss implications for the design of Virtual agents that can be as natural and efficient to interact with as real Humans.

  • it's in their eyes: a study on female and male Virtual Humans' gaze
    Intelligent Virtual Agents, 2011
    Co-Authors: Philipp Kulms, Jonathan Gratch, Nicole C Kramer, Sinhwa Kang
    Abstract:

    Social psychological research demonstrates that the same behavior might lead to different evaluations depending on whether it is shown by a man or a woman. With a view to design decisions with regard to Virtual Humans it is relevant to test whether this pattern also applies to gendered Virtual Humans. In a 2×2 between subjects experiment we manipulated the Rapport Agent's gaze behavior and its gender in order to test whether especially female agents are evaluated more negatively when they do not show gender specific immediacy behavior and avoid gazing at the interaction partner. Instead of this interaction effect we found two main effects: gaze avoidance was evaluated negatively and female agents were rated more positively than male agents.

  • Virtual Humans elicit socially anxious interactants' verbal self-disclosure
    Computer Animation and Virtual Worlds, 2010
    Co-Authors: Sinhwa Kang, Jonathan Gratch
    Abstract:

    We explored the relationship between interactants' social anxiety and the interactional fidelity of Virtual Humans. We specifically addressed whether the contingent non-verbal feedback of Virtual Humans affects the association between interactants' social anxiety and their verbal self-disclosure. This subject was investigated across three experimental conditions where participants interacted with real human videos and Virtual Humans in computer-mediated interview interactions. The results demonstrated that socially anxious people revealed more information and greater intimate information about themselves when interacting with a Virtual human when compared with real human video interaction, whereas less socially anxious people did not show this difference. We discuss the implication of this association between the interactional fidelity of Virtual Humans and social anxiety in a human interactant on the design of an embodied Virtual agent for social skills' training and psychotherapy.

Daniel Thalmann - One of the best experts on this subject based on the ideXlab platform.

  • an ontology of Virtual Humans: incorporating semantics into human shapes
    The Visual Computer, 2007
    Co-Authors: Mario A Gutierrez, Daniel Thalmann, Alejandra Garciarojas, Frederic Vexo, Laurent Moccozet, Nadia Magnenatthalmann, Michela Mortara, Michela Spagnuolo
    Abstract:

    Most of the efforts concerning graphical representations of Humans (Virtual Humans) have been focused on synthesizing geometry for static or animated shapes. The next step is to consider a human body not only as a 3D shape, but as an active semantic entity with features, functionalities, interaction skills, etc. We are currently working on an ontology-based approach to make Virtual Humans more active and understandable both for Humans and machines. The ontology for Virtual Humans we are defining will provide the “semantic layer” required to reconstruct, stock, retrieve, reuse and share content and knowledge related to Virtual Humans.
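
    As a rough illustration of what such a "semantic layer" could carry (the class and field names below are hypothetical and are not the ontology the paper defines), the geometric asset can be paired with machine-readable descriptors of its features and capabilities, which a repository could then query without inspecting geometry.

      # Hypothetical descriptor attached to a virtual human asset (Python sketch).
      from dataclasses import dataclass, field

      @dataclass
      class VirtualHumanDescriptor:
          mesh_uri: str                                        # reference to the 3D shape
          skeleton: str = "H-Anim"                             # assumed rigging standard
          anthropometry: dict = field(default_factory=dict)    # e.g. {"height_m": 1.68}
          capabilities: list = field(default_factory=list)     # e.g. ["walk", "grasp", "speak"]

      avatar = VirtualHumanDescriptor(
          mesh_uri="file://assets/adult_female_01.obj",        # hypothetical path
          anthropometry={"height_m": 1.68},
          capabilities=["walk", "grasp", "speak"],
      )
      # A retrieval service could now answer "which models can grasp?" from metadata alone.
      print("grasp" in avatar.capabilities)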

  • a motivational model of action selection for Virtual Humans
    Computer Graphics International, 2005
    Co-Authors: E De Sevin, Daniel Thalmann
    Abstract:

    Nowadays Virtual Humans such as non-player characters in computer games need to have a real autonomy in order to live their own life in persistent Virtual worlds. When designing autonomous Virtual Humans, the action selection problem needs to be considered, as it is responsible for decision making at each moment in time. Action selection architectures for autonomous Virtual Humans should be individual, motivational, reactive and proactive to obtain a high degree of autonomy. This paper describes in detail our motivational model of action selection for autonomous Virtual Humans in which overlapping hierarchical classifier systems, working in parallel to generate coherent behavioral plans, are associated with the functionalities of a free flow hierarchy to give reactivity to the hierarchical system. Finally, results of our model in a complex simulated environment, with conflicting motivations, demonstrate that the model is sufficiently robust and flexible for designing motivational autonomous Virtual Humans in real-time.
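
    A minimal sketch of free-flow action selection under conflicting motivations follows (illustrative only; it omits the paper's hierarchical classifier systems, and the motivation names and weights are invented). Activation from every motivation flows down to the leaf actions, and a winner is chosen only at the leaf level rather than at intermediate nodes.

      # Free-flow action selection, toy version (Python sketch).
      MOTIVATIONS = {"hunger": 0.8, "fatigue": 0.3}

      # Each leaf action lists how strongly each motivation contributes to it.
      ACTIONS = {
          "go_to_kitchen": {"hunger": 0.6},
          "eat":           {"hunger": 1.0},
          "sit_on_sofa":   {"fatigue": 0.7},
          "sleep":         {"fatigue": 1.0},
      }

      def select_action(motivations):
          # No winner-take-all at intermediate levels: activation is accumulated
          # from all motivations and compared only across the leaves.
          scores = {
              action: sum(motivations.get(m, 0.0) * w for m, w in contrib.items())
              for action, contrib in ACTIONS.items()
          }
          return max(scores, key=scores.get)

      print(select_action(MOTIVATIONS))  # -> "eat" with the values above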

  • handbook of Virtual Humans
    2004
    Co-Authors: Nadia Magnenatthalmann, Daniel Thalmann
    Abstract:

    Preface. List of Contributors. List of Figures. List of Tables.
    1. An Overview of Virtual Humans (Nadia Magnenat Thalmann and Daniel Thalmann)
    2. Face Cloning and Face Motion Capture (Wonsook Lee, Taro Goto, Sumedha Kshirsagar, Tom Molet)
    3. Body Cloning and Body Motion Capture (Pascal Fua, Ralf Plaenkers, WonSook Lee, Tom Molet)
    4. Anthropometric Body Modeling (Hyewon Seo)
    5. Body Motion Control (Ronan Boulic, Paolo Baerlocher)
    6. Facial Deformation Models (Prem Kalra, Stephane Garchery, Sumedha Kshirsagar)
    7. Body Deformations (Amaury Aubel)
    8. Hair Simulation (Sunil Hadap)
    9. Cloth Simulation (Pascal Volino, Frederic Cordier)
    10. Expressive Speech Animation and Facial Communication (Sumedha Kshirsagar, Arjan Egges, Stephane Garchery)
    11. Behavioral Animation (Jean-Sebastien Monzani, Anthony Guye-Vuilleme, Etienne de Sevin)
    12. Body Gesture Recognition and Action Response (Luc Emering, Bruno Herbelin)
    13. Interaction with 3-D Objects (Marcello Kallmann)
    14. Groups and Crowd Simulation (Soraia Raupp Musse, Branislav Ulicny, Amaury Aubel)
    15. Rendering of Skin and Clothes (Neeharika Adabala)
    16. Standards for Virtual Humans (Stephane Garchery, Ronan Boulic, Tolga Capin, Prem Kalra)
    Appendix A: Damped Least Square Pseudo-Inverse J+A. Appendix B: H-Anim Joint and Segment Topology. Appendix C: Facial Animation Parameter Set.
    References. Index.

  • the role of Virtual Humans in Virtual environment technology and interfaces
    Frontiers of human-centred computing online communities and virtual environments, 2001
    Co-Authors: Daniel Thalmann
    Abstract:

    The purpose of this chapter is to show the importance of Virtual Humans in Virtual Reality and to identify the main problems to solve to create believable Virtual Humans.

  • real-time display of Virtual Humans: levels of details and impostors
    IEEE Transactions on Circuits and Systems for Video Technology, 2000
    Co-Authors: Amaury Aubel, Ronan Boulic, Daniel Thalmann
    Abstract:

    Rendering and animating in real-time a multitude of articulated characters presents a real challenge, and few hardware systems are up to the task. Up to now, little research has been conducted to tackle the issue of real-time rendering of numerous Virtual Humans. This paper presents a hardware-independent technique that improves the display rate of animated characters by acting on the sole geometric and rendering information. We first review the acceleration techniques traditionally in use in computer graphics and highlight their suitability to articulated characters. We then show how impostors can be used to render Virtual Humans. We introduce concrete case studies that demonstrate the effectiveness of our approach. Finally, we tackle the visibility issue.
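
    As a hedged sketch of the general idea (not the paper's algorithm; the thresholds are arbitrary), a renderer can pick a geometric level of detail from the character's distance to the camera, fall back to an impostor (a cached textured quad) for distant characters, and refresh the cached impostor only when the viewpoint has drifted enough.

      # Distance-based LOD selection and impostor refresh test (Python sketch).
      import math

      LOD_THRESHOLDS = [(10.0, "high_poly"), (30.0, "medium_poly"), (60.0, "low_poly")]
      IMPOSTOR_ANGLE_TOLERANCE = math.radians(15)

      def choose_representation(distance):
          for max_dist, lod in LOD_THRESHOLDS:
              if distance <= max_dist:
                  return lod
          return "impostor"

      def impostor_needs_refresh(cached_view_angle, current_view_angle):
          # Re-render the cached billboard only when the camera-to-character
          # angle has changed beyond the tolerance; otherwise reuse the texture.
          return abs(current_view_angle - cached_view_angle) > IMPOSTOR_ANGLE_TOLERANCE

      print(choose_representation(75.0))                    # -> "impostor"
      print(impostor_needs_refresh(0.0, math.radians(20)))  # -> True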

Stacy Marsella - One of the best experts on this subject based on the ideXlab platform.

  • negotiation as a challenge problem for Virtual Humans
    Intelligent Virtual Agents, 2015
    Co-Authors: Jonathan Gratch, David Devault, Gale M Lucas, Stacy Marsella
    Abstract:

    We argue for the importance of negotiation as a challenge problem for Virtual human research, and introduce a Virtual conversational agent that allows people to practice a wide range of negotiation skills. We describe the multi-issue bargaining task, which has become a de facto standard for teaching and research on negotiation in both the social and computer sciences. This task is popular as it allows scientists or instructors to create a variety of distinct situations that arise in real-life negotiations, simply by manipulating a small number of mathematical parameters. We describe the development of a Virtual human that will allow students to practice the interpersonal skills they need to recognize and navigate these situations. An evaluation of an early wizard-controlled version of the system demonstrates the promise of this technology for teaching negotiation and supporting scientific research on social intelligence.

  • real time expressive gaze animation for Virtual Humans
    Adaptive Agents and Multi-Agents Systems, 2009
    Co-Authors: Marcus Thiebaux, Brent J Lance, Stacy Marsella
    Abstract:

    Gaze is an extremely important aspect of human face-to-face interaction. Over the course of an interaction, a single individual's gaze can perform many different functions, such as regulating communication, expressing emotion, and attending to task performance. When gaze shifts occur, where they are directed, and how they are performed all provide critical information to an observer of the gaze shift. The goal of this work is to allow Virtual Humans to mimic the gaze capabilities of Humans in face-to-face interaction. This paper introduces the SmartBody Gaze Controller (SBGC), a highly versatile framework for realizing various manners of gaze through a rich set of input parameters. Using these parameters, the SBGC controls aspects of movement such as velocity, postural bias, and the selection of joints committed to a particular gaze task. We provide a preliminary implementation that demonstrates how related work on the Expressive Gaze Model (EGM) can be used to inform management of these input parameters. The EGM is a model for manipulating the style of gaze shifts for the purpose of expressing emotion [11]. The SBGC is fully compatible with all aspects of the SmartBody system [23].
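
    To make the parameter-driven idea concrete, here is a hypothetical parameterization in the spirit of the abstract (the field names are invented and are not the SmartBody Gaze Controller API): a gaze request specifies the target, the joints committed to the shift, a peak velocity, and a postural bias held after the shift.

      # Hypothetical gaze-shift request and a toy joint split (Python sketch).
      from dataclasses import dataclass

      @dataclass
      class GazeRequest:
          target: tuple                               # world-space point (x, y, z)
          joints: tuple = ("eyes", "neck", "chest")   # joints committed to the shift
          speed_deg_per_s: float = 180.0              # peak angular velocity
          postural_bias_deg: float = 0.0              # offset held after the shift

      def split_rotation(total_angle_deg, joints):
          """Toy distribution of a gaze rotation across the committed joints."""
          share = total_angle_deg / len(joints)
          return {joint: share for joint in joints}

      request = GazeRequest(target=(1.0, 1.6, 2.0), speed_deg_per_s=120.0)
      print(split_rotation(45.0, request.joints))  # {'eyes': 15.0, 'neck': 15.0, 'chest': 15.0}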

  • teaching negotiation skills through practice and reflection with Virtual Humans
    International Conference on Advances in System Simulation, 2006
    Co-Authors: Mark G Core, William Swartout, Jonathan Gratch, David Traum, Chad H Lane, Michael Van Lent, Stacy Marsella
    Abstract:

    Although the representation of physical environments and behaviors will continue to play an important role in simulation-based training, an emerging challenge is the representation of Virtual Humans with rich mental models (e.g., including emotions, trust) that interact through conversational as well as physical behaviors. The motivation for such simulations is training soft skills such as leadership, cultural awareness, and negotiation, where the majority of actions are conversational, and the problem solving involves consideration of the emotions, attitudes, and desires of others. The educational power of such simulations can be enhanced by the integration of an intelligent tutoring system to support learners' understanding of the effect of their actions on Virtual Humans and how they might improve their performance. In this paper, we discuss our efforts to build such Virtual Humans, along with an accompanying intelligent tutor, for the domain of negotiation and cultural awareness.

  • toward Virtual Humans
    Ai Magazine, 2006
    Co-Authors: William Swartout, Jonathan Gratch, Jeff Rickel, Stacy Marsella, Randall W Hill, Eduard Hovy, David Traum
    Abstract:

    This article describes the Virtual Humans developed as part of the Mission Rehearsal Exercise project, a Virtual reality-based training system. This project is an ambitious exercise in integration, both in the sense of integrating technology with entertainment industry content, but also in that we have joined a number of component technologies that have not been integrated before. This integration has not only raised new research issues, but it has also suggested some new approaches to difficult problems. We describe the key capabilities of the Virtual Humans, including task representation and reasoning, natural language dialogue, and emotion reasoning, and show how these capabilities are integrated to provide more human-level intelligence than would otherwise be possible.

  • hierarchical motion controllers for real time autonomous Virtual Humans
    Intelligent Virtual Agents, 2005
    Co-Authors: Marcelo Kallmann, Stacy Marsella
    Abstract:

    Continuous and synchronized whole-body motions are essential for achieving believable autonomous Virtual Humans in interactive applications. We present a new motion control architecture based on generic controllers that can be hierarchically interconnected and reused in real-time. The hierarchical organization implies that leaf controllers are motion generators while the other nodes are connectors, performing operations such as interpolation, blending, and precise scheduling of child controllers. We also describe how the system can correctly handle the synchronization of gestures with speech in order to achieve believable conversational characters. For that purpose, different types of controllers implement a generic model of the different phases of a gesture.
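
    A small sketch of the general pattern follows (illustrative only, not the paper's controller set): leaf controllers generate per-joint values, while connector nodes combine their children, here by simple linear blending; schedulers and gesture-phase controllers would be further node types.

      # Hierarchical motion controllers, toy version (Python sketch).
      class MotionController:
          def evaluate(self, t):
              raise NotImplementedError

      class KeyframeClip(MotionController):            # leaf: a motion generator
          def __init__(self, pose):
              self.pose = pose                         # {joint: angle in degrees}
          def evaluate(self, t):
              return dict(self.pose)

      class Blend(MotionController):                   # connector: mixes two children
          def __init__(self, a, b, weight):
              self.a, self.b, self.weight = a, b, weight
          def evaluate(self, t):
              pa, pb = self.a.evaluate(t), self.b.evaluate(t)
              return {j: (1 - self.weight) * pa[j] + self.weight * pb[j] for j in pa}

      idle = KeyframeClip({"elbow": 10.0, "shoulder": 0.0})
      gesture = KeyframeClip({"elbow": 80.0, "shoulder": 30.0})
      root = Blend(idle, gesture, weight=0.25)
      print(root.evaluate(0.0))  # {'elbow': 27.5, 'shoulder': 7.5}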

Norman I. Badler - One of the best experts on this subject based on the ideXlab platform.

  • navigation and steering for autonomous Virtual Humans
    Wiley Interdisciplinary Reviews: Cognitive Science, 2013
    Co-Authors: Mubbasir Kapadia, Norman I. Badler
    Abstract:

    The ever-increasing applicability of interactive Virtual worlds in industry and academia has given rise to the need for robust, versatile autonomous Virtual Humans to inject life into these environments. There are two fundamental problems that must be addressed to produce functional, purposeful autonomous populaces: (1) Navigation: finding a collision-free global path from an agent's start position to its target in large complex environments, and (2) Steering: moving an agent along the path while avoiding static and dynamic threats such as other agents. In this review, we survey the large body of contributions in steering and navigation for autonomous agents in dynamic Virtual worlds. We describe the benefits and limitations of different proposed solutions and identify potential future research directions to meet the needs for the next generation of interactive Virtual world applications. WIREs Cogn Sci 2013, 4:263–272. doi: 10.1002/wcs.1223
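
    The two sub-problems can be illustrated with a toy steering step (a sketch under invented constants, not any particular algorithm surveyed in the review): a global planner supplies waypoints, and each frame the agent blends "seek the next waypoint" with "separate from nearby agents".

      # One steering update toward the current waypoint with agent avoidance (Python sketch).
      def steer(position, waypoint, neighbors, max_step=0.1, separation_radius=1.0):
          # Seek: unit vector toward the current waypoint.
          dx, dy = waypoint[0] - position[0], waypoint[1] - position[1]
          dist = (dx * dx + dy * dy) ** 0.5 or 1.0
          vx, vy = dx / dist, dy / dist
          # Separation: push away from each neighbor closer than the radius.
          for nx, ny in neighbors:
              ox, oy = position[0] - nx, position[1] - ny
              d = (ox * ox + oy * oy) ** 0.5
              if 0 < d < separation_radius:
                  vx += ox / d
                  vy += oy / d
          norm = (vx * vx + vy * vy) ** 0.5 or 1.0
          return (position[0] + max_step * vx / norm, position[1] + max_step * vy / norm)

      # The agent heads toward (5, 0) while veering away from a nearby neighbor.
      print(steer((0.0, 0.0), (5.0, 0.0), neighbors=[(0.5, 0.1)]))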

  • what's next? the new era of autonomous Virtual Humans
    Motion in Games, 2012
    Co-Authors: Mubbasir Kapadia, Alexander Shoulson, Cory D Boatright, Pengfei Huang, Funda Durupinar, Norman I. Badler
    Abstract:

    This paper identifies several key limitations in the representation, control, locomotion, and authoring of autonomous Virtual Humans that must be addressed to enter the new age of interactive Virtual world applications. These limitations include simplified particle representations of agents that decouple control and locomotion, the lack of multi-modal perception in Virtual environments, the need for multiple levels of control granularity, homogeneity in character animation, and monolithic agent architectures which cannot scale to complex multi-agent interactions and global narrative constraints. We present this broad perspective with the objective of providing the stimulus for an exciting new era of Virtual human research.

  • Virtual Humans for validating maintenance procedures
    Communications of The ACM, 2002
    Co-Authors: Norman I. Badler, Charles A. Erignac
    Abstract:

    They can be sent to check the human aspects of complex physical systems by simulating assembly, repair, and maintenance tasks in a 3D Virtual environment.

  • Creating interactive Virtual Humans: some assembly required
    IEEE Intelligent Systems, 2002
    Co-Authors: Jonathan Gratch, Jeff Rickel, Justine Cassell, Elisabeth André, E. Petajan, Norman I. Badler
    Abstract:

    Discusses some of the key issues that must be addressed in creating Virtual Humans, or androids. As a first step, we overview the issues and available tools in three key areas of Virtual human research: face-to-face conversation, emotions and personality, and human figure animation. Assembling a Virtual human is still a daunting task, but the building blocks are getting bigger and better every day.

  • animation control for real time Virtual Humans
    Communications of The ACM, 1999
    Co-Authors: Norman I. Badler, Martha Palmer, Rama Bindiganavale
    Abstract:

    The computation speed and control methods needed to portray 3D Virtual Humans suitable for interactive applications have improved dramatically in recent years. Real-time Virtual Humans show increasingly complex features along the dimensions of appearance, function, time, autonomy, and individuality. The Virtual human architecture we’ve been developing at the University of Pennsylvania is representative of an emerging generation of such architectures and includes low-level motor skills, a mid-level parallel automata controller, and a high-level conceptual representation for driving Virtual Humans through complex tasks. The architecture—called Jack— provides a level of abstraction generic enough to encompass natural-language instruction representation as well as direct links from those instructions to animation control.
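
    A hypothetical sketch of such a layered control loop (invented names, not the Jack code): a high-level task expands into mid-level behaviors that may run in parallel, and each behavior drives a low-level motor skill every frame.

      # Three-layer task/behavior/motor-skill loop, toy version (Python sketch).
      def motor_walk(dt):
          print(f"walk step ({dt:.3f}s)")

      def motor_reach(dt):
          print(f"reach toward target ({dt:.3f}s)")

      TASKS = {
          # high-level instruction -> behaviors that may run in parallel
          "fetch_tool": ["locomote_to_bench", "reach_for_tool"],
      }
      BEHAVIORS = {
          # mid-level behavior -> low-level motor skill it invokes
          "locomote_to_bench": motor_walk,
          "reach_for_tool": motor_reach,
      }

      def run_task(task, frames=2, dt=1 / 30):
          for _ in range(frames):            # simplistic fixed-step loop
              for behavior in TASKS[task]:   # behaviors advance "in parallel"
                  BEHAVIORS[behavior](dt)

      run_task("fetch_tool")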

William Swartout - One of the best experts on this subject based on the ideXlab platform.

  • Virtual Humans for learning
    Ai Magazine, 2013
    Co-Authors: William Swartout, Ron Artstein, Eric Forbell, Susan Foutz, Chad H Lane, Belinda Lange, Jacquelyn Ford Morie, Albert Rizzo, David Traum
    Abstract:

    Virtual Humans are computer-generated characters designed to look and behave like real people. Studies have shown that Virtual Humans can mimic many of the social effects that one finds in human-human interactions such as creating rapport, and people respond to Virtual Humans in ways that are similar to how they respond to real people. We believe that Virtual Humans represent a new metaphor for interacting with computers, one in which working with a computer becomes much like interacting with a person, and this can bring social elements to the interaction that are not easily supported with conventional interfaces. We present two systems that embody these ideas. The first, the Twins, are Virtual docents in the Museum of Science, Boston, designed to engage visitors and raise their awareness and knowledge of science. The second, SimCoach, uses an empathetic Virtual human to provide veterans and their families with information about PTSD and depression.

  • lessons learned from Virtual Humans
    Ai Magazine, 2010
    Co-Authors: William Swartout
    Abstract:

    Over the past decade, we have been engaged in an extensive research effort to build Virtual Humans and applications that use them. Building a Virtual human might be considered the quintessential AI problem, because it brings together many of the key features, such as autonomy, natural communication, sophisticated reasoning and behavior, that distinguish AI systems. This paper describes major Virtual human systems we have built and important lessons we have learned along the way.

  • Building Interactive Virtual Humans for Training Environments
    Interservice/Industry Training, Simulation and Education Conference (I/ITSEC), 2007
    Co-Authors: Sander Bakkes, Diane Piepol, Jonathan Gratch, Diederik Roijers, Chek Tien Tan, Stacy Marsella, William Swartout, Roberto Valenti, Patrick Kenny, Arno Hartholt, David Traum
    Abstract:

    There is a great need in the Joint Forces to have human to human interpersonal training for skills such as negotiation, leadership, interviewing and cultural training. Virtual environments can be incredible training tools if used properly and used for the correct training application. Virtual environments have already been very successful in training Warfighters how to operate vehicles and weapons systems. At the Institute for Creative Technologies (ICT) we have been exploring a new question: can Virtual environments be used to train Warfighters in interpersonal skills such as negotiation, tactical questioning and leadership that are so critical for success in the contemporary operating environment? Using embodied conversational agents to create this type of training system has been one of the goals of the Virtual Humans project at the institute. ICT has a great deal of experience building complex, integrated and immersive training systems that address the human factor needs for training experiences. This paper will address the research, technology and value of developing Virtual Humans for training environments. This research includes speech recognition, natural language understanding & generation, dialogue management, cognitive agents, emotion modeling, question response managers, speech generation and non-verbal behavior. Also addressed will be the diverse set of training environments we have developed for the system, from single computer laptops to multi-computer immersive displays to real and Virtual integrated environments. This paper will also discuss the problems, issues and solutions we encountered while building these systems. The paper will recount subject testing we have performed in these environments and results we have obtained from users. Finally the future of this type of Virtual Humans technology and training applications will be discussed.

  • teaching negotiation skills through practice and reflection with Virtual Humans
    International Conference on Advances in System Simulation, 2006
    Co-Authors: Mark G Core, William Swartout, Jonathan Gratch, David Traum, Chad H Lane, Michael Van Lent, Stacy Marsella
    Abstract:

    Although the representation of physical environments and behaviors will continue to play an important role in simulation-based training, an emerging challenge is the representation of Virtual Humans with rich mental models (e.g., including emotions, trust) that interact through conversational as well as physical behaviors. The motivation for such simulations is training soft skills such as leadership, cultural awareness, and negotiation, where the majority of actions are conversational, and the problem solving involves consideration of the emotions, attitudes, and desires of others. The educational power of such simulations can be enhanced by the integration of an intelligent tutoring system to support learners' understanding of the effect of their actions on Virtual Humans and how they might improve their performance. In this paper, we discuss our efforts to build such Virtual Humans, along with an accompanying intelligent tutor, for the domain of negotiation and cultural awareness.

  • toward Virtual Humans
    Ai Magazine, 2006
    Co-Authors: William Swartout, Jonathan Gratch, Jeff Rickel, Stacy Marsella, Randall W Hill, Eduard Hovy, David Traum
    Abstract:

    This article describes the Virtual Humans developed as part of the Mission Rehearsal Exercise project, a Virtual reality-based training system. This project is an ambitious exercise in integration, both in the sense of integrating technology with entertainment industry content, but also in that we have joined a number of component technologies that have not been integrated before. This integration has not only raised new research issues, but it has also suggested some new approaches to difficult problems. We describe the key capabilities of the Virtual Humans, including task representation and reasoning, natural language dialogue, and emotion reasoning, and show how these capabilities are integrated to provide more human-level intelligence than would otherwise be possible.