The experts below are selected from a list of 27,423 experts worldwide, ranked by the ideXlab platform.
Mohamed Chetouani - One of the best experts on this subject based on the ideXlab platform.
-
Proceedings of the International Workshop on Social Learning and Multimodal Interaction for Designing Artificial Agents
2016
Co-Authors: Mohamed Chetouani, Salvatore Maria Anzalone, Giovanna Varni, Isabelle Hupont Torres, Ginevra Castellano, Angelica Lim, Gentiane Venture
Abstract: The "social learning and Multimodal Interaction for designing artificial agents" workshop aims at presenting scientific and philosophical advances related to social learning and Multimodal Interaction for enhancing the design of artificial agents. Papers presented in the workshop include studies on human behavior modeling, social robotics, and virtual agents. Our two invited speakers, Prof. Catherine Pelachaud and Prof. Louis-Philippe Morency, will enrich and open the door to further discussion by bringing their widely acknowledged expertise in the field.
-
ICMI - ASSP4MI2016: 2nd international workshop on advancements in social signal processing for Multimodal Interaction (workshop summary)
Proceedings of the 18th ACM International Conference on Multimodal Interaction, 2016
Co-Authors: Khiet P. Truong, Dirk Heylen, Toyoaki Nishida, Mohamed Chetouani
Abstract: This paper gives a summary of the 2nd International Workshop on Advancements in Social Signal Processing for Multimodal Interaction (ASSP4MI). Following our successful 1st International Workshop on Advancements in Social Signal Processing for Multimodal Interaction, held during ICMI-2015, we proposed the 2nd ASSP4MI workshop during ICMI-2016. The topics addressed and the discussions fostered during last year's workshop remain highly relevant in the research community. In this year's workshop, we continued addressing important topics and fostering fruitful discussions among researchers from different disciplines working in the fields of Social Signal Processing (SSP) and Multimodal Interaction.
Beat Signer - One of the best experts on this subject based on the ideXlab platform.
-
Mudra: A Unified Multimodal Interaction Framework
Proceedings of the 13th International Conference on Multimodal Interfaces (ICMI '11), 2011
Co-Authors: Lode Hoste, Bruno Dumas, Beat Signer
Abstract: In recent years, Multimodal interfaces have gained momentum as an alternative to traditional WIMP Interaction styles. Existing Multimodal fusion engines and frameworks range from low-level data stream-oriented approaches to high-level semantic inference-based solutions. However, there is a lack of Multimodal Interaction engines offering native fusion support across different levels of abstraction to fully exploit the power of Multimodal Interactions. We present Mudra, a unified Multimodal Interaction framework supporting the integrated processing of low-level data streams as well as high-level semantic inferences. Our solution is based on a central fact base in combination with a declarative rule-based language to derive new facts at different abstraction levels. Our architecture for Multimodal Interaction encourages the use of software engineering principles such as modularisation and composition to support a growing set of input modalities as well as the integration of existing or novel Multimodal fusion engines.
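To make the fact-base idea concrete, here is a minimal Python sketch of forward chaining over a central fact base, where a declarative rule derives a higher-level fact (a fused command) from low-level input events. All class and rule names here are illustrative assumptions; Mudra's actual rule language and API are not given in the abstract and will differ.

```python
# Minimal sketch of a fact base with a forward-chaining rule that derives
# a higher-level fact from low-level input events, in the spirit of Mudra.
# All names are illustrative; Mudra itself uses a declarative rule language,
# not this ad-hoc Python API.

from dataclasses import dataclass


@dataclass(frozen=True)
class Fact:
    kind: str          # e.g. "speech", "gesture", "command"
    attrs: tuple       # immutable (key, value) pairs
    t: float = 0.0     # timestamp in seconds


class FactBase:
    def __init__(self):
        self.facts = set()
        self.rules = []            # list of (condition, action) callables

    def rule(self, condition):
        """Register a rule: condition(facts) -> matches; action(match) -> new Facts."""
        def register(action):
            self.rules.append((condition, action))
            return action
        return register

    def assert_fact(self, fact):
        """Add a fact and forward-chain until no rule derives anything new."""
        frontier = {fact}
        while frontier:
            self.facts |= frontier
            frontier = set()
            for condition, action in self.rules:
                for match in condition(self.facts):
                    for new in action(match):
                        if new not in self.facts:
                            frontier.add(new)


fb = FactBase()

def speech_plus_point(facts):
    # Match a spoken "delete" occurring within 1 s of a pointing gesture.
    speech = [f for f in facts if f.kind == "speech" and ("word", "delete") in f.attrs]
    points = [f for f in facts if f.kind == "gesture" and ("shape", "point") in f.attrs]
    return [(s, p) for s in speech for p in points if abs(s.t - p.t) < 1.0]

@fb.rule(speech_plus_point)
def fuse_delete(match):
    s, p = match
    target = dict(p.attrs).get("target")
    # Derive a higher-level command fact from the two low-level facts.
    yield Fact("command", (("action", "delete"), ("target", target)), t=max(s.t, p.t))


fb.assert_fact(Fact("gesture", (("shape", "point"), ("target", "file42")), t=0.2))
fb.assert_fact(Fact("speech", (("word", "delete"),), t=0.6))
print([f for f in fb.facts if f.kind == "command"])
```

Because both the raw events and the derived command live in the same fact base, further rules could chain off the command fact in turn, which is the cross-abstraction-level processing the abstract describes.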
Gentiane Venture - One of the best experts on this subject based on the ideXlab platform.
-
Proceedings of the International Workshop on Social Learning and Multimodal Interaction for Designing Artificial Agents
2016
Co-Authors: Mohamed Chetouani, Salvatore Maria Anzalone, Giovanna Varni, Isabelle Hupont Torres, Ginevra Castellano, Angelica Lim, Gentiane Venture
Abstract: The "social learning and Multimodal Interaction for designing artificial agents" workshop aims at presenting scientific and philosophical advances related to social learning and Multimodal Interaction for enhancing the design of artificial agents. Papers presented in the workshop include studies on human behavior modeling, social robotics, and virtual agents. Our two invited speakers, Prof. Catherine Pelachaud and Prof. Louis-Philippe Morency, will enrich and open the door to further discussion by bringing their widely acknowledged expertise in the field.
Michael Johnston - One of the best experts on this subject based on the ideXlab platform.
-
IUI - Multimodal Interaction patterns in mobile local search
Proceedings of the 2012 ACM International Conference on Intelligent User Interfaces (IUI '12), 2012
Co-Authors: Patrick Ehlen, Michael Johnston
Abstract: Speak4it™ is a mobile search application that leverages Multimodal input and integration to allow users to search for and act on local business information. We present an initial empirical analysis of user Interaction with a Multimodal local search application deployed in the field with real users. Specifically, we focus on queries involving Multimodal commands, and analyze Multimodal Interaction behaviors seen in a deployed Multimodal system.
-
Speech and Multimodal Interaction in mobile search
IEEE Signal Processing Magazine, 2011
Co-Authors: Junlan Feng, Michael Johnston, Srinivas Bangalore
Abstract: With the widespread adoption of high-speed wireless networks, complemented by the burgeoning demand for smart mobile devices, access to the Internet is evolving from personal computers (PCs) to mobile devices. In this article, we highlight the characteristics of mobile search, discuss the state of speech-based mobile search, and present opportunities for exploiting Multimodal Interaction to optimize the efficiency of mobile search.
-
SLT - Speak4IT: Multimodal Interaction in the wild
2010 IEEE Spoken Language Technology Workshop, 2010
Co-Authors: Michael Johnston, Patrick Ehlen
Abstract: Speak4it℠ is a consumer-oriented mobile search application that leverages Multimodal input and output to allow users to search for and act on local business information. In addition to specifying queries by voice (e.g., "bike repair shops near the Golden Gate Bridge"), users can combine speech and gesture. For example, "gas stations" + <route drawn on display> will return the gas stations along the route traced on the display. We provide interactive demonstrations of Speak4it on both the iPhone and iPad platforms and explain the underlying Multimodal architecture and the challenges of supporting true Multimodal Interaction in a deployed mobile service.
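As an illustration of how a spoken category and a drawn route can be combined, the following Python sketch filters a hypothetical local-business index to results lying within a corridor around the traced route. The data, function names, and corridor heuristic are assumptions made for illustration, not Speak4it's actual architecture.

```python
# Illustrative sketch (not Speak4it's actual pipeline): combine a spoken
# query ("gas stations") with a route traced on the display by keeping only
# the results that lie within a corridor around the drawn route.

import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def near_route(place, route, corridor_km=1.0):
    """True if the place is within corridor_km of any point of the route."""
    return any(haversine_km(place["loc"], p) <= corridor_km for p in route)

# Hypothetical recognizer outputs: a category from speech, a polyline from touch.
spoken_category = "gas station"
drawn_route = [(37.80, -122.47), (37.79, -122.44), (37.78, -122.41)]

# Hypothetical local-business index.
businesses = [
    {"name": "Bridge Fuel", "category": "gas station", "loc": (37.795, -122.445)},
    {"name": "Downtown Gas", "category": "gas station", "loc": (37.60, -122.38)},
    {"name": "Bay Bikes", "category": "bike repair", "loc": (37.79, -122.44)},
]

results = [b for b in businesses
           if b["category"] == spoken_category and near_route(b, drawn_route)]
print([b["name"] for b in results])   # -> ['Bridge Fuel']
```

A production system would match against the route's line segments rather than its sample points, but the division of labor is the same: speech supplies the "what", the gesture supplies the "where".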
-
QuickSet: Multimodal Interaction for distributed applications
ACM Multimedia, 1997
Co-Authors: Philip R. Cohen, Michael Johnston, Sharon Oviatt, David McGee, Jay Pittman, Ira Smith, Liang Chen, Josh Clow
Abstract: QuickSet: Multimodal Interaction for Distributed Applications. Philip R. Cohen, Michael Johnston, David McGee, Sharon Oviatt, Jay Pittman, Ira Smith, Liang Chen, and Josh Clow. Center for Human-Computer Communication, Oregon Graduate Institute of Science and Technology.
Philip R. Cohen - One of the best experts on this subject based on the ideXlab platform.
-
Multimodal Interaction with Computers
Encyclopedia of Language & Linguistics, 2006
Co-Authors: Philip R. Cohen, Sharon Oviatt
Abstract: Multimodal Interaction involves a person's use of more than one of their natural communication modes, such as spoken language, gesture, sketch, writing, eye gaze, body posture, and so forth, to communicate with a partner. For a computer to respond appropriately, it will need to derive coherent communicative intent from this combined set of inputs. This article summarizes the advantages of Multimodal Interaction, progress in our understanding of humans' Multimodal performance, the state of the art in Multimodal system processing, and challenges and opportunities for future science and technology of Multimodal Interaction. This article does not cover Multimodal presentation planning and output.
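The step of deriving coherent communicative intent from a combined set of inputs can be illustrated by unifying partial semantic frames, one per modality, and rejecting combinations whose slots conflict. The frame slots below are invented for illustration; the article itself prescribes no particular algorithm.

```python
# A minimal sketch of deriving a single communicative intent from two partial
# interpretations, one per modality. The frame slots and the unification rule
# are assumptions for illustration, not a published algorithm.

def unify_frames(speech_frame, gesture_frame):
    """Merge two partial intent frames; fail on conflicting slot values."""
    merged = dict(speech_frame)
    for slot, value in gesture_frame.items():
        if slot in merged and merged[slot] != value:
            return None            # modalities disagree: no coherent intent
        merged[slot] = value
    return merged

# Speech hypothesis: the user said "move that to the trash".
speech_frame = {"action": "move", "destination": "trash"}
# Gesture hypothesis: the user pointed at a concrete object on screen.
gesture_frame = {"object": "report.pdf"}

intent = unify_frames(speech_frame, gesture_frame)
print(intent)  # -> {'action': 'move', 'destination': 'trash', 'object': 'report.pdf'}
```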
-
Multimodal Interaction for Wearable Augmented Reality Environments
2003
Co-Authors: Philip R. Cohen
Abstract: We describe an approach to natural 3D Multimodal Interaction in immersive environments. Our approach fuses symbolic and statistical information from a set of 3D gesture and speech agents, building in part on prior research on disambiguating the user's intent in 2D and 2.5D user interfaces. We present an experimental system architecture that embodies this approach, and provide examples from a preliminary 3D Multimodal testbed to explore our ideas in augmented and virtual reality.
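One common way to fuse symbolic and statistical information from separate gesture and speech agents is to cross the agents' n-best hypothesis lists, discard symbolically incompatible pairs, and rank the survivors by joint score. The sketch below follows that general pattern; the scores and type constraints are invented, and the paper's actual fusion scheme may differ.

```python
# Sketch of fusing symbolic and statistical information from speech and
# gesture "agents": each agent emits an n-best list of scored hypotheses;
# fusion keeps only symbolically compatible pairs (the gesture's referent
# must satisfy the type the speech hypothesis expects) and ranks the rest
# by joint probability. All values here are invented for illustration.

speech_nbest = [                 # (interpretation, expected referent type, P)
    ({"action": "open"},  "door",   0.6),
    ({"action": "grasp"}, "object", 0.4),
]
gesture_nbest = [                # (referent, referent type, P)
    ("door_3", "door",   0.7),
    ("mug_1",  "object", 0.3),
]

candidates = []
for interp, wanted_type, p_s in speech_nbest:
    for referent, ref_type, p_g in gesture_nbest:
        if ref_type == wanted_type:                # symbolic compatibility check
            joint = dict(interp, referent=referent)
            candidates.append((p_s * p_g, joint))  # statistical combination

best_score, best = max(candidates)
print(best, round(best_score, 2))  # -> {'action': 'open', 'referent': 'door_3'} 0.42
```

The symbolic filter prunes combinations a purely statistical ranker would have to score, which is one reason hybrid fusion helps with disambiguating the user's intent.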
-
Multimodal Interaction for Virtual Environments
1999
Co-Authors: Philip R. Cohen
Abstract: We describe an approach to natural 3D Multimodal Interaction in immersive environments. Our approach fuses symbolic and statistical information from a set of 3D gesture and speech agents, building in part on prior research on disambiguating the user's intent in 2D and 2.5D user interfaces. We present an experimental system architecture that embodies this approach, and provide examples from a preliminary 3D Multimodal testbed to explore our ideas in augmented and virtual reality.
-
QuickSet: Multimodal Interaction for distributed applications
ACM Multimedia, 1997
Co-Authors: Philip R. Cohen, Michael Johnston, Sharon Oviatt, David McGee, Jay Pittman, Ira Smith, Liang Chen, Josh Clow
Abstract: QuickSet: Multimodal Interaction for Distributed Applications. Philip R. Cohen, Michael Johnston, David McGee, Sharon Oviatt, Jay Pittman, Ira Smith, Liang Chen, and Josh Clow. Center for Human-Computer Communication, Oregon Graduate Institute of Science and Technology.