User Interfaces

The experts below are selected from a list of 272,091 experts worldwide, ranked by the ideXlab platform.

Mary Lou Maher - One of the best experts on this subject based on the ideXlab platform.

  • The Impact of Tangible User Interfaces on Designers' Spatial Cognition
    Human-Computer Interaction, 2008
    Co-Authors: Mary Lou Maher
    Abstract:

    Most studies on tangible user interfaces for tabletop design systems are undertaken from a technology viewpoint. Although there have been studies focusing on the development of new interactive environments that employ tangible user interfaces for designers, there is a lack of evaluation with respect to designers' spatial cognition. In this research we study the effects of tangible user interfaces on designers' spatial cognition, to provide empirical evidence for the anecdotal views of their effect. To highlight the expected changes in spatial cognition while using tangible user interfaces, we compared designers using a tangible user interface on a tabletop system with 3D blocks to designers using a graphical user interface on a desktop computer with a mouse and keyboard. The ways in which designers use the two different interfaces for 3D design were examined using a protocol analysis method. The results reveal that designers using 3D blocks perceived more spatia...

  • CSCWD - Collaborative Design in a Tabletop System employing Tangible User Interfaces
    2007 11th International Conference on Computer Supported Cooperative Work in Design, 2007
    Co-Authors: Mary Lou Maher
    Abstract:

    Tabletop systems with novel user interfaces provide an effective platform for design collaboration. This research studied the effects of a tabletop system with tangible user interfaces (TUIs) on designers' spatial cognition and design communication in collaborative design. The empirical results reveal that the use of TUIs changed designers' spatial cognition, and that these changes affected the design process by increasing characteristics associated with creative design processes.

  • CDVE - Do Tangible User Interfaces Impact Spatial Cognition in Collaborative Design?
    Lecture Notes in Computer Science, 2005
    Co-Authors: Mary Lou Maher
    Abstract:

    Developments in digital design workbenches that combine Augmented Reality (AR) systems and tangible user interfaces (TUIs) on a horizontal display surface provide a new kind of physical and digital environment for collaborative design. The combination of tangible interaction with AR display techniques changes the dynamics of the collaboration and has an impact on the designers' perception of 3D models. We are studying the effects of TUIs on designers' spatial cognition and design communication in order to identify how such tangible systems can better support collaborative design. Specifically, we compared tangible user interfaces with graphical user interfaces (GUIs) in a collaborative design task, focusing on characterising the impact these user interfaces have on spatial cognition.

Francisco Montero - One of the best experts on this subject based on the ideXlab platform.

  • A Transformational Approach for Multimodal Web User Interfaces Based on UsiXML
    International Conference on Multimodal Interfaces, 2005
    Co-Authors: Adrian Stanciulescu, Jean Vanderdonckt, Quentin Limbourg, Benjamin Michotte, Francisco Montero
    Abstract:

    A transformational approach for developing multimodal web user interfaces is presented that progressively moves from a task model and a domain model to a final user interface. The approach consists of three steps: deriving one or many abstract user interfaces from a task model and a domain model, deriving one or many concrete user interfaces from each abstract one, and producing the code of the corresponding final user interfaces. To carry out these steps, transformations are encoded as graph transformations performed on the involved models, expressed in their graph equivalents. For each step, a graph grammar gathers the relevant graph transformations for accomplishing its sub-steps. The final user interface is multimodal in that it combines graphical (keyboard, mouse) and vocal interaction. The approach is illustrated throughout the paper with a running example covering a graphical interface, a vocal interface, and two multimodal interfaces with graphical and vocal predominance, respectively.
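
The three-step derivation reads naturally as a pipeline of model-to-model transformations. The Python sketch below is a minimal illustration of that pipeline under stated assumptions: the class and function names (TaskModel, derive_abstract_ui, and so on) are hypothetical stand-ins, not the UsiXML API, and the dictionary lookups stand in for the far richer graph grammars used in the paper.

```python
# Minimal sketch of the three-step derivation described above.
# All names are hypothetical; the real approach encodes each step as
# graph transformations over graph representations of the models.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    kind: str  # e.g. "input", "output", "control"

@dataclass
class TaskModel:
    tasks: list[Task] = field(default_factory=list)

@dataclass
class AbstractUI:
    components: list[str] = field(default_factory=list)  # modality-independent

@dataclass
class ConcreteUI:
    modality: str              # "graphical" or "vocal"
    widgets: list[str] = field(default_factory=list)

def derive_abstract_ui(tm: TaskModel) -> AbstractUI:
    """Step 1: map each task to a modality-independent abstract component."""
    mapping = {"input": "input_component",
               "output": "output_component",
               "control": "control_component"}
    return AbstractUI([f"{mapping[t.kind]}:{t.name}" for t in tm.tasks])

def derive_concrete_ui(aui: AbstractUI, modality: str) -> ConcreteUI:
    """Step 2: reify abstract components into modality-specific widgets."""
    tables = {
        "graphical": {"input_component": "text_field",
                      "output_component": "label",
                      "control_component": "button"},
        "vocal":     {"input_component": "voice_prompt",
                      "output_component": "speech_output",
                      "control_component": "voice_command"},
    }
    table = tables[modality]
    widgets = []
    for comp in aui.components:
        kind, name = comp.split(":")
        widgets.append(f"{table[kind]}({name})")
    return ConcreteUI(modality, widgets)

def generate_final_ui(cui: ConcreteUI) -> str:
    """Step 3: produce (pseudo-)code of the final user interface."""
    return "\n".join(f"render {w} via {cui.modality} channel" for w in cui.widgets)

# A multimodal UI combines concrete UIs derived for several modalities.
tm = TaskModel([Task("name", "input"), Task("greeting", "output"), Task("submit", "control")])
aui = derive_abstract_ui(tm)
for modality in ("graphical", "vocal"):
    print(generate_final_ui(derive_concrete_ui(aui, modality)))
```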

Jean Vanderdonckt - One of the best experts on this subject based on the ideXlab platform.

  • Towards a linguistic modeling of graphical User Interfaces: Eliciting modeling requirements
    2015 3rd International Conference on Control Engineering & Information Technology (CEIT), 2015
    Co-Authors: Iyad Khaddam, Nesrine Mezhoudi, Jean Vanderdonckt
    Abstract:

    Our research explores alternative perspectives on developing and modeling graphical user interfaces in order to enhance the adaptability of a graphical user interface. The aim is to model graphical user interfaces from a linguistic point of view. The linguistic model seeks to enhance adaptability by improving two quality properties: traceability and maintainability. The linguistic perspective is based on the linguistic classification originally proposed by Nielsen, which consists of several linguistic levels. This paper elaborates on that perspective to elicit modeling requirements per level; our contribution is a set of modeling requirements for each level.
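
As a rough aid for the discussion of levels, the sketch below enumerates the linguistic levels of Nielsen's layered interaction model (commonly given as goal, task, semantic, syntax, lexical, alphabetic, and physical) and attaches a placeholder requirement to a few of them. The requirement strings are invented for illustration and are not the requirements actually elicited in the paper.

```python
# Hypothetical sketch: Nielsen's linguistic levels with placeholder
# modeling requirements. The requirement strings are illustrative
# assumptions, not the requirements elicited in the paper.
from enum import Enum

class LinguisticLevel(Enum):
    GOAL = "goal"              # what the user ultimately wants
    TASK = "task"              # pragmatic plan for reaching the goal
    SEMANTIC = "semantic"      # meaning of operations on domain objects
    SYNTAX = "syntax"          # legal ordering of interaction steps
    LEXICAL = "lexical"        # interaction tokens: widgets, commands
    ALPHABETIC = "alphabetic"  # primitive symbols: keystrokes, clicks
    PHYSICAL = "physical"      # hardware-level input and output

placeholder_requirements = {
    LinguisticLevel.TASK: "trace each task to the goal it serves",
    LinguisticLevel.SYNTAX: "capture the legal orderings of steps",
    LinguisticLevel.LEXICAL: "name every interaction token explicitly",
}

for level in LinguisticLevel:
    req = placeholder_requirements.get(level, "(elicited in the paper)")
    print(f"{level.value:>10}: {req}")
```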

  • Context-Aware Adaptation of User Interfaces
    2011
    Co-Authors: Vivian Motti, Jean Vanderdonckt
    Abstract:

    Efficient adaptation ensures that a user interface remains suited to the user's task according to the context of use, since the end user carries out a task with one or several computing platforms in a physical environment. This tutorial presents the key concepts of adaptation: the principles that guide it, the relevant context information and how to take it into account, the dimensions and abstraction levels subject to adaptation, and the languages, methods, and techniques used in this domain. It teaches the major aspects to consider when adapting user interfaces in general, and with respect to the context of use in particular, including the end user (or several of them, as in multi-user interfaces), the platform (or several of them, as in multi-device environments), and the physical environment (or several of them, as in multi-location systems).
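
The three facets of the context of use can be pictured as a simple triple that adaptation rules match against. The sketch below is a minimal, hypothetical illustration of that idea; the attribute names and the rules themselves are assumptions made for the example, not a published API.

```python
# Minimal, hypothetical sketch of context-aware adaptation: the context
# of use is a (user, platform, environment) triple, and rules adapt a
# toy UI description when the context changes. Names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class ContextOfUse:
    user: str          # e.g. "novice" or "expert"
    platform: str      # e.g. "desktop" or "phone"
    environment: str   # e.g. "office" or "outdoors"

def adapt(ui: dict, ctx: ContextOfUse) -> dict:
    """Return an adapted copy of a toy UI description."""
    adapted = dict(ui)
    if ctx.platform == "phone":
        adapted["layout"] = "single_column"   # small screen
    if ctx.environment == "outdoors":
        adapted["contrast"] = "high"          # sunlight readability
    if ctx.user == "novice":
        adapted["help"] = "verbose"           # extra guidance
    return adapted

ui = {"layout": "two_column", "contrast": "normal", "help": "minimal"}
print(adapt(ui, ContextOfUse("novice", "phone", "outdoors")))
```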

  • Designing Workflow User Interfaces with UsiXML
    2010
    Co-Authors: Josefina Guerrero García, Jean Vanderdonckt
    Abstract:

    Supporting business processes with workflow systems is a necessary prerequisite for many companies to stay competitive. An important task is the specification of the workflow, i.e., those parts of a business process that can be supported by a computer system. This paper is about the definition and development of user interfaces for workflow information systems. XML-based user interface description languages express various aspects of the user interfaces, including the abstract and concrete elements of the user interface, the tasks to be performed by the users, and the user interface dialogue. We have developed a framework for expressing workflow aspects and use UsiXML for rendering the user interfaces.
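
To make the idea of an XML-based description concrete, the sketch below emits a small UI description for one workflow task. The element and attribute names are loosely inspired by UsiXML's abstract/concrete split but are assumptions for illustration, not the real UsiXML schema.

```python
# Sketch: emitting an XML-based UI description for one workflow task.
# Element and attribute names are illustrative assumptions; they are
# not guaranteed to match the actual UsiXML schema.
import xml.etree.ElementTree as ET

def task_to_ui_description(task_name: str, fields: list[str]) -> str:
    root = ET.Element("uiModel", {"task": task_name})
    abstract = ET.SubElement(root, "abstractUI")
    concrete = ET.SubElement(root, "concreteUI", {"modality": "graphical"})
    for f in fields:
        # One abstract input component per workflow data item...
        ET.SubElement(abstract, "abstractComponent", {"name": f, "type": "input"})
        # ...reified as a concrete widget for the graphical modality.
        ET.SubElement(concrete, "textField", {"label": f})
    ET.SubElement(concrete, "button", {"label": "Submit", "action": "completeTask"})
    return ET.tostring(root, encoding="unicode")

print(task_to_ui_description("approve_invoice", ["invoice_id", "amount"]))
```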

  • A Transformational Approach for Multimodal Web User Interfaces Based on UsiXML
    International Conference on Multimodal Interfaces, 2005
    Co-Authors: Adrian Stanciulescu, Jean Vanderdonckt, Quentin Limbourg, Benjamin Michotte, Francisco Montero
    Abstract:

    A transformational approach for developing multimodal web user interfaces is presented that progressively moves from a task model and a domain model to a final user interface. The approach consists of three steps: deriving one or many abstract user interfaces from a task model and a domain model, deriving one or many concrete user interfaces from each abstract one, and producing the code of the corresponding final user interfaces. To carry out these steps, transformations are encoded as graph transformations performed on the involved models, expressed in their graph equivalents. For each step, a graph grammar gathers the relevant graph transformations for accomplishing its sub-steps. The final user interface is multimodal in that it combines graphical (keyboard, mouse) and vocal interaction. The approach is illustrated throughout the paper with a running example covering a graphical interface, a vocal interface, and two multimodal interfaces with graphical and vocal predominance, respectively.

Adrian Stanciulescu - One of the best experts on this subject based on the ideXlab platform.

  • A Transformational Approach for Multimodal Web User Interfaces Based on UsiXML
    International Conference on Multimodal Interfaces, 2005
    Co-Authors: Adrian Stanciulescu, Jean Vanderdonckt, Quentin Limbourg, Benjamin Michotte, Francisco Montero
    Abstract:

    A transformational approach for developing multimodal web user interfaces is presented that progressively moves from a task model and a domain model to a final user interface. The approach consists of three steps: deriving one or many abstract user interfaces from a task model and a domain model, deriving one or many concrete user interfaces from each abstract one, and producing the code of the corresponding final user interfaces. To carry out these steps, transformations are encoded as graph transformations performed on the involved models, expressed in their graph equivalents. For each step, a graph grammar gathers the relevant graph transformations for accomplishing its sub-steps. The final user interface is multimodal in that it combines graphical (keyboard, mouse) and vocal interaction. The approach is illustrated throughout the paper with a running example covering a graphical interface, a vocal interface, and two multimodal interfaces with graphical and vocal predominance, respectively.

Quentin Limbourg - One of the best experts on this subject based on the ideXlab platform.

  • A Transformational Approach for Multimodal Web User Interfaces Based on UsiXML
    International Conference on Multimodal Interfaces, 2005
    Co-Authors: Adrian Stanciulescu, Jean Vanderdonckt, Quentin Limbourg, Benjamin Michotte, Francisco Montero
    Abstract:

    A transformational approach for developing multimodal web user interfaces is presented that progressively moves from a task model and a domain model to a final user interface. The approach consists of three steps: deriving one or many abstract user interfaces from a task model and a domain model, deriving one or many concrete user interfaces from each abstract one, and producing the code of the corresponding final user interfaces. To carry out these steps, transformations are encoded as graph transformations performed on the involved models, expressed in their graph equivalents. For each step, a graph grammar gathers the relevant graph transformations for accomplishing its sub-steps. The final user interface is multimodal in that it combines graphical (keyboard, mouse) and vocal interaction. The approach is illustrated throughout the paper with a running example covering a graphical interface, a vocal interface, and two multimodal interfaces with graphical and vocal predominance, respectively.

  • A Unifying Reference Framework for Multi-Target User Interfaces
    Interacting with Computers, 2003
    Co-Authors: Gaëlle Calvary, Quentin Limbourg, Joelle Coutaz, David Thevenin, Laurent Bouillon, Jean Vanderdonckt
    Abstract:

    This paper describes a framework that serves as a reference for classifying user interfaces that support multiple targets, or multiple contexts of use, in the field of context-aware computing. In this framework, a context of use is decomposed into three facets: the end users of the interactive system, the hardware and software computing platform with which the users have to carry out their interactive tasks, and the physical environment where they are working. A context-sensitive user interface is therefore one that exhibits some capability to be aware of the context (context awareness) and to react to changes of this context. The paper attempts to provide a unified understanding of context-sensitive user interfaces rather than a prescription of ways or methods of tackling different steps of development. Instead, the framework structures the development life cycle into four levels of abstraction: task and concepts, abstract user interface, concrete user interface, and final user interface. These levels are linked by a relationship of reification going from an abstract level to a concrete one and a relationship of abstraction going from a concrete level to an abstract one. Most methods and tools can be more clearly understood and compared against the levels of this framework. In addition, the framework expresses when, where, and how a change of context is considered and supported in a context-sensitive user interface, thanks to a relationship of translation. The paper also introduces, defines, and exemplifies the notion of plastic user interfaces in the field of multi-target user interfaces: user interfaces that support some adaptation to changes of the context of use while preserving a predefined set of usability properties.
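
The four abstraction levels and the three relationships between them (reification, abstraction, translation) can be pictured with a small sketch. The Python below is a toy rendering of that structure under stated assumptions; the enum values and function names are illustrative, not an implementation from the paper.

```python
# Toy rendering of the framework's four abstraction levels and its
# reification / abstraction / translation relationships. All names
# are illustrative assumptions, not an API from the paper.
from enum import IntEnum

class Level(IntEnum):
    TASK_AND_CONCEPTS = 0
    ABSTRACT_UI = 1
    CONCRETE_UI = 2
    FINAL_UI = 3

def reify(level: Level) -> Level:
    """One step from abstract toward concrete (e.g. ABSTRACT_UI -> CONCRETE_UI)."""
    return Level(min(level + 1, Level.FINAL_UI))

def abstract(level: Level) -> Level:
    """One step from concrete toward abstract (e.g. CONCRETE_UI -> ABSTRACT_UI)."""
    return Level(max(level - 1, Level.TASK_AND_CONCEPTS))

def translate(level: Level, source_ctx: str, target_ctx: str) -> str:
    """Cross to another context of use at the same abstraction level."""
    return f"{level.name}: {source_ctx} -> {target_ctx}"

# Walking down the development life cycle by successive reification:
lvl = Level.TASK_AND_CONCEPTS
while lvl != Level.FINAL_UI:
    nxt = reify(lvl)
    print(f"reify: {lvl.name} -> {nxt.name}")
    lvl = nxt

# A change of context of use handled at the concrete level:
print(translate(Level.CONCRETE_UI, "desktop", "phone"))
```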