Wearable Computer

The Experts below are selected from a list of 8,607 Experts worldwide, ranked by the ideXlab platform

Thad Starner - One of the best experts on this subject based on the ideXlab platform.

  • an underwater Wearable Computer for two way human dolphin communication experimentation
    International Symposium on Wearable Computers, 2013
    Co-Authors: Daniel Kohlsdorf, Thad Starner, Scott Gilliland, Peter Presti, Denise Herzing
    Abstract:

    Research in dolphin cognition and communication in the wild is still a challenging task for marine biologists. Most problems arise from the uncontrolled nature of field studies and the challenges of building suitable underwater research equipment. We present a novel underwater Wearable Computer enabling researchers to engage in an audio-based interaction between humans and dolphins. The design requirements are based on a research protocol developed by a team of marine biologists associated with the Wild Dolphin Project.

  • mobile capture for Wearable Computer usability testing
    International Symposium on Wearable Computers, 2001
    Co-Authors: Kent Lyons, Thad Starner
    Abstract:

    The mobility of Wearable Computers makes usability testing difficult. In order to fully understand how a user interacts with the Wearable, the researcher must examine both the user's direct interactions with the Computer and the external context the user perceives during the interaction. We present a tool that augments a Wearable Computer with additional hardware and software to capture the information needed to perform a usability study in the field under realistic conditions. We examine the challenges in doing the capture and present our implementation. We also describe VizWear, a tool for examining the captured data. Finally, we present our experiences using the system for a sample user study.

  • Heat dissipation in Wearable Computers aided by thermal coupling with the user
    Mobile Networks and Applications, 1999
    Co-Authors: Thad Starner, Yael Maguire
    Abstract:

    Wearable Computers and PDAs are physically close to, or are in contact with, the user during most of the day. This proximity would seemingly limit the amount of heat such a device may generate, conflicting with user demands for increasing processor speeds and wireless capabilities. However, this paper explores significantly increasing the heat dissipation capability per unit surface area of a mobile Computer by thermally coupling it to the user. In particular, a heat dissipation model of a forearm-mounted Wearable Computer is developed, and the model is verified experimentally. In the process, this paper also provides tools and novel suggestions for heat dissipation that may influence the design of a Wearable Computer.
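
    As a rough illustration of the kind of steady-state model the paper develops, the sketch below compares the power a device can shed to ambient air alone against air plus conduction into the wearer's forearm. All temperatures and thermal resistances here are illustrative assumptions, not values from the paper.

    ```python
    # Back-of-the-envelope steady-state heat model: a wearable sheds heat
    # to ambient air and, when thermally coupled, into the wearer's forearm.
    # All numbers are illustrative assumptions, not values from the paper.

    T_CASE = 40.0      # max comfortable case temperature, deg C (assumed)
    T_AMBIENT = 25.0   # ambient air temperature, deg C (assumed)
    T_SKIN = 33.0      # typical forearm skin temperature, deg C (assumed)

    R_AIR = 30.0       # case-to-air thermal resistance, K/W (assumed)
    R_CONTACT = 8.0    # case-to-skin contact resistance, K/W (assumed)

    def max_power(coupled: bool) -> float:
        """Steady-state power (W) the device can dissipate at T_CASE."""
        p = (T_CASE - T_AMBIENT) / R_AIR          # convection/radiation to air
        if coupled:
            p += (T_CASE - T_SKIN) / R_CONTACT    # conduction into the forearm
        return p

    print(f"air only:      {max_power(False):.2f} W")
    print(f"with coupling: {max_power(True):.2f} W")
    ```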

  • real time american sign language recognition using desk and Wearable Computer based video
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998
    Co-Authors: Thad Starner, Joshua Weaver, Alex Pentland
    Abstract:

    We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American sign language (ASL) using a single camera to track the user's unadorned hands. The first system observes the user from a desk mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar). Both experiments use a 40-word lexicon.
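
    To make the approach concrete, here is a minimal sketch of isolated-word recognition with one hidden Markov model per sign, scored by the Viterbi algorithm. The model structures and emission function are placeholders, not the paper's trained parameters.

    ```python
    # Score a feature sequence against one HMM per sign and pick the
    # best-scoring word. Models and emissions are placeholder assumptions.

    def viterbi_log_score(obs, log_pi, log_A, log_emit):
        """obs: non-empty list of feature vectors; log_pi: initial log-probs;
        log_A[i][j]: transition log-probs; log_emit(state, x): emission log-prob."""
        n = len(log_pi)
        v = [log_pi[s] + log_emit(s, obs[0]) for s in range(n)]
        for x in obs[1:]:
            v = [max(v[i] + log_A[i][j] for i in range(n)) + log_emit(j, x)
                 for j in range(n)]
        return max(v)

    def recognize(obs, word_models):
        """word_models: dict mapping word -> (log_pi, log_A, log_emit)."""
        return max(word_models,
                   key=lambda w: viterbi_log_score(obs, *word_models[w]))
    ```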

Alex Pentland - One of the best experts on this subject based on the ideXlab platform.

  • real time american sign language recognition using desk and Wearable Computer based video
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998
    Co-Authors: Thad Starner, Joshua Weaver, Alex Pentland
    Abstract:

    We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American sign language (ASL) using a single camera to track the user's unadorned hands. The first system observes the user from a desk mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar). Both experiments use a 40-word lexicon.

  • visual contextual awareness in Wearable computing
    International Symposium on Wearable Computers, 1998
    Co-Authors: Thad Starner, Bernt Schiele, Alex Pentland
    Abstract:

    Small, body-mounted video cameras enable a different style of Wearable computing interface. As processing power increases, a Wearable Computer can spend more time observing its user to provide serendipitous information, manage interruptions and tasks, and predict future needs without being directly commanded by the user. This paper introduces an assistant for playing the real-space game Patrol. This assistant tracks the wearer's location and current task through Computer vision techniques and without off-body infrastructure. In addition, this paper continues augmented reality research, started in 1995, for binding virtual data to physical locations.

  • a Wearable Computer based american sign language recognizer
    Lecture Notes in Computer Science, 1998
    Co-Authors: Thad Starner, Joshua Weaver, Alex Pentland
    Abstract:

    Modern Wearable Computer designs package workstation level performance in systems small enough to be worn as clothing. These machines enable technology to be brought where it is needed the most for the handicapped: everyday mobile environments. This paper describes a research effort to make a Wearable Computer that can recognize (with the possible goal of translating) sentence level American Sign Language (ASL) using only a baseball cap mounted camera for input. Current accuracy exceeds 97% per word on a 40 word lexicon.

  • ISWC - A Wearable Computer based American sign language recognizer
    1997
    Co-Authors: Thad Starner, Joshua Weaver, Alex Pentland
    Abstract:

    Modern Wearable Computer designs package workstation-level performance in systems small enough to be worn as clothing. These machines enable technology to be brought where it is needed the most for the handicapped: everyday mobile environments. This paper describes a research effort to make a Wearable Computer that can recognize (with the possible goal of translating) sentence level American Sign Language (ASL) using only a baseball cap mounted camera for input. Current accuracy exceeds 97% per word on a 40 word lexicon.

  • A Wearable Computer-based American sign Language Recogniser
    Personal Technologies, 1997
    Co-Authors: Thad Starner, Joshua Weaver, Alex Pentland
    Abstract:

    Modern Wearable Computer designs package workstation-level performance in systems small enough to be worn as clothing. These machines enable technology to be brought where it is needed most for the handicapped: everyday mobile environments. This paper describes a research effort to make a Wearable Computer that can recognise (with the possible goal of translating) sentence-level American Sign Language (ASL) using only a baseball cap mounted camera for input. Current accuracy exceeds 97% per word on a 40-word lexicon.

Ali H. Sayed - One of the best experts on this subject based on the ideXlab platform.

  • a robust finger tracking method for multimodal Wearable Computer interfacing
    IEEE Transactions on Multimedia, 2006
    Co-Authors: Sylvia M. Dominguez, Trish Keaton, Ali H. Sayed
    Abstract:

    Mobile Wearable Computers are intended to provide users with real-time access to information in a natural and unobtrusive manner. Computing and sensing in these devices must be reliable, easy to interact with, transparent, and configured to support different needs and complexities. This paper presents a vision-based robust finger tracking algorithm combined with audio-based control commands that is integrated into a multimodal unobtrusive user interface, wherein the interface may be used to segment out objects of interest in the environment by encircling them with the user's pointing fingertip. In order to quickly extract the objects encircled by the user from a complex scene, this unobtrusive interface uses a single head-mounted camera to capture color images, which are then processed using algorithms to perform: color segmentation, fingertip shape analysis, perturbation model learning, and robust fingertip tracking. This interface is designed to be robust to changes in the environment and user's movements by incorporating a state-space estimation with uncertain models algorithm, which attempts to control the influence of uncertain environment conditions on the system's fingertip tracking performance by adapting the tracking model to compensate for the uncertainties inherent in the data collected with a Wearable Computer.
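
    For context, the sketch below implements the plain constant-velocity Kalman tracker that serves as the usual baseline for this kind of fingertip tracking; the authors' robust estimator additionally bounds the effect of model uncertainty, which this sketch does not reproduce. Noise covariances and the frame rate are assumptions.

    ```python
    # Plain constant-velocity Kalman tracker over image coordinates.
    # This is the baseline, not the paper's robust uncertain-models filter.
    import numpy as np

    class FingertipKalman:
        def __init__(self, dt=1 / 30):           # assumed camera frame period
            self.F = np.array([[1, 0, dt, 0],     # state: [x, y, vx, vy]
                               [0, 1, 0, dt],
                               [0, 0, 1, 0],
                               [0, 0, 0, 1]], float)
            self.H = np.array([[1, 0, 0, 0],      # we only measure (x, y)
                               [0, 1, 0, 0]], float)
            self.Q = np.eye(4) * 1e-2             # process noise (assumed)
            self.R = np.eye(2) * 4.0              # measurement noise, px^2 (assumed)
            self.x = np.zeros(4)
            self.P = np.eye(4) * 100.0

        def step(self, z):
            """One predict/update cycle; z is the detected fingertip (x, y)."""
            self.x = self.F @ self.x                          # predict
            self.P = self.F @ self.P @ self.F.T + self.Q
            S = self.H @ self.P @ self.H.T + self.R           # innovation cov.
            K = self.P @ self.H.T @ np.linalg.inv(S)          # Kalman gain
            self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
            self.P = (np.eye(4) - K @ self.H) @ self.P
            return self.x[:2]                                 # smoothed position
    ```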

  • A multimodal Wearable Computer interface using state-space estimation with uncertain models
    2006
    Co-Authors: Ali H. Sayed, Sylvia Margarita Dominguez-aguayo
    Abstract:

    Mobile Wearable Computers are intended to provide users with real-time access to information in a natural and unobtrusive manner. Computing and sensing in these devices must be reliable, easy to interact with, transparent, and configured to support different needs and complexities. Therefore, one critical factor for the success of a Wearable Computer is its user interface. This dissertation presents a real-time robust multimodal unobtrusive user interface comprised of a vision-based robust finger tracking algorithm combined with audio-based control commands, wherein the interface is used to segment out objects of interest in the environment by encircling them with the user's pointing fingertip. In order to quickly extract the objects encircled by the user, this unobtrusive interface uses a single head-mounted camera to capture color images, which are then processed using algorithms to perform: color segmentation, fingertip shape analysis, perturbation model learning, and robust fingertip tracking. Then, a Wearable Computer system may use object recognition algorithms to identify the object segmented by the user's hand gesture, and may return an audio narration, telling the user information concerning the object's classification, historical facts, usage, etc. This interface is designed to be robust to changes in the environment and user's movements by incorporating a state-space estimation with uncertain models algorithm, which attempts to control the influence of uncertain environment conditions on the system's tracking performance by adapting the tracking model to compensate for the uncertainties inherent in the data collected with a Wearable Computer. For a Wearable Computer system, these uncertainties arise from the camera moving along with the user's head motion, the background and object of interest moving independently of each other, the user standing still or randomly walking, and the user's pointing finger abruptly changing directions at variable speeds. The robust unobtrusive multimodal interface developed in this dissertation has been tested on a real Wearable Computer system, and the performance results obtained during these tests are presented in this dissertation.

  • Browsing the environment with the SNAP&TELL Wearable Computer system
    Personal and Ubiquitous Computing, 2005
    Co-Authors: Trish Keaton, Sylvia M. Dominguez, Ali H. Sayed
    Abstract:

    This paper provides an overview of a multi-modal Wearable Computer system, SNAP&TELL. The system performs real-time gesture tracking, combined with audio-based control commands, in order to recognize objects in an environment, including outdoor landmarks. The system uses a single camera to capture images, which are then processed to perform color segmentation, fingertip shape analysis, robust tracking, and invariant object recognition, in order to quickly identify the objects encircled and SNAPped by the user’s pointing gesture. In addition, the system returns an audio narration, TELLing the user information concerning the object’s classification, historical facts, usage, etc. This system provides enabling technology for the design of intelligent assistants to support “Web-On-The-World” applications, with potential uses such as travel assistance, business advertisement, the design of smart living and working spaces, and pervasive wireless services and internet vehicles.
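
    As a rough sketch of the first two pipeline stages named in the abstract, the code below performs HSV skin-color segmentation and then a crude fingertip estimate. The color range and the topmost-contour-point heuristic are assumptions for illustration, not the system's trained color model.

    ```python
    # Color segmentation plus a crude fingertip estimate. The HSV skin
    # range and "topmost contour point" heuristic are assumptions.
    import cv2
    import numpy as np

    def find_fingertip(bgr_frame):
        hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
        # Rough skin-tone range in HSV (assumed; real systems learn this).
        mask = cv2.inRange(hsv, np.array((0, 40, 60)), np.array((25, 180, 255)))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        hand = max(contours, key=cv2.contourArea)        # largest skin blob
        # Treat the topmost contour point as the pointing fingertip.
        tip = min(hand.reshape(-1, 2), key=lambda p: p[1])
        return int(tip[0]), int(tip[1])
    ```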

  • snap tell a multi modal Wearable Computer interface for browsing the environment
    International Symposium on Wearable Computers, 2002
    Co-Authors: Trish Keaton, Sylvia M. Dominguez, Ali H. Sayed
    Abstract:

    This paper gives an overview of a multi-modal Wearable Computer system 'SNAP&TELL', which performs real-time gesture tracking combined with audio-based system control commands to recognize objects in the environment, including outdoor landmarks. Our system uses a single camera to capture images, which are then processed using several algorithms to perform color segmentation, fingertip shape analysis, robust tracking, and invariant object recognition, in order to quickly identify the objects encircled (SNAPshot) by the user's pointing gesture. In turn, the system returns an audio narration, telling the user information concerning the object's classification, historical facts, usage, etc. This system provides enabling technology for the design of intelligent assistants to support "Web-On-The-World" applications, with potential uses such as travel assistance, business advertisement, the design of smart living and working spaces, and pervasive wireless services and internet vehicles.

  • robust finger tracking for Wearable Computer interfacing
    Workshop on Perceptive User Interfaces, 2001
    Co-Authors: Sylvia M. Dominguez, Trish Keaton, Ali H. Sayed
    Abstract:

    Key to the design of human-machine gesture interface applications is the ability of the machine to quickly and efficiently identify and track the hand movements of its user. In a Wearable Computer system equipped with head-mounted cameras, this task is extremely difficult due to the uncertain camera motion caused by the user's head movement, the user standing still then randomly walking, and the user's hand or pointing finger abruptly changing directions at variable speeds. This paper presents a tracking methodology based on a robust state-space estimation algorithm, which attempts to control the influence of uncertain environment conditions on the system's performance by adapting the tracking model to compensate for the uncertainties inherent in the data. Our system tracks a user's pointing gesture from a single head mounted camera, to allow the user to encircle an object of interest, thereby coarsely segmenting the object. The snapshot of the object is then passed to a recognition engine for identification, and retrieval of any pre-stored information regarding the object. A comparison of our robust tracker against a plain Kalman tracker showed a 15% improvement in the estimated position error, and exhibited a faster response time.

Bruce H Thomas - One of the best experts on this subject based on the ideXlab platform.

  • ISWC - Have We Achieved the Ultimate Wearable Computer?
    2012 16th International Symposium on Wearable Computers, 2012
    Co-Authors: Bruce H Thomas
    Abstract:

    This paper provides a provocative view of Wearable Computer research over the years, starting with the first IEEE International Symposium on Wearable Computers in 1997. The goal of this paper is to reflect on the original research challenges from the first few years. With this goal in mind, two questions can be examined: 1) have we achieved the goals we set out? and 2) how has the direction of research changed in the past fifteen years? This is not a survey paper, but a platform to stimulate discussion.

  • evaluation of three Wearable Computer pointing devices for selection tasks
    International Symposium on Wearable Computers, 2005
    Co-Authors: Joanne E Zucco, Bruce H Thomas, Karen Grimmer
    Abstract:

    This paper presents the results of an experiment comparing three commercially available pointing devices (a trackball, a gyroscopic mouse, and a Twiddler2 mouse) performing selection tasks for use with Wearable Computers. The study involved 30 participants performing selection tasks with the pointing devices while wearing a Wearable Computer on their back and using a head-mounted display. The error rate and time to complete the selection of the circular targets were measured. When examining the results, the gyroscopic mouse showed the fastest mean time for selecting the targets, while the trackball performed with the lowest error rate.
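
    For illustration, here is a minimal sketch of the kind of per-device analysis such a study reports, computing mean selection time and error rate; the trial records and field layout are hypothetical.

    ```python
    # Mean selection time and error rate per device from (hypothetical)
    # trial logs of the form (device, seconds_to_select, hit).
    from statistics import mean

    trials = [
        ("trackball", 2.9, True), ("trackball", 3.4, False),
        ("gyro_mouse", 2.1, True), ("gyro_mouse", 2.3, True),
        ("twiddler2", 3.8, True), ("twiddler2", 4.1, False),
    ]

    for device in {d for d, _, _ in trials}:
        rows = [(t, hit) for d, t, hit in trials if d == device]
        times = [t for t, _ in rows]
        errors = sum(1 for _, hit in rows if not hit)
        print(f"{device:11s} mean time {mean(times):.2f} s, "
              f"error rate {errors / len(rows):.0%}")
    ```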

  • tinmith metro new outdoor techniques for creating city models with an augmented reality Wearable Computer
    International Symposium on Wearable Computers, 2001
    Co-Authors: Wayne Piekarski, Bruce H Thomas
    Abstract:

    This paper presents new techniques for capturing and viewing, on site, 3D graphical models of large outdoor objects. Using an augmented reality Wearable Computer, we have developed a software system, known as Tinmith-Metro. Tinmith-Metro allows users to control a 3D constructive solid geometry modeller for building graphical objects of large physical artefacts, for example buildings, in the physical world. The 3D modeller is driven by a new user interface known as Tinmith-Hand, which allows the user to control the modeller using a set of pinch gloves and hand tracking. These techniques allow users to supply their AR renderers with models that would previously have had to be captured by manual, time-consuming, and/or expensive methods.
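
    To illustrate the constructive solid geometry idea at the core of such a modeller, here is a toy CSG sketch in which solids are point-membership predicates combined by union and difference. It is purely illustrative and does not reproduce Tinmith-Metro's modeller.

    ```python
    # Toy CSG core: solids as predicates over 3D points, combined by
    # boolean set operations. Illustrative only.
    from typing import Callable

    Solid = Callable[[float, float, float], bool]

    def box(x0, y0, z0, x1, y1, z1) -> Solid:
        return lambda x, y, z: x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1

    def union(a: Solid, b: Solid) -> Solid:
        return lambda x, y, z: a(x, y, z) or b(x, y, z)

    def difference(a: Solid, b: Solid) -> Solid:
        return lambda x, y, z: a(x, y, z) and not b(x, y, z)

    # A building body with a doorway cut out of one wall.
    building = difference(box(0, 0, 0, 10, 8, 4),
                          box(4, 0, 0, 6, 2.2, 0.3))
    print(building(5, 1, 0.1))   # False: inside the doorway void
    print(building(1, 1, 1))     # True: inside the building
    ```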

  • a Wearable Computer system with augmented reality to support terrestrial navigation
    International Symposium on Wearable Computers, 1998
    Co-Authors: Bruce H Thomas, Victor Demczuk, Wayne Piekarski, D Hepworth, B Gunther
    Abstract:

    To date, augmented realities have typically been operated in only a small defined area, on the order of a large room. This paper reports on our investigation into expanding augmented realities to outdoor environments. The project entails providing visual navigation aids to users. A Wearable Computer system with a see-through display, digital compass, and differential GPS is used to provide visual cues while performing a standard orienteering task. This paper reports the outcomes of a set of trials using an off-the-shelf Wearable Computer equipped with a custom-built navigation software package, "map-in-the-hat".
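
    A minimal sketch of the core cue computation such a navigation aid needs: the great-circle bearing from the wearer's GPS fix to a waypoint, compared with the compass heading to get a signed turn direction. The coordinates and heading are hypothetical; the formula is the standard initial-bearing calculation.

    ```python
    # Bearing from a GPS fix to a waypoint, compared with the compass
    # heading to produce a turn cue. Coordinates are hypothetical.
    import math

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial great-circle bearing from point 1 to point 2, in degrees."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(p2)
        x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
        return math.degrees(math.atan2(y, x)) % 360

    heading = 80.0                                        # from the digital compass
    target = bearing_deg(-34.92, 138.60, -34.90, 138.62)  # GPS fix -> waypoint
    turn = (target - heading + 180) % 360 - 180           # signed turn, -180..180
    print(f"turn {turn:+.0f} degrees")
    ```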

  • Evaluation of text input mechanisms for Wearable Computers
    Virtual Reality, 1998
    Co-Authors: Bruce H Thomas, Susan P. Tyerman, Karen Grimmer
    Abstract:

    This paper reports on an experiment investigating the functionality and usability of novel input devices on a Wearable Computer for text entry tasks. Over a 3-week period, 12 subjects used three different input devices to create and save short textual messages. The virtual keyboard, forearm keyboard, and Kordic keypad input devices were assessed for their efficiency and usability in simple text-entry tasks. Results collected included the textual data created by the subjects, the duration of activities, the survey data and observations made by supervisors. The results indicated that the forearm keyboard is the best performer for accurate and efficient text entry while other devices may benefit from more work on designing specialist graphical user interfaces (GUIs) for the Wearable Computer.

Joshua Weaver - One of the best experts on this subject based on the ideXlab platform.

  • real time american sign language recognition using desk and Wearable Computer based video
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998
    Co-Authors: Thad Starner, Joshua Weaver, Alex Pentland
    Abstract:

    We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American sign language (ASL) using a single camera to track the user's unadorned hands. The first system observes the user from a desk mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar). Both experiments use a 40-word lexicon.

  • a Wearable Computer based american sign language recognizer
    Lecture Notes in Computer Science, 1998
    Co-Authors: Thad Starner, Joshua Weaver, Alex Pentland
    Abstract:

    Modern Wearable Computer designs package workstation level performance in systems small enough to be worn as clothing. These machines enable technology to be brought where it is needed the most for the handicapped: everyday mobile environments. This paper describes a research effort to make a Wearable Computer that can recognize (with the possible goal of translating) sentence level American Sign Language (ASL) using only a baseball cap mounted camera for input. Current accuracy exceeds 97% per word on a 40 word lexicon.

  • ISWC - A Wearable Computer based American sign language recognizer
    1997
    Co-Authors: Thad Starner, Joshua Weaver, Alex Pentland
    Abstract:

    Modern Wearable Computer designs package workstation-level performance in systems small enough to be worn as clothing. These machines enable technology to be brought where it is needed the most for the handicapped: everyday mobile environments. This paper describes a research effort to make a Wearable Computer that can recognize (with the possible goal of translating) sentence level American Sign Language (ASL) using only a baseball cap mounted camera for input. Current accuracy exceeds 97% per word on a 40 word lexicon.

  • A Wearable Computer-based American sign Language Recogniser
    Personal Technologies, 1997
    Co-Authors: Thad Starner, Joshua Weaver, Alex Pentland
    Abstract:

    Modern Wearable Computer designs package workstation-level performance in systems small enough to be worn as clothing. These machines enable technology to be brought where it is needed most for the handicapped: everyday mobile environments. This paper describes a research effort to make a Wearable Computer that can recognise (with the possible goal of translating) sentence-level American Sign Language (ASL) using only a baseball cap mounted camera for input. Current accuracy exceeds 97% per word on a 40-word lexicon.