Teleconferencing

The Experts below are selected from a list of 252 Experts worldwide, ranked by the ideXlab platform

Fumio Kishino - One of the best experts on this subject based on the ideXlab platform.

  • Virtual Space Teleconferencing : Real - time reproduction of 3D human images
    Journal of Visual Communication and Image Representation, 1995
    Co-Authors: Jun Ohya, Fumio Kishino, Nobuyoshi Terashima, Yasuichi Kitamura, Haruo Takemura, Hirofumi Ishii
    Abstract:

    Real-time reproduction of 3D human images is realized by an experimental system recently built as a prototype for virtual space Teleconferencing, in which participants at different sites can feel as if they are colocated and can work cooperatively. At each sending and receiving site of the Teleconferencing system, a 3D model of each participant is constructed from a wire frame model mapped with color texture and is rendered on a 3D display. In the current experimental system, real-time detection of facial features at the sending site is achieved by visually tracking tape marks pasted to the participant's face. Movements of the head, body, hands, and fingers are detected in real time using magnetic sensors and data gloves. At the receiving site, the detected motion parameters are used to move nodes in the wire frame model to reproduce the movements of the participants at each sending site. Realistic facial expressions are reproduced through the use of simple motion rules applied to the tape mark tracking information. Through experimental evaluation, the optimal number of nodes for best quality has been obtained. Reproduction of facial expressions and synthesis of arm movements are examined. The reproduction speed using the optimal human model is approximately 6 frames/s. Examples of cooperative work by participants using the experimental system illustrate the effectiveness of virtual space Teleconferencing.
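
    The reproduction step described here amounts to a small geometry update per frame: the rigid head pose from the magnetic sensor rotates the whole wireframe, and the tape-mark offsets move individual feature nodes. The sketch below is a minimal illustration of that idea in Python/NumPy; the node indexing, motion rules, and function names are hypothetical and are not ATR's implementation.

```python
# Illustrative sketch (not ATR's code): driving a wireframe face model at the
# receiving site from motion parameters detected at the sending site.
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Head-pose rotation built from yaw/pitch/roll angles in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return rz @ ry @ rx

def reproduce_frame(rest_vertices, head_pose, feature_offsets):
    """Apply rigid head motion, then local feature displacements.

    rest_vertices  : (N, 3) wireframe node positions of the neutral face
    head_pose      : (yaw, pitch, roll) from the magnetic sensor
    feature_offsets: {node_index: (dx, dy, dz)} derived from tape-mark tracking,
                     e.g. offsets for mouth-corner or eyebrow nodes
    """
    r = rotation_matrix(*head_pose)
    verts = rest_vertices @ r.T           # rigid head rotation
    for idx, offset in feature_offsets.items():
        verts[idx] += np.asarray(offset)  # simple per-node motion rule
    return verts                          # handed to the texture-mapped renderer
```

    A real system would also interpolate the nodes surrounding each tape mark so the skin deforms smoothly; here only the tracked nodes move.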

  • Virtual space Teleconferencing system
    5th IEEE COMSOC International Workshop on Multimedia Communications, 1994
    Co-Authors: Noritsugu Terashima, Fumio Kishino
    Abstract:

    Describes a virtual space Teleconferencing system which realizes communication with realistic sensations. Real-time reproduction of a 3D human image is realized by this experimental system, which ATR recently built for the realization of virtual space Teleconferencing in which participants at different sites can feel as if they are at one site and can work cooperatively. In the Teleconferencing system, the 3D model of a participant is constructed from a wireframe model mapped with color texture, and is displayed on the 3D screen at the receiving site. Promising results for real-time cooperative work using the experimental system are demonstrated. This system is an application of multimedia technology. In this paper, a higher level of interaction is described that allows verbal instructions to be combined with 3D input devices for generating, manipulating, or modifying 3D virtual objects.

  • ICMCS - Emotion enhanced multimedia meetings using the concept of virtual space Teleconferencing
    Proceedings of the Third IEEE International Conference on Multimedia Computing and Systems, 1
    Co-Authors: L.c. De Silva, Tsutomu Miyasato, Fumio Kishino
    Abstract:

    We investigate the unique advantages of our proposed virtual space Teleconferencing system (VST) in the area of multimedia Teleconferencing, with emphasis on facial emotion transmission and recognition. We show that, using this concept, emotions of a local participant can be transmitted to the remote party with a higher recognition rate by enhancing the emotions with some intelligence processing between the local and the remote participants. This leads to a kind of emotion-enhanced Teleconferencing system which can supersede face-to-face meetings by effectively alleviating the barriers to recognizing emotions between participants from different nations. We also consider a concept known as a virtual person, which is a better alternative to the blurred or mosaicked facial images found in some television interviews with people who are unwilling to appear in public.
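
    The abstract does not spell out the "intelligence processing" used to enhance emotions, so the following is only a hypothetical illustration of the idea: amplify the recognized expression's parameters before they drive the remote participant's virtual person. The gain table and names below are invented for the example.

```python
# Hypothetical sketch of emotion enhancement between sender and receiver:
# amplify the expression parameters of the recognized emotion before they are
# reproduced on the remote virtual person. Gains are illustrative only.
def enhance_expression(params, emotion,
                       gains={"happiness": 1.4, "anger": 1.3, "surprise": 1.5}):
    """params: {facial feature -> displacement}; emotion: recognized label."""
    g = gains.get(emotion, 1.0)
    return {feature: value * g for feature, value in params.items()}
```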

H. Kato - One of the best experts on this subject based on the ideXlab platform.

  • Real world Teleconferencing
    IEEE Computer Graphics and Applications, 2002
    Co-Authors: Mark Billinghurst, Simon Prince, Adrian Cheok, H. Kato
    Abstract:

    We've been exploring how augmented reality (AR) technology can create fundamentally new forms of remote collaboration for mobile devices. AR involves the overlay of virtual graphics and audio on reality. Typically, the user views the world through a handheld or head-mounted display (HMD) that's either see-through or overlays graphics on video of the surrounding environment. Unlike other computer interfaces that draw users away from the real world and onto the screen, AR interfaces enhance the real world experience. For example, with this technology doctors could see virtual ultrasound information superimposed on a patient's body.
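
    The overlay step described in this paragraph can be pictured as projecting virtual geometry with the tracked camera/marker pose and drawing it over the current video frame. The snippet below is a generic illustration using OpenCV; it assumes the camera intrinsics and the marker pose (rvec, tvec) have already been estimated by a fiducial tracker, and the cube geometry and names are illustrative rather than the authors' code.

```python
# Generic AR overlay sketch: project a virtual cube attached to a tracked
# marker into the live video frame. Pose estimation is assumed to be done
# elsewhere (e.g. by a marker tracker); this only shows the overlay step.
import numpy as np
import cv2

def overlay_virtual_cube(frame, camera_matrix, dist_coeffs, rvec, tvec, size=0.05):
    """Draw a virtual cube sitting on a tracked marker over the video frame.

    rvec, tvec : marker pose from the tracker
    size       : cube edge length in the same units as the marker (here metres)
    """
    s = size
    cube = np.float32([[0, 0, 0], [s, 0, 0], [s, s, 0], [0, s, 0],       # base
                       [0, 0, -s], [s, 0, -s], [s, s, -s], [0, s, -s]])  # top
    pts, _ = cv2.projectPoints(cube, rvec, tvec, camera_matrix, dist_coeffs)
    pts = pts.reshape(-1, 2).astype(np.int32)
    cv2.polylines(frame, [pts[:4]], True, (0, 255, 0), 2)   # base square
    cv2.polylines(frame, [pts[4:]], True, (0, 255, 0), 2)   # top square
    for i in range(4):                                       # vertical edges
        cv2.line(frame, tuple(int(v) for v in pts[i]),
                 tuple(int(v) for v in pts[i + 4]), (0, 255, 0), 2)
    return frame
```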

  • Real world Teleconferencing
    Human Factors in Computing Systems, 1999
    Co-Authors: Mark Billinghurst, H. Kato
    Abstract:

    We describe a prototype Augmented Reality conferencing application which uses the overlay of virtual images on the real world to facilitate computer supported collaborative work. Remote collaborators are represented as live video images or virtual avatars which are attached to tangible objects that can be freely positioned about a user in space. The use of augmented reality overcomes some of the limitations associated with traditional video conferencing and allows the user to conference from anywhere in their physical environment.

Ruigang Yang - One of the best experts on this subject based on the ideXlab platform.

  • Immersive Video Teleconferencing with User-Steerable Views
    Presence: Teleoperators and Virtual Environments, 2007
    Co-Authors: Ruigang Yang, Andrew Nashel, Herman Towles, Celso Setsuo Kurashima, Marcelo Knörich Zuffo
    Abstract:

    Existing video Teleconferencing techniques suffer from limited field of view, low resolution, and fixed viewpoint. We present a set of novel techniques to overcome these limitations. Based on the light field rendering concept, we utilize an array of cameras to capture the participant(s) from multiple view angles. Based on either assumed or estimated scene geometry, these images are assembled to create a high-resolution seamless image from a user-controllable viewpoint. Two different camera configurations, one dense and one sparse, are presented; the dense format is optimized for high-fidelity view synthesis while the sparse configuration is for expanded viewing volume. For the dense configuration, we provide a detailed analysis on the number of camera images required. For the sparse configuration, we present a robust technique to estimate an approximation of the scene geometry to provide smooth transition when the viewpoint is changed. The novelty of our approach is that it allows the users to electronically steer the viewpoint in real time for a live scene. Therefore it can be used in 3D Teleconferencing systems to generate stereoscopic views, or in group Teleconferencing to provide virtual camera views that minimize the overall perspective distortions. We demonstrate the effectiveness of our approach with a point-to-point Teleconferencing system distributed in several locations across the continental United States.
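
    At its core the approach reprojects nearby camera images through the assumed (or estimated) scene geometry and blends them according to how close each camera is to the requested viewpoint. The sketch below illustrates that with a fronto-parallel proxy plane and OpenCV homography warps; the plane model, weighting scheme, and names are simplifying assumptions, not the paper's renderer.

```python
# Hedged sketch of light-field style view synthesis from a camera array:
# warp the nearest source images via an assumed proxy plane and blend them
# with weights based on proximity to the requested virtual viewpoint.
import numpy as np
import cv2

def plane_homography(K_src, R, t, K_virt, depth):
    """Homography induced by an assumed scene plane z = depth in the source
    camera frame (normal n = [0, 0, 1]); (R, t) is the virtual camera's pose
    relative to that source camera."""
    n = np.array([0.0, 0.0, 1.0])
    return K_virt @ (R - np.outer(t, n) / depth) @ np.linalg.inv(K_src)

def synthesize_view(images, cam_centers, homographies, virt_center, k=3):
    """Blend the k source cameras nearest to the requested viewpoint."""
    dist = np.linalg.norm(np.asarray(cam_centers) - virt_center, axis=1)
    nearest = np.argsort(dist)[:k]
    weights = 1.0 / (dist[nearest] + 1e-6)
    weights /= weights.sum()
    h, w = images[0].shape[:2]
    out = np.zeros_like(images[0], dtype=np.float32)
    for wt, idx in zip(weights, nearest):
        warped = cv2.warpPerspective(images[idx], homographies[idx], (w, h))
        out += wt * warped.astype(np.float32)
    return out.astype(np.uint8)
```

    In the dense configuration an assumed proxy plane of this kind suffices; the sparse configuration described in the paper replaces it with an estimated approximate geometry.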

  • Interactive 3D Teleconferencing with user-adaptive views
    Proceedings of the 2004 ACM SIGMM workshop on Effective telepresence - ETP '04, 2004
    Co-Authors: Ruigang Yang, Andrew Nashel, Herman Towles
    Abstract:

    We present a system and techniques for synthesizing views for three-dimensional video Teleconferencing. Instead of performing complex 3D scene acquisition, we decided to trade storage/hardware for computation, i.e., using more cameras. While it is expensive to directly capture a scene from all possible viewpoints, we observed that the participants' viewpoints usually remain at a constant height (eye level) during video Teleconferencing. Therefore we can restrict the possible viewpoint to lie within a virtual plane without sacrificing much of the realism. Doing so significantly reduces the number of cameras required. We demonstrate a real-time system that uses a linear array of cameras to perform light-field style rendering. The simplicity and robustness of light field rendering, combined with the natural restriction of a limited view volume in video Teleconferencing, allow us to synthesize photo-realistic views per user request at interactive rates.
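
    Restricting the viewpoint to eye level is what makes a simple linear array sufficient: for a requested eye position along the array, only the adjacent cameras are needed. A minimal sketch of that reduction, using a plain cross-fade between the two nearest cameras, follows; the actual light-field renderer also reprojects through a focal plane, and all names here are illustrative.

```python
# Minimal sketch: with the virtual eye point confined to the line of a 1D
# camera array at eye level, view synthesis reduces to blending the two
# adjacent source cameras.
import numpy as np

def synthesize_eye_level_view(images, cam_x, view_x):
    """images: frames from a linear camera array; cam_x: their x positions
    (sorted); view_x: requested viewpoint x along the array."""
    i = np.searchsorted(cam_x, view_x)
    i = np.clip(i, 1, len(cam_x) - 1)
    x0, x1 = cam_x[i - 1], cam_x[i]
    alpha = (view_x - x0) / (x1 - x0)      # 0 -> left camera, 1 -> right camera
    left = images[i - 1].astype(np.float32)
    right = images[i].astype(np.float32)
    return ((1 - alpha) * left + alpha * right).astype(np.uint8)
```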

  • Geometrically correct imagery for Teleconferencing
    ACM Multimedia, 1999
    Co-Authors: Ruigang Yang, Michael S Brown, Brent W Seales, Henry Fuchs
    Abstract:

    Current camera-monitor Teleconferencing applications produce unrealistic imagery and break any sense of presence for the participants. Other capture/display technologies can be used to provide more compelling Teleconferencing. However, complex geometries in capture/display systems make producing geometrically correct imagery difficult. It is usually impractical to detect, model and compensate for all effects introduced by the capture/display system. Most applications simply ignore these issues and rely on the user acceptance of the camera-monitor paradigm. This paper presents a new and simple technique for producing geometrically correct imagery for Teleconferencing environments. The necessary image transformations are derived by finding a mapping between a capture and display device for a fixed viewer location. The capture/display relationship is computed directly in device coordinates and completely avoids the need for any intermediate, complex representations of screen geometry, capture and display distortions, and viewer location. We describe our approach and demonstrate it via several prototype implementations that operate in real-time and provide a substantially more compelling sense of presence than the standard Teleconferencing paradigm.
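
    Because the capture-to-display relationship is measured directly in device coordinates, rendering reduces to warping each captured frame through a per-pixel lookup table built once for the fixed viewer position. The sketch below expresses that warp with OpenCV's remap; the lookup-table format and names are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch: warp captured imagery through a measured capture-to-display
# mapping, avoiding any explicit model of screen geometry or distortion.
import numpy as np
import cv2

def render_frame(captured, lookup_x, lookup_y):
    """Warp a captured frame directly in device coordinates.

    lookup_x, lookup_y : for every display pixel, the capture-device coordinate
    whose color should appear there, measured once with calibration imagery for
    a fixed viewer location rather than modeled analytically.
    """
    map_x = lookup_x.astype(np.float32)
    map_y = lookup_y.astype(np.float32)
    return cv2.remap(captured, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```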

Daniel Sumorok - One of the best experts on this subject based on the ideXlab platform.

  • Scalable, practical VoIP Teleconferencing with end-to-end homomorphic encryption
    IEEE Transactions on Information Forensics and Security, 2017
    Co-Authors: Kurt Rohloff, David Bruce Cousins, Daniel Sumorok
    Abstract:

    We present an approach to scalable, secure voice over IP (VoIP) Teleconferencing on commodity mobile devices and data networks with end-to-end homomorphic encryption. We assume an honest-but-curious threat model where an adversary, despite observing all communications between participants and having access to Teleconferencing servers, is unable to obtain unencrypted data and subsequently listen to the conversation. Prior secure VoIP Teleconferencing services have relied on: 1) Teleconferencing clients that maintain point-to-point encrypted links with other clients or 2) a Teleconferencing server that can access and manipulate VoIP streams unencrypted. Our approach mixes VoIP data streams at a single Teleconferencing server only while encrypted; data streams are never decrypted at the Teleconferencing server. Innovation comes from an efficient VoIP encoding that reduces circuit depth for homomorphic mixing of encrypted VoIP data, parameterization for low bandwidth usage, and integration into an existing open-source VoIP infrastructure. We experimentally evaluate our approach on commodity iPhones, with mixing at VoIP servers on lowest-cost Amazon AWS cloud server instances, communicating over commercial data networks and 802.11n access points.
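
    The central idea, mixing at the server while everything stays encrypted, can be illustrated with any additively homomorphic scheme. The paper builds on a lattice-based construction with an efficient VoIP encoding; the toy sketch below instead substitutes the Paillier scheme from the python-paillier (phe) package and encrypts one integer sample per ciphertext, purely to show the data flow: clients encrypt, the server adds ciphertexts without decrypting, and a listener decrypts the mix.

```python
# Structural sketch of mixing VoIP frames while encrypted. Paillier stands in
# for the paper's lattice-based scheme; frame handling is greatly simplified.
from phe import paillier

# Clients share the conference keypair; the mixing server sees only ciphertexts.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

def client_encrypt_frame(samples):
    return [public_key.encrypt(int(s)) for s in samples]

def server_mix(encrypted_streams):
    """Add the encrypted streams sample by sample; no decryption at the server."""
    mixed = encrypted_streams[0]
    for stream in encrypted_streams[1:]:
        mixed = [a + b for a, b in zip(mixed, stream)]
    return mixed

def client_decrypt_frame(encrypted_frame):
    return [private_key.decrypt(c) for c in encrypted_frame]

# Example: two speakers' frames mixed at the server, decoded by a listener.
alice = client_encrypt_frame([120, -340, 87])
bob = client_encrypt_frame([15, 40, -90])
print(client_decrypt_frame(server_mix([alice, bob])))   # [135, -300, -3]
```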

R. Dunnill - One of the best experts on this subject based on the ideXlab platform.

  • Internet Teleconferencing as a clinical tool for anesthesiologists.
    Journal of clinical monitoring and computing, 1998
    Co-Authors: Keith J. Ruskin, T. E. A. Palmer, R. R. P. M. Hagenouw, A. Lack, R. Dunnill
    Abstract:

    Internet Teleconferencing software can be used to hold "virtual" meetings, during which participants around the world can share ideas. A core group of anesthetic medical practitioners, largely consisting of members of the Society for Advanced Telecommunications in Anesthesia (SATA), has begun to hold regularly scheduled "virtual grand rounds." This paper examines currently available software and offers impressions of our own early experiences with this technology. Two Teleconferencing systems have been used: White Pine Software CU-SeeMe and Microsoft NetMeeting. While both provided acceptable results, each had specific advantages and disadvantages. CU-SeeMe is easier to use when conferences include more than two participants. NetMeeting provides higher-quality audio and video signals under crowded network conditions, and is better for conferences with only two participants. While some effort is necessary to get these Teleconferencing systems to work well, we have been using desktop conferencing for six months to hold virtual Internet meetings. The sound and video images produced by Internet Teleconferencing software are inferior to those of dedicated point-to-point Teleconferencing systems. However, low cost, wide availability, and ease of use make this technology a potentially valuable tool for clinicians and researchers.