Video Projection

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 1623 Experts worldwide, ranked by the ideXlab platform

François Michaud - One of the best experts on this subject based on the ideXlab platform.

  • Egocentric and exocentric teleoperation interface using real-time, 3D Video Projection
    2009 4th ACM IEEE International Conference on Human-Robot Interaction (HRI), 2009
    Co-Authors: François Ferland, François Pomerleau, Chon Tam Le Dinh, François Michaud
    Abstract:

    The user interface is the central element of a telepresence robotic system, and its visualization modalities greatly affect the operator's situation awareness and, thus, performance. Depending on the task at hand and the operator's preferences, switching between egocentric and exocentric viewpoints and improving the depth representation can provide better perspectives of the operating environment. Our system, which combines a 3D reconstruction of the environment from laser range finder readings with two Video Projection methods, allows the operator to easily switch between ego- and exocentric viewpoints. This paper presents the interface and demonstrates its capabilities by having 13 operators teleoperate a mobile robot in a navigation task.
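
    As an illustration of the viewpoint switching the abstract describes, here is a minimal sketch of how an interface might place the virtual camera around the reconstructed robot pose. All function names, offsets, and heights are hypothetical choices for illustration, not values from the paper.

```python
import math

def camera_pose(robot_xy, robot_yaw, mode="ego"):
    """Virtual camera placement around the reconstructed robot pose.

    'ego' puts the camera at the robot, looking along its heading;
    'exo' pulls it behind and above, looking back at the robot.
    All offsets here are illustrative choices, not the paper's values.
    """
    fx, fy = math.cos(robot_yaw), math.sin(robot_yaw)  # unit heading vector
    if mode == "ego":
        eye = (robot_xy[0], robot_xy[1], 1.2)          # roughly sensor height
        target = (eye[0] + fx, eye[1] + fy, eye[2])
    else:
        eye = (robot_xy[0] - 3.0 * fx, robot_xy[1] - 3.0 * fy, 2.5)
        target = (robot_xy[0], robot_xy[1], 0.0)
    return eye, target

# Robot at (2, 1) facing +y: the exocentric camera sits 3 m behind, 2.5 m up.
eye, target = camera_pose((2.0, 1.0), math.pi / 2, mode="exo")
```

    Rendering the laser-based reconstruction from either pose gives the two perspectives the operator toggles between.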

Yong Chen - One of the best experts on this subject based on the ideXlab platform.

  • Mask Video Projection-Based Stereolithography With Continuous Resin Flow
    Journal of Manufacturing Science and Engineering, 2019
    Co-Authors: Xiangjia Li, Yong Chen
    Abstract:

    The mask image Projection-based stereolithography (MIP-SL) is a low-cost and high-resolution additive manufacturing (AM) process. However, the slow speed of part separation and resin refilling is the primary bottleneck that limits the fabrication speed of the MIP-SL process. In addition, the stair-stepping effect due to the layer-based fabrication process limits the surface quality of built parts. To address the critical issues in the MIP-SL process related to resin refilling and layer-based fabrication, we present a mask Video Projection-based stereolithography (MVP-SL) process with continuous resin flow and light exposure. The newly developed AM process enables the continuous fabrication of three-dimensional (3D) objects with ultra-high fabrication speed. In the paper, the system design to achieve mask Video Projection and the process settings to achieve ultrafast fabrication speed are presented. The relationship between process parameters and the surface quality of the built parts is discussed. Test results illustrate that the MVP-SL process with a continuous resin flow can build three-dimensional objects within minutes, and the surface quality of the fabricated objects is significantly improved.
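
    The speed claim follows from simple kinematics: with continuous platform motion, build time depends only on part height and platform speed, and each video frame exposes a slice of thickness speed/fps. A back-of-the-envelope sketch with assumed, illustrative numbers (not values from the paper):

```python
def build_time_minutes(height_mm, speed_mm_s):
    """Time to grow a part of the given height at a constant platform speed."""
    return height_mm / speed_mm_s / 60.0

def slice_per_frame_um(speed_mm_s, fps):
    """Thickness of resin exposed by one video frame, in micrometres."""
    return speed_mm_s / fps * 1000.0

t = build_time_minutes(30.0, 0.1)   # 5.0 min for a 30 mm part at 0.1 mm/s
s = slice_per_frame_um(0.1, 30)     # ~3.3 um exposed per frame at 30 fps
```

    The per-frame slice being far thinner than a typical printed layer is what suppresses the stair-stepping effect.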

  • Mask Video Projection Based Stereolithography With Continuous Resin Flow to Build Digital Models in Minutes
    Volume 1: Additive Manufacturing; Bio and Sustainable Manufacturing, 2018
    Co-Authors: Xiangjia Li, Yong Chen
    Abstract:

    The mask image Projection-based stereolithography (MIP-SL) is a low-cost and high-resolution additive manufacturing (AM) process. However, the slow speed of part separation and resin refilling is the primary bottleneck that limits the fabrication speed of the MIP-SL process. In addition, the stair-stepping effect due to the layer-based fabrication process limits the surface quality of built parts. To address the critical issues in the MIP-SL process related to resin refilling and layer-based fabrication, we present a mask Video Projection-based stereolithography (MVP-SL) process with continuous resin flow and light exposure. The newly developed AM process enables the continuous fabrication of three-dimensional (3D) objects with ultra-high fabrication speed. In this paper, the system design to achieve mask Video Projection and the process settings to achieve ultrafast fabrication speed are presented. The relationship between process parameters and the surface quality of the fabricated parts is discussed. Test results illustrate that the MVP-SL process with continuous resin flow can build three-dimensional objects within minutes, and the surface quality of the fabricated objects is significantly improved.

  • additive manufacturing based on optimized mask Video Projection for improved accuracy and resolution
    Journal of Manufacturing Processes, 2012
    Co-Authors: Chi Zhou, Yong Chen
    Abstract:

    Additive manufacturing (AM) processes based on mask image Projection using digital micro-mirror devices (DMD) have the potential to be fast and inexpensive. More and more research and commercial systems have been developed based on such digital devices. However, the accuracy and resolution of the related AM processes are constrained by the limited number of mirrors in a DMD. In this paper, a novel AM process based on mask Video Projection is presented. For each layer, a set of mask images, instead of a single image, is planned based on the principle of optimized pixel blending. The planned images are then projected in synchronization with a small movement of the building platform. A mask image planning method is presented for the formulated optimization problem. Experimental results verify that the mask Video Projection process can significantly improve the accuracy and resolution of built components.
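
    A toy 1-D sketch of the blending principle (not the authors' optimizer): averaging several binary DMD masks, each projected during a slightly shifted platform position, yields exposure levels finer than a single mirror's on/off state, which is what lets a mask video exceed the mirror-count resolution limit.

```python
def blended_exposure(masks):
    """Average equal-length binary masks into one exposure profile."""
    n = len(masks)
    return [sum(col) / n for col in zip(*masks)]

# Three binary masks shown in quick succession over six pixel positions:
masks = [
    [1, 1, 1, 0, 0, 0],
    [1, 1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0, 0],
]
profile = blended_exposure(masks)   # graded edge: 1, 1, 2/3, 1/3, 0, 0
```

    The paper's planner chooses the mask set by optimization so the accumulated exposure best matches the target geometry; this sketch only shows why multiple masks give a graded edge where a single mask gives a hard step.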

Jan M. Wiener - One of the best experts on this subject based on the ideXlab platform.

  • Can People Not Tell Left from Right in VR? Point-to-origin Studies Revealed Qualitative Errors in Visual Path Integration
    2007 IEEE Virtual Reality Conference, 2007
    Co-Authors: Bernhard E. Riecke, Jan M. Wiener
    Abstract:

    Even in state-of-the-art virtual reality (VR) setups, participants often feel lost when navigating through virtual environments. In psychological experiments, such disorientation is often compensated for by extensive training. The current study investigated participants' sense of direction by means of a rapid point-to-origin task without any training or performance feedback. This allowed us to study participants' intuitive spatial orientation in VR while minimizing the influence of higher cognitive abilities and compensatory strategies. After visually displayed passive excursions along one- or two-segment trajectories, participants were asked to point back to the origin of locomotion "as accurately and quickly as possible". Despite using a high-quality Video Projection with an 84° × 63° field of view, participants' overall performance was rather poor. Moreover, six of the 16 participants exhibited striking qualitative errors, i.e., consistent left-right confusions that have not been observed in comparable real-world experiments. Taken together, this study suggests that even an immersive, high-quality Video Projection system is not necessarily sufficient for enabling natural spatial orientation in VR. We propose that a rapid point-to-origin paradigm can be a useful tool for evaluating and improving the effectiveness of VR setups in terms of enabling natural and unencumbered spatial orientation and performance.
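
    The geometry behind the point-to-origin task is plain path integration: after one leg, a turn, and a second leg, the correct egocentric pointing direction follows from vector addition. A minimal sketch (the sign conventions here are mine, not the authors'):

```python
import math

def point_to_origin_angle(d1, turn_deg, d2):
    """Egocentric bearing back to the start after a two-segment excursion.

    Walk d1 facing 'north', rotate by turn_deg (positive = rightward,
    a convention chosen here), then walk d2. Returns the angle in degrees
    the participant should turn to face the origin (positive = right).
    """
    heading = math.radians(turn_deg)        # heading, clockwise from north
    x = d2 * math.sin(heading)              # final position after both legs
    y = d1 + d2 * math.cos(heading)
    to_origin = math.atan2(-x, -y)          # world direction back to (0, 0)
    rel = to_origin - heading               # relative to the final heading
    rel = (rel + math.pi) % (2 * math.pi) - math.pi   # wrap to [-180, 180)
    return math.degrees(rel)
```

    For equal legs and a 90° right turn, the correct response is a 135° right turn; a left-right confusion of the kind reported would produce a mirror-image 135° left response.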

  • SIGGRAPH Research Posters - Point-to-origin experiments in VR revealed novel qualitative errors in visual path integration
    ACM SIGGRAPH 2006 Research posters on - SIGGRAPH '06, 2006
    Co-Authors: Bernhard E. Riecke, Jan M. Wiener
    Abstract:

    Even in state-of-the-art virtual reality (VR) setups, participants often feel lost when navigating through virtual environments. In psychological experiments, such disorientation is often compensated for by extensive training. The current study investigated participants' sense of direction by means of a rapid point-to-origin task without any training or performance feedback. This allowed us to study participants' intuitive spatial orientation in VR while minimizing the influence of higher cognitive abilities and compensatory strategies. After visually displayed passive excursions along one- or two-segment trajectories, participants were asked to point back to the origin of locomotion "as accurately and quickly as possible". Despite using a high-quality Video Projection with an 84°×63° field of view, participants' overall performance was rather poor. Moreover, six of the 16 participants exhibited striking qualitative errors, i.e., consistent left-right confusions that have not been observed in comparable real-world experiments. Taken together, this study suggests that even an immersive high-quality Video Projection system is not necessarily sufficient for enabling natural spatial orientation in VR. We propose that a rapid point-to-origin paradigm can be a useful tool for evaluating and improving the effectiveness of VR setups in terms of enabling natural and unencumbered spatial orientation and performance.
Jeremy R. Cooperstock - One of the best experts on this subject based on the ideXlab platform.

  • Shadow Removal in Front Projection Environments Using Object Tracking
    2007 IEEE Conference on Computer Vision and Pattern Recognition, 2007
    Co-Authors: Samuel Audet, Jeremy R. Cooperstock
    Abstract:

    When an occluding object, such as a person, stands between a projector and a display surface, a shadow results. We can compensate by positioning multiple projectors so they produce identical and overlapping images and by using a system to locate shadows. Existing systems work by detecting either the shadows or the occluders. Shadow detection methods cannot remove shadows before they appear and are sensitive to Video Projection, while current occluder detection methods require near infrared cameras and illumination. Instead, we propose using a camera-based object tracker to locate the occluder and an algorithm to model the shadows. The algorithm can adapt to other tracking technologies as well. Despite imprecision in the calibration and tracking process, we found that our system performs effective shadow removal with sufficiently low processing delay for interactive applications with Video Projection.
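
    The occluder-tracking idea reduces to projective geometry: casting rays from a projector through the tracked occluder onto the display wall predicts where the shadow will fall before it visibly appears, so that region can be blanked in one projector and filled by another. A minimal geometric sketch, under the simplifying assumptions of a point occluder and an axis-aligned wall:

```python
def shadow_point(projector, occluder, wall_x):
    """Where the ray projector -> occluder lands on the wall plane x = wall_x."""
    px, py, pz = projector
    ox, oy, oz = occluder
    t = (wall_x - px) / (ox - px)   # assumes the occluder sits between projector and wall
    return (wall_x, py + t * (oy - py), pz + t * (oz - pz))

# Projector 2 m up at the origin, an occluder point 2 m away, wall 4 m away.
s = shadow_point((0.0, 0.0, 2.0), (2.0, 0.0, 1.5), 4.0)
```

    Sampling such rays over the occluder's tracked silhouette yields the full predicted shadow footprint; the paper's contribution is doing this from an object tracker rather than from shadow detection after the fact.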

Samuel Audet - One of the best experts on this subject based on the ideXlab platform.

  • Interactive Video Projection on a Moving Planar Surface of Arbitrary Texture Tracked with a Color Camera
    2020
    Co-Authors: Samuel Audet, Masatoshi Okutomi, Masayuki Tanaka
    Abstract:

    Interactive Video Projection, to be effective, must track moving targets, but current solutions consider the displayed content as interference and largely depend on channels orthogonal to visible light. Instead, we propose an algorithm that considers the content as additional information useful for direct alignment. Using a color camera, our implemented software successfully tracks with subpixel accuracy a planar surface of diffuse reflectance properties at about eight frames per second on commodity hardware, providing a solid base for future enhancements.

  • Augmenting moving planar surfaces robustly with Video Projection and direct image alignment
    Virtual Reality, 2013
    Co-Authors: Samuel Audet, Masatoshi Okutomi, Masayuki Tanaka
    Abstract:

    Augmented reality applications based on Video Projection, to be effective, must track moving targets and make sure that the display remains aligned even when they move, but the Projection can severely alter their appearances to the point where traditional computer vision algorithms fail. Current solutions consider the displayed content as interference and largely depend on channels orthogonal to visible light. They cannot directly align projector images with real-world surfaces, even though this may be the actual goal. We propose instead to model the light emitted by projectors and reflected into cameras and to consider the displayed content as additional information useful for direct alignment. Using a color camera, our implemented software successfully tracks with subpixel accuracy a planar surface of diffuse reflectance properties at an average of eight frames per second on commodity hardware, providing a solid base for future enhancements.
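
    The direct-alignment idea can be caricatured in one dimension: model the camera signal as gain × projected content + ambient light, and keep the alignment whose best linear fit leaves the smallest residual. This toy sketch searches only an integer shift on a scanline (the actual system presumably aligns a full planar warp with a richer photometric model); the displayed content thus helps alignment instead of interfering with it.

```python
def best_shift(cam, proj, max_shift):
    """Integer shift aligning a projected pattern to a camera scanline."""
    def residual(shift):
        # Pair each camera sample with the projector sample it would see.
        pairs = [(proj[i - shift], cam[i])
                 for i in range(len(cam)) if 0 <= i - shift < len(proj)]
        n = len(pairs)
        sx = sum(p for p, _ in pairs)
        sy = sum(c for _, c in pairs)
        sxx = sum(p * p for p, _ in pairs)
        sxy = sum(p * c for p, c in pairs)
        # Least-squares fit of cam ≈ gain * proj + ambient at this shift.
        denom = n * sxx - sx * sx
        gain = (n * sxy - sx * sy) / denom if denom else 0.0
        ambient = (sy - gain * sx) / n
        return sum((gain * p + ambient - c) ** 2 for p, c in pairs)
    return min(range(-max_shift, max_shift + 1), key=residual)

proj = [0, 1, 2, 3, 2, 1, 0, 0, 0, 0]                        # projected pattern
cam = [10 + 5 * v for v in [0, 0, 0, 1, 2, 3, 2, 1, 0, 0]]   # seen shifted, scaled, offset
```

    Because gain and ambient are re-fit at every candidate alignment, the method tolerates the projector brightening the surface, which is exactly what defeats trackers that treat the projection as interference.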

  • Augmenting moving planar surfaces interactively with Video Projection and a color camera
    Proceedings - IEEE Virtual Reality, 2012
    Co-Authors: Samuel Audet, Masatoshi Okutomi, Masayuki Tanaka
    Abstract:

    Traditional applications of augmented reality superimpose generated images onto the real world through goggles or monitors held between objects of interest and the user. To render the augmented surfaces interactive, we may directly exploit existing computer vision techniques. However, when using Video Projection to alter directly the appearance of surfaces, most vision-based algorithms fail. Even Wear Ur World [5], a recent and otherwise well-received interactive projector-camera system, relies on colored thimbles as markers. As a notable exception, Tele-Graffiti [6] was designed for normal visible-light cameras without markers, but it still considers the light emitted from the projector as unwanted interference, limiting its application.

  • Shadow Removal in Front Projection Environments Using Object Tracking
    2007 IEEE Conference on Computer Vision and Pattern Recognition, 2007
    Co-Authors: Samuel Audet, Jeremy R. Cooperstock
    Abstract:

    When an occluding object, such as a person, stands between a projector and a display surface, a shadow results. We can compensate by positioning multiple projectors so they produce identical and overlapping images and by using a system to locate shadows. Existing systems work by detecting either the shadows or the occluders. Shadow detection methods cannot remove shadows before they appear and are sensitive to Video Projection, while current occluder detection methods require near infrared cameras and illumination. Instead, we propose using a camera-based object tracker to locate the occluder and an algorithm to model the shadows. The algorithm can adapt to other tracking technologies as well. Despite imprecision in the calibration and tracking process, we found that our system performs effective shadow removal with sufficiently low processing delay for interactive applications with Video Projection.
