Visual Computing

The Experts below are selected from a list of 26,193 Experts worldwide, ranked by the ideXlab platform.

George Bebis - One of the best experts on this subject based on the ideXlab platform.

  • Advances in Visual Computing: Proceedings of the 12th International Symposium (ISVC, December 12–14, Las Vegas, NV, USA), Part 2
    2016
    Co-Authors: George Bebis, Richard Boyle, Bahram Parvin, Darko Koracin, Fatih Porikli, Sandra Skaff, Alireza Entezari, Jianyuan Min, Daisuke Iwai, Amela Sadagic
    Abstract:

    The two-volume set LNCS 10072 and LNCS 10073 constitutes the refereed proceedings of the 12th International Symposium on Visual Computing, ISVC 2016, held in Las Vegas, NV, USA, in December 2016. The 102 revised full papers and 34 poster papers presented in this book were carefully reviewed and selected from 220 submissions. The papers are organized in topical sections: Part I (LNCS 10072) comprises computational bioimaging; computer graphics; motion and tracking; segmentation; pattern recognition; Visualization; 3D mapping, modeling and surface reconstruction; advancing autonomy for aerial robotics; medical imaging; virtual reality; computer vision as a service; Visual perception and robotic systems; and biometrics. Part II (LNCS 10073) comprises applications; Visual surveillance; computer graphics; and virtual reality.

  • Advances in Visual Computing: 9th International Symposium, ISVC 2013, Rethymnon, Crete, Greece, July 29-31, 2013. Proceedings, Part II
    2013
    Co-Authors: George Bebis
    Abstract:

    The two-volume set LNCS 8033 and LNCS 8034 constitutes the refereed proceedings of the 9th International Symposium on Visual Computing, ISVC 2013, held in Rethymnon, Crete, Greece, in July 2013. The 63 revised full papers and 35 poster papers presented together with 32 special track papers were carefully reviewed and selected from more than 220 submissions. The papers are organized in topical sections: Part I (LNCS 8033) comprises computational bioimaging; computer graphics; motion, tracking, and recognition; segmentation; Visualization; 3D mapping, modeling and surface reconstruction; feature extraction, matching, and recognition; sparse methods for computer vision, graphics, and medical imaging; and face processing and recognition. Part II (LNCS 8034) comprises topics such as Visualization; Visual Computing with multimodal data streams; Visual Computing in digital cultural heritage; intelligent environments: algorithms and applications; applications; and virtual reality.

  • Advances in Visual Computing: 8th International Symposium, ISVC 2012, Rethymnon, Crete, Greece, July 16-18, 2012, Revised Selected Papers, Part I
    2012
    Co-Authors: George Bebis
    Abstract:

    The two-volume set LNCS 7431 and LNCS 7432 constitutes the refereed proceedings of the 8th International Symposium on Visual Computing, ISVC 2012, held in Rethymnon, Crete, Greece, in July 2012. The 68 revised full papers and 35 poster papers presented together with 45 special track papers were carefully reviewed and selected from more than 200 submissions. The papers are organized in topical sections: Part I (LNCS 7431) comprises computational bioimaging; computer graphics; calibration and 3D vision; object recognition; illumination, modeling, and segmentation; Visualization; 3D mapping, modeling and surface reconstruction; motion and tracking; optimization for vision, graphics, and medical imaging; and HCI and recognition. Part II (LNCS 7432) comprises topics such as unconstrained biometrics: advances and trends; intelligent environments: algorithms and applications; applications; virtual reality; and face processing and recognition.

  • Advances in Visual Computing: 7th International Symposium, ISVC 2011, Las Vegas, NV, USA, September 26-28, 2011. Proceedings, Part I
    2011
    Co-Authors: George Bebis, Richard Boyle, Bahram Parvin, Darko Koracin, Song Wang, Kim Kyungnam, Bedrich Benes, Kenneth Moreland, Christoph W. Borst, Stephen Diverdi
    Abstract:

    The two-volume set LNCS 6938 and LNCS 6939 constitutes the refereed proceedings of the 7th International Symposium on Visual Computing, ISVC 2011, held in Las Vegas, NV, USA, in September 2011. The 68 revised full papers and 46 poster papers presented together with 30 papers in the special tracks were carefully reviewed and selected from more than 240 submissions. The papers of Part I (LNCS 6938) are organized in topical sections on computational bioimaging; computer graphics; motion and tracking; segmentation; Visualization; mapping, modeling and surface reconstruction; biomedical imaging; computer graphics; interactive Visualization in novel and heterogeneous display environments; and object detection and recognition. Part II (LNCS 6939) comprises topics such as immersive Visualization; applications; object detection and recognition; virtual reality; and best practices in teaching Visual Computing.

  • Advances in Visual Computing: 5th International Symposium, ISVC 2009, Las Vegas, NV, USA, November 30-December 2, 2009. Proceedings, Part II
    2009
    Co-Authors: Cláudio T. Silva, George Bebis, Bahram Parvin, Darko Koracin, Renato Pajarola, Yoshinori Kuno, Junxian Wang, Daniel Coming, Miguel L. Encarnação, Richard Boyle
    Abstract:

    The two-volume set LNCS 5875 and LNCS 5876 constitutes the refereed proceedings of the 5th International Symposium on Visual Computing, ISVC 2009, held in Las Vegas, NV, USA, in November/December 2009. The 97 revised full papers and 63 poster papers presented together with 40 full and 15 poster papers of 7 special tracks were carefully reviewed and selected from more than 320 submissions. The papers are organized in topical sections on computer graphics; Visualization; feature extraction and matching; medical imaging; motion; virtual reality; face processing; reconstruction; detection and tracking; applications; and video analysis and event recognition. The 7 additional special tracks address issues such as object recognition; Visual Computing for robotics; computational bioimaging; 3D mapping, modeling and surface reconstruction; deformable models: theory and applications; Visualization-enhanced data analysis for health applications; and optimization for vision, graphics and medical imaging: theory and applications.

Miguel Sousa - One of the best experts on this subject based on the ideXlab platform.

  • webVis/instant3DHub: Visual Computing as a service infrastructure to deliver adaptive, secure and scalable user centric data Visualisation
    international conference on 3D web technology, 2015
    Co-Authors: Johannes Behr, Clotilde Jeulin, Christophe Mouton, Samuel Parfouru, Julien Champeau, Maik Thoner, Christian Stein, Max Limper, Michael Schmitt, Miguel Sousa
    Abstract:

    This paper presents the webVis/instant3DHub platform, which combines a novel Web-Components based framework and a Visual Computing as a Service infrastructure to deliver an interactive 3D data Visualisation solution. The system focuses on minimising resource consumption, while maximising the end-user experience. It utilises an adaptive and automated combination of client, server and hybrid Visualisation techniques, while orchestrating transmission, caching and rendering services to deliver structurally and semantically complex data sets on any device class and network architecture. The API and Web Component framework allow the application developer to compose and manipulate complex data setups with a simple set of commands inside the browser, without requiring knowledge about the underlying service infrastructure, interfaces and the fully automated processes. This results in a new class of interactive applications, built around a canvas for real-time Visualisation of massive data sets.
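
    The abstract's claim that developers can "compose and manipulate complex data setups with a simple set of commands inside the browser" is easiest to picture with a small sketch. The TypeScript fragment below is only an illustration of that idea: the context factory, method names, property keys and data URIs are assumptions made for this example, not the actual webVis/instant3DHub API.

      // Hypothetical sketch only: the interface and function names below are
      // illustrative assumptions, not the real webVis/instant3DHub API.
      interface VisualizationContext {
        add(uri: string): Promise<number>;                      // register a data set, returns a node id
        setProperty(node: number, key: string, value: unknown): void;
        remove(node: number): void;
      }

      declare function createContext(canvas: HTMLCanvasElement): VisualizationContext;

      async function showAssembly(canvas: HTMLCanvasElement): Promise<void> {
        // The service layer is assumed to decide transparently whether this node
        // is rendered on the client, on the server, or with a hybrid technique.
        const ctx = createContext(canvas);
        const assembly = await ctx.add("urn:example:plant/unit-42");  // hypothetical data URI
        ctx.setProperty(assembly, "enabled", true);
        ctx.setProperty(assembly, "appearanceURI", "urn:example:style/wireframe");
      }

    The point of the sketch is only that application code stays at the level of "add a data set, set its properties", while transmission, caching and rendering decisions remain inside the service infrastructure.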

  • Web3D - webVis/instant3DHub: Visual Computing as a service infrastructure to deliver adaptive, secure and scalable user centric data Visualisation
    Proceedings of the 20th International Conference on 3D Web Technology - Web3D '15, 2015
    Co-Authors: Johannes Behr, Clotilde Jeulin, Christophe Mouton, Samuel Parfouru, Julien Champeau, Maik Thoner, Christian Stein, Max Limper, Michael Schmitt, Miguel Sousa
    Abstract:

    This paper presents the webVis/instant3DHub platform, which combines a novel Web-Components based framework and a Visual Computing as a Service infrastructure to deliver an interactive 3D data Visualisation solution. The system focuses on minimising resource consumption, while maximising the end-user experience. It utilises an adaptive and automated combination of client, server and hybrid Visualisation techniques, while orchestrating transmission, caching and rendering services to deliver structurally and semantically complex data sets on any device class and network architecture. The API and Web Component framework allow the application developer to compose and manipulate complex data setups with a simple set of commands inside the browser, without requiring knowledge about the underlying service infrastructure, interfaces and the fully automated processes. This results in a new class of interactive applications, built around a canvas for real-time Visualisation of massive data sets.

Thomas S. Huang - One of the best experts on this subject based on the ideXlab platform.

  • gestural interface to a Visual Computing environment for molecular biologists
    International Conference on Automatic Face and Gesture Recognition, 1996
    Co-Authors: Vladimir Pavlovic, Rajeev Sharma, Thomas S. Huang
    Abstract:

    In recent years there has been tremendous progress in 3-D, immersive display and virtual reality (VR) technologies. Scientific Visualization of data is one of many applications that has benefited from this progress. To fully exploit the potential of these applications in the new environment there is a need for "natural" interfaces that allow the manipulation of such displays without burdensome attachments. This paper describes the use of Visual hand gesture analysis enhanced with speech recognition for developing a bimodal gesture/speech interface for controlling a 3-D display. The interface augments an existing application, VMD, which is a VR Visual Computing environment for molecular biologists. The free hand gestures are used for manipulating the 3-D graphical display together with a set of speech commands. We concentrate on the Visual gesture analysis techniques used in developing this interface. The dual modality of gesture/speech is found to greatly aid the interaction capability.
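
    The abstract gives no implementation details, so the TypeScript sketch below is only a rough illustration of the bimodal idea it describes: a recognized speech command selects the manipulation mode, while the continuously tracked hand pose supplies the quantitative parameters for that mode. The command vocabulary and the data types are assumptions made for this example, not the authors' design.

      // Illustrative sketch of gesture/speech fusion; not the paper's implementation.
      type SpeechCommand = "rotate" | "translate" | "zoom" | "stop";

      interface HandPose {
        x: number;      // normalized hand position from the vision module (assumed)
        y: number;
        spread: number; // e.g. thumb-to-index-finger distance (assumed)
      }

      interface DisplayTransform {
        rotation: [number, number];    // incremental rotation about the x/y axes
        translation: [number, number];
        scale: number;
      }

      // Speech picks the mode; the gesture stream parameterizes it.
      function fuse(command: SpeechCommand, pose: HandPose): DisplayTransform {
        const identity: DisplayTransform = { rotation: [0, 0], translation: [0, 0], scale: 1 };
        switch (command) {
          case "rotate":    return { ...identity, rotation: [pose.y, pose.x] };
          case "translate": return { ...identity, translation: [pose.x, pose.y] };
          case "zoom":      return { ...identity, scale: 1 + pose.spread };
          case "stop":      return identity;
        }
      }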

  • FG - Invited Speech: Gestural Interface to a Visual Computing Environment for Molecular biologists
    1996
    Co-Authors: Vladimir Pavlovic, Rajeev Sharma, Thomas S. Huang
    Abstract:

    In recent years there has been tremendous progress in 3-D immersive display and virtual reality (VR) technologies. Scientific Visualization of data is one of many applications that has benefited from this progress. To fully exploit the potential of these applications in the new environment there is a need for "natural" interfaces that allow the manipulation of such displays without burdensome attachments. This paper describes the use of Visual hand gesture analysis enhanced with speech recognition for developing a bimodal gesture/speech interface for controlling a 3-D display. The interface augments an existing application, VMD, which is a VR Visual Computing environment for molecular biologists. The free hand gestures are used for manipulating the 3-D graphical display together with a set of speech commands. We concentrate on the Visual gesture analysis techniques used in developing this interface. The dual modality of gesture/speech is found to greatly aid the interaction capability.

  • speech/gesture interface to a Visual Computing environment for molecular biologists
    International Conference on Pattern Recognition, 1996
    Co-Authors: Rajeev Sharma, Vladimir Pavlovic, Thomas S. Huang, Yunxin Zhao, Stephen M Chu, K Schul
    Abstract:

    Recent progress in 3-D, immersive display and virtual reality (VR) technologies has made possible many exciting applications, for example interactive Visualization of complex scientific data. To fully exploit this potential there is a need for "natural" interfaces that allow the manipulation of such displays without cumbersome attachments. In this paper we describe the use of Visual hand gesture analysis and speech recognition for developing a speech/gesture interface for controlling a 3-D display. The interface enhances an existing application, VMD, which is a VR Visual Computing environment for molecular biologists. The free hand gestures are used for manipulating the 3-D graphical display together with a set of speech commands. We describe the Visual gesture analysis and the speech analysis techniques used in developing this interface. The dual modality of speech/gesture is found to greatly aid the interaction capability.

  • Gestural Interface to a Visual Computing Environment for Molecular Biologists
    1996
    Co-Authors: Vladimir Pavlovic, Rajeev Sharma, Thomas S. Huang
    Abstract:

    In recent years there has been tremendous progress in 3-D, immersive display and virtual reality (VR) technologies. Scientific Visualization of data is one of many applications that has benefited from this progress. To fully exploit the potential of these applications in the new environment there is a need for “natural” interfaces that allow the manipulation of such displays without burdensome attachments. This paper describes the use of Visual hand gesture analysis enhanced with speech recognition for developing a bimodal gesture/speech interface for controlling a 3-D display. The interface augments an existing application, VMD, which is a VR Visual Computing environment for molecular biologists. The free hand gestures are used for manipulating the 3-D graphical display together with a set of speech commands. We concentrate on the Visual gesture analysis techniques used in developing this interface. The dual modality of gesture/speech is found to greatly aid the interaction capability.

  • ICPR - Speech/gesture interface to a Visual Computing environment for molecular biologists
    Proceedings of 13th International Conference on Pattern Recognition, 1996
    Co-Authors: Rajeev Sharma, Vladimir Pavlovic, Thomas S. Huang, Yunxin Zhao, Stephen M Chu, K Schul
    Abstract:

    Recent progress in 3-D, immersive display and virtual reality (VR) technologies has made possible many exciting applications, for example interactive Visualization of complex scientific data. To fully exploit this potential there is a need for "natural" interfaces that allow the manipulation of such displays without cumbersome attachments. In this paper we describe the use of Visual hand gesture analysis and speech recognition for developing a speech/gesture interface for controlling a 3-D display. The interface enhances an existing application, VMD, which is a VR Visual Computing environment for molecular biologists. The free hand gestures are used for manipulating the 3-D graphical display together with a set of speech commands. We describe the Visual gesture analysis and the speech analysis techniques used in developing this interface. The dual modality of speech/gesture is found to greatly aid the interaction capability.

Vladimir Pavlovic - One of the best experts on this subject based on the ideXlab platform.

  • gestural interface to a Visual Computing environment for molecular biologists
    International Conference on Automatic Face and Gesture Recognition, 1996
    Co-Authors: Vladimir Pavlovic, Rajeev Sharma, Thomas S. Huang
    Abstract:

    In recent years there has been tremendous progress in 3-D, immersive display and virtual reality (VR) technologies. Scientific Visualization of data is one of many applications that has benefited from this progress. To fully exploit the potential of these applications in the new environment there is a need for "natural" interfaces that allow the manipulation of such displays without burdensome attachments. This paper describes the use of Visual hand gesture analysis enhanced with speech recognition for developing a bimodal gesture/speech interface for controlling a 3-D display. The interface augments an existing application, VMD, which is a VR Visual Computing environment for molecular biologists. The free hand gestures are used for manipulating the 3-D graphical display together with a set of speech commands. We concentrate on the Visual gesture analysis techniques used in developing this interface. The dual modality of gesture/speech is found to greatly aid the interaction capability.

  • FG - Invited Speech: Gestural Interface to a Visual Computing Environment for Molecular biologists
    1996
    Co-Authors: Vladimir Pavlovic, Rajeev Sharma, Thomas S. Huang
    Abstract:

    In recent years there has been tremendous progress in 3-D immersive display and virtual reality (VR) technologies. Scientific Visualization of data is one of many applications that has benefited from this progress. To fully exploit the potential of these applications in the new environment there is a need for "natural" interfaces that allow the manipulation of such displays without burdensome attachments. This paper describes the use of Visual hand gesture analysis enhanced with speech recognition for developing a bimodal gesture/speech interface for controlling a 3-D display. The interface augments an existing application, VMD, which is a VR Visual Computing environment for molecular biologists. The free hand gestures are used for manipulating the 3-D graphical display together with a set of speech commands. We concentrate on the Visual gesture analysis techniques used in developing this interface. The dual modality of gesture/speech is found to greatly aid the interaction capability.

  • speech/gesture interface to a Visual Computing environment for molecular biologists
    International Conference on Pattern Recognition, 1996
    Co-Authors: Rajeev Sharma, Vladimir Pavlovic, Thomas S. Huang, Yunxin Zhao, Stephen M Chu, K Schul
    Abstract:

    Recent progress in 3-D, immersive display and virtual reality (VR) technologies has made possible many exciting applications, for example interactive Visualization of complex scientific data. To fully exploit this potential there is a need for "natural" interfaces that allow the manipulation of such displays without cumbersome attachments. In this paper we describe the use of Visual hand gesture analysis and speech recognition for developing a speech/gesture interface for controlling a 3-D display. The interface enhances an existing application, VMD, which is a VR Visual Computing environment for molecular biologists. The free hand gestures are used for manipulating the 3-D graphical display together with a set of speech commands. We describe the Visual gesture analysis and the speech analysis techniques used in developing this interface. The dual modality of speech/gesture is found to greatly aid the interaction capability.

  • Gestural Interface to a Visual Computing Environment for Molecular Biologists
    1996
    Co-Authors: Vladimir Pavlovic, Rajeev Sharma, Thomas S. Huang
    Abstract:

    In recent years there has been tremendous progress in 3-D, immersive display and virtual reality (VR) technologies. Scientific Visualization of data is one of many applications that has benefited from this progress. To fully exploit the potential of these applications in the new environment there is a need for “natural” interfaces that allow the manipulation of such displays without burdensome attachments. This paper describes the use of Visual hand gesture analysis enhanced with speech recognition for developing a bimodal gesture/speech interface for controlling a 3-D display. The interface augments an existing application, VMD, which is a VR Visual Computing environment for molecular biologists. The free hand gestures are used for manipulating the 3-D graphical display together with a set of speech commands. We concentrate on the Visual gesture analysis techniques used in developing this interface. The dual modality of gesture/speech is found to greatly aid the interaction capability.

  • ICPR - Speech/gesture interface to a Visual Computing environment for molecular biologists
    Proceedings of 13th International Conference on Pattern Recognition, 1996
    Co-Authors: Rajeev Sharma, Vladimir Pavlovic, Thomas S. Huang, Yunxin Zhao, Stephen M Chu, K Schul
    Abstract:

    Recent progress in 3-D, immersive display and virtual reality (VR) technologies has made possible many exciting applications, for example interactive Visualization of complex scientific data. To fully exploit this potential there is a need for "natural" interfaces that allow the manipulation of such displays without cumbersome attachments. In this paper we describe the use of Visual hand gesture analysis and speech recognition for developing a speech/gesture interface for controlling a 3-D display. The interface enhances an existing application, VMD, which is a VR Visual Computing environment for molecular biologists. The free hand gestures are used for manipulating the 3-D graphical display together with a set of speech commands. We describe the Visual gesture analysis and the speech analysis techniques used in developing this interface. The dual modality of speech/gesture is found to greatly aid the interaction capability.

Yenkuang Chen - One of the best experts on this subject based on the ideXlab platform.

  • algorithm/architecture co-exploration of Visual Computing on emergent platforms: overview and future prospects
    IEEE Transactions on Circuits and Systems for Video Technology, 2009
    Co-Authors: Yenkuang Chen, Marco Mattavelli, Eueeseon Jang
    Abstract:

    Concurrently exploring both algorithmic and architectural optimizations is a new design paradigm. This survey paper addresses the latest research and future perspectives on the simultaneous development of video coding, processing, and Computing algorithms with emerging platforms that have multiple cores and reconfigurable architecture. As the algorithms in forthcoming Visual systems become increasingly complex, many applications must have different profiles with different levels of performance. Hence, with expectations that the Visual experience in the future will become continuously better, it is critical that advanced platforms provide higher performance, better flexibility, and lower power consumption. To achieve these goals, algorithm and architecture co-design is significant for characterizing the algorithmic complexity used to optimize targeted architecture. This paper shows that seamless weaving of the development of previously autonomous Visual Computing algorithms and multicore or reconfigurable architectures will unavoidably become the leading trend in the future of video technology.
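
    As a loose illustration of what "characterizing the algorithmic complexity used to optimize targeted architecture" can mean in practice, the TypeScript sketch below bounds the achievable frame rate of a video kernel by the compute and memory-bandwidth budgets of a candidate multicore target. The metrics and the numbers are invented for this example and are not taken from the paper.

      // Illustrative complexity characterization; all figures are assumed, not measured.
      interface KernelProfile {
        opsPerPixel: number;           // arithmetic operations per output pixel
        memAccessesPerPixel: number;   // memory accesses per output pixel
      }

      interface Target {
        coreCount: number;
        opsPerSecondPerCore: number;
        memAccessesPerSecond: number;  // sustained memory-bandwidth budget
      }

      // The kernel is limited either by compute (spread across the cores) or by
      // memory bandwidth, whichever bound is tighter.
      function estimateFps(kernel: KernelProfile, target: Target, pixelsPerFrame: number): number {
        const computeFps =
          (target.coreCount * target.opsPerSecondPerCore) / (kernel.opsPerPixel * pixelsPerFrame);
        const memoryFps =
          target.memAccessesPerSecond / (kernel.memAccessesPerPixel * pixelsPerFrame);
        return Math.min(computeFps, memoryFps);
      }

      // Example: a hypothetical 3x3 filter on 1080p frames and a hypothetical 4-core target.
      const fps = estimateFps(
        { opsPerPixel: 17, memAccessesPerPixel: 10 },
        { coreCount: 4, opsPerSecondPerCore: 2e9, memAccessesPerSecond: 8e9 },
        1920 * 1080
      ); // roughly 227 frames/s, compute-bound under these assumed numbers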

  • An Introduction to the Special Issue on Algorithm/Architecture Co-Exploration of Visual Computing on Emerging Platforms
    IEEE Transactions on Circuits and Systems for Video Technology, 2009
    Co-Authors: Gwo Giun Lee, Yenkuang Chen, Marco Mattavelli, Euee S. Jang
    Abstract:

    Concurrently exploring both algorithmic and architectural optimizations is a new design paradigm. This survey paper addresses the latest research and future perspectives on the simultaneous development of video coding, processing, and Computing algorithms with emerging platforms that have multiple cores and reconfigurable architecture. As the algorithms in forthcoming Visual systems become increasingly complex, many applications must have different profiles with different levels of performance. Hence, with expectations that the Visual experience in the future will become continuously better, it is critical that advanced platforms provide higher performance, better flexibility, and lower power consumption. To achieve these goals, algorithm and architecture co-design is significant for characterizing the algorithmic complexity used to optimize targeted architecture. This paper shows that seamless weaving of the development of previously autonomous Visual Computing algorithms and multicore or reconfigurable architectures will unavoidably become the leading trend in the future of video technology.

  • an introduction to the special issue on algorithm/architecture co-exploration of Visual Computing on emerging platforms
    IEEE Transactions on Circuits and Systems for Video Technology, 2009
    Co-Authors: Gwo Giun Lee, Yenkuang Chen, Marco Mattavelli, Euee S. Jang
    Abstract:

    Concurrently exploring both algorithmic and architectural optimizations is a new design paradigm. This survey paper addresses the latest research and future perspectives on the simultaneous development of video coding, processing, and Computing algorithms with emerging platforms that have multiple cores and reconfigurable architecture. As the algorithms in forthcoming Visual systems become increasingly complex, many applications must have different profiles with different levels of performance. Hence, with expectations that the Visual experience in the future will become continuously better, it is critical that advanced platforms provide higher performance, better flexibility, and lower power consumption. To achieve these goals, algorithm and architecture co-design is significant for characterizing the algorithmic complexity used to optimize targeted architecture. This paper shows that seamless weaving of the development of previously autonomous Visual Computing algorithms and multicore or reconfigurable architectures will unavoidably become the leading trend in the future of video technology.