Fusion Scheme

The Experts below are selected from a list of 21,246 Experts worldwide, ranked by the ideXlab platform.

Qi Tian - One of the best experts on this subject based on the ideXlab platform.

  • A Fusion Scheme of visual and auditory modalities for event detection in sports video
    International Conference on Acoustics Speech and Signal Processing, 2003
    Co-Authors: Lingyu Duan, Qi Tian
    Abstract:

    In this paper, we propose an effective Fusion Scheme of visual and auditory modalities to detect events in sports video. The proposed Scheme is built upon semantic shot classification, where we classify video shots into several major or interesting classes, each of which has a clear semantic meaning. Within the major shot classes, we classify the different auditory signal segments (i.e., silence, ball hitting, applause, commentator speech) with the goal of detecting events with strong semantic meaning. For instance, for tennis video we have identified five interesting events: serve, re-serve, ace, return, and score. Since we have developed a unified framework for semantic shot classification in sports videos and a set of audio mid-level representations with supervised learning methods, the proposed Fusion Scheme can easily be adapted to a new sports game. We are extending this Fusion Scheme to three additional typical sports videos: basketball, volleyball, and soccer. Correctly detected sports video events will greatly facilitate further structural and temporal analysis, such as sports video skimming and table-of-contents generation.
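
    The abstract describes a decision-level pairing of semantic shot classes with mid-level audio labels. A minimal sketch of such a rule-based fusion is given below; the shot classes, audio labels, and event rules are illustrative assumptions, not the classifier outputs or rules used in the paper.

      # Hedged sketch of rule-based audio-visual fusion for event detection.
      # All labels and rules below are hypothetical examples.
      from dataclasses import dataclass

      @dataclass
      class Shot:
          shot_class: str    # e.g. "court-view", "close-up" (assumed visual classes)
          audio_labels: list # mid-level audio labels within the shot, in temporal order

      def detect_events(shots):
          """Map (visual shot class, audio label sequence) pairs to tennis events."""
          events = []
          for shot in shots:
              audio = shot.audio_labels
              if shot.shot_class == "court-view":
                  if "hitting-ball" in audio and "applause" in audio:
                      events.append("score")   # rally ending in applause (assumed rule)
                  elif audio[:2] == ["silence", "hitting-ball"]:
                      events.append("serve")   # quiet period then a single hit (assumed rule)
              elif shot.shot_class == "close-up" and "applause" in audio:
                  events.append("ace")         # assumed rule
          return events

      demo = [
          Shot("court-view", ["silence", "hitting-ball"]),
          Shot("court-view", ["hitting-ball", "hitting-ball", "applause"]),
      ]
      print(detect_events(demo))  # ['serve', 'score']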

Lingyu Duan - One of the best experts on this subject based on the ideXlab platform.

  • A Fusion Scheme of visual and auditory modalities for event detection in sports video
    International Conference on Acoustics Speech and Signal Processing, 2003
    Co-Authors: Lingyu Duan, Qi Tian
    Abstract:

    In this paper, we propose an effective Fusion Scheme of visual and auditory modalities to detect events in sports video. The proposed Scheme is built upon semantic shot classification, where we classify video shots into several major or interesting classes, each of which has a clear semantic meaning. Within the major shot classes, we classify the different auditory signal segments (i.e., silence, ball hitting, applause, commentator speech) with the goal of detecting events with strong semantic meaning. For instance, for tennis video we have identified five interesting events: serve, re-serve, ace, return, and score. Since we have developed a unified framework for semantic shot classification in sports videos and a set of audio mid-level representations with supervised learning methods, the proposed Fusion Scheme can easily be adapted to a new sports game. We are extending this Fusion Scheme to three additional typical sports videos: basketball, volleyball, and soccer. Correctly detected sports video events will greatly facilitate further structural and temporal analysis, such as sports video skimming and table-of-contents generation.

Yu Song - One of the best experts on this subject based on the ideXlab platform.

  • A New Wavelet Based Multi-focus Image Fusion Scheme and Its Application on Optical Microscopy
    2006 IEEE International Conference on Robotics and Biomimetics, 2006
    Co-Authors: Yu Song, Mantian Li, Qingling Li
    Abstract:

    Multi-focus image Fusion is the process of combining two or more partially defocused images into a new image in which all objects of interest are sharply imaged. In this paper, after reviewing multi-focus image Fusion techniques, a wavelet based Fusion Scheme with a new image activity level measurement is presented. The proposed multi-resolution image Fusion technique consists of three steps: first, the multi-resolution discrete wavelet transform (DWT) is applied to obtain the wavelet coefficients of the source images. Then, the proposed coefficient Fusion Scheme is applied to these coefficients to construct the wavelet coefficients of the Fusion image. Finally, the Fusion image is generated by applying the inverse wavelet transform. In the experiments, artificially defocused images (obtained by applying a low-pass filter to specified regions) are used to investigate the performance of the proposed Scheme and to select a suitable wavelet family and decomposition scale. We then apply the proposed image Fusion Scheme to the optical microscopy domain to overcome the short depth of focus of optical microscopes. The experimental results verify the validity of the proposed multi-focus image Fusion Scheme.
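
    The three-step pipeline (forward DWT, coefficient Fusion, inverse DWT) can be sketched with the PyWavelets library as below. A simple max-magnitude selection rule stands in for the paper's activity-level measurement, and the wavelet family and decomposition level are arbitrary choices rather than the ones selected in the paper's experiments.

      import numpy as np
      import pywt  # PyWavelets

      def fuse_multifocus(img_a, img_b, wavelet="db2", level=3):
          """Fuse two registered grayscale source images of equal size."""
          coeffs_a = pywt.wavedec2(img_a, wavelet, level=level)  # step 1: forward DWT
          coeffs_b = pywt.wavedec2(img_b, wavelet, level=level)

          # Step 2: fuse coefficients. Average the approximation band and keep the
          # detail coefficient with the larger magnitude (stand-in activity measure).
          fused = [0.5 * (coeffs_a[0] + coeffs_b[0])]
          for detail_a, detail_b in zip(coeffs_a[1:], coeffs_b[1:]):
              fused.append(tuple(
                  np.where(np.abs(ca) >= np.abs(cb), ca, cb)
                  for ca, cb in zip(detail_a, detail_b)
              ))

          return pywt.waverec2(fused, wavelet)                   # step 3: inverse DWT

      rng = np.random.default_rng(0)
      print(fuse_multifocus(rng.random((128, 128)), rng.random((128, 128))).shape)  # (128, 128)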

Wei Cai - One of the best experts on this subject based on the ideXlab platform.

  • ICIG - Infrared and Visible Image Fusion Scheme Based on Contourlet Transform
    2009 Fifth International Conference on Image and Graphics, 2009
    Co-Authors: Wei Cai
    Abstract:

    Focusing on the Fusion problem of infrared and visible images of the same scene, a novel multi-sensor image Fusion Scheme based on the contourlet transform (CT) is proposed, which can capture the intrinsic geometrical structure that is key in visual information. The Scheme uses the multiresolution contourlet decomposition to carry out the detail extraction phase, and a Fusion rule that incorporates the visual characteristics of human perception is presented. Quantitative and visual analysis of the experimental results demonstrates that the proposed approach performs significantly better than traditional methods based on the discrete wavelet transform.
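
    The paper's own contourlet-domain Fusion rule is not reproduced here, and no standard Python contourlet implementation is assumed. As a hedged illustration of the kind of perception-motivated rule used in multiscale Fusion, the sketch below applies the classic local-energy/match-measure rule of Burt and Kolczynski to one pair of corresponding detail subbands: it averages where the two sources agree and selects the more salient source where they disagree.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def fuse_subband(sub_ir, sub_vis, win=5, threshold=0.75):
          """Fuse one pair of corresponding detail subbands (stand-in rule)."""
          e_ir = uniform_filter(sub_ir ** 2, size=win)    # local energy (saliency), source A
          e_vis = uniform_filter(sub_vis ** 2, size=win)  # local energy (saliency), source B
          match = 2 * uniform_filter(sub_ir * sub_vis, size=win) / (e_ir + e_vis + 1e-12)

          # Weight for the more salient source: 0.5 (pure averaging) when the
          # subbands match perfectly, 1.0 (pure selection) when they do not.
          w = np.clip(0.5 + 0.5 * (1 - match) / (1 - threshold), 0.5, 1.0)
          w_ir = np.where(e_ir >= e_vis, w, 1 - w)
          return w_ir * sub_ir + (1 - w_ir) * sub_vis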

  • A region-based multi-sensor image Fusion Scheme using pulse-coupled neural network
    Pattern Recognition Letters, 2006
    Co-Authors: Wei Cai, Zheng Tan
    Abstract:

    Because most image Fusion algorithms discard the relationships among pixels and treat them more or less independently, this paper proposes a region-based image Fusion Scheme using a pulse-coupled neural network (PCNN), which combines aspects of feature-level and pixel-level Fusion. The basic idea is to segment all of the input images with the PCNN and to use this segmentation to guide the Fusion process. In order to determine the PCNN parameters adaptively, the paper puts forward an adaptive segmentation algorithm based on a modified PCNN whose multiple thresholds are determined by a novel water region area method. Experimental results demonstrate that the proposed Fusion Scheme has a wide application scope and that it outperforms multi-scale decomposition based Fusion approaches, both in visual effect and in objective evaluation criteria, particularly when there is object movement or mis-registration in the source images.
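
    The core idea, segmenting first and then fusing whole regions rather than individual pixels, can be sketched as below. The segmentation (produced in the paper by the adaptive PCNN) is assumed to be given as an integer label map shared by both registered sources, and region variance is used as a simple stand-in for the activity measure that decides which source supplies each region.

      import numpy as np

      def region_fusion(img_a, img_b, labels):
          """Copy each segmented region from whichever source is more active there."""
          fused = np.empty_like(img_a)
          for region in np.unique(labels):
              mask = labels == region
              # Stand-in activity level: variance of the region in each source.
              fused[mask] = img_a[mask] if img_a[mask].var() >= img_b[mask].var() else img_b[mask]
          return fused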

Zheng Tan - One of the best experts on this subject based on the ideXlab platform.

  • A region-based multi-sensor image Fusion Scheme using pulse-coupled neural network
    Pattern Recognition Letters, 2006
    Co-Authors: Wei Cai, Zheng Tan
    Abstract:

    Because most image Fusion algorithms discard the relationships among pixels and treat them more or less independently, this paper proposes a region-based image Fusion Scheme using a pulse-coupled neural network (PCNN), which combines aspects of feature-level and pixel-level Fusion. The basic idea is to segment all of the input images with the PCNN and to use this segmentation to guide the Fusion process. In order to determine the PCNN parameters adaptively, the paper puts forward an adaptive segmentation algorithm based on a modified PCNN whose multiple thresholds are determined by a novel water region area method. Experimental results demonstrate that the proposed Fusion Scheme has a wide application scope and that it outperforms multi-scale decomposition based Fusion approaches, both in visual effect and in objective evaluation criteria, particularly when there is object movement or mis-registration in the source images.