Video Sequences

The Experts below are selected from a list of 60,096 Experts worldwide, ranked by the ideXlab platform

Y Q Shi - One of the best experts on this subject based on the ideXlab platform.

Andrea L Bertozzi - One of the best experts on this subject based on the ideXlab platform.

  • object tracking by hierarchical decomposition of hyperspectral Video Sequences: application to chemical gas plume tracking
    IEEE Transactions on Geoscience and Remote Sensing, 2017
    Co-Authors: Guillaume Tochon, Jocelyn Chanussot, Mauro Dalla Mura, Andrea L Bertozzi
    Abstract:

    It is now possible to collect hyperspectral Video Sequences at a near real-time frame rate. The wealth of spectral, spatial, and temporal information in those Sequences is appealing for various applications, but classical Video processing techniques must be adapted to handle the high dimensionality and sheer size of the data. In this paper, we introduce a novel method based on the hierarchical analysis of hyperspectral Video Sequences to perform object tracking. Tracking is formulated as a sequential object detection process conducted on the hierarchical representation of the hyperspectral Video frames. We apply the proposed methodology to the chemical gas plume tracking scenario, compare its performance with state-of-the-art methods on two real hyperspectral Video Sequences, and show that the proposed approach performs at least equally well.
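
As a loose illustration of sequential detection on a hierarchical decomposition (the paper works on hierarchical representations of hyperspectral frames; this sketch substitutes a simple quadtree over a grayscale frame, and the scalar `target` value stands in for a spectral signature):

```python
# Illustrative sketch only, not the authors' algorithm: descend a quadtree,
# at each level entering the quadrant whose mean intensity is closest to a
# hypothetical target signature, until a minimum region size is reached.

def region_mean(frame, r0, c0, size):
    """Mean intensity of the square region with top-left corner (r0, c0)."""
    total = sum(frame[r][c] for r in range(r0, r0 + size)
                            for c in range(c0, c0 + size))
    return total / (size * size)

def detect(frame, target, size, r0=0, c0=0, min_size=2):
    """Sequential detection by hierarchical (quadtree) descent."""
    if size <= min_size:
        return (r0, c0, size)
    half = size // 2
    quadrants = [(r0, c0), (r0, c0 + half), (r0 + half, c0), (r0 + half, c0 + half)]
    best = min(quadrants,
               key=lambda rc: abs(region_mean(frame, rc[0], rc[1], half) - target))
    return detect(frame, target, half, best[0], best[1], min_size)

# A frame whose top-left quadrant holds a bright "plume" is localized there.
frame = [[10] * 4 + [0] * 4 for _ in range(4)] + [[0] * 8 for _ in range(4)]
print(detect(frame, 10, 8))  # -> (0, 0, 2)
```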

G E Elmasry - One of the best experts on this subject based on the ideXlab platform.

Andre Kaup - One of the best experts on this subject based on the ideXlab platform.

  • motion estimation for fisheye Video Sequences combining perspective projection with camera calibration information
    International Conference on Image Processing, 2016
    Co-Authors: Andrea Eichenseer, Michel Batz, Andre Kaup
    Abstract:

    Fisheye cameras are a convenient means in surveillance and automotive applications, as they provide a very wide field of view for capturing their surroundings. In contrast to typical rectilinear imagery, however, fisheye Video Sequences follow a different mapping from world coordinates to the image plane, which standard Video processing techniques do not take into account. In this paper, we present a motion estimation method for real-world fisheye Videos that combines perspective projection with knowledge about the underlying fisheye projection. The latter is obtained by camera calibration, since actual lenses rarely follow exact models. Furthermore, we introduce a re-mapping for ultra-wide angles, which would otherwise lead to wrong motion compensation results at the fisheye boundary. Both concepts extend an existing hybrid motion estimation method for equisolid fisheye Video Sequences that decides between traditional and fisheye block matching in a block-based manner. Compared to that method, the proposed calibration and re-mapping extensions yield gains of up to 0.58 dB in luminance PSNR for real-world fisheye Video Sequences. Overall gains amount to up to 3.32 dB compared to traditional block matching.
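
The projection mismatch can be sketched with the standard lens models: an equisolid fisheye maps an incident angle theta to image radius r = 2 f sin(theta / 2), whereas a perspective (rectilinear) projection uses r = f tan(theta). A minimal re-projection sketch, assuming an ideal equisolid lens and an illustrative focal length (real lenses deviate from the exact model, which is why the paper resorts to calibration):

```python
import math

def equisolid_to_perspective(r_fisheye, f):
    """Invert the equisolid model r = 2 f sin(theta / 2) to recover the
    incident angle theta, then re-project with the perspective model
    r = f tan(theta). Angles of 90 degrees or more have no perspective
    image (tan diverges), which is why ultra-wide content needs a
    separate re-mapping in practice."""
    theta = 2.0 * math.asin(r_fisheye / (2.0 * f))
    if theta >= math.pi / 2:
        return None  # ultra-wide ray: not representable in the perspective domain
    return f * math.tan(theta)

f = 100.0  # illustrative focal length, not a calibrated value
r45 = 2.0 * f * math.sin(math.pi / 8)    # equisolid radius of a 45-degree ray
print(equisolid_to_perspective(r45, f))  # close to f * tan(45 deg) = 100
```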

  • temporal error concealment for fisheye Video Sequences based on equisolid re-projection
    European Signal Processing Conference, 2015
    Co-Authors: Andrea Eichenseer, Michel Batz, Jurgen Seiler, Andre Kaup
    Abstract:

    Wide-angle Video Sequences obtained by fisheye cameras exhibit characteristics that standard image and Video processing techniques, such as error concealment, do not handle well. This paper introduces a temporal error concealment technique designed for the inherent characteristics of equisolid fisheye Video Sequences: part of the concealment is conducted in the perspective domain, followed by a re-projection into the equisolid domain. Combining this technique with conventional decoder motion vector estimation achieves average gains of 0.71 dB compared with pure decoder motion vector estimation for the test Sequences used. Maximum gains amount to 2.04 dB for selected frames.
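
For context, decoder motion vector estimation can be sketched as boundary matching: the decoder picks the motion vector whose block from the previous frame best fits the intact pixels bordering the lost block. This is a common concealment baseline, not the paper's method; the single-row boundary criterion and in-bounds search window below are simplifying assumptions:

```python
# Sketch of decoder-side motion vector estimation by boundary matching.
# Assumes the lost block and the search window stay inside the frame.

def conceal_block(prev, cur, r0, c0, bs, search=1):
    """Estimate a motion vector for the lost bs x bs block at (r0, c0) by
    matching the intact row just above it against the previous frame, then
    copy the matched block from the previous frame."""
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cost = sum(abs(cur[r0 - 1][c0 + c] - prev[r0 - 1 + dy][c0 + dx + c])
                       for c in range(bs))
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    dy, dx = best
    return [prev[r0 + dy + r][c0 + dx:c0 + dx + bs] for r in range(bs)]

# Current frame is the previous frame shifted right by one pixel, so the
# lost 2x2 block at (2, 2) is recovered from column 1 of the previous frame.
prev = [[10 * r + c for c in range(6)] for r in range(6)]
cur = [[prev[r][max(c - 1, 0)] for c in range(6)] for r in range(6)]
print(conceal_block(prev, cur, 2, 2, 2))  # -> [[21, 22], [31, 32]]
```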

  • a hybrid motion estimation technique for fisheye Video Sequences based on equisolid re-projection
    International Conference on Image Processing, 2015
    Co-Authors: Andrea Eichenseer, Michel Batz, Jurgen Seiler, Andre Kaup
    Abstract:

    Capturing large fields of view with only one camera is an important aspect in surveillance and automotive applications, but the wide-angle fisheye imagery thus obtained exhibits special characteristics that are not well suited to typical image and Video processing methods such as motion estimation. This paper introduces a motion estimation method that adapts to the typical radial characteristics of fisheye Video Sequences by applying an equisolid re-projection after moving part of the motion vector search into the perspective domain via a corresponding back-projection. By combining this approach with conventional translational motion estimation and compensation, average gains in luminance PSNR of up to 1.14 dB are achieved for synthetic fisheye Sequences and up to 0.96 dB for real-world data. Maximum gains for selected frame pairs amount to 2.40 dB and 1.39 dB for synthetic and real-world data, respectively.
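
The block-wise decision between the two estimators can be sketched as picking whichever candidate prediction yields the lower sum of absolute differences (SAD). Only the decision step is shown; the two candidate blocks are assumed to come from the traditional and fisheye-adapted estimators, respectively:

```python
# Sketch of a per-block hybrid mode decision between two candidate
# predictions, mirroring the "decides between traditional and fisheye
# block matching in a block-based manner" idea from the abstract.

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def hybrid_decision(block, cand_translational, cand_fisheye):
    """Keep whichever candidate prediction matches the block better."""
    if sad(block, cand_fisheye) < sad(block, cand_translational):
        return "fisheye", cand_fisheye
    return "translational", cand_translational

block = [[1, 2], [3, 4]]
mode, pred = hybrid_decision(block, [[1, 2], [3, 6]], [[1, 2], [3, 4]])
print(mode)  # -> fisheye
```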

Guillaume Tochon - One of the best experts on this subject based on the ideXlab platform.
