Sampled Image

The Experts below are selected from a list of 186 Experts worldwide, ranked by the ideXlab platform

Anton Van Den Hengel - One of the best experts on this subject based on the ideXlab platform.

  • Cross-Convolutional-Layer Pooling for Image Recognition
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016
    Co-Authors: Chunhua Shen, Anton Van Den Hengel
    Abstract:

    Recent studies have shown that a Deep Convolutional Neural Network (DCNN) trained on a large Image dataset can be used as a universal Image descriptor and that doing so leads to impressive performance for a variety of Image recognition tasks. Most of these studies adopt activations from a single DCNN layer, usually a fully-connected layer, as the Image representation. In this paper, we propose a novel way to extract Image representations from two consecutive convolutional layers: one layer is used for local feature extraction and the other serves as guidance to pool the extracted features. By taking different viewpoints of convolutional layers, we further develop two schemes to realize this idea. The first directly uses convolutional layers from a DCNN. The second applies the pre-trained CNN on densely Sampled Image regions and treats the fully-connected activations of each Image region as a convolutional layer's feature activations. We then train another convolutional layer on top of that as the pooling-guidance convolutional layer. By applying our method to three popular visual classification tasks, we find that our first scheme tends to perform better on applications that demand strong discrimination on lower-level visual patterns, while the latter excels in cases that require discrimination on category-level patterns. Overall, the proposed method achieves superior performance over existing approaches for extracting Image representations from a DCNN. In addition, we apply cross-layer pooling to the problem of Image retrieval and propose schemes to reduce the computational cost. Experimental results suggest that the proposed method also performs well on the Image retrieval task.

  • Cross-convolutional-layer Pooling for Image Recognition
    arXiv: Computer Vision and Pattern Recognition, 2015
    Co-Authors: Chunhua Shen, Anton Van Den Hengel
    Abstract:

    Recent studies have shown that a Deep Convolutional Neural Network (DCNN) pretrained on a large Image dataset can be used as a universal Image descriptor, and that doing so leads to impressive performance for a variety of Image classification tasks. Most of these studies adopt activations from a single DCNN layer, usually the fully-connected layer, as the Image representation. In this paper, we propose a novel way to extract Image representations from two consecutive convolutional layers: one layer is utilized for local feature extraction and the other serves as guidance to pool the extracted features. By taking different viewpoints of convolutional layers, we further develop two schemes to realize this idea. The first one directly uses convolutional layers from a DCNN. The second one applies the pretrained CNN on densely Sampled Image regions and treats the fully-connected activations of each Image region as convolutional feature activations. We then train another convolutional layer on top of that as the pooling-guidance convolutional layer. By applying our method to three popular visual classification tasks, we find that our first scheme tends to perform better on applications that require strong discrimination on subtle object patterns within small regions, while the latter excels in cases that require discrimination on category-level patterns. Overall, the proposed method achieves superior performance over existing approaches for extracting Image representations from a DCNN.

  • Cross-convolutional-layer Pooling for Generic Visual Recognition.
    arXiv: Computer Vision and Pattern Recognition, 2015
    Co-Authors: Chunhua Shen, Anton Van Den Hengel
    Abstract:

    Recent studies have shown that a Deep Convolutional Neural Network (DCNN) pretrained on a large Image dataset can be used as a universal Image descriptor, and that doing so leads to impressive performance for a variety of Image classification tasks. Most of these studies adopt activations from a single DCNN layer, usually the fully-connected layer, as the Image representation. In this paper, we propose a novel way to extract Image representations from two consecutive convolutional layers: one layer is utilized for local feature extraction and the other serves as guidance to pool the extracted features. By taking different viewpoints of convolutional layers, we further develop two schemes to realize this idea. The first one directly uses convolutional layers from a DCNN. The second one applies the pretrained CNN on densely Sampled Image regions and treats the fully-connected activations of each Image region as convolutional feature activations. We then train another convolutional layer on top of that as the pooling-guidance convolutional layer. By applying our method to three popular visual classification tasks, we find that our first scheme tends to perform better on applications that require strong discrimination on subtle object patterns within small regions, while the latter excels in cases that require discrimination on category-level patterns. Overall, the proposed method achieves superior performance over existing approaches for extracting Image representations from a DCNN.
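    The cross-convolutional-layer pooling described in the abstracts above can be sketched in NumPy. This is a minimal illustration of the core operation only (each guidance channel supplies one set of spatial pooling weights for the local features), not the authors' full pipeline; the layer shapes, the function name, and the L2 normalisation are illustrative assumptions.

    ```python
    import numpy as np

    def cross_layer_pooling(features, guidance):
        """Pool local features from one conv layer using the next layer's
        activations as spatial pooling weights: one guidance-weighted sum of
        features per guidance channel, concatenated into the representation."""
        H, W, D1 = features.shape
        D2 = guidance.shape[2]
        F = features.reshape(H * W, D1)   # local features, one per spatial position
        G = guidance.reshape(H * W, D2)   # pooling weights, one map per channel
        pooled = G.T @ F                  # (D2, D1): weighted sums of local features
        rep = pooled.ravel()              # concatenate -> D1 * D2 dimensional vector
        return rep / (np.linalg.norm(rep) + 1e-12)  # L2-normalise

    # Toy usage with random "activations" (7x7 maps; 64 and 32 channels assumed).
    rng = np.random.default_rng(0)
    rep = cross_layer_pooling(rng.random((7, 7, 64)), rng.random((7, 7, 32)))
    print(rep.shape)  # (2048,)
    ```

    With real activations, `features` and `guidance` would come from two consecutive (spatially aligned) convolutional layers of a pretrained DCNN.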

Chunhua Shen - One of the best experts on this subject based on the ideXlab platform.

  • Cross-Convolutional-Layer Pooling for Image Recognition
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016
    Co-Authors: Chunhua Shen, Anton Van Den Hengel
    Abstract:

    Recent studies have shown that a Deep Convolutional Neural Network (DCNN) trained on a large Image dataset can be used as a universal Image descriptor and that doing so leads to impressive performance for a variety of Image recognition tasks. Most of these studies adopt activations from a single DCNN layer, usually a fully-connected layer, as the Image representation. In this paper, we propose a novel way to extract Image representations from two consecutive convolutional layers: one layer is used for local feature extraction and the other serves as guidance to pool the extracted features. By taking different viewpoints of convolutional layers, we further develop two schemes to realize this idea. The first directly uses convolutional layers from a DCNN. The second applies the pre-trained CNN on densely Sampled Image regions and treats the fully-connected activations of each Image region as a convolutional layer's feature activations. We then train another convolutional layer on top of that as the pooling-guidance convolutional layer. By applying our method to three popular visual classification tasks, we find that our first scheme tends to perform better on applications that demand strong discrimination on lower-level visual patterns, while the latter excels in cases that require discrimination on category-level patterns. Overall, the proposed method achieves superior performance over existing approaches for extracting Image representations from a DCNN. In addition, we apply cross-layer pooling to the problem of Image retrieval and propose schemes to reduce the computational cost. Experimental results suggest that the proposed method also performs well on the Image retrieval task.

  • Cross-convolutional-layer Pooling for Image Recognition
    arXiv: Computer Vision and Pattern Recognition, 2015
    Co-Authors: Chunhua Shen, Anton Van Den Hengel
    Abstract:

    Recent studies have shown that a Deep Convolutional Neural Network (DCNN) pretrained on a large Image dataset can be used as a universal Image descriptor, and that doing so leads to impressive performance for a variety of Image classification tasks. Most of these studies adopt activations from a single DCNN layer, usually the fully-connected layer, as the Image representation. In this paper, we propose a novel way to extract Image representations from two consecutive convolutional layers: one layer is utilized for local feature extraction and the other serves as guidance to pool the extracted features. By taking different viewpoints of convolutional layers, we further develop two schemes to realize this idea. The first one directly uses convolutional layers from a DCNN. The second one applies the pretrained CNN on densely Sampled Image regions and treats the fully-connected activations of each Image region as convolutional feature activations. We then train another convolutional layer on top of that as the pooling-guidance convolutional layer. By applying our method to three popular visual classification tasks, we find that our first scheme tends to perform better on applications that require strong discrimination on subtle object patterns within small regions, while the latter excels in cases that require discrimination on category-level patterns. Overall, the proposed method achieves superior performance over existing approaches for extracting Image representations from a DCNN.

  • Cross-convolutional-layer Pooling for Generic Visual Recognition.
    arXiv: Computer Vision and Pattern Recognition, 2015
    Co-Authors: Chunhua Shen, Anton Van Den Hengel
    Abstract:

    Recent studies have shown that a Deep Convolutional Neural Network (DCNN) pretrained on a large Image dataset can be used as a universal Image descriptor, and that doing so leads to impressive performance for a variety of Image classification tasks. Most of these studies adopt activations from a single DCNN layer, usually the fully-connected layer, as the Image representation. In this paper, we propose a novel way to extract Image representations from two consecutive convolutional layers: one layer is utilized for local feature extraction and the other serves as guidance to pool the extracted features. By taking different viewpoints of convolutional layers, we further develop two schemes to realize this idea. The first one directly uses convolutional layers from a DCNN. The second one applies the pretrained CNN on densely Sampled Image regions and treats the fully-connected activations of each Image region as convolutional feature activations. We then train another convolutional layer on top of that as the pooling-guidance convolutional layer. By applying our method to three popular visual classification tasks, we find that our first scheme tends to perform better on applications that require strong discrimination on subtle object patterns within small regions, while the latter excels in cases that require discrimination on category-level patterns. Overall, the proposed method achieves superior performance over existing approaches for extracting Image representations from a DCNN.

M D Adams - One of the best experts on this subject based on the ideXlab platform.

  • An Improved Progressive Lossy-to-Lossless Coding Method for Arbitrarily Sampled Image Data
    Pacific Rim Conference on Communications Computers and Signal Processing, 2013
    Co-Authors: M D Adams
    Abstract:

    A method for the progressive lossy-to-lossless coding of arbitrarily-Sampled Image data is proposed. Through experimental results, the proposed method is demonstrated to have a rate-distortion performance that is vastly superior to that of the state-of-the-art Image-tree (IT) coding scheme. In particular, at intermediate rates (i.e., in progressive decoding scenarios), the proposed method yields Image reconstructions with a peak signal-to-noise ratio that is much higher (sometimes by several dB) than the IT scheme, while simultaneously achieving a slightly lower lossless rate.

  • Progressive Lossy-to-Lossless Coding of Arbitrarily Sampled Image Data Using the Modified Scattered Data Coding Method
    International Conference on Acoustics Speech and Signal Processing, 2009
    Co-Authors: M D Adams
    Abstract:

    In earlier work, Demaret and Iske proposed the scattered data coding (SDC) method for (single-rate) coding of arbitrarily-Sampled Image data. In this paper, several modifications to the SDC method are proposed in order to remove some limitations of the original scheme, improve coding efficiency, and add a progressive lossy-to-lossless coding capability. Through experimental results, the proposed method is shown to yield a significant improvement in coding efficiency (relative to the original SDC method) as well as provide an efficient progressive lossy-to-lossless coding capability.

  • An Efficient Progressive Coding Method for Arbitrarily Sampled Image Data
    IEEE Signal Processing Letters, 2008
    Co-Authors: M D Adams
    Abstract:

    A simple, highly effective method for progressive lossy-to-lossless coding of arbitrarily-Sampled Image data is proposed. This scheme is based on a recursive quadtree partitioning of the Image domain along with an iterative sample-value averaging process. The proposed method is shown to offer much better progressive coding performance than a previously proposed state-of-the-art coding method.
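    The quadtree-plus-averaging idea behind these coding methods can be sketched as below. This toy Python version only records each nonempty quadtree cell's mean sample value, parent cell before its quadrants, which is the progressive structure a decoder would refine; the actual entropy coding of the cell values is omitted, and the function name and parameters are illustrative.

    ```python
    import numpy as np

    def quadtree_means(xs, ys, vals, x0, y0, size, levels):
        """Recursively split the square [x0, x0+size) x [y0, y0+size) into
        quadrants and yield (level, x0, y0, size, mean) for every nonempty
        cell, parent before children: the root gives the coarsest
        approximation, deeper cells progressively refine it."""
        if len(vals) == 0:
            return
        yield (levels, x0, y0, size, float(np.mean(vals)))
        if levels == 0 or len(vals) == 1:
            return
        half = size / 2.0
        for dx in (0.0, half):
            for dy in (0.0, half):
                inside = ((xs >= x0 + dx) & (xs < x0 + dx + half) &
                          (ys >= y0 + dy) & (ys < y0 + dy + half))
                yield from quadtree_means(xs[inside], ys[inside], vals[inside],
                                          x0 + dx, y0 + dy, half, levels - 1)

    # Arbitrarily sampled "image": 200 random sites on a simple intensity ramp.
    rng = np.random.default_rng(1)
    xs, ys = rng.random(200), rng.random(200)
    cells = list(quadtree_means(xs, ys, xs + ys, 0.0, 0.0, 1.0, 3))
    print(cells[0])   # root cell: the global mean, i.e. the coarsest approximation
    ```

    A decoder truncating this stream after any prefix can already render a blocky approximation, which is the lossy-to-lossless progression in miniature.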

Y Altunbasak - One of the best experts on this subject based on the ideXlab platform.

  • Restoration of Bayer-Sampled Image Sequences
    The Computer Journal, 2009
    Co-Authors: Murat Gevrekci, Bahadir K Gunturk, Y Altunbasak
    Abstract:

    The spatial resolution of digital Images is limited due to optical/sensor blurring and sensor site density. In single-chip digital cameras, the resolution is further degraded because such devices use a color filter array to capture only one spectral component at a pixel location. The process of estimating the missing two color values at each pixel location is known as demosaicking. Demosaicking methods usually exploit the correlation among color channels. When there are multiple Images, it is possible not only to have better estimates of the missing color values but also to improve the spatial resolution further (using super-resolution reconstruction). In this paper, we propose a multi-frame spatial resolution enhancement algorithm based on the projections onto convex sets technique.

  • POCS-Based Restoration of Bayer-Sampled Image Sequences
    International Conference on Acoustics Speech and Signal Processing, 2007
    Co-Authors: Murat Gevrekci, Bahadir K Gunturk, Y Altunbasak
    Abstract:

    The spatial resolution of digital Images is limited due to optical/sensor blurring and sensor site density. In single-chip digital cameras, the resolution is further degraded because such devices use a color filter array to capture only one spectral component at a pixel location. The process of estimating the missing two color values at each pixel location is known as demosaicking. Demosaicking methods usually exploit the correlation among color channels. When there are multiple Images, it is possible not only to have better estimates of the missing color values but also to improve the spatial resolution further (using super-resolution reconstruction). Previously, we have proposed a demosaicking algorithm based on the projection onto convex sets (POCS) technique. In this paper, we improve the results of that algorithm by adding a new constraint set based on the spatio-intensity neighborhood. We extend the algorithm to Image sequences for multi-frame demosaicking and super resolution.
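    The POCS-style iteration described in these abstracts can be illustrated with a heavily simplified sketch: alternate a constraint on the color-difference planes with a data-consistency projection that re-imposes the measured Bayer samples exactly. The 3x3 averaging of the R-G and B-G planes below is only an illustrative stand-in for the papers' actual constraint sets (it exploits the inter-channel correlation the abstracts mention), and all names and parameters are assumptions.

    ```python
    import numpy as np

    def box3(a):
        """3x3 box filter with edge replication."""
        p = np.pad(a, 1, mode='edge')
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

    def pocs_demosaic(bayer, mask, iters=20):
        """Alternate two steps: smooth the color-difference planes (a crude
        stand-in for the smoothness constraint sets), then re-impose the
        measured CFA samples exactly (projection onto the data-consistency set).

        bayer: (H, W, 3) holding the single measured value per pixel, 0 elsewhere.
        mask:  (H, W, 3) boolean, True where that channel was actually sampled."""
        est = bayer.astype(float).copy()
        for c in range(3):                 # crude init: fill gaps with channel mean
            est[..., c][~mask[..., c]] = bayer[..., c][mask[..., c]].mean()
        for _ in range(iters):
            g = est[..., 1]
            for c in (0, 2):               # smooth the R-G and B-G differences
                est[..., c] = g + box3(est[..., c] - g)
            est[mask] = bayer[mask]        # observed samples stay exact
        return est

    # Usage: a flat gray test image observed through an RGGB Bayer pattern.
    H = W = 8
    mask = np.zeros((H, W, 3), dtype=bool)
    mask[0::2, 0::2, 0] = True                          # R sites
    mask[0::2, 1::2, 1] = mask[1::2, 0::2, 1] = True    # G sites
    mask[1::2, 1::2, 2] = True                          # B sites
    truth = np.full((H, W, 3), 0.5)
    est = pocs_demosaic(np.where(mask, truth, 0.0), mask)
    ```

    On this flat test image the iteration recovers the missing samples exactly; the multi-frame and super-resolution extensions in the papers add further constraint sets on top of the same alternating-projection loop.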

Patrick J Wolfe - One of the best experts on this subject based on the ideXlab platform.

  • A Framework for Wavelet-Based Analysis and Processing of Color Filter Array Images with Applications to Denoising and Demosaicing
    International Conference on Acoustics Speech and Signal Processing, 2007
    Co-Authors: Keigo Hirakawa, Xiaoli Meng, Patrick J Wolfe
    Abstract:

    This paper presents a new approach to demosaicing of spatially Sampled Image data observed through a color filter array, in which properties of Smith-Barnwell filterbanks are employed to exploit the correlation of color components in order to reconstruct a subSampled Image. The method is shown to be amenable to wavelet-domain denoising prior to demosaicing, and a general framework for applying existing Image denoising algorithms to color filter array data is also described. Results indicate that the proposed method performs on a par with the state of the art for far lower computational cost, and provides a versatile, effective, and low-complexity solution to the problem of interpolating color filter array data observed in noise.
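    The denoise-before-demosaic flow described above can be illustrated on a single CFA subchannel: transform it to the wavelet domain, soft-threshold the detail subbands, and invert. The paper employs Smith-Barnwell filterbanks; the one-level 2-D Haar transform below is only a simpler, exactly invertible stand-in, and the threshold value is an arbitrary assumption.

    ```python
    import numpy as np

    def haar2(a):
        """One level of the 2-D Haar transform: returns (LL, LH, HL, HH)."""
        s, d = (a[0::2] + a[1::2]) / 2.0, (a[0::2] - a[1::2]) / 2.0  # row pairs
        return ((s[:, 0::2] + s[:, 1::2]) / 2.0,   # LL: smooth approximation
                (d[:, 0::2] + d[:, 1::2]) / 2.0,   # LH: one detail orientation
                (s[:, 0::2] - s[:, 1::2]) / 2.0,   # HL: the other orientation
                (d[:, 0::2] - d[:, 1::2]) / 2.0)   # HH: diagonal detail

    def ihaar2(ll, lh, hl, hh):
        """Exact inverse of haar2."""
        out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
        out[0::2, 0::2] = ll + hl + lh + hh
        out[0::2, 1::2] = ll - hl + lh - hh
        out[1::2, 0::2] = ll + hl - lh - hh
        out[1::2, 1::2] = ll - hl - lh + hh
        return out

    def denoise_subchannel(channel, t=0.05):
        """Soft-threshold the detail subbands of one CFA subchannel."""
        ll, lh, hl, hh = haar2(channel)
        soft = lambda x: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
        return ihaar2(ll, soft(lh), soft(hl), soft(hh))

    # Sanity check: with no thresholding the transform round-trips exactly.
    a = np.random.default_rng(2).random((8, 8))
    print(np.allclose(ihaar2(*haar2(a)), a))  # True
    ```

    Running this on each of the four Bayer subchannels before interpolation mirrors the framework's idea of denoising the color filter array data directly, prior to demosaicing.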