Image Distortion

The Experts below are selected from a list of 324 Experts worldwide ranked by ideXlab platform

Zhou Wang - One of the best experts on this subject based on the ideXlab platform.

  • image distortion analysis based on normalized perceptual information distance
    Signal Image and Video Processing, 2013
    Co-Authors: Nima Nikvand, Zhou Wang
    Abstract:

    Image distortion analysis is a fundamental issue in many image processing problems, including compression, restoration, recognition, classification, and retrieval. Traditional image distortion evaluation approaches tend to be heuristic and are often limited to specific application environments. In this work, we investigate the problem of image distortion measurement based on the theory of Kolmogorov complexity, which has rarely been studied in the context of image processing. This work is motivated by the normalized information distance (NID) measure, which has been shown to be a valid and universal distance metric applicable to similarity measurement of any two objects (Li et al. in IEEE Trans Inf Theory 50:3250–3264, 2004). Like Kolmogorov complexity, NID is non-computable. A useful practical solution is to approximate it using the normalized compression distance (NCD) (Li et al. 2004), which has led to impressive results in many applications, such as the construction of phylogeny trees from DNA sequences. In our earlier work, we showed that direct use of NCD on image processing problems is difficult and proposed a normalized conditional compression distance (NCCD) measure (Nikvand and Wang, 2010), which has significantly wider applicability than existing image similarity/distortion measures. To assess the distortions between two images, we first transform them into the wavelet domain. Assuming stationarity and good decorrelation of wavelet coefficients beyond local regions and across wavelet subbands, the Kolmogorov complexity may be approximated using Shannon entropy (Cover and Thomas in Elements of Information Theory. Wiley-Interscience, New York, 1991). Inspired by Sheikh and Bovik (IEEE Trans Image Process 15(2):430–444, 2006), we adopt a Gaussian scale mixture model for clusters of neighboring wavelet coefficients and a Gaussian channel model for the noise distortions in the human visual system. Combining these assumptions with the NID framework, we derive a novel normalized perceptual information distance measure, in which maximum likelihood estimation and least squares regression are employed for parameter fitting. We validate the proposed distortion measure using three large-scale, publicly available, subject-rated image databases, which include a wide range of practical image distortion types and levels. Our results demonstrate the good prediction power of the proposed method for perceptual image distortions.
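
The NCD idea underlying this line of work can be sketched with an off-the-shelf compressor. The snippet below is a minimal illustration using zlib as the stand-in for Kolmogorov complexity; it is not the authors' NCCD or their wavelet-domain measure:

```python
import zlib

def ncd(x: bytes, y: bytes, level: int = 9) -> float:
    """Normalized compression distance: a computable proxy for the
    (uncomputable) normalized information distance, with a real
    compressor standing in for Kolmogorov complexity."""
    cx = len(zlib.compress(x, level))
    cy = len(zlib.compress(y, level))
    cxy = len(zlib.compress(x + y, level))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog" * 20
b_ = bytes(reversed(a))      # same bytes, but zlib finds no shared structure
d_self = ncd(a, a)           # small: an object is close to itself
d_diff = ncd(a, b_)          # larger: the concatenation compresses poorly
```

Because zlib reuses matches from the first copy when compressing `a + a`, `d_self` stays near zero, while the reversed string yields a distance near one.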

  • Perceptual normalized information distance for image distortion analysis based on Kolmogorov complexity
    2011
    Co-Authors: Nima Nikvand, Zhou Wang
    Abstract:

    Image distortion analysis is a fundamental issue in many image processing problems, including compression, restoration, recognition, classification, and retrieval. In this work, we investigate the problem of image distortion measurement based on the theories of Kolmogorov complexity and normalized information distance (NID), which have rarely been studied in the context of image processing. Based on a wavelet-domain Gaussian scale mixture model of images, we approximate NID using a Shannon-entropy-based method. This leads to a series of novel distortion measures that are competitive with state-of-the-art image quality assessment approaches.

  • an adaptive linear system framework for image distortion analysis
    International Conference on Image Processing, 2005
    Co-Authors: Zhou Wang, Eero P Simoncelli
    Abstract:

    We describe a framework for decomposing the distortion between two images into a linear combination of components. Unlike conventional linear bases such as those in Fourier or wavelet decompositions, a subset of the components in our representation are not fixed but are adaptively computed from the input images. We show that this framework is a generalization of a number of existing image comparison approaches. As an example of a specific implementation, we select the components based on the structural similarity principle, separating the overall image distortions into non-structural distortions (those that do not change the structures of the objects in the scene) and the remaining structural distortions. We demonstrate that the resulting measure is effective in predicting image distortions as perceived by human observers.
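
A minimal numpy sketch of the decomposition idea, assuming the simplest choice of components: a fixed mean-shift direction plus one adaptive direction computed from the reference image, with the leftover treated as structural distortion (the paper's actual component set may differ):

```python
import numpy as np

def decompose_distortion(x, y):
    """Split the distortion e = y - x into a luminance (mean-shift)
    component, a contrast (gain on the reference) component, and a
    structural residual. The contrast direction is *adaptive*: it is
    computed from the input image rather than fixed in advance."""
    x = x.ravel().astype(float)
    y = y.ravel().astype(float)
    e = y - x
    n = x.size
    b_lum = np.ones(n) / np.sqrt(n)       # fixed direction: uniform shift
    xc = x - x.mean()
    b_con = xc / np.linalg.norm(xc)       # adaptive: scaled reference image
    e_lum = (e @ b_lum) * b_lum
    e_con = (e @ b_con) * b_con
    e_struct = e - e_lum - e_con          # whatever the two bases miss
    return e_lum, e_con, e_struct

rng = np.random.default_rng(0)
x = rng.random((8, 8))
y = 1.2 * x + 0.1                         # pure gain + offset distortion
e_lum, e_con, e_struct = decompose_distortion(x, y)
```

For a pure gain-plus-offset distortion the structural residual vanishes, which is exactly the behavior a structural/non-structural split should have.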

  • a universal image quality index
    IEEE Signal Processing Letters, 2002
    Co-Authors: Zhou Wang, Alan C Bovik
    Abstract:

    We propose a new universal objective image quality index, which is easy to calculate and applicable to various image processing applications. Instead of using traditional error summation methods, the proposed index is designed by modeling any image distortion as a combination of three factors: loss of correlation, luminance distortion, and contrast distortion. Although the new index is mathematically defined and no human visual system model is explicitly employed, our experiments on various image distortion types indicate that it performs significantly better than the widely used distortion metric mean squared error. Demonstrative images and an efficient MATLAB implementation of the algorithm are available online at http://anchovy.ece.utexas.edu/~zwang/research/quality_index/demo.html.
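
The index models distortion through correlation, luminance, and contrast terms, so a global form can be written directly from those three factors. The paper's implementation averages the index over local sliding windows; the single global expression below is a simplified sketch:

```python
import numpy as np

def uqi(x, y):
    """Global universal quality index: the correlation, luminance, and
    contrast terms collapsed into one expression,
        Q = 4 * cov * mx * my / ((vx + vy) * (mx^2 + my^2)).
    Equals 1.0 only when the two images are identical."""
    x = x.ravel().astype(float)
    y = y.ravel().astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4.0 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))

rng = np.random.default_rng(0)
x = rng.random((16, 16)) + 0.5            # keep the mean away from zero
q_same = uqi(x, x)                        # identical images score 1
q_noisy = uqi(x, x + 0.1 * rng.standard_normal((16, 16)))
```

Additive noise lowers the correlation term and inflates the contrast term, so `q_noisy` drops below 1 even though the mean is barely affected.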

Brian A Wandell - One of the best experts on this subject based on the ideXlab platform.

  • Color image fidelity metrics evaluated using image distortion maps
    Signal Processing, 1998
    Co-Authors: Xuemei Zhang, Brian A Wandell
    Abstract:

    Several color image fidelity metrics are evaluated by comparing the metric predictions to empirical measurements. Subjects examined image pairs consisting of an original and a reproduction. They marked locations on the reproduction that differed detectably from the original. We refer to the distribution of error marks by the subjects as image distortion maps. The empirically obtained image distortion maps are compared to the predicted visible difference calculated using (1) the widely used root mean square error (point-by-point RMS) computed in uncalibrated RGB values, (2) the point-by-point CIELAB ΔE94 values (CIE, 1994), and (3) S-CIELAB ΔE94, a spatial extension of the CIELAB ΔE metric. The uncalibrated RMS metric did not predict the perceptual image distortion data well. The point-by-point CIELAB ΔE94 metric provided better predictions, and the S-CIELAB metric, which incorporates the spatial color sensitivity of the eye, gave the most accurate predictions. None of the metrics provided an excellent fit to the data. Image areas with poor predictions were concentrated in regions containing large negative local contrast. When these areas were excluded from our data analysis, both S-CIELAB and CIELAB predictions agreed much better with the perceptual data. This suggests that the next step in improving color image fidelity metrics is to redefine color difference formulas such as CIELAB ΔE94 in terms of local contrast.
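
The point-by-point ΔE map is straightforward to reproduce given images already converted to CIELAB. The sketch below uses the plain Euclidean ΔE (CIE76) rather than the refined ΔE94 formula the paper evaluates, purely to keep the example short:

```python
import numpy as np

def delta_e76_map(lab_ref, lab_test):
    """Point-by-point CIE76 ΔE map between two images given as H x W x 3
    arrays of CIELAB values: the Euclidean distance in Lab at each pixel.
    (The paper uses ΔE94, which adds chroma- and hue-dependent weights.)"""
    diff = lab_test.astype(float) - lab_ref.astype(float)
    return np.sqrt((diff ** 2).sum(axis=-1))

ref = np.zeros((4, 4, 3))
ref[..., 0] = 50.0                        # uniform mid-lightness gray patch
test = ref.copy()
test[0, 0] = (60.0, 3.0, 4.0)             # one visibly shifted pixel
emap = delta_e76_map(ref, test)           # nonzero only at that pixel
```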

  • Color Imaging Conference - Image Distortion Maps.
    1997
    Co-Authors: Xuemei M. Zhang, Erick Setiawan, Brian A Wandell
    Abstract:

    Subjects examined image pairs consisting of an original and a reproduction created using either JPEG compression or digital halftoning. Subjects marked locations on the reproduction that differed detectably from the original. We refer to the distribution of error marks by the subjects as image distortion maps. The empirically obtained image distortion maps are compared to the visible difference calculated using three color difference metrics: the color distortions predicted by the widely used mean square error (point-by-point MSE) computed in RGB values, the point-by-point CIELAB ΔE color difference formula (CIE, 1971), and S-CIELAB, a spatial extension of CIELAB that incorporates spatial filtering in an opponent-colors representation prior to the CIELAB calculation (Zhang & Wandell, 1996). For halftoned reproductions, the RMS, CIELAB, and S-CIELAB error sizes correlated with the locations marked by subjects reasonably well, given the freedom to select a threshold level separately for each image. The S-CIELAB metric had the most consistent threshold levels across images; the RMS metric had the least consistent levels. For JPEG reproductions, all three metrics provided poor predictions of subjects' marked locations.

Alan C Bovik - One of the best experts on this subject based on the ideXlab platform.

  • no reference image quality assessment in curvelet domain
    Signal Processing-image Communication, 2014
    Co-Authors: Hongping Dong, Hua Huang, Alan C Bovik
    Abstract:

    We study the efficacy of utilizing a powerful image descriptor, the curvelet transform, to learn a no-reference (NR) image quality assessment (IQA) model. A set of statistical features are extracted from a computed image curvelet representation, including the coordinates of the maxima of the log-histograms of the curvelet coefficient values, and the energy distributions of both orientation and scale in the curvelet domain. Our results indicate that these features are sensitive to the presence and severity of image distortion. Operating within a two-stage framework of distortion classification followed by quality assessment, we train an image distortion and quality prediction engine using a support vector machine (SVM). The resulting algorithm, dubbed CurveletQA for short, was tested on the LIVE IQA database and compared to state-of-the-art NR/FR IQA algorithms. We found that CurveletQA correlates well with human subjective opinions of image quality, delivering performance that is competitive with popular full-reference (FR) IQA algorithms such as SSIM, and with top-performing NR IQA models. At the same time, CurveletQA has relatively low complexity.
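
The flavor of the extracted statistics can be sketched without a curvelet implementation. Below, `coeffs` is a hypothetical stand-in for a list of curvelet subband arrays (a real pipeline would obtain these from an actual curvelet transform); the log-histogram peak and the subband energy distribution are computed as illustrative features:

```python
import numpy as np

def transform_domain_features(coeffs, bins=32):
    """Illustrative CurveletQA-style statistics from a list of
    transform-coefficient arrays: the location and height of the maximum
    of the pooled log-histogram, plus the normalized energy of each
    subband. Not the paper's exact feature set."""
    pooled = np.concatenate([c.ravel().astype(float) for c in coeffs])
    hist, edges = np.histogram(pooled, bins=bins)
    log_h = np.log1p(hist)                      # log-histogram of coefficients
    k = int(np.argmax(log_h))
    peak_loc = 0.5 * (edges[k] + edges[k + 1])  # bin center of the maximum
    energy = np.array([np.sum(c.astype(float) ** 2) for c in coeffs])
    return np.concatenate([[peak_loc, log_h[k]], energy / energy.sum()])

rng = np.random.default_rng(0)
coeffs = [rng.standard_normal(256), 2.0 * rng.standard_normal(256)]
feats = transform_domain_features(coeffs)      # 2 peak stats + 2 energies
```

In the full method, feature vectors like this would feed the two-stage SVM engine: one classifier to identify the distortion type, then a per-type quality regressor.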

  • a universal image quality index
    IEEE Signal Processing Letters, 2002
    Co-Authors: Zhou Wang, Alan C Bovik
    Abstract:

    We propose a new universal objective image quality index, which is easy to calculate and applicable to various image processing applications. Instead of using traditional error summation methods, the proposed index is designed by modeling any image distortion as a combination of three factors: loss of correlation, luminance distortion, and contrast distortion. Although the new index is mathematically defined and no human visual system model is explicitly employed, our experiments on various image distortion types indicate that it performs significantly better than the widely used distortion metric mean squared error. Demonstrative images and an efficient MATLAB implementation of the algorithm are available online at http://anchovy.ece.utexas.edu/~zwang/research/quality_index/demo.html.

Ed Gronenschild - One of the best experts on this subject based on the ideXlab platform.

  • the accuracy and reproducibility of a global method to correct for geometric image distortion in the x-ray imaging chain
    Medical Physics, 1997
    Co-Authors: Ed Gronenschild
    Abstract:

    A method to correct for geometric image distortion in the x-ray imaging chain, so-called dewarping, has been developed. It uses a global two-dimensional polynomial model whose degree is optimized. The performance of the method has been tested in a number of experiments using images of a plate with a 1 cm spaced wire grid placed against the input screen of the x-ray image intensifier (14/17/27 cm). Both offline cine film and online video images were analyzed. The accuracy of the dewarp method was derived from the acquired images and from computer-simulated distorted images. The robustness and reproducibility of the dewarp method were evaluated by imaging the grid in various random orientations. Three parameters describing the behavior of the algorithm were considered. The first is the reproducibility of the location of a dewarped position. The second is the reproducibility of the distance between two adjacent dewarped positions, as a measure of the reproducibility of the size of an object under investigation. The third is the reproducibility of the pixel size in the plane of the calibration plate. The major results are: the reproducibility of the location of a dewarped position was 0.01–0.04 mm for cine film and 0.04–0.07 mm for video images, and the coefficient of variation of the distance between two dewarped positions was 0.04%–0.11% for cine film and 0.15%–0.18% for video images. The dewarp algorithm turned out to be fast and accurate, and the distortion was removed over the whole image field down to a low random residual level. A random orientation of the grid affected neither the assessment of the distortion nor its correction. The dewarp method proved to be intrinsically robust and highly reliable. Time instability of the imaging chain was the main source of variability in the dewarp results.
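
The global polynomial dewarping can be sketched as a least-squares fit from imaged grid intersections to their known true positions. The helper names below are hypothetical, and the paper additionally optimizes the polynomial degree on the residuals:

```python
import numpy as np

def design(pts, degree):
    """Polynomial design matrix with all terms x^i * y^j, i + j <= degree."""
    x, y = pts[:, 0], pts[:, 1]
    return np.stack([x**i * y**j
                     for i in range(degree + 1)
                     for j in range(degree + 1 - i)], axis=1)

def fit_dewarp(distorted, true_pts, degree=2):
    """Fit a global 2-D polynomial mapping distorted (x, y) positions to
    true positions by least squares: the grid-based dewarping idea, where
    imaged wire-grid intersections supply (distorted, true) point pairs."""
    coef, *_ = np.linalg.lstsq(design(distorted, degree), true_pts, rcond=None)
    return lambda pts: design(pts, degree) @ coef

# Synthetic check: a 5x5 grid warped by a known quadratic displacement.
grid = np.stack(np.meshgrid(np.arange(5.0), np.arange(5.0)), axis=-1).reshape(-1, 2)
true_pts = grid + 0.01 * grid**2        # quadratic warp, exactly representable
dewarp = fit_dewarp(grid, true_pts, degree=2)
recovered = dewarp(grid)                # residual ~ machine precision
```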

Carlo Dal Mutto - One of the best experts on this subject based on the ideXlab platform.

  • ReConFig - Real-time image distortion correction: Analysis and evaluation of FPGA-compatible algorithms
    2016 International Conference on ReConFigurable Computing and FPGAs (ReConFig), 2016
    Co-Authors: Paolo Di Febbo, Stefano Mattoccia, Carlo Dal Mutto
    Abstract:

    Image distortion correction is a critical preprocessing step for a variety of computer vision and image processing algorithms. Standard real-time software implementations are generally not suited for direct hardware porting, so adapted versions need to be designed in order to obtain implementations deployable on FPGAs. In this paper, hardware-compatible techniques for image distortion correction are introduced and analyzed in detail. The considered solutions are compared in terms of output quality using a geometrical-error-based approach, with particular emphasis on robustness with respect to increasing lens distortion. The required amount of hardware resources is also estimated for each considered approach.
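
A common hardware-friendly formulation is inverse mapping with a radial lens model: every output pixel independently computes its source coordinate and samples the input, which pipelines well on an FPGA. The single-coefficient model and nearest-neighbor sampling below are illustrative simplifications, not the specific algorithms evaluated in the paper:

```python
import numpy as np

def undistort(img, k1, cx, cy):
    """Correct radial lens distortion by inverse mapping: for each output
    (undistorted) pixel, compute where it came from in the distorted image
    via r_d = r * (1 + k1 * r^2) and sample with nearest-neighbor lookup.
    Each pixel is computed independently from a small neighborhood, which
    is what makes this loop structure amenable to hardware pipelines."""
    h, w = img.shape
    out = np.zeros_like(img)
    for yo in range(h):
        for xo in range(w):
            dx, dy = xo - cx, yo - cy
            r2 = dx * dx + dy * dy
            xs = int(round(cx + dx * (1.0 + k1 * r2)))   # source column
            ys = int(round(cy + dy * (1.0 + k1 * r2)))   # source row
            if 0 <= xs < w and 0 <= ys < h:              # out-of-range -> 0
                out[yo, xo] = img[ys, xs]
    return out

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(9, 9))
corrected = undistort(img, 0.0, 4.0, 4.0)   # k1 = 0: identity mapping
```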

  • Real-Time Image Distortion Correction: Analysis and Evaluation of FPGA-Compatible Algorithms
    arXiv: Computer Vision and Pattern Recognition, 2016
    Co-Authors: Paolo Di Febbo, Stefano Mattoccia, Carlo Dal Mutto
    Abstract:

    Image distortion correction is a critical preprocessing step for a variety of computer vision and image processing algorithms. Standard real-time software implementations are generally not suited for direct hardware porting, so adapted versions need to be designed in order to obtain implementations deployable on FPGAs. In this paper, hardware-compatible techniques for image distortion correction are introduced and analyzed in detail. The considered solutions are compared in terms of output quality using a geometrical-error-based approach, with particular emphasis on robustness with respect to increasing lens distortion. The required amount of hardware resources is also estimated for each considered approach.