Image Difference

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 129,156 Experts worldwide, ranked by the ideXlab platform

Philipp Urban - One of the best experts on this subject based on the ideXlab platform.

  • color Image quality assessment from prediction to optimization
    IEEE Transactions on Image Processing, 2014
    Co-Authors: Jens Preiss, Felipe Fernandes, Philipp Urban
    Abstract:

    While Image-Difference metrics show good prediction performance on visual data, they often yield artifact-contaminated results if used as objective functions for optimizing complex Image-processing tasks. We investigate in this regard the recently proposed color-Image-Difference (CID) metric particularly developed for predicting gamut-mapping distortions. We present an algorithm for optimizing gamut mapping employing the CID metric as the objective function. Resulting Images contain various visual artifacts, which are addressed by multiple modifications yielding the improved color-Image-Difference (iCID) metric. The iCID-based optimizations are free from artifacts and retain contrast, structure, and color of the original Image to a great extent. Furthermore, the prediction performance on visual data is improved by the modifications.

  • Image-Difference Prediction: From Color to Spectral
    IEEE transactions on image processing : a publication of the IEEE Signal Processing Society, 2014
    Co-Authors: Steven Le Moan, Philipp Urban
    Abstract:

    We propose a new strategy to evaluate the quality of multi- and hyperspectral Images from the perspective of human perception. We define the spectral Image Difference as the overall perceived Difference between two spectral Images under a set of specified viewing conditions (illuminants). First, we analyze the stability of seven Image-Difference features across illuminants by means of an information-theoretic strategy. We demonstrate, in particular, that for common spectral distortions (spectral gamut mapping, spectral compression, spectral reconstruction), chromatic features vary much more than achromatic ones, even when chromatic adaptation is considered. We then propose two computationally efficient spectral Image-Difference metrics and compare them to the results of a subjective visual experiment. A significant improvement is shown over existing metrics such as the widely used root-mean-square error.
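
For context, the root-mean-square-error baseline that the proposed spectral metrics are compared against can be sketched in a few lines; the (height, width, bands) array layout is an assumption for illustration, not something the paper specifies:

```python
import numpy as np

def spectral_rmse(img_a, img_b):
    """Root-mean-square error between two spectral Images stored as
    (height, width, bands) arrays -- the baseline metric the
    proposed spectral Image-Difference metrics are compared to."""
    diff = img_a.astype(float) - img_b.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

dark = np.zeros((2, 2, 31))    # 31 spectral bands, all zero
light = np.ones((2, 2, 31))    # same shape, all one
print(spectral_rmse(dark, light))  # 1.0
```

Unlike the perceptual metrics discussed above, this baseline weights every band and pixel equally, which is exactly the shortcoming the paper addresses.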

  • Image-Difference Prediction: From Grayscale to Color
    IEEE transactions on image processing : a publication of the IEEE Signal Processing Society, 2012
    Co-Authors: Ingmar Lissner, Jens Preiss, Philipp Urban, Matthias Scheller Lichtenauer, Peter Zolliker
    Abstract:

    Existing Image-Difference measures show excellent accuracy in predicting distortions, such as lossy compression, noise, and blur. Their performance on certain other distortions could be improved; one example of this is gamut mapping. This is partly because they either do not interpret chromatic information correctly or they ignore it entirely. We present an Image-Difference framework that comprises Image normalization, feature extraction, and feature combination. Based on this framework, we create Image-Difference measures by selecting specific implementations for each of the steps. Particular emphasis is placed on using color information to improve the assessment of gamut-mapped Images. Our best Image-Difference measure shows significantly higher prediction accuracy on a gamut-mapping dataset than all other evaluated measures.
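
The three-stage framework (Image normalization, feature extraction, feature combination) can be illustrated with a deliberately simplified sketch; the single intensity feature and unit weight below are placeholders for the paper's perceptually motivated features, not its actual implementation:

```python
import numpy as np

def normalize(img):
    # Stage 1 -- Image normalization: rescale to [0, 1]
    img = img.astype(float)
    rng = img.max() - img.min()
    return (img - img.min()) / rng if rng > 0 else np.zeros_like(img)

def extract_features(ref, dist):
    # Stage 2 -- feature extraction. The real framework derives
    # perceptual features (e.g. lightness, contrast, structure,
    # chroma); a plain absolute intensity difference stands in here.
    return {"intensity": np.abs(ref - dist)}

def combine(features, weights):
    # Stage 3 -- feature combination: weighted sum of feature means
    return sum(weights[name] * fmap.mean() for name, fmap in features.items())

def image_difference(ref, dist, weights=None):
    weights = weights or {"intensity": 1.0}
    ref, dist = normalize(ref), normalize(dist)
    return combine(extract_features(ref, dist), weights)

checker = np.array([[0.0, 1.0], [0.0, 1.0]])
print(image_difference(checker, checker))        # 0.0
print(image_difference(checker, 1.0 - checker))  # 1.0
```

Concrete measures are then obtained by swapping in specific implementations for each stage, which is the selection process the abstract describes.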

  • CGIV - The Impact of Image-Difference Features on Perceived Image Differences
    2012
    Co-Authors: Jens Preiss, Ingmar Lissner, Philipp Urban, Matthias Scheller Lichtenauer, Peter Zolliker
    Abstract:

    We discuss a few selected hypotheses on how the visual system judges Differences between color Images. We then derive five Image-Difference features from these hypotheses and address their relation to visual processing. Three models are proposed to combine these features for the prediction of perceived Image Differences. The parameters of the Image-Difference features are optimized on human Image-Difference assessments. For each model, we investigate the impact of individual features on the overall prediction performance. If chromatic features are combined with lightness-based features, the prediction accuracy on a test dataset is significantly higher than that of the SSIM index, which operates on the achromatic component only.

Christophoros Nikou - One of the best experts on this subject based on the ideXlab platform.

  • Variational-Bayes Optical Flow
    Journal of Mathematical Imaging and Vision, 2014
    Co-Authors: Giannis Chantas, Theodosios Gkamas, Christophoros Nikou
    Abstract:

    The Horn-Schunck (HS) optical flow method is widely employed to initialize many motion estimation algorithms. In this work, a variational Bayesian approach to the HS method is presented, where the motion vectors are considered to be spatially varying Student's t-distributed unobserved random variables, i.e., the prior is a multivariate Student's t-distribution, while the only observations available are the temporal and spatial Image Differences. The proposed model takes into account the residual resulting from the linearization of the brightness constancy constraint by Taylor series approximation, which is also assumed to be spatially varying Student's t-distributed observation noise. To infer the model variables and parameters we resort to variational inference methodology, leading to an expectation-maximization (EM) framework with update equations analogous to the Horn-Schunck approach. This is accomplished in a principled probabilistic framework where all of the model parameters are estimated automatically from the data. Experimental results show the improvement obtained by the proposed model, which may substitute for the standard algorithm in the initialization of more sophisticated optical flow schemes.
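
For reference, the classical Horn-Schunck scheme that this variational-Bayes model generalizes can be sketched in a few lines of NumPy; the wrap-around borders and fixed iteration count are simplifications for illustration:

```python
import numpy as np

def neighbor_avg(f):
    # 4-neighbour mean with wrap-around borders (a simplification)
    return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                   np.roll(f, 1, 1) + np.roll(f, -1, 1))

def horn_schunck(frame1, frame2, alpha=1.0, n_iter=50):
    """Minimal classical Horn-Schunck optical flow. Its only
    observations are the spatial gradients of frame1 and the
    temporal Image Difference between the two frames."""
    I1 = frame1.astype(float)
    I2 = frame2.astype(float)
    Iy, Ix = np.gradient(I1)          # spatial image differences
    It = I2 - I1                      # temporal image difference
    u = np.zeros_like(I1)             # horizontal flow
    v = np.zeros_like(I1)             # vertical flow
    for _ in range(n_iter):
        u_bar, v_bar = neighbor_avg(u), neighbor_avg(v)
        common = (Ix * u_bar + Iy * v_bar + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = u_bar - Ix * common       # classical HS update equations
        v = v_bar - Iy * common
    return u, v
```

The variational-Bayes model replaces the fixed smoothness weight `alpha` with spatially varying, automatically estimated Student's t parameters, but the resulting update equations keep this same overall shape.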

  • a probabilistic formulation of the optical flow problem
    International Conference on Pattern Recognition, 2012
    Co-Authors: Theodosios Gkamas, Giannis Chantas, Christophoros Nikou
    Abstract:

    The Horn-Schunck (HS) optical flow method is widely employed to initialize many motion estimation algorithms. In this work, a variational Bayesian approach to the HS method is presented where the motion vectors are considered to be spatially varying Student's t-distributed unobserved random variables and the only observation available is the temporal Image Difference. The proposed model takes into account the residual resulting from the linearization of the brightness constancy constraint by Taylor series approximation, which is also assumed to be spatially varying Student's t-distributed observation noise. To infer the model variables and parameters we resort to variational inference methodology, leading to an expectation-maximization (EM) scheme within a principled probabilistic framework where all of the model parameters are estimated automatically from the data.

Tao Zhang - One of the best experts on this subject based on the ideXlab platform.

  • The Application of Kalman Filtering in the System of Image Difference Moving Objects Tracking
    2011 International Conference on Computational and Information Sciences, 2011
    Co-Authors: Jun Gui, Tao Zhang, Lijun Liu
    Abstract:

    The Difference-Image method is widely used to detect moving targets in machine-vision systems [1]. However, the standard algorithm searches the entire Image for changed pixels, which is computationally expensive, reduces the achievable frame rate during acquisition, and loses the target easily when it is partly occluded. To address this, this paper presents a Difference-Image tracking method based on a Kalman predictor: the prediction step of the Kalman filter estimates the likely target region in the next frame, and the Image-Difference operation is performed only within that smaller region to locate the target centre. When applied to palm tracking on a system built from an S3C2410X ARM chip and an OV511 camera, the proposed algorithm resists occlusion, tracks stably, and greatly reduces computation time. It therefore mitigates both the heavy computational load and the occlusion sensitivity of the traditional Image-Difference algorithm.
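
The prediction step described above can be sketched with a minimal constant-velocity Kalman filter over the target centre; the state layout and noise settings below are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

class KalmanPredictor:
    """Constant-velocity Kalman filter over the target centre (x, y):
    predict the likely target region in the next frame so that the
    Image-Difference search runs on a small window only."""

    def __init__(self, x, y, dt=1.0, q=1e-2, r=1.0):
        self.s = np.array([x, y, 0.0, 0.0])   # state: x, y, vx, vy
        self.P = np.eye(4)                    # state covariance
        self.F = np.eye(4)                    # constant-velocity transition
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                 # we observe position only
        self.Q = q * np.eye(4)                # process noise
        self.R = r * np.eye(2)                # measurement noise

    def predict(self):
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]                     # predicted target centre

    def update(self, zx, zy):
        z = np.array([zx, zy], dtype=float)
        innov = z - self.H @ self.s           # measurement residual
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.s = self.s + K @ innov
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Track a target moving +2 px/frame along x, starting at (10, 10)
kf = KalmanPredictor(10, 10)
for t in range(1, 6):
    kf.predict()
    kf.update(10 + 2 * t, 10)
px, py = kf.predict()   # centre of the next frame's search window
```

Only the pixels inside a window around `(px, py)` would then be differenced against the previous frame, instead of the whole Image, which is the source of the claimed speed-up.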

  • 3d Image format identification by Image Difference
    International Conference on Multimedia and Expo, 2010
    Co-Authors: Tao Zhang
    Abstract:

    Many 3D formats exist and will co-exist for a long time, since no 3D standard defines a generally accepted format. Support for multiple 3D formats will be important for bringing 3D into the home. In this paper, we propose a fast and effective method to detect whether an Image is a 2D Image or a 3D Image encoded as a pair of stereo Images, and to further identify the exact 3D format in the latter case. The proposed method computes an edge map resulting from the Image Differences between the left- and right-view Images, then uses statistics of the edge-width distribution, combined with structural similarity analysis, to detect the presence of a 3D format and to identify it. Experiments show the effectiveness of this method.
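
A much-simplified version of the idea, checking whether the two halves of a frame look like a stereo pair, might read as follows; note that plain correlation stands in here for the paper's edge-width statistics and structural similarity analysis:

```python
import numpy as np

def looks_side_by_side(img, threshold=0.9):
    """Heuristic check for the side-by-side 3D format: split the
    frame into left and right halves and correlate them. A stereo
    pair differs only by small disparities, so a high correlation
    hints at a side-by-side layout."""
    h, w = img.shape
    half = w // 2
    l = img[:, :half].ravel().astype(float)
    r = img[:, half : 2 * half].ravel().astype(float)
    l -= l.mean()                      # zero-mean both halves
    r -= r.mean()
    denom = np.sqrt((l ** 2).sum() * (r ** 2).sum())
    corr = (l @ r) / denom if denom > 0 else 1.0
    return bool(corr > threshold)

pattern = np.arange(8.0).reshape(2, 4)
print(looks_side_by_side(np.hstack([pattern, pattern])))           # True
print(looks_side_by_side(np.hstack([pattern, pattern[:, ::-1]])))  # False
```

The same split-and-compare idea extends to top-bottom and other packed formats by changing the axis along which the frame is divided.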

Jon Yngve Hardeberg - One of the best experts on this subject based on the ideXlab platform.

  • Development of an adaptive bilateral filter for evaluating color Image Difference
    Journal of Electronic Imaging, 2012
    Co-Authors: Zhaohui Wang, Jon Yngve Hardeberg
    Abstract:

    Spatial filtering, which aims to mimic the contrast sensitivity function (CSF) of the human visual system (HVS), has previously been combined with color Difference formulae for measuring color Image reproduction errors. These spatial filters attenuate imperceptible information in Images, unfortunately including high-frequency edges, which are believed to be crucial in the process of scene analysis by the HVS. The adaptive bilateral filter represents a novel approach which avoids the undesirable loss of edge information introduced by CSF-based filtering. The bilateral filter employs two Gaussian smoothing filters in different domains, i.e., the spatial domain and the intensity domain. We propose a method to decide the parameters, which are designed to be adaptive to the corresponding viewing conditions and to the quantity and homogeneity of information contained in an Image. Experiments and discussions are given to support the proposal. A series of perceptual experiments were conducted to evaluate the performance of our approach. The experimental sample Images were reproduced with variations in six Image attributes: lightness, chroma, hue, compression, noise, and sharpness/blurriness. The Pearson's correlation values between the model-predicted Image Difference and the observed Difference were employed to evaluate the performance and compare it with that of spatial CIELAB and the Image appearance model. © 2012 SPIE and IS&T. (DOI: 10.1117/1.JEI.21.2.023021)

  • CCIW - A New Spatial Hue Angle Metric for Perceptual Image Difference
    Lecture Notes in Computer Science, 2009
    Co-Authors: Marius Pedersen, Jon Yngve Hardeberg
    Abstract:

    Color Image Difference metrics have been proposed to find the Differences between an original Image and a modified version of it. One of these metrics is the hue angle algorithm proposed by Hong and Luo in 2002. This metric does not take into account the spatial properties of the human visual system and can therefore miscalculate the Difference between an original Image and a modified version of it. We therefore propose a new color Image Difference metric, based on the hue angle algorithm, that takes the spatial properties of the human visual system into account. The proposed metric, which we have named SHAME (Spatial Hue Angle MEtric), has been subjected to extensive testing. The results show improved performance compared to the original metric proposed by Hong and Luo.

  • Color Imaging Conference - An adaptive Bilateral Filter for Predicting Color Image Difference.
    2009
    Co-Authors: Zhaohui Wang, Jon Yngve Hardeberg
    Abstract:

    Color Image Difference metrics are of great importance in the field of color Image reproduction. In this study, we introduce an adaptive bilateral filter for predicting color Image Difference. This filter is simple, employing two Gaussian smoothing filters in different domains, which avoids the loss of edge information when smoothing the Image. The challenge, however, is to select appropriate parameters that give good performance when the filter is applied to color Image Difference prediction. We propose a method to optimize the parameters, which are designed to be adaptive to the corresponding viewing conditions and to the quantity and homogeneity of information contained in an Image. We have conducted psychophysical experiments to evaluate the performance of our approach. The experimental sample Images are reproduced with variations in six Image attributes: Lightness, Chroma, Hue, Compression, Noise, and Sharpness. The Pearson's correlation value between the predicted Difference and the z-score of visual judgments was employed to evaluate the performance and compare it with that of s-CIELAB and iCAM.

    Background

    Theories of spatial characterization of the human visual system are of much current interest in the development of Image Difference metrics [1-4]. They all involve the conception that the human visual system is optimally designed to process the spatial information in Images or complex scenes. The study [5] of the human visual system has shown that it is composed of spatial frequency channels. The light sensors of the human visual system, cones and rods, are sensitive to spatial changes of stimuli. Both contrast sensitivity and color appearance vary as a function of the spatial pattern [6, 7]. Attempts to computationally assess color Image Difference have typically created models of human perception suitable for determining the discriminations introduced by spatial alteration, such as Image compression, halftone reproduction, etc.

    On the other hand, the successful applications of color Difference formulae, such as the CIELAB 1976 color Difference, CIE94, and CIEDE2000, have encouraged researchers to apply them also to Image Difference evaluation. An important motivation of our work is the development of an Image Difference metric for various Image reproduction tasks. Image Differences may originate from different Image reproduction methods, such as the discriminations from chromatic and spatial modifications. Several studies [8-12] have measured the discriminations introduced by chromatic changes of the Images alone. In this work, we study the general statistics over both spatial and chromatic Image reproductions. Spatial filtering was introduced into the color Difference formula for measuring Image reproduction errors, and later replaced with simulators of the human contrast sensitivity functions (CSFs) [2, 13, 14]. Many models have been developed for simulating the CSFs; the model developed by Movshon and Kiorpes [15] was suggested [2] and also adopted by CIE TC802 [16]. Generally, the spatial filters (or CSF models) are applied in the opponent color space to attenuate the high-frequency components in an Image. The decrease in sensitivity at higher frequencies has been attributed to blurring caused by the optical limitations of the eye and by spatial summation in the human visual system [17]. Thus, the output is a blurrier Image in which imperceptible information is attenuated, including, inevitably, high-frequency edges. There is a broad consensus, however, that the human visual system is particularly sensitive to the edges in an Image. Edge detection is believed to be necessary to distinguish objects from their background and to establish their shape and position, and it has proved to be a crucial early step in the process of scene analysis by the human visual system.

    To overcome the undesirable loss of edges whilst using the spatial filter, recent studies [3, 18] employed edge enhancement in the workflow for spatial localization. Many Image-processing methods have been developed to smooth the Image while keeping the edges. Tomasi and Manduchi [19] described an alternative, the bilateral filter, which extends the concept of Gaussian smoothing by weighting the filter coefficients with the corresponding relative pixel intensities. Two Gaussian filters are applied in a localized pixel neighborhood, one in the spatial domain (domain filter) and the other in the intensity domain (range filter). The result is an Image that is blurrier than the original while edges are preserved. However, the behavior of this filter is governed by a number of parameters which need to be selected with care for color Image Difference evaluation. In this paper, we propose an adaptive bilateral filter for color Image Difference evaluation and design its parameters based on the spatial frequency and on the quantity and homogeneity of the information contained in a given Image. We describe a psychophysical experiment to validate its performance and compare it with two other models, s-CIELAB and iCAM, which are both recognized as models based on the human visual system. The test Images are reproduced in terms of both spatial and chromatic attributes. The evaluation is based on the Pearson's correlation value between the visual psychophysical judgments and the predicted Difference.

    Adaptive Bilateral Filter

    The idea behind the bilateral filter is to combine the domain and range filters: pixels in the neighborhood that are geometrically closer and photometrically more similar to the filtering centre are weighted more. Given a color Image f(x), the bilateral filter [19] can be expressed as

        h(x) = k(x)^(-1) ∫∫ f(ξ) c(ξ, x) s(f(ξ), f(x)) dξ,

    where c(ξ, x) measures the geometric closeness between the neighborhood centre x and a nearby point ξ, s(f(ξ), f(x)) measures the photometric similarity of their pixel values, and k(x) = ∫∫ c(ξ, x) s(f(ξ), f(x)) dξ normalizes the weights.
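
A minimal grayscale implementation of this filter, with fixed rather than adaptive parameters (unlike the proposed method), could look like this:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_d=1.0, sigma_r=0.2):
    """Grayscale bilateral filter: each output pixel is a weighted
    mean of its neighbourhood, with weights c (domain: geometric
    closeness) times s (range: photometric similarity), normalized
    by k = the sum of the weights."""
    img = img.astype(float)
    height, width = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius : radius + 1, -radius : radius + 1]
    domain = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_d ** 2))  # c kernel
    pad = np.pad(img, radius, mode="edge")
    for i in range(height):
        for j in range(width):
            patch = pad[i : i + 2 * radius + 1, j : j + 2 * radius + 1]
            rng = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))  # s kernel
            w = domain * rng
            out[i, j] = (w * patch).sum() / w.sum()  # k(x) normalization
    return out

# A step edge survives the filtering almost untouched:
step = np.zeros((4, 6))
step[:, 3:] = 1.0
smoothed = bilateral_filter(step, sigma_r=0.1)
```

Because the range kernel assigns near-zero weight to pixels on the far side of the edge, the step stays sharp while flat regions are smoothed, which is exactly the edge-preserving behaviour the text describes.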

  • CGIV/MCS - Rank Order and Image Difference Metrics.
    2008
    Co-Authors: Marius Pedersen, Jon Yngve Hardeberg
    Abstract:

    There are a number of ways to reproduce an Image, for example gamut mapping, halftoning, and compression. To find the best reproduction among a number of variants of the same reproduction algorithm, a psychophysical experiment can be carried out. Image Difference metrics have been introduced to eliminate the need for these experiments; to do so, the metrics must reflect the perceived Image Difference. One way to evaluate the overall performance of Image Difference metrics is to compute the correlation coefficient between perceived and predicted Image Difference. This does not always reflect the true performance of a metric. We therefore propose to use the ranking based on the predicted Image Difference for each scene as a data set for the rank order method. This yields a z-score analogous to the overall perceived Image Difference, and the correlation coefficient between the metric z-score and the perceived z-score reflects the overall performance of the Image Difference metrics.
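
The final evaluation step, correlating metric z-scores with perceived z-scores, reduces to a Pearson correlation; a sketch with made-up example scores (the numbers below are purely illustrative):

```python
import numpy as np

def zscore(x):
    # Standardize scores to zero mean and unit variance
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def pearson(x, y):
    # Pearson correlation coefficient between two score vectors
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    return float((x @ y) / np.sqrt((x ** 2).sum() * (y ** 2).sum()))

# Hypothetical metric predictions vs. mean observer ratings
# for four reproductions of one scene:
predicted = [0.2, 0.5, 0.7, 0.9]
perceived = [0.1, 0.4, 0.8, 0.95]
r = pearson(zscore(predicted), zscore(perceived))
```

A value of `r` close to 1 would indicate that the metric ranks the reproductions the same way the observers do.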

  • Evaluating colour Image Difference metrics for gamut‐mapped Images
    Coloration Technology, 2008
    Co-Authors: Jon Yngve Hardeberg, Eriko Bando, Marius Pedersen
    Abstract:

    The quality of several colour Image Difference metrics (pixelwise CIELAB ΔEab, S-CIELAB, iCAM, the Structural Similarity Index, Universal Image Quality, and the hue-angle algorithm) has been investigated. These results were compared with the results of a psychophysical experiment in which the perceptual Image Difference was evaluated. Six original Images were reproduced using six different colour gamut-mapping algorithms. The results of our experiment indicate that perceptual Image Difference cannot be directly related to the colour Image Difference calculated by current metrics. Therefore, it is currently not possible to evaluate colour gamut-mapping quality using colour Image Difference metrics.
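
The simplest metric in the list, pixelwise CIELAB ΔEab, is just the per-pixel Euclidean distance in L*a*b*, here pooled by averaging over the Image (the pooling choice is an assumption; other statistics are also used in practice):

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """Mean pixelwise CIELAB Delta-E*ab: the Euclidean distance in
    L*a*b* at each pixel, averaged over the Image. Inputs are
    (height, width, 3) arrays already in CIELAB coordinates."""
    diff = lab1.astype(float) - lab2.astype(float)
    per_pixel = np.sqrt((diff ** 2).sum(axis=-1))
    return float(per_pixel.mean())

lab_ref = np.zeros((2, 2, 3))
lab_mod = np.zeros((2, 2, 3))
lab_mod[..., 1] = 3.0   # shift a* by 3 at every pixel
lab_mod[..., 2] = 4.0   # shift b* by 4 at every pixel
print(delta_e_ab(lab_ref, lab_mod))  # 5.0
```

Because this metric treats every pixel independently, it cannot account for the spatial restructuring that gamut mapping introduces, which is consistent with the study's negative finding.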

Giannis Chantas - One of the best experts on this subject based on the ideXlab platform.

  • Variational-Bayes Optical Flow
    Journal of Mathematical Imaging and Vision, 2014
    Co-Authors: Giannis Chantas, Theodosios Gkamas, Christophoros Nikou
    Abstract:

    The Horn-Schunck (HS) optical flow method is widely employed to initialize many motion estimation algorithms. In this work, a variational Bayesian approach to the HS method is presented, where the motion vectors are considered to be spatially varying Student's t-distributed unobserved random variables, i.e., the prior is a multivariate Student's t-distribution, while the only observations available are the temporal and spatial Image Differences. The proposed model takes into account the residual resulting from the linearization of the brightness constancy constraint by Taylor series approximation, which is also assumed to be spatially varying Student's t-distributed observation noise. To infer the model variables and parameters we resort to variational inference methodology, leading to an expectation-maximization (EM) framework with update equations analogous to the Horn-Schunck approach. This is accomplished in a principled probabilistic framework where all of the model parameters are estimated automatically from the data. Experimental results show the improvement obtained by the proposed model, which may substitute for the standard algorithm in the initialization of more sophisticated optical flow schemes.

  • a probabilistic formulation of the optical flow problem
    International Conference on Pattern Recognition, 2012
    Co-Authors: Theodosios Gkamas, Giannis Chantas, Christophoros Nikou
    Abstract:

    The Horn-Schunck (HS) optical flow method is widely employed to initialize many motion estimation algorithms. In this work, a variational Bayesian approach to the HS method is presented where the motion vectors are considered to be spatially varying Student's t-distributed unobserved random variables and the only observation available is the temporal Image Difference. The proposed model takes into account the residual resulting from the linearization of the brightness constancy constraint by Taylor series approximation, which is also assumed to be spatially varying Student's t-distributed observation noise. To infer the model variables and parameters we resort to variational inference methodology, leading to an expectation-maximization (EM) scheme within a principled probabilistic framework where all of the model parameters are estimated automatically from the data.
