Reference Video

The experts below are selected from a list of 37,620 experts worldwide, ranked by the ideXlab platform.

Alan C Bovik - One of the best experts on this subject based on the ideXlab platform.

  • MMSP - No-Reference Video Quality Assessment Using Space-Time Chips
    2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP), 2020
    Co-Authors: Joshua P. Ebenezer, Zaixi Shang, Hai Wei, Alan C Bovik
    Abstract:

    We propose a new prototype model for no-reference video quality assessment (VQA) based on the natural statistics of space-time chips of videos. Space-time chips (ST-chips) are a new, quality-aware feature space which we define as space-time localized cuts of video data in directions that are determined by the local motion flow. We use parametrized distribution fits to the bandpass histograms of space-time chips to characterize quality, and show that the parameters from these models are affected by distortion and can hence be used to objectively predict the quality of videos. Our prototype method, which we call ChipQA-0, is agnostic to the types of distortion affecting the video, and is based on identifying and quantifying deviations from the expected statistics of natural, undistorted ST-chips in order to predict video quality. We train and test our resulting model on several large VQA databases and show that our model achieves high correlation against human judgments of video quality and is competitive with state-of-the-art models.
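
    The following is a minimal Python sketch of the kind of parametric fit described above: a generalized Gaussian distribution (GGD) is fitted to the bandpass coefficients of a space-time slice, and the fitted shape and scale then act as quality-aware features. The motion-guided chip extraction of ChipQA-0 is not reproduced, and the difference-of-Gaussians filter and the random stand-in data are assumptions made purely for illustration.

        # Hedged sketch: GGD fit to bandpass coefficients of a space-time slice.
        import numpy as np
        from scipy.ndimage import gaussian_filter
        from scipy.stats import gennorm

        def bandpass(chip, sigma=1.0):
            """Simple bandpass via a difference-of-Gaussians approximation."""
            return gaussian_filter(chip, sigma) - gaussian_filter(chip, 2 * sigma)

        def ggd_features(chip):
            """Return (shape, scale) of a zero-mean GGD fit to bandpass coefficients."""
            coeffs = bandpass(chip.astype(np.float64)).ravel()
            beta, loc, scale = gennorm.fit(coeffs, floc=0.0)  # location fixed at zero
            return beta, scale

        # A random 64x64 array stands in for a real space-time chip.
        chip = np.random.rand(64, 64)
        print(ggd_features(chip))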

  • A No-Reference Video Quality Assessment Model for Underwater Networks
    IEEE Journal of Oceanic Engineering, 2020
    Co-Authors: Jose-Miguel Moreno-Roldan, Javier Poncela, Pablo Otero, Alan C Bovik
    Abstract:

    Underwater imagery is increasingly drawing attention from the scientific community, since pictures and videos are invaluable tools in the study of the vast unknown oceanic environment that covers 90% of the planetary biosphere. However, underwater sensor networks must cope with the harsh channel that seawater constitutes. Medium-range communication is only possible using acoustic modems that have limited transmission capabilities and peak bitrates of only a few dozen kilobits per second. These reduced bitrates force heavy compression on videos, yielding much higher levels of distortion than in other video services. Furthermore, underwater video users are ocean researchers, and therefore their quality perception also differs from that of the generic viewers who typically take part in subjective quality assessment experiments. Computational efficiency is also important, since the underwater nodes must run on batteries and their recovery is very expensive. In this paper, we propose a pixel-based no-reference video quality assessment method that addresses the described challenges and achieves good correlations against subjective scores of users of underwater videos.

  • Human Vision and Electronic Imaging - Motion-Based Perceptual Quality Assessment of Video
    Human Vision and Electronic Imaging XIV, 2009
    Co-Authors: Kalpana Seshadrinathan, Alan C Bovik
    Abstract:

    There is a great deal of interest in methods to assess the perceptual quality of a video sequence in a full-reference framework. Motion plays an important role in the human perception of video, and videos suffer from several artifacts that arise from inaccuracies in the representation of motion in the test video compared to the reference. However, existing algorithms to measure video quality focus primarily on capturing spatial artifacts in the video signal, and are inadequate at modeling motion perception and capturing temporal artifacts in videos. We present an objective, full-reference video quality index known as the MOtion-based Video Integrity Evaluation (MOVIE) index that integrates both spatial and temporal aspects of distortion assessment. MOVIE explicitly uses motion information from the reference video and evaluates the quality of the test video along the motion trajectories of the reference video. The performance of MOVIE is evaluated using the VQEG FR-TV Phase I dataset, and MOVIE is shown to be competitive with, and even outperform, existing video quality assessment systems.
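
    Below is a minimal Python sketch of the core idea of evaluating a test video along the motion trajectories of the reference, assuming a dense flow field is already available from any optical flow estimator. MOVIE itself uses spatio-spectrally localized Gabor decompositions; the simple trajectory-difference error here only illustrates the "follow the reference flow" concept.

        # Hedged sketch: temporal error measured along the reference's motion trajectories.
        import numpy as np

        def temporal_error_along_flow(ref_t, ref_t1, test_t, test_t1, flow):
            """flow[y, x] = (dy, dx) estimated on the reference between frames t and t+1."""
            h, w = ref_t.shape
            ys, xs = np.mgrid[0:h, 0:w]
            # Trajectory end points, rounded and clipped to the frame.
            y2 = np.clip((ys + flow[..., 0]).round().astype(int), 0, h - 1)
            x2 = np.clip((xs + flow[..., 1]).round().astype(int), 0, w - 1)
            # Intensity change along each trajectory, for reference and test.
            ref_traj = ref_t1[y2, x2] - ref_t
            test_traj = test_t1[y2, x2] - test_t
            return float(np.mean((ref_traj - test_traj) ** 2))

        # With identical videos and zero flow this degenerates to 0.0.
        h, w = 32, 32
        ref_t, ref_t1 = np.random.rand(h, w), np.random.rand(h, w)
        flow = np.zeros((h, w, 2))
        print(temporal_error_along_flow(ref_t, ref_t1, ref_t, ref_t1, flow))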

Patrick Bonnet - One of the best experts on this subject based on the ideXlab platform.

Sumohana S. Channappayya - One of the best experts on this subject based on the ideXlab platform.

  • No-Reference Video Quality Assessment Using Natural Spatiotemporal Scene Statistics
    IEEE Transactions on Image Processing, 2020
    Co-Authors: Sathya Veera Reddy Dendi, Sumohana S. Channappayya
    Abstract:

    Robust spatiotemporal representations of natural videos have several applications, including quality assessment, action recognition, and object tracking. In this paper, we propose a video representation that is based on a parameterized statistical model for the spatiotemporal statistics of mean subtracted and contrast normalized (MSCN) coefficients of natural videos. Specifically, we propose an asymmetric generalized Gaussian distribution (AGGD) to model the statistics of MSCN coefficients of natural videos and their spatiotemporal Gabor bandpass filtered outputs. We then demonstrate that the AGGD model parameters serve as good representative features for distortion discrimination. Based on this observation, we propose a supervised learning approach using support vector regression (SVR) to address the no-reference video quality assessment (NRVQA) problem. The performance of the proposed algorithm is evaluated on publicly available video quality assessment (VQA) datasets with both traditional and in-capture/authentic distortions. We show that the proposed algorithm delivers competitive performance on traditional (synthetic) distortions and acceptable performance on authentic distortions. The code for our algorithm will be released at https://www.iith.ac.in/lfovia/downloads.html.
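
    A minimal Python sketch of the MSCN transform and a moment-matching AGGD fit of the kind used as features above; the spatiotemporal Gabor filtering and the SVR stage are omitted, and the filter width and normalization constant are illustrative assumptions.

        # Hedged sketch: MSCN coefficients of a frame plus an AGGD fit by moment matching.
        import numpy as np
        from scipy.ndimage import gaussian_filter
        from scipy.special import gamma as G

        def mscn(frame, sigma=7 / 6, c=1.0):
            """Mean-subtracted, contrast-normalized coefficients of a luminance frame."""
            mu = gaussian_filter(frame, sigma)
            var = gaussian_filter(frame * frame, sigma) - mu * mu
            return (frame - mu) / (np.sqrt(np.abs(var)) + c)

        def aggd_fit(x):
            """Asymmetric GGD parameters (shape, left scale, right scale) by moment matching."""
            sig_l = np.sqrt(np.mean(x[x < 0] ** 2))
            sig_r = np.sqrt(np.mean(x[x >= 0] ** 2))
            gamma_hat = sig_l / sig_r
            r_hat = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
            R_hat = r_hat * (gamma_hat ** 3 + 1) * (gamma_hat + 1) / (gamma_hat ** 2 + 1) ** 2
            grid = np.arange(0.2, 10.0, 0.001)
            rho = G(2 / grid) ** 2 / (G(1 / grid) * G(3 / grid))
            alpha = grid[np.argmin((rho - R_hat) ** 2)]
            beta_l = sig_l * np.sqrt(G(1 / alpha) / G(3 / alpha))
            beta_r = sig_r * np.sqrt(G(1 / alpha) / G(3 / alpha))
            return alpha, beta_l, beta_r

        frame = np.random.rand(96, 96)  # stand-in for a luminance frame
        print(aggd_fit(mscn(frame).ravel()))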

  • NCC - Full-Reference Video Quality Assessment Using Deep 3D Convolutional Neural Networks
    2019 National Conference on Communications (NCC), 2019
    Co-Authors: Sathya Veera Reddy Dendi, Gokul Krishnappa, Sumohana S. Channappayya
    Abstract:

    We present a novel framework called Deep Video QUality Evaluator (DeepVQUE) for full-reference video quality assessment (FRVQA) using deep 3D convolutional neural networks (3D ConvNets). DeepVQUE is a complementary framework to traditional handcrafted-feature-based methods in that it uses deep 3D ConvNet models for feature extraction. 3D ConvNets are capable of extracting spatio-temporal features of the video which are vital for video quality assessment (VQA). Most of the existing FRVQA approaches operate on spatial and temporal domains independently followed by pooling, and often ignore the crucial spatio-temporal relationship of intensities in natural videos. In this work, we pay special attention to the contribution of spatio-temporal dependencies in natural videos to quality assessment. Specifically, the proposed approach estimates the spatio-temporal quality of a video with respect to its pristine version by applying commonly used distance measures such as the $\ell_1$ or the $\ell_2$ norm to the volume-wise pristine and distorted 3D ConvNet features. Spatial quality is estimated using off-the-shelf full-reference image quality assessment (FRIQA) methods. Overall video quality is estimated using support vector regression (SVR) applied to the spatio-temporal and spatial quality estimates. Additionally, we illustrate the ability of the proposed approach to localize distortions in space and time.
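
    A minimal Python sketch of the distance step described above, assuming feature volumes from some 3D ConvNet for the pristine and distorted videos have already been computed (random tensors stand in for them here); the FR-IQA spatial term and the SVR combination are omitted.

        # Hedged sketch: l1/l2 distance features between pristine and distorted 3D ConvNet volumes.
        import numpy as np

        def volume_distance_features(feat_ref, feat_dist):
            """Return (l1, l2) distances for each corresponding pair of feature volumes."""
            feats = []
            for fr, fd in zip(feat_ref, feat_dist):
                diff = (fr - fd).ravel()
                feats.append((np.sum(np.abs(diff)), np.sqrt(np.sum(diff ** 2))))
            return np.array(feats)

        # Four fake activation volumes per video, each 512 channels x 2x7x7.
        ref = [np.random.rand(512, 2, 7, 7) for _ in range(4)]
        dist = [f + 0.05 * np.random.randn(*f.shape) for f in ref]
        print(volume_distance_features(ref, dist).shape)  # (4, 2)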

  • An Optical Flow-Based Full-Reference Video Quality Assessment Algorithm
    IEEE Transactions on Image Processing, 2016
    Co-Authors: K. Manasa, Sumohana S. Channappayya
    Abstract:

    We present a simple yet effective optical flow-based full-reference video quality assessment (FR-VQA) algorithm for assessing the perceptual quality of natural videos. Our algorithm is based on the premise that local optical flow statistics are affected by distortions and that the deviation from pristine flow statistics is proportional to the amount of distortion. We characterize the local flow statistics using the mean, the standard deviation, the coefficient of variation (CV), and the minimum eigenvalue ($\lambda_{\mathrm{min}}$) of the local flow patches. Temporal distortion is estimated as the change in the CV of the distorted flow with respect to the reference flow, and the correlation between $\lambda_{\mathrm{min}}$ of the reference and of the distorted patches. We rely on the robust multi-scale structural similarity index for spatial quality estimation. The computed temporal and spatial distortions are then pooled using a perceptually motivated heuristic to generate a spatio-temporal quality score. The proposed method is shown to be competitive with the state of the art when evaluated on the LIVE SD database, the EPFL-PoliMI SD database, and the LIVE Mobile HD database. The distortions considered in these databases include those due to compression, packet loss, wireless channel errors, and rate adaptation. Our algorithm is flexible enough to allow for any robust FR spatial distortion metric for spatial distortion estimation. In addition, the proposed method is not only parameter-free but also independent of the choice of the optical flow algorithm. Finally, we show that replacing the optical flow vectors in our proposed method with the much coarser block motion vectors also results in an acceptable FR-VQA algorithm. Our algorithm is called the flow similarity index.
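
    A minimal Python sketch of the per-patch flow statistics named above (mean magnitude, coefficient of variation, and a minimum eigenvalue), assuming a dense flow field from any optical flow estimator. Taking the minimum eigenvalue of the 2x2 covariance of each patch's flow vectors is an assumption about the exact definition, and the comparison against reference flow and the multi-scale SSIM spatial term are not shown.

        # Hedged sketch: per-patch flow statistics used as temporal quality features.
        import numpy as np

        def patch_flow_stats(flow, patch=16):
            """flow: (H, W, 2) array of (dy, dx). Returns per-patch (mean, cv, lambda_min)."""
            h, w, _ = flow.shape
            stats = []
            for y in range(0, h - patch + 1, patch):
                for x in range(0, w - patch + 1, patch):
                    vecs = flow[y:y + patch, x:x + patch].reshape(-1, 2)
                    mag = np.linalg.norm(vecs, axis=1)
                    mean, std = mag.mean(), mag.std()
                    cv = std / (mean + 1e-8)
                    lam_min = np.linalg.eigvalsh(np.cov(vecs.T)).min()
                    stats.append((mean, cv, lam_min))
            return np.array(stats)

        flow = np.random.randn(64, 64, 2)  # stand-in for a dense flow field
        print(patch_flow_stats(flow).shape)  # (16, 3): 16 patches, 3 features each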

  • ICIP - An Optical Flow-Based No-Reference Video Quality Assessment Algorithm
    2016 IEEE International Conference on Image Processing (ICIP), 2016
    Co-Authors: K. Manasa, Sumohana S. Channappayya
    Abstract:

    We present an optical flow-based no-reference video quality assessment (NR-VQA) algorithm for assessing the perceptual quality of natural videos. Our algorithm is based on the hypothesis that distortions affect flow statistics both locally and globally. To capture the effects of distortion on optical flow, we measure irregularities at the patch level and at the frame level. At the patch level, we measure intra- and inter-patch irregularities in the variance and mean of the flow magnitude. We also measure the correlation in the patch-level flow randomness between successive frames. At the frame level, we measure the normalized mean flow magnitude difference between successive frames. We rely on the robust NIQE algorithm for no-reference spatial quality assessment of the frames. These temporal and spatial features are averaged over all the frames to arrive at a video-level feature vector. The video-level features and the corresponding DMOS scores are used to train a support vector machine for regression (SVR), which is then used to estimate the quality score of a test video. The competence of the proposed method is clearly demonstrated on SD and HD video databases that include common distortion types such as compression artifacts, packet loss artifacts, additive noise, and blur.
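
    A minimal Python sketch of the frame-level temporal feature described above, i.e. the normalized change in mean flow magnitude between successive frames, assuming precomputed flow fields; the patch-level irregularity measures, the NIQE spatial features, and the SVR stage are omitted.

        # Hedged sketch: frame-level normalized mean flow magnitude difference.
        import numpy as np

        def mean_flow_magnitude(flow):
            return float(np.linalg.norm(flow, axis=-1).mean())

        def frame_level_flow_feature(flows):
            """flows: list of (H, W, 2) flow fields for successive frame pairs."""
            mags = np.array([mean_flow_magnitude(f) for f in flows])
            diffs = np.abs(np.diff(mags)) / (mags[:-1] + 1e-8)  # normalized differences
            return float(diffs.mean())

        flows = [np.random.randn(48, 48, 2) for _ in range(10)]
        print(frame_level_flow_feature(flows))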

  • QoMEX - A Perceptually Motivated No-Reference Video Quality Assessment Algorithm for Packet Loss Artifacts
    2014 Sixth International Workshop on Quality of Multimedia Experience (QoMEX), 2014
    Co-Authors: K. Manasa, K. V. S. N. L. Manasa Priya, Sumohana S. Channappayya
    Abstract:

    Packet loss artifacts are perhaps the most commonly occurring distortion type in video streaming applications. We present a perceptually motivated no-reference video quality assessment (NR-VQA) algorithm for assessing the quality of videos subject to IP and wireless distortions. The proposed algorithm is composed of a spatial quality assessment stage and a temporal quality assessment stage, followed by a spatio-temporal pooling stage. Each of these stages is perceptually motivated: the spatial stage is inspired by the sparse representation of natural scenes in the human visual system, the temporal stage is motivated by optical flow statistics, and the pooling stage by the sensitivity of the human visual system to spatio-temporal stimuli. We show that the spatio-temporal pooling results in significantly higher performance relative to the stand-alone performance of the spatial and temporal assessment stages. The performance of the algorithm is shown to be promising on a subset of the LIVE dataset.

Klaus Diepold - One of the best experts on this subject based on the ideXlab platform.

  • ICIP - No-Reference Video Quality Metric for HDTV Based on H.264/AVC Bitstream Features
    2011 18th IEEE International Conference on Image Processing (ICIP), 2011
    Co-Authors: Christian Keimel, Julian Habigt, Manuel Klimpke, Klaus Diepold
    Abstract:

    No-reference video quality metrics are becoming ever more popular, as they are more useful in real-life applications than full-reference metrics. Many proposed metrics extract features related to human perception from the individual video frames, so the video sequences have to be decoded before the metrics can be applied. In order to avoid decoding solely for quality estimation, we present in this contribution a no-reference metric for HDTV that uses features extracted directly from the H.264/AVC bitstream. We combine these features with the results from subjective tests using a data analysis approach with partial least squares regression to obtain a prediction model for the visual quality. For verification, we performed a cross validation. Our results show that the proposed no-reference metric outperforms other metrics and achieves a correlation of 0.93 between the predicted and the actual quality.
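
    A minimal Python sketch of the regression step, assuming scikit-learn and scipy: bitstream-level features (random placeholders here for quantities such as average QP or bit allocation) are mapped to subjective scores with partial least squares regression, and the cross-validated Pearson correlation is reported. The actual H.264/AVC bitstream parsing is not shown.

        # Hedged sketch: PLSR from bitstream features to subjective scores with cross-validation.
        import numpy as np
        from scipy.stats import pearsonr
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 12))   # 40 sequences x 12 bitstream features (placeholders)
        y = rng.uniform(1, 5, size=40)  # subjective score for each sequence

        pls = PLSRegression(n_components=4)
        y_hat = cross_val_predict(pls, X, y, cv=5).ravel()
        print("cross-validated Pearson correlation:", pearsonr(y, y_hat)[0])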

  • QoMEX - Design of No-Reference Video Quality Metrics with Multiway Partial Least Squares Regression
    2011 Third International Workshop on Quality of Multimedia Experience, 2011
    Co-Authors: Christian Keimel, Julian Habigt, Manuel Klimpke, Klaus Diepold
    Abstract:

    No-reference video quality metrics are becoming ever more popular, as they are more useful in real-life applications than full-reference metrics. One way to design such metrics is to apply data analysis methods to both objectively measurable features and data from subjective testing. Partial least squares regression (PLSR) is one such method. In order to apply such methods, however, we have to pool temporally over all frames of a video, losing valuable information about the quality variation over time. Hence, in this contribution we extend PLSR into a higher-dimensional space with multiway PLSR and thus consider video in all its dimensions. We designed an H.264/AVC bitstream no-reference video quality metric in order to compare the prediction performance of multiway PLSR against standard PLSR. Our results show that including the temporal dimension with multiway PLSR improves the quality prediction and its correlation with the actual quality.
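
    A minimal Python sketch of keeping per-frame information, assuming scikit-learn: per-frame features are arranged as a (videos x frames x features) array and unfolded before ordinary PLSR. This unfold-then-PLSR construction is only an illustrative stand-in and is not the true multiway (N-PLS) decomposition used in the paper.

        # Hedged sketch: unfold a 3-way feature array and regress with standard PLSR.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(1)
        n_videos, n_frames, n_feats = 40, 25, 8
        X3 = rng.normal(size=(n_videos, n_frames, n_feats))  # placeholder per-frame features
        y = rng.uniform(1, 5, size=n_videos)                  # placeholder subjective scores

        X_unfolded = X3.reshape(n_videos, n_frames * n_feats)  # mode-1 unfolding
        pls = PLSRegression(n_components=6)
        pls.fit(X_unfolded, y)
        print(pls.predict(X_unfolded[:3]).ravel())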

  • ICASSP - No-Reference Video Quality Evaluation for High-Definition Video
    2009 IEEE International Conference on Acoustics Speech and Signal Processing, 2009
    Co-Authors: Christian Keimel, Tobias Oelbaum, Klaus Diepold
    Abstract:

    A no-reference video quality metric for high-definition video is introduced. This metric evaluates a set of simple features such as blocking or blurring, and combines those features into one parameter representing visual quality. While only comparatively few base feature measurements are used, additional parameters are gained by evaluating changes in these measurements over time and by using additional temporal pooling methods. To take into account the different characteristics of different video sequences, the resulting quality value is corrected using a low-quality version of the received video. The metric is verified using data from accurate subjective tests, and special care was taken to separate the data used for calibration and verification. The proposed no-reference quality metric delivers a prediction accuracy of 0.86 when compared to subjective tests, and significantly outperforms PSNR as a quality predictor.
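
    A minimal Python sketch of one of the simple features mentioned above: a generic blockiness measure that compares horizontal pixel differences at 8-pixel block boundaries with those inside blocks. This is an illustrative stand-in rather than the authors' exact feature set; the blur measures, temporal pooling, and low-quality-anchor correction step are not reproduced.

        # Hedged sketch: a simple blockiness feature for 8x8-block coded frames.
        import numpy as np

        def blockiness(frame, block=8):
            diffs = np.abs(np.diff(frame.astype(np.float64), axis=1))  # horizontal gradients
            cols = np.arange(diffs.shape[1])
            boundary = diffs[:, cols % block == block - 1].mean()      # across block edges
            interior = diffs[:, cols % block != block - 1].mean()      # inside blocks
            return boundary / (interior + 1e-8)  # ~1 for natural content, >1 when blocky

        frame = np.random.rand(72, 72)  # stand-in for a luminance frame
        print(blockiness(frame))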

  • Rule-Based No-Reference Video Quality Evaluation Using Additionally Coded Videos
    IEEE Journal of Selected Topics in Signal Processing, 2009
    Co-Authors: Tobias Oelbaum, Christian Keimel, Klaus Diepold
    Abstract:

    This contribution presents a no-reference video quality metric which is based on a set of simple rules that assigns a given video to one of four content classes. The four content classes distinguish between video sequences that are coded with a very low data rate, that are sensitive to blocking effects, that are sensitive to blurring, and a general model for all other types of video sequences. The appropriate class for a given video sequence is selected based on the evaluation of feature values of an additional low-quality version of the given video, which is generated by encoding. The visual quality of a video sequence is estimated using a set of features, which includes measures for the blockiness, the blurriness, and the spatial activity, plus a set of additional continuity features. The way these features are combined to compute one overall quality value is determined by the class to which the video has been assigned. We also propose an additional correction step for the visual quality value. The proposed metric is verified in a process that combines visual quality values originating from subjective quality tests with a cross-validation approach. The presented metric significantly outperforms peak signal-to-noise ratio as a visual quality estimator; the Pearson correlation between the estimated visual quality values and the subjective test results reaches values as high as 0.82.

Hugo Boujut - One of the best experts on this subject based on the ideXlab platform.