Perceptual Quality

The experts below are selected from a list of 360 experts worldwide, ranked by the ideXlab platform.

Weisi Lin - One of the best experts on this subject based on the ideXlab platform.

  • No-Reference View Synthesis Quality Prediction for 3-D Videos Based on Color-Depth Interactions
    IEEE Transactions on Multimedia, 2018
    Co-Authors: Feng Shao, Weisi Lin, Qizheng Yuan, Gangyi Jiang
    Abstract:

    In a 3-D video system, automatically predicting the quality of a synthesized 3-D video from its color and depth video inputs is an urgent but very difficult task, whereas existing full-reference methods measure the perceptual quality of the already-synthesized video. In this paper, a high-efficiency view synthesis quality prediction (HEVSQP) metric is proposed. Based on the derived VSQP model, which quantifies the influences of color and depth distortions and their interactions on the perceptual quality of synthesized 3-D video, color-involved and depth-involved VSQP indices are predicted separately and combined to yield an overall HEVSQP index. Experimental results on our NBU-3D Synthesized Video Quality Database demonstrate that the proposed HEVSQP performs well on the entire synthesized video quality database compared with other full-reference and no-reference video quality assessment metrics.

  • Perceptual Quality Assessment for 3D Triangle Mesh Based on Curvature
    IEEE Transactions on Multimedia, 2015
    Co-Authors: Lu Dong, Weisi Lin, Yuming Fang, Hock Soon Seah
    Abstract:

    Triangle meshes are widely used to represent 3D geometric models, and they are subject to various visual distortions during geometric processing and transmission. In this study, we propose a novel objective quality assessment method for 3D meshes based on curvature information; following characteristics of the human visual system (HVS), two new components, visual masking and a saturation effect, are designed for the proposed method. In addition, inspired by the fact that the HVS is sensitive to structural information, we compute the structural distortion of 3D meshes. We test the performance of the proposed method on three publicly available 3D mesh quality evaluation databases, rotating among them for parameter determination to demonstrate the robustness of the proposed scheme. Experimental results demonstrate that the proposed method predicts consistent results, in terms of correlation with the subjective scores, across the databases.

  • Perceptual Quality Assessment of Screen Content Images
    IEEE Transactions on Image Processing, 2015
    Co-Authors: Huan Yang, Yuming Fang, Weisi Lin
    Abstract:

    Research on screen content images (SCIs) is becoming important as they are increasingly used in multi-device communication applications. In this paper, we present a subjective and objective study of perceptual quality assessment of distorted SCIs. We construct a large-scale screen image quality assessment database (SIQAD) consisting of 20 source and 980 distorted SCIs. To obtain subjective quality scores and investigate which part (text or picture) contributes more to the overall visual quality, the single-stimulus methodology with an 11-point numerical scale is employed to obtain three kinds of subjective scores, corresponding to the entire image and to the textual and pictorial regions, respectively. Based on an analysis of the subjective data, we propose a weighting strategy to account for the correlation among these three kinds of subjective scores. Furthermore, we design an objective metric that measures the visual quality of distorted SCIs by considering the visual difference between textual and pictorial regions. The experimental results demonstrate that the proposed SCI perceptual quality assessment scheme, consisting of the objective metric and the weighting strategy, achieves better performance than 11 state-of-the-art IQA methods. To the best of our knowledge, the SIQAD is the first large-scale database published for quality evaluation of SCIs, and this research is the first attempt to explore the perceptual quality assessment of distorted SCIs.
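
The weighting strategy above combines the region-level subjective scores into an overall quality score. A minimal sketch of such a convex combination follows; the weight value is illustrative (the paper derives its weighting from the subjective data), and the function name is a hypothetical placeholder:

```python
def combined_sci_score(q_text, q_pict, w_text=0.7):
    """Combine textual- and pictorial-region quality scores into an
    overall SCI score via a convex weighting. The weight is an
    illustrative stand-in, not the paper's learned value."""
    assert 0.0 <= w_text <= 1.0
    return w_text * q_text + (1.0 - w_text) * q_pict

# Text regions often dominate perceived SCI quality, so a text-heavy
# weight pulls the overall score toward the textual-region score.
overall = combined_sci_score(q_text=60.0, q_pict=80.0, w_text=0.7)
```

A weight above 0.5 encodes the finding that textual regions contribute more to overall visual quality than pictorial ones.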

  • Cross-Dimensional Perceptual Quality Assessment for Low Bit-Rate Videos
    IEEE Transactions on Multimedia, 2008
    Co-Authors: Guangtao Zhai, Weisi Lin, Xiaokang Yang, Jianfei Cai, Wenjun Zhang, Minoru Etoh
    Abstract:

    Most studies in the video quality assessment literature have focused on evaluating quantized video sequences at fixed, high spatial and temporal resolutions; only limited work has been reported on assessing video quality across different spatial and temporal resolutions. In this paper, we consider a wider scope of video quality assessment that spans multiple dimensions. In particular, we address the problem of evaluating the perceptual visual quality of low bit-rate videos under different settings and requirements. We conducted extensive subjective viewing tests covering 150 test scenarios across five distinct dimensions: encoder type, video content, bit rate, frame size, and frame rate. Based on the subjective results, we perform a thorough statistical analysis of the influence of each dimension on perceptual quality and point out several interesting observations. We believe such a study brings new knowledge to the topic of cross-dimensional video quality assessment, with immediate applications in perceptual video adaptation for scalable video over mobile networks.

  • Perceptual Quality Significance Map (PQSM) and Its Application on Video Quality Distortion Metrics
    International Conference on Acoustics Speech and Signal Processing, 2003
    Co-Authors: Weisi Lin, Ee Ping Ong, Susu Yao, X K Yang
    Abstract:

    The paper presents a new and general concept, the perceptual quality significance map (PQSM), for measuring visual distortion. It exploits the fact that the human visual system (HVS) pays more attention to certain areas of a visual signal due to one or more of the following factors: salient features in the image/video, cues from domain knowledge, and associated media (e.g., speech or audio). A PQSM is a 3D/4D array whose elements represent the relative perceptual-quality significance levels of the corresponding pixels/regions of an image or video. Due to its generality, a PQSM can be incorporated into any visual distortion metric; it can improve the effectiveness and/or efficiency of perceptual metrics and even enhance a PSNR-based metric. A three-stage PQSM generation method is also proposed, with an implementation of motion, luminance, skin-color, and face mapping. Experimental results show that the scheme can significantly improve the performance of current image/video distortion metrics.
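
Since a PQSM assigns relative significance to pixels, folding it into a distortion metric amounts to weighting local errors by the map. A minimal sketch using a PQSM-weighted MSE (the toy arrays and the normalization choice are illustrative, not the paper's exact formulation):

```python
import numpy as np

def pqsm_weighted_mse(ref, dist, pqsm):
    """Weight per-pixel squared error by a perceptual-quality
    significance map, so errors in salient regions count more.
    The map is normalized to sum to 1 before weighting."""
    ref, dist, pqsm = (np.asarray(a, dtype=float) for a in (ref, dist, pqsm))
    w = pqsm / pqsm.sum()
    return float(np.sum(w * (ref - dist) ** 2))

# Toy 2x2 example: the same absolute errors are penalized more when
# one of them falls in a region the map marks as salient (e.g., a face).
ref = np.array([[10.0, 10.0], [10.0, 10.0]])
dist = np.array([[12.0, 10.0], [10.0, 12.0]])
salient = np.array([[4.0, 1.0], [1.0, 1.0]])  # top-left is salient
uniform = np.ones((2, 2))
```

With a uniform map the measure reduces to plain MSE; a peaked map raises the score when errors coincide with salient areas.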

Yao Wang - One of the best experts on this subject based on the ideXlab platform.

  • Modeling of Rate and Perceptual Quality of Compressed Video as Functions of Frame Rate and Quantization Stepsize and Its Applications
    IEEE Transactions on Circuits and Systems for Video Technology, 2012
    Co-Authors: Yao Wang
    Abstract:

    This paper first investigates the impact of frame rate and quantization on the bit rate and perceptual quality of compressed video. We propose a rate model and a quality model, both in terms of quantization stepsize and frame rate, and both expressed as the product of separate functions of the two variables. The proposed models are analytically tractable, each requiring only a few content-dependent parameters. The rate model is validated on videos coded with both scalable and nonscalable encoders under a variety of encoder settings. The quality model is validated only for scalable video, although it is expected to apply to single-layer video as well. We further investigate how to predict the model parameters from content features extracted from the original videos. Results show that accurate bit rate and quality prediction (average Pearson correlation > 0.99) can be achieved with model parameters predicted from three features. Finally, we apply the rate and quality models to rate-constrained scalable bitstream adaptation and frame-rate-adaptive rate control. Simulations show that our model-based solutions produce better video quality than conventional video adaptation and rate control.
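
The separable product form can be sketched directly. The functional shapes below (a power-law rate term, exponential quality terms) follow the spirit of such models, but the parameter values and exact forms are illustrative placeholders, not the paper's fitted values:

```python
import math

def rate(q, t, q_min=16.0, t_max=30.0, r_max=2000.0, a=1.2, b=0.6):
    """Separable rate model: bit rate (kbps) as the product of a
    decreasing function of quantization stepsize q and an increasing
    function of frame rate t; normalized so rate(q_min, t_max) = r_max."""
    return r_max * (q / q_min) ** (-a) * (t / t_max) ** b

def quality(q, t, q_min=16.0, t_max=30.0, q_max=100.0, c=0.9, d=7.0):
    """Separable quality model: product of a quantization term and a
    frame-rate term, each normalized to 1 at (q_min, t_max)."""
    quant_term = math.exp(-c * q / q_min) / math.exp(-c)
    temp_term = (1 - math.exp(-d * t / t_max)) / (1 - math.exp(-d))
    return q_max * quant_term * temp_term
```

The key property the abstract describes is captured: the relative change in either output when t drops is independent of q, because the two factors multiply.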

  • Perceptual Quality Assessment of Video Considering Both Frame Rate and Quantization Artifacts
    IEEE Transactions on Circuits and Systems for Video Technology, 2011
    Co-Authors: Tao Liu, Yao Wang
    Abstract:

    In this paper, we explore the impact of frame rate and quantization on the perceptual quality of a video. We propose to use the product of a spatial quality factor, which assesses the quality of decoded frames without considering the frame rate, and a temporal correction factor, which reduces the quality assigned by the first factor according to the actual frame rate. We find that the temporal correction factor closely follows an inverted falling exponential function, whereas the quantization effect on the coded frames is captured accurately by a sigmoid function of the peak signal-to-noise ratio (PSNR). The proposed model is analytically simple, with each function requiring only a single content-dependent parameter. The overall metric has been validated using both our own subjective test scores and those reported by others. For all seven data sets examined, our model yields high Pearson correlation (above 0.9) with the measured mean opinion score (MOS). We further investigate how to predict the parameters of the proposed model using content features derived from the original videos; with predicted parameters, the model still fits the measured MOS with high correlation.
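
The two factors can be sketched as follows. The exponential constant, sigmoid midpoint, and slope are illustrative stand-ins for the content-dependent parameters the paper fits:

```python
import math

def temporal_correction(t, t_max=30.0, alpha=7.0):
    """Inverted falling exponential in frame rate: equals 1 at t_max
    and drops as frames are removed (alpha is content dependent;
    this value is illustrative)."""
    return (1 - math.exp(-alpha * t / t_max)) / (1 - math.exp(-alpha))

def spatial_quality(psnr, mid=30.0, slope=0.3):
    """Sigmoid of PSNR mapping decoded-frame fidelity to [0, 1]
    (midpoint and slope are illustrative, not fitted values)."""
    return 1.0 / (1.0 + math.exp(-slope * (psnr - mid)))

def perceived_quality(psnr, t):
    # Overall quality: the spatial factor scaled down by the
    # temporal correction factor, per the product model.
    return spatial_quality(psnr) * temporal_correction(t)
```

At the full frame rate the temporal factor is exactly 1, so the metric reduces to the spatial quality of the decoded frames.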

  • Modeling Rate and Perceptual Quality of Scalable Video as Functions of Quantization and Frame Rate and Its Application in Scalable Video Adaptation
    2009 17th International Packet Video Workshop, 2009
    Co-Authors: Yao Wang
    Abstract:

    This paper investigates the impact of frame rate and quantization on the bit rate and perceptual quality of a scalable video with temporal and quality scalability. We propose a rate model and a quality model, both in terms of quantization stepsize and frame rate. The quality model is derived from our earlier model expressed in terms of the PSNR of decoded frames and the frame rate. Both models are built on the key observation from experimental data that the relative reduction of either rate or quality as the frame rate decreases is largely independent of the quantization stepsize. This observation enables us to express both rate and quality as products of separate functions of quantization stepsize and frame rate. The proposed models are analytically tractable, each requiring only two content-dependent parameters, and both fit the measured data very accurately, with high Pearson correlation. We further apply these models to rate-constrained bitstream adaptation, where the problem is to determine the optimal combination of quality and temporal layers that yields the highest perceptual quality under a given bandwidth constraint.
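
The rate-constrained adaptation problem reduces to picking the feasible layer combination with the highest modeled quality. A brute-force sketch; the toy rate and quality functions stand in for the paper's fitted models, and all parameter values are illustrative:

```python
def best_layer_combo(layers, bandwidth, quality_fn, rate_fn):
    """Pick the (quantization stepsize, frame rate) combination with
    the highest modeled quality whose modeled rate fits the budget.
    Exhaustive search is fine for the handful of combinations a
    scalable bitstream offers."""
    feasible = [(q, t) for q, t in layers if rate_fn(q, t) <= bandwidth]
    return max(feasible, key=lambda qt: quality_fn(*qt)) if feasible else None

# Toy separable models (illustrative shapes, not fitted parameters).
toy_rate = lambda q, t: 2000.0 * (16.0 / q) * (t / 30.0)
toy_quality = lambda q, t: (16.0 / q) ** 0.8 * (t / 30.0) ** 0.5

layers = [(16.0, 30.0), (32.0, 30.0), (16.0, 15.0), (32.0, 15.0)]
choice = best_layer_combo(layers, bandwidth=1200.0,
                          quality_fn=toy_quality, rate_fn=toy_rate)
# Under these toy models, halving the frame rate costs less quality
# than doubling the quantization stepsize, so (16.0, 15.0) is chosen.
```

The interesting output of such an adaptation engine is precisely this trade-off: whether to drop temporal layers or quality layers first for a given content.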

Alan C. Bovik - One of the best experts on this subject based on the ideXlab platform.

  • Perceptual Quality Evaluation of Synthetic Pictures Distorted by Compression and Transmission
    Signal Processing-image Communication, 2018
    Co-Authors: Debarati Kundu, Alan C. Bovik, Lark Kwon Choi, Brian L Evans
    Abstract:

    Measuring visual quality as perceived by human observers is becoming increasingly important in the many applications where humans are the ultimate consumers of visual information. Many natural image databases with human subjective ratings have been developed, but subjective quality data are less available for synthetic images, such as those commonly encountered in graphic novels, online games, or internet ads. A wide variety of powerful full-reference, reduced-reference, and no-reference image quality assessment (IQA) algorithms have been proposed for natural images, but their performance has not been evaluated on synthetic images. In this paper we (1) conduct a series of subjective tests on a new publicly available Embedded Signal Processing Laboratory (ESPL) Synthetic Image Database, which contains 500 distorted images (20 distorted versions of each of 25 original images) at 1920 × 1080 resolution, and (2) evaluate the performance of more than 50 publicly available IQA algorithms on the new database. The synthetic images in the database were processed with post-acquisition distortions, including those arising from compression and transmission. We collected 26,000 individual ratings from 64 human subjects, which can be used to evaluate full-reference, reduced-reference, and no-reference IQA algorithm performance. We find that IQA models based on scene statistics can successfully predict the perceptual quality of synthetic scenes. The database is available at http://signal.ece.utexas.edu/%7Ebevans/synthetic/.
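
Benchmarking IQA algorithms against subjective ratings, as done here for 50+ algorithms, conventionally reports the Pearson linear correlation (PLCC) and the Spearman rank-order correlation (SROCC) between predicted scores and mean opinion scores. A minimal sketch; the rank shortcut assumes no tied scores:

```python
import numpy as np

def plcc(pred, mos):
    """Pearson linear correlation between objective scores and MOS."""
    return float(np.corrcoef(pred, mos)[0, 1])

def srocc(pred, mos):
    """Spearman rank-order correlation: Pearson correlation of the
    ranks. argsort-of-argsort yields 0-based ranks (no tie handling)."""
    rank = lambda x: np.argsort(np.argsort(np.asarray(x))).astype(float)
    return plcc(rank(pred), rank(mos))
```

SROCC rewards any monotonic relationship, which is why it is preferred when an algorithm's raw scores are on a different scale than the MOS.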

  • Perceptual Quality Prediction on Authentically Distorted Images Using a Bag of Features Approach
    Journal of vision, 2017
    Co-Authors: Deepti Ghadiyaram, Alan C. Bovik
    Abstract:

    Current top-performing blind perceptual image quality prediction models are generally trained on legacy databases of human quality opinion scores for synthetically distorted images. They therefore learn image features that effectively predict human visual quality judgments of inauthentic and usually isolated (single) distortions, whereas real-world images usually contain complex, composite mixtures of multiple distortions. We study the perceptually relevant natural scene statistics of such authentically distorted images in different color spaces and transform domains. We propose a "bag of feature maps" approach that avoids assumptions about the type of distortion(s) contained in an image and instead focuses on capturing consistencies, or departures therefrom, in the statistics of real-world images. Using a large database of authentically distorted images, human opinions of them, and bags of features computed on them, we train a regressor to perform image quality prediction. We demonstrate the competence of the features for automatic perceptual quality prediction by testing a learned algorithm on a benchmark legacy database as well as on a newly introduced distortion-realistic resource called the LIVE In the Wild Image Quality Challenge Database. We extensively evaluate the perceptual quality prediction model and show that it achieves quality prediction power better than other leading models.
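
A canonical building block behind such natural-scene-statistics features is the mean-subtracted, contrast-normalized (MSCN) transform: pristine images yield MSCN coefficients with a characteristic distribution that distortions disturb. A simplified sketch, assuming a square box window in place of the usual Gaussian one (the bag-of-feature-maps model computes many statistics beyond this single map):

```python
import numpy as np

def mscn(image, window=7, c=1.0):
    """Mean-subtracted contrast-normalized coefficients: normalize
    each pixel by its local mean and local standard deviation,
    computed over a reflect-padded square window."""
    img = np.asarray(image, dtype=float)
    pad = window // 2
    padded = np.pad(img, pad, mode='reflect')
    mu = np.zeros_like(img)
    mu2 = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + window, j:j + window]
            mu[i, j] = patch.mean()
            mu2[i, j] = (patch ** 2).mean()
    sigma = np.sqrt(np.maximum(mu2 - mu ** 2, 0.0))
    return (img - mu) / (sigma + c)
```

On a constant image the coefficients vanish everywhere; summary statistics of the MSCN distribution (and of maps like it in other color spaces) become the entries of the feature bag.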

  • Perceptual Quality Prediction on Authentically Distorted Images Using a Bag of Features Approach
    arXiv: Computer Vision and Pattern Recognition, 2016
    Co-Authors: Deepti Ghadiyaram, Alan C. Bovik
    Abstract:

    Current top-performing blind perceptual image quality prediction models are generally trained on legacy databases of human quality opinion scores for synthetically distorted images. Therefore they learn image features that effectively predict human visual quality judgments of inauthentic, and usually isolated (single), distortions. However, real-world images usually contain complex, composite mixtures of multiple distortions. We study the perceptually relevant natural scene statistics of such authentically distorted images, in different color spaces and transform domains. We propose a bag-of-feature-maps approach which avoids assumptions about the type of distortion(s) contained in an image, focusing instead on capturing consistencies, or departures therefrom, of the statistics of real-world images. Using a large database of authentically distorted images, human opinions of them, and bags of features computed on them, we train a regressor to conduct image quality prediction. We demonstrate the competence of the features towards improving automatic perceptual quality prediction by testing a learned algorithm using them on a benchmark legacy database as well as on a newly introduced distortion-realistic resource called the LIVE In the Wild Image Quality Challenge Database. We extensively evaluate the perceptual quality prediction model and algorithm and show that it achieves quality prediction power better than other leading models.

  • Face Detection on Distorted Images Augmented by Perceptual Quality-Aware Features
    IEEE Transactions on Information Forensics and Security, 2014
    Co-Authors: Suriya Gunasekar, Joydeep Ghosh, Alan C. Bovik
    Abstract:

    Motivated by the proliferation of low-cost digital cameras in mobile devices deployed in automated surveillance networks, we study the interaction between perceptual image quality and the classic computer vision task of face detection. We quantify the degradation in performance of a popular and effective face detector when human-perceived image quality is degraded by distortions commonly occurring in the capture, storage, and transmission of facial images, including noise, blur, and compression. We observe that, within a certain range of perceived image quality, a modest increase in image quality can drastically improve face detection performance. These results can be used to guide resource or bandwidth allocation in acquisition or communication/delivery systems associated with face detection tasks. A new set of features, called qualHOG, is proposed for robust face detection; it augments face-indicative histogram of oriented gradients (HOG) features with perceptual quality-aware spatial natural scene statistics (NSS) features. Face detectors trained on these new features provide a statistically significant improvement in tolerance to image distortions over a strong baseline. Distortion-dependent and distortion-unaware variants of the face detectors are proposed and evaluated on a large database of face images representing a wide range of distortions, and a biased variant of the training algorithm is proposed that further enhances their robustness. To facilitate this research, we created a new distorted face database (DFD) containing face and non-face patches from images impaired by a variety of common distortion types and levels. The data set and relevant code are available for download and further experimentation at www.live.ece.utexas.edu/research/Quality/index.htm.

  • Motion-Based Perceptual Quality Assessment of Video
    Electronic Imaging, 2009
    Co-Authors: Kalpana Seshadrinathan, Alan C. Bovik
    Abstract:

    There is a great deal of interest in methods for assessing the perceptual quality of a video sequence in a full-reference framework. Motion plays an important role in human perception of video, and videos suffer from several artifacts that stem from inaccuracies in the representation of motion in the test video relative to the reference. However, existing video quality algorithms focus primarily on capturing spatial artifacts in the video signal and are inadequate at modeling motion perception and capturing temporal artifacts. We present an objective, full-reference video quality index, the MOtion-based Video Integrity Evaluation (MOVIE) index, that integrates both spatial and temporal aspects of distortion assessment. MOVIE explicitly uses motion information from the reference video and evaluates the quality of the test video along the motion trajectories of the reference. Evaluated on the VQEG FR-TV Phase I dataset, MOVIE is shown to be competitive with, and even to outperform, existing video quality assessment systems.

Zhou Wang - One of the best experts on this subject based on the ideXlab platform.

  • Perceptual Quality Assessment of Smartphone Photography
    Computer Vision and Pattern Recognition, 2020
    Co-Authors: Yuming Fang, Hanwei Zhu, Yan Zeng, Zhou Wang
    Abstract:

    As smartphones have become people's primary cameras, the quality of their cameras and associated computational photography modules has become a de facto standard for evaluating and ranking smartphones in the consumer market. We conduct the most comprehensive study to date of perceptual quality assessment of smartphone photography. We introduce the Smartphone Photography Attribute and Quality (SPAQ) database, consisting of 11,125 pictures taken by 66 smartphones, each image carrying unusually rich annotations. Specifically, in a well-controlled laboratory environment we collect a series of human opinions for each image, covering image quality, image attributes (brightness, colorfulness, contrast, noisiness, and sharpness), and scene category labels (animal, cityscape, human, indoor scene, landscape, night scene, plant, still life, and others). The exchangeable image file format (EXIF) data for all images are also recorded to aid deeper analysis. We also make first attempts at using the database to train blind image quality assessment (BIQA) models built on baseline and multi-task deep neural networks. The results provide useful insights into how EXIF data, image attributes, and high-level semantics interact with image quality, how next-generation BIQA models can be designed, and how better computational photography systems can be optimized on mobile devices. The database and the proposed BIQA models are available at https://github.com/h4nwei/SPAQ.

  • Perceptual Quality Assessment of UHD-HDR-WCG Videos
    International Conference on Image Processing, 2019
    Co-Authors: Shahrukh Athar, Kai Zeng, Thilan Costa, Zhou Wang
    Abstract:

    High dynamic range (HDR), wide color gamut (WCG), ultra-high-definition (4K/UHD) content has become increasingly popular. Due to the increased data rate, novel video compression methods have been developed to maintain the quality of videos delivered to consumers under bandwidth constraints. This creates new challenges for developing objective video quality assessment (VQA) models, which have traditionally been designed without sufficient calibration and validation against subjective quality assessments of UHD-HDR-WCG videos. The large performance variations between different consumer HDR TVs, and between consumer HDR TVs and the professional HDR reference displays used for content production, further complicate the task of acquiring reliable subjective data that faithfully reflects the impact of compression on UHD-HDR-WCG videos. In this work, we construct a first-of-its-kind video database composed of PQ-encoded UHD-HDR-WCG content, subsequently compressed with H.264 and HEVC encoders. We carry out a subjective study on a professional 4K-HDR reference display in a controlled lab environment, and benchmark representative full-reference (FR) and no-reference (NR) objective VQA models against the subjective data to evaluate their performance on compressed UHD-HDR-WCG video content. The database will be made available to the public, subject to content copyright constraints.
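
The PQ encoding mentioned above is the SMPTE ST 2084 perceptual quantizer, whose inverse EOTF maps absolute linear luminance to the nonlinear signal that is subsequently compressed. A sketch using the published ST 2084 constants:

```python
def pq_encode(luminance_cd_m2):
    """SMPTE ST 2084 (PQ) inverse EOTF: map absolute linear luminance
    in 0..10000 cd/m^2 to a nonlinear signal value in [0, 1].
    The constants are the values published in ST 2084."""
    m1 = 2610 / 16384          # 0.1593017578125
    m2 = 2523 / 4096 * 128     # 78.84375
    c1 = 3424 / 4096           # 0.8359375
    c2 = 2413 / 4096 * 32      # 18.8515625
    c3 = 2392 / 4096 * 32     # 18.6875
    y = max(luminance_cd_m2, 0.0) / 10000.0
    ym1 = y ** m1
    return ((c1 + c2 * ym1) / (1 + c3 * ym1)) ** m2
```

The curve spends most of its code values on low luminances, matching contrast sensitivity; for example, a 100 cd/m² (SDR reference white) input lands near the middle of the signal range even though it is only 1% of peak luminance.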

  • Perceptual Quality Assessment of 3D Point Clouds
    International Conference on Image Processing, 2019
    Co-Authors: Zhengfang Duanmu, Qi Liu, Wentao Liu, Zhou Wang
    Abstract:

    Real-world applications of 3D point clouds have grown rapidly in recent years, but effective approaches and datasets for assessing the quality of 3D point clouds are largely lacking. In this work, we construct the largest 3D point cloud database to date, with diverse source content and distortion patterns, and carry out a comprehensive subjective user study. We construct 20 high-quality, realistic, omnidirectional point clouds of diverse content, then apply downsampling, Gaussian noise, and three types of compression algorithms to create 740 distorted point clouds. Based on the database, we conduct a subjective experiment to evaluate the quality of the distorted point clouds and perform a point cloud encoder comparison. Our statistical analysis finds that existing point cloud quality assessment models are limited in predicting subjective quality ratings. The database will be made publicly available to facilitate future research.

  • Perceptual Quality Assessment of HDR Deghosting Algorithms
    International Conference on Image Processing, 2017
    Co-Authors: Yuming Fang, Hanwei Zhu, Zhou Wang
    Abstract:

    High dynamic range (HDR) imaging techniques aim to extend the dynamic range of images beyond what conventional camera sensors can capture. A common practice is to take a stack of pictures at different exposure levels and fuse them into a final image with more detail. However, a small displacement between images caused by camera or scene motion voids these benefits and causes the so-called ghosting artifacts. Over the past decade, many HDR deghosting algorithms have been proposed, but little work has been dedicated to evaluating deghosting results either subjectively or objectively. In this work, we present a comprehensive subjective study of HDR deghosting. Specifically, we create a database containing 20 dynamic image sequences and the corresponding results of 9 deghosting algorithms, then carry out a subjective user study to evaluate the perceptual quality of the deghosted images. The experimental results demonstrate the performance and limitations of existing HDR deghosting algorithms as well as of no-reference image quality assessment models. We will make the database available to the public.

  • Perceptual Quality Assessment of High Frame Rate Video
    Multimedia Signal Processing, 2015
    Co-Authors: Rasoul Mohammadi Nasiri, Jiheng Wang, Abdul Rehman, Shiqi Wang, Zhou Wang
    Abstract:

    High frame rate video has been a hot topic in the past few years, driven by strong demand from the entertainment and gaming industries. Nevertheless, progress on perceptual quality assessment of high frame rate video remains limited, making it difficult to evaluate the exact perceptual gain of switching from low to high frame rates. In this work, we first conduct a subjective quality assessment experiment on a database containing videos compressed at different frame rates, quantization levels, and spatial resolutions. We then carry out a series of analyses of the subjective data to investigate the impact of frame rate on perceived video quality and its interplay with quantization level, spatial resolution, spatial complexity, and motion complexity. We observe that perceived video quality generally increases with frame rate, but the gain saturates at high rates; the gain also depends on the interactions between quantization level, spatial resolution, and spatial and motion complexities.

Minoru Etoh - One of the best experts on this subject based on the ideXlab platform.

  • Cross-Dimensional Perceptual Quality Assessment for Low Bit-Rate Videos
    IEEE Transactions on Multimedia, 2008
    Co-Authors: Guangtao Zhai, Weisi Lin, Xiaokang Yang, Jianfei Cai, Wenjun Zhang, Minoru Etoh
    Abstract:

    Most studies in the video quality assessment literature have focused on evaluating quantized video sequences at fixed, high spatial and temporal resolutions; only limited work has been reported on assessing video quality across different spatial and temporal resolutions. In this paper, we consider a wider scope of video quality assessment that spans multiple dimensions. In particular, we address the problem of evaluating the perceptual visual quality of low bit-rate videos under different settings and requirements. We conducted extensive subjective viewing tests covering 150 test scenarios across five distinct dimensions: encoder type, video content, bit rate, frame size, and frame rate. Based on the subjective results, we perform a thorough statistical analysis of the influence of each dimension on perceptual quality and point out several interesting observations. We believe such a study brings new knowledge to the topic of cross-dimensional video quality assessment, with immediate applications in perceptual video adaptation for scalable video over mobile networks.