Subjective Study

The Experts below are selected from a list of 312 Experts worldwide ranked by the ideXlab platform.

Yao Wang - One of the best experts on this subject based on the ideXlab platform.

  • A Subjective Study of Viewer Navigation Behaviors When Watching 360-Degree Videos on Computers
    2018 IEEE International Conference on Multimedia and Expo (ICME), 2018
    Co-Authors: Fanyi Duanmu, Sumanth Srinivasan, Yao Wang
    Abstract:

    Virtual reality (VR) applications have recently become popular and have been rapidly commercialized, yet the behaviors of users watching 360-degree omni-directional videos have not been fully investigated. In this paper, a dataset of view trajectories for users watching 360-degree videos in a computer (desktop/laptop) environment is presented. The dataset includes view-center trajectory data collected from viewers watching twelve 360-degree videos on a computer, using a mouse to navigate and explore the environment. The selected videos cover a variety of contents, leading to different navigation patterns and behaviors. Based on the dataset, we demonstrate that viewers share similar viewing patterns within certain 360-degree video categories. We also compare the view motion patterns and statistics with those of prior datasets captured using head-mounted displays (HMDs). The dataset has been made available online to facilitate studies on 360-degree video view prediction, content saliency analysis, VR streaming, and related topics.
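
    To make the dataset's structure concrete, the sketch below loads one view-center trajectory and computes per-step angular speeds, a basic statistic for comparing navigation behavior across videos. The CSV layout (timestamp in seconds, yaw and pitch in degrees) is an assumption for illustration, not the dataset's documented format.

    ```python
    import csv
    import math

    def load_trajectory(path):
        """Read (t, yaw, pitch) samples from a hypothetical CSV file
        with one 'timestamp,yaw,pitch' row per sample."""
        samples = []
        with open(path) as f:
            for row in csv.reader(f):
                t, yaw, pitch = map(float, row)
                samples.append((t, yaw, pitch))
        return samples

    def angular_speeds(samples):
        """Per-step view-center speed in degrees/second, wrapping yaw at 360."""
        speeds = []
        for (t0, y0, p0), (t1, y1, p1) in zip(samples, samples[1:]):
            dy = (y1 - y0 + 180.0) % 360.0 - 180.0  # shortest yaw difference
            dp = p1 - p0
            speeds.append(math.hypot(dy, dp) / (t1 - t0))
        return speeds
    ```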

  • Perceptual quality of video with quantization variation: A Subjective Study and analytical modeling
    2012 19th IEEE International Conference on Image Processing, 2012
    Co-Authors: Yen-fu Ou, Huiqi Zeng, Yao Wang
    Abstract:

    This work investigates the impact of temporal variation of the quantization stepsize (QS) on perceptual video quality. Among the many dimensions of QS variation, as a first step we focus on videos in which two QSs alternate over fixed intervals. We present Subjective test results and analyze the influence of several factors, including the QS difference, the QS ratio, the alternation interval, and the video content. Based on these observations and our data analysis, we propose analytical models that relate the perceived quality to the two QSs. Such quality assessment and modeling are essential for making video adaptation decisions when delivering video over dynamically changing wireless links.
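
    The sketch below illustrates the stimulus structure the Study uses: a per-frame QS schedule in which two stepsizes alternate over a fixed interval, plus a placeholder model that biases the predicted quality toward the worse (higher-QS) segments. The functional forms and constants are invented for illustration and are not the paper's fitted models.

    ```python
    def alternating_qs(q_low, q_high, interval, num_frames):
        """Per-frame QS sequence switching between q_low and q_high
        every `interval` frames."""
        return [q_low if (i // interval) % 2 == 0 else q_high
                for i in range(num_frames)]

    def quality_of_qs(q):
        """Placeholder mapping from a constant QS to a 0-100 quality score."""
        return max(0.0, 100.0 - 1.5 * q)  # illustrative only

    def predicted_quality(q_low, q_high, w=0.7):
        """Hypothetical model: quality of an alternating-QS video as a
        weighted combination biased toward the worse segments."""
        return w * quality_of_qs(q_high) + (1 - w) * quality_of_qs(q_low)

    schedule = alternating_qs(q_low=16, q_high=40, interval=30, num_frames=300)
    print(len(schedule), predicted_quality(16, 40))
    ```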

  • Perceptual quality of video with frame rate variation: A Subjective Study
    2010 IEEE International Conference on Acoustics Speech and Signal Processing, 2010
    Co-Authors: Yen-fu Ou, Yan Zhou, Yao Wang
    Abstract:

    This work investigates the impact of periodic frame rate variation on perceptual video quality. Among the many dimensions of frame rate variation, as a first step we focus on videos in which two frame rates alternate over fixed intervals. We present Subjective test results and analyze the influence of several factors, including the average frame rate, the frame rate deviation, and the video content, on the perceptual quality.
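
    As a concrete reading of two of the factors analyzed, the sketch below computes the time-weighted average frame rate and the frame rate deviation of a pattern in which two frame rates alternate over fixed intervals; these definitions are straightforward interpretations of the terms, not necessarily the paper's exact formulas.

    ```python
    def average_and_deviation(fr_a, fr_b, dur_a, dur_b):
        """Two frame rates alternating over intervals of dur_a and dur_b
        seconds; returns (time-weighted mean, mean absolute deviation)."""
        total = dur_a + dur_b
        mean = (fr_a * dur_a + fr_b * dur_b) / total
        deviation = (abs(fr_a - mean) * dur_a + abs(fr_b - mean) * dur_b) / total
        return mean, deviation

    print(average_and_deviation(30.0, 15.0, 2.0, 2.0))  # -> (22.5, 7.5)
    ```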

Alan C Bovik - One of the best experts on this subject based on the ideXlab platform.

  • Large-Scale Crowdsourced Study for Tone-Mapped HDR Pictures
    IEEE Transactions on Image Processing, 2017
    Co-Authors: Debarati Kundu, Alan C Bovik, Deepti Ghadiyaram, Brian L. Evans
    Abstract:

    Measuring digital picture quality, as perceived by human observers, is increasingly important in the many applications in which humans are the ultimate consumers of visual information. Standard dynamic range (SDR) images provide 8 b/color/pixel. High dynamic range (HDR) images, usually created from multiple exposures of the same scene, can provide 16 or 32 b/color/pixel, but need to be tone-mapped to SDR for display on standard monitors. Multi-exposure fusion (MEF) techniques bypass HDR creation by fusing an exposure stack directly into SDR images to achieve aesthetically pleasing luminance and color distributions. Many HDR and MEF databases have a relatively small number of images and human opinion scores, obtained under stringently controlled conditions that limit how well they represent realistic viewing. Moreover, many of these databases are intended to compare tone-mapping algorithms rather than being specialized for developing and comparing image quality assessment models. To overcome these challenges, we conducted a massively crowdsourced online Subjective Study. The primary contributions described in this paper are: 1) the new ESPL-LIVE HDR Image Database that we created, containing diverse images obtained by tone-mapping operators and MEF algorithms, with and without postprocessing; 2) a large-scale Subjective Study that we conducted using a crowdsourced platform to gather more than 300,000 opinion scores on 1,811 images from over 5,000 unique observers; and 3) a detailed Study of the correlation performance of state-of-the-art no-reference image quality assessment algorithms against human opinion scores of these images. The database is available at http://signal.ece.utexas.edu/%7Edebarati/HDRDatabase.zip.
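
    The database mixes images produced by tone-mapping operators and MEF algorithms. As one concrete example of the former, the sketch below implements the classic global Reinhard operator for compressing HDR luminance into displayable range; it is a standard textbook choice, not necessarily one of the operators used to build ESPL-LIVE HDR.

    ```python
    import numpy as np

    def reinhard_tonemap(luminance, key=0.18, eps=1e-6):
        """Global Reinhard tone mapping: scale by the log-average luminance,
        then compress with L / (1 + L). Output lies in [0, 1)."""
        log_avg = np.exp(np.mean(np.log(luminance + eps)))
        scaled = key * luminance / log_avg
        return scaled / (1.0 + scaled)

    hdr = np.random.lognormal(mean=0.0, sigma=2.0, size=(4, 4))  # fake HDR data
    print(reinhard_tonemap(hdr).round(3))
    ```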

  • Adaptive video transmission with Subjective quality constraints
    2014 IEEE International Conference on Image Processing (ICIP), 2014
    Co-Authors: Chao Chen, Alan C Bovik, Gustavo De Veciana, Robert W. Heath
    Abstract:

    We conducted a Subjective Study wherein we found that viewers' Quality of Experience (QoE) was strongly correlated with the empirical cumulative distribution function (eCDF) of the predicted video quality. Based on this observation, we propose a rate-adaptation algorithm that can incorporate QoE constraints on the empirical cumulative quality distribution of each user. Simulation results show that the proposed technique can reduce network resource consumption by 29% relative to conventional average-quality-maximized rate-adaptation algorithms.
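
    The sketch below shows the quantity such a constraint is expressed on: the eCDF of predicted per-segment quality, checked against a cap on the fraction of segments allowed below a quality threshold. The specific constraint form (at most a fraction p0 of segments below quality q0) is an illustrative reading of a cumulative-quality-distribution constraint, not the paper's exact formulation.

    ```python
    def ecdf(samples):
        """Return F with F(x) = fraction of samples <= x."""
        ordered = sorted(samples)
        n = len(ordered)
        return lambda x: sum(1 for s in ordered if s <= x) / n

    segment_quality = [72, 80, 65, 90, 55, 84, 77, 69]  # predicted per-segment scores
    F = ecdf(segment_quality)
    q0, p0 = 60, 0.15  # at most 15% of segments may fall below quality 60
    print("constraint satisfied:", F(q0) <= p0)
    ```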

  • Delivery quality score model for Internet video
    2014 IEEE International Conference on Image Processing (ICIP), 2014
    Co-Authors: Hojatollah Yeganeh, Roman Kordasiewicz, Michael Gallant, Deepti Ghadiyaram, Alan C Bovik
    Abstract:

    The vast majority of today's Internet video services are consumed over-the-top (OTT) via reliable streaming (HTTP over TCP), where the primary noticeable delivery-related impairments are startup delay and stalling. In this paper we introduce an objective model, called the delivery quality score (DQS) model, to predict users' QoE in the presence of such impairments. We describe a large Subjective Study that we carried out to tune and validate this model. Our experiments demonstrate that the DQS model correlates highly with the Subjective data and that it outperforms other emerging models.
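
    The DQS model itself is specified in the paper; as a heavily simplified illustration of the same inputs, the toy score below penalizes startup delay and stalling with exponential decay. The functional form and constants are invented for demonstration and are not the published DQS model.

    ```python
    import math

    def toy_delivery_score(startup_delay_s, stalls, total_stall_s,
                           a=0.05, b=0.4, c=0.08):
        """Map delivery impairments to a 0-100 score via an exponential
        penalty on startup delay, stall count, and total stall duration."""
        penalty = a * startup_delay_s + b * stalls + c * total_stall_s
        return 100.0 * math.exp(-penalty)

    print(round(toy_delivery_score(2.0, stalls=1, total_stall_s=3.0), 1))
    ```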

  • Crowdsourced Study of Subjective image quality
    2014 48th Asilomar Conference on Signals Systems and Computers, 2014
    Co-Authors: Deepti Ghadiyaram, Alan C Bovik
    Abstract:

    We designed and created a new image quality database that models the diverse authentic distortions and artifacts affecting images captured with modern mobile devices. We also designed and implemented a new online crowdsourcing system, which we are using to conduct a very large-scale, ongoing, multi-month image quality assessment (IQA) Subjective Study, wherein a wide range of diverse observers record their judgments of image quality. Our database currently consists of over 320,000 opinion scores on 1,163 authentically distorted images evaluated by over 7,000 human observers. The new database will soon be made freely available for download, and we envision that the fruits of our efforts will provide researchers with a valuable tool to benchmark and improve the performance of objective IQA algorithms.
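
    Crowdsourced studies of this kind must aggregate many noisy ratings into per-image mean opinion scores (MOS) and screen out unreliable observers. The sketch below uses one common approach, correlating each observer against the provisional per-image means; the paper's actual screening procedure may differ.

    ```python
    import statistics

    def mos_with_screening(ratings, min_corr=0.5):
        """ratings: {observer: {image_id: score}}. Drop observers whose
        scores correlate poorly with provisional per-image means, then
        recompute the means over the remaining observers."""
        def per_image_means(observers):
            totals = {}
            for o in observers:
                for img, s in ratings[o].items():
                    totals.setdefault(img, []).append(s)
            return {img: statistics.mean(v) for img, v in totals.items()}

        provisional = per_image_means(ratings)
        kept = []
        for o, scores in ratings.items():
            common = list(scores)
            if len(common) < 3:
                continue
            xs = [scores[img] for img in common]
            ys = [provisional[img] for img in common]
            try:
                if statistics.correlation(xs, ys) >= min_corr:  # Python 3.10+
                    kept.append(o)
            except statistics.StatisticsError:
                pass  # constant ratings cannot be screened this way
        return per_image_means(kept)
    ```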

  • Temporal hysteresis model of time varying Subjective video quality
    2011 IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), 2011
    Co-Authors: Kalpana Seshadrinathan, Alan C Bovik
    Abstract:

    Video quality assessment (QA) continues to be an important area of research due to the overwhelming number of applications in which videos are delivered to humans. In particular, the problem of temporal pooling of quality scores has received relatively little attention. Based on behavior measured in a Subjective Study, we observe a hysteresis effect in the Subjective judgment of time-varying video quality. From our analysis of the Subjective data, we propose a hysteresis temporal pooling strategy for QA algorithms. Applying this temporal pooling strategy to scores from PSNR, SSIM, and MOVIE produces markedly improved Subjective quality prediction.
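
    A simplified version of the idea can be sketched as follows: recent poor quality drags down the current impression, so each instant combines the worst recent score (memory) with an average that emphasizes the worst scores in a short current window. The window sizes, the weight alpha, and the exact averaging are simplifications, not the authors' precise formulation.

    ```python
    def hysteresis_pool(q, past=5, cur=4, alpha=0.8):
        """q: per-frame quality scores. Combine a memory component (worst
        recent score) with a current component (mean of the worst half of a
        short look-ahead window), then average over time."""
        pooled = []
        for t in range(len(q)):
            memory = min(q[max(0, t - past):t + 1])
            window = sorted(q[t:t + cur])            # ascending: worst first
            worst = window[:max(1, len(window) // 2)]
            current = sum(worst) / len(worst)
            pooled.append(alpha * memory + (1 - alpha) * current)
        return sum(pooled) / len(pooled)

    scores = [80, 82, 40, 45, 78, 81, 83, 84]  # a quality drop mid-sequence
    print(round(hysteresis_pool(scores), 1))
    ```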

Rajiv Soundararajan - One of the best experts on this subject based on the ideXlab platform.

  • Prediction of Discomfort due to Egomotion in Immersive Videos for Virtual Reality
    2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2019
    Co-Authors: Suprith Balasubramanian, Rajiv Soundararajan
    Abstract:

    We consider the problem of automatically assessing visually induced motion sickness in virtual reality applications. In particular, we study the impact on visual discomfort of camera motion, or egomotion, present in video displayed through a head-mounted display. We develop a database of 100 short-duration videos with different camera trajectories, speeds, and shake levels, and conduct a large-scale Subjective Study by collecting more than 4,000 human ratings of discomfort levels. The videos are generated synthetically by applying different camera trajectories. We then use the Subjective Study to learn to predict discomfort by designing features that describe the camera motion. The features are based on the ground-truth camera trajectory and estimate the camera velocity, the camera shake, and the depth of the visual scene. We show that these features can be used to predict discomfort effectively, achieving a high correlation with the Subjective discomfort scores provided by humans.
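
    The sketch below computes two motion features of the kind described, from a ground-truth camera trajectory: mean camera speed via finite differences, and a "shake" energy taken as the variance of the high-frequency velocity residual after smoothing. Treating shake as a smoothing residual is an illustrative choice, not necessarily the authors' definition.

    ```python
    import numpy as np

    def motion_features(positions, dt):
        """positions: (N, 3) camera positions sampled every dt seconds.
        Returns (mean speed, shake energy)."""
        vel = np.diff(positions, axis=0) / dt             # (N-1, 3) velocities
        speed = np.linalg.norm(vel, axis=1)
        kernel = np.ones(5) / 5.0
        smooth = np.convolve(speed, kernel, mode="same")  # low-frequency part
        residual = speed - smooth                         # "shake" component
        return speed.mean(), residual.var()

    traj = np.cumsum(np.random.randn(100, 3) * 0.01, axis=0)  # fake jittery path
    print(motion_features(traj, dt=1 / 60))
    ```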

  • A Subjective Study to evaluate video quality assessment algorithms
    Proceedings of SPIE, 2010
    Co-Authors: Kalpana Seshadrinathan, Rajiv Soundararajan, Alan C Bovik, Lawrence K Cormack
    Abstract:

    Automatic methods for evaluating the perceptual quality of a digital video sequence have widespread applications wherever the end user is a human. Several objective video quality assessment (VQA) algorithms exist, whose performance is typically evaluated using the results of a Subjective Study performed by the Video Quality Experts Group (VQEG) in 2000. There is a great need for a free, publicly available Subjective Study of video quality that embodies the state of the art in video processing technology and that is effective in challenging and benchmarking objective VQA algorithms. In this paper, we present such a Study and the resulting database, known as the LIVE Video Quality Database, in which 150 distorted video sequences obtained from ten different source videos were subjectively evaluated by 38 human observers. Our Study includes videos compressed by MPEG-2 and H.264, as well as videos obtained by simulated transmission of H.264 compressed streams through error-prone IP and wireless networks. The Subjective evaluation was performed using a single-stimulus paradigm with hidden reference removal, where the observers were asked to provide their opinion of video quality on a continuous scale. We also present the performance of several freely available objective, full-reference (FR) VQA algorithms on the LIVE Video Quality Database. The recent MOtion-based Video Integrity Evaluation (MOVIE) index emerges as the leading objective VQA algorithm in our Study, while the performance of the Video Quality Metric (VQM) and the Multi-Scale Structural SIMilarity (MS-SSIM) index is also noteworthy. The LIVE Video Quality Database is freely available for download, and we hope that our Study provides researchers with a valuable tool to benchmark and improve the performance of objective VQA algorithms.
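
    Benchmarking objective VQA algorithms against a Subjective database is usually reported as rank and linear correlation with the subjective scores. The sketch below computes SROCC and PLCC between hypothetical model outputs and per-video DMOS values; the numbers are made up for illustration.

    ```python
    from scipy.stats import pearsonr, spearmanr

    dmos = [55.2, 61.8, 40.3, 72.5, 48.9, 66.1]   # subjective scores (fake)
    model = [0.42, 0.51, 0.28, 0.66, 0.35, 0.58]  # objective predictions (fake)

    srocc, _ = spearmanr(dmos, model)
    plcc, _ = pearsonr(dmos, model)
    print(f"SROCC={srocc:.3f}  PLCC={plcc:.3f}")
    ```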

Fanyi Duanmu - One of the best experts on this subject based on the ideXlab platform.

  • A Subjective Study of Viewer Navigation Behaviors When Watching 360-Degree Videos on Computers
    2018 IEEE International Conference on Multimedia and Expo (ICME), 2018
    Co-Authors: Fanyi Duanmu, Sumanth Srinivasan, Yao Wang
    Abstract:

    Virtual reality (VR) applications have recently become popular and have been rapidly commercialized, yet the behaviors of users watching 360-degree omni-directional videos have not been fully investigated. In this paper, a dataset of view trajectories for users watching 360-degree videos in a computer (desktop/laptop) environment is presented. The dataset includes view-center trajectory data collected from viewers watching twelve 360-degree videos on a computer, using a mouse to navigate and explore the environment. The selected videos cover a variety of contents, leading to different navigation patterns and behaviors. Based on the dataset, we demonstrate that viewers share similar viewing patterns within certain 360-degree video categories. We also compare the view motion patterns and statistics with those of prior datasets captured using head-mounted displays (HMDs). The dataset has been made available online to facilitate studies on 360-degree video view prediction, content saliency analysis, VR streaming, and related topics.

H. R. Sheikh - One of the best experts on this subject based on the ideXlab platform.

  • Image information and visual quality
    IEEE Transactions on Image Processing, 2006
    Co-Authors: H. R. Sheikh, Alan C Bovik
    Abstract:

    Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Image QA algorithms generally interpret image quality as fidelity or similarity with a "reference" or "perfect" image in some perceptual space. Such "full-reference" QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by using signal fidelity measures. In this paper, we approach the image QA problem as an information fidelity problem. Specifically, we propose to quantify the loss of image information due to the distortion process and explore the relationship between image information and visual quality. QA systems are invariably involved with judging the visual quality of "natural" images and videos that are meant for "human consumption." Researchers have developed sophisticated models to capture the statistics of such natural signals. Using these models, we previously presented an information fidelity criterion for image QA that related image quality to the amount of information shared between a reference and a distorted image. In this paper, we propose an image information measure that quantifies the information present in the reference image and how much of this reference information can be extracted from the distorted image. Combining these two quantities, we propose a visual information fidelity measure for image QA. We validate the performance of our algorithm with an extensive Subjective Study involving 779 images and show that our method outperforms recent state-of-the-art image QA algorithms by a sizeable margin in our simulations. The code and the data from the Subjective Study are available at the LIVE website.
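
    The sketch below gives a heavily simplified, single-scale, pixel-domain rendering of the visual information fidelity idea: per block, compare the information a "perceived" distorted image carries about the reference (through a gain-plus-noise channel) with the information in the perceived reference itself. The published method operates on wavelet subbands with a Gaussian scale mixture model; the window size and noise variance here are illustrative.

    ```python
    import numpy as np

    def vif_sketch(ref, dist, sigma_n2=2.0, win=8, eps=1e-10):
        """Ratio of block-wise channel information in the distorted image to
        the information in the reference, under D = g*C + V plus HVS noise N."""
        num = den = 0.0
        for i in range(0, ref.shape[0] - win + 1, win):
            for j in range(0, ref.shape[1] - win + 1, win):
                r = ref[i:i+win, j:j+win].ravel()
                d = dist[i:i+win, j:j+win].ravel()
                var_r = r.var()
                cov = np.mean((r - r.mean()) * (d - d.mean()))
                g = cov / (var_r + eps)               # local gain ref -> dist
                sigma_v2 = max(d.var() - g * cov, 0)  # residual channel noise
                num += np.log2(1 + g * g * var_r / (sigma_v2 + sigma_n2))
                den += np.log2(1 + var_r / sigma_n2)
        return num / (den + eps)

    ref = np.random.rand(64, 64) * 255
    dist = ref + np.random.randn(64, 64) * 5  # mildly noisy version
    print(round(vif_sketch(ref, dist), 3))
    ```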

  • An information fidelity criterion for image quality assessment using natural scene statistics
    IEEE Transactions on Image Processing, 2005
    Co-Authors: H. R. Sheikh, A. C. Bovik, G. de Veciana
    Abstract:

    Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Traditionally, image QA algorithms interpret image quality as fidelity or similarity with a "reference" or "perfect" image in some perceptual space. Such "full-reference" QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by arbitrary signal fidelity criteria. In this paper, we approach the problem of image QA by proposing a novel information fidelity criterion that is based on natural scene statistics. QA systems are invariably involved with judging the visual quality of images and videos that are meant for "human consumption". Researchers have developed sophisticated models to capture the statistics of natural signals, that is, pictures and videos of the visual environment. Using these statistical models in an information-theoretic setting, we derive a novel QA algorithm that provides clear advantages over the traditional approaches. In particular, it is parameterless and outperforms current methods in our testing. We validate the performance of our algorithm with an extensive Subjective Study involving 779 images. We also show that, although our approach distinctly departs from traditional HVS-based methods, it is functionally similar to them under certain conditions, yet it outperforms them due to improved modeling. The code and the data from the Subjective Study are available at [1].
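
    For readers who want the shape of the criterion, a simplified scalar form of the Gaussian scale mixture (GSM) source and attenuation-plus-noise channel model underlying such an information fidelity criterion can be written as below; subband and vector details are elided, so this is a sketch of the general form rather than the paper's full derivation.

    $$ C = S\,U, \qquad D = g\,C + V, \qquad V \sim \mathcal{N}(0, \sigma_V^2), $$

    $$ \mathrm{IFC} \;=\; \sum_{k} I\!\left(C^{N_k};\, D^{N_k} \,\middle|\, s^{N_k}\right) \;=\; \sum_{k} \sum_{i=1}^{N_k} \tfrac{1}{2} \log_2\!\left(1 + \frac{g_{k,i}^{2}\, s_{k,i}^{2}\, \sigma_{U_k}^{2}}{\sigma_{V_k}^{2}}\right), $$

    where $C$ denotes reference subband coefficients modeled as a GSM (a positive scalar field $S$ times a Gaussian $U$), $D$ the corresponding distorted coefficients, and the sums run over subbands $k$ and coefficients $i$.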