Statistical Processing


The experts below are selected from a list of 324 experts worldwide, ranked by the ideXlab platform.

Keiji Ozawa - One of the best experts on this subject based on the ideXlab platform.

  • ISO/IEC JTC1 SC29/WG11; Report on MPEG-2 subjective assessment at Kurihama
    Signal Processing-image Communication, 1993
    Co-Authors: Tsuneyoshi Hidaka, Keiji Ozawa
    Abstract:

    Following the MPEG-1 subjective assessment, the MPEG-2 subjective assessment was performed at the JVC Kurihama Technical Center in November 1991, covering compression of high-quality moving pictures at 5 to 10 Mbit/s. This report describes the subjective assessment procedures, the statistical processing methods used, the results of that processing, the evaluated reliability of the data, and the results of the analysis. It also discusses future problems, centering on subjective assessment tests of high-quality pictures closely resembling their originals and the processing of bulk data.
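The abstract does not spell out the exact statistics used, but subjective assessments of this kind (e.g., following ITU-R BT.500-style methodology) are commonly reduced to per-condition mean opinion scores with 95% confidence intervals. A minimal sketch, assuming a simple votes-per-condition table (condition names and scores below are illustrative, not from the report):

```python
import math

# Illustrative raw votes: each test condition maps to the list of
# observer scores on a 5-point quality scale.
votes = {
    "condition_5Mbit": [4, 5, 4, 4, 3, 5, 4, 4],
    "condition_10Mbit": [5, 5, 4, 5, 5, 4, 5, 5],
}

def mos_with_ci(scores, z=1.96):
    """Mean opinion score and a normal-approximation 95% CI half-width."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    half = z * math.sqrt(var / n)
    return mean, half

for cond, scores in votes.items():
    mos, half = mos_with_ci(scores)
    print(f"{cond}: MOS = {mos:.2f} +/- {half:.2f}")
```

Observer screening and reliability checks (also mentioned in the report) would be applied before this aggregation step.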

Patrick A Naylor - One of the best experts on this subject based on the ideXlab platform.

  • Robust Statistical Processing of TDOA estimates for distant speaker diarization
    European Signal Processing Conference, 2017
    Co-Authors: Pablo Peso Parada, Dushyant Sharma, Toon Van Waterschoot, Patrick A Naylor
    Abstract:

    Speaker diarization systems aim to segment an audio signal into homogeneous sections with only one active speaker, answering the question "who spoke when?" We present a novel approach to speaker diarization that exploits spatial information through robust statistical modeling of Time Difference of Arrival (TDOA) estimates obtained using pairs of microphones. The TDOAs are modeled with Gaussian Mixture Models (GMMs) trained in a robust manner with the expectation-conditional maximization algorithm and a minorization-maximization approach. When multiple microphones are deployed, our method allows for the selection of the best microphone pair as part of the modeling and supports ad hoc microphone placement. Such information can be useful for subsequent speech processing algorithms. We show that our method, which uses only spatial information, achieves up to a 36.1% relative reduction in speaker error time compared to an open-source toolkit using TDOA features, tested on the NIST RT05 multiparty meeting database.
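The paper trains its mixtures with a robust ECM/MM procedure; as a simplified stand-in, plain EM fitted to a two-component 1-D Gaussian mixture over simulated TDOA samples shows the basic idea of clustering TDOAs by speaker. The two assumed speaker delays (-0.20 ms and +0.30 ms) and all other values are illustrative, not from the paper:

```python
import math
import random

random.seed(0)

# Simulated TDOA estimates (in ms) for two speakers at assumed
# delays of -0.20 ms and +0.30 ms; all values are illustrative.
tdoa = ([random.gauss(-0.20, 0.02) for _ in range(1000)]
        + [random.gauss(0.30, 0.02) for _ in range(1000)])

def em_gmm_1d(x, iters=30):
    """Standard EM for a two-component 1-D Gaussian mixture
    (a simplified stand-in for the robust ECM/MM training
    described in the paper)."""
    xs = sorted(x)
    mu = [xs[len(xs) // 10], xs[9 * len(xs) // 10]]  # init from quantiles
    m = sum(x) / len(x)
    v0 = sum((xi - m) ** 2 for xi in x) / len(x)
    var = [v0, v0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of component 0 for each sample.
        r0 = []
        for xi in x:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            r0.append(p[0] / (p[0] + p[1]))
        # M-step: update weights, means, and variances.
        n0 = sum(r0)
        n1 = len(x) - n0
        pi = [n0 / len(x), n1 / len(x)]
        mu = [sum(r * xi for r, xi in zip(r0, x)) / n0,
              sum((1 - r) * xi for r, xi in zip(r0, x)) / n1]
        var = [sum(r * (xi - mu[0]) ** 2 for r, xi in zip(r0, x)) / n0,
               sum((1 - r) * (xi - mu[1]) ** 2 for r, xi in zip(r0, x)) / n1]
    return pi, mu, var

pi, mu, var = em_gmm_1d(tdoa)
print("recovered component means (ms):", sorted(mu))
```

Each recovered component then corresponds to one speaker direction; hard-assigning frames to the most responsible component yields a diarization segmentation from spatial information alone.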

Anne Treisman - One of the best experts on this subject based on the ideXlab platform.

  • Statistical Processing: not so implausible after all
    Attention Perception & Psychophysics, 2008
    Co-Authors: Sang Chul Chong, Sung Jun Joo, Tatiana Aloi Emmanouil, Anne Treisman
    Abstract:

    Myczek and Simons (2008) have shown that findings attributed to a statistical mode of perceptual processing can, instead, be explained by focused attention to samples of just a few items. Some new findings raise questions about this claim. (1) Participants, given conditions that would require different focused-attention strategies, did no worse when the conditions were randomly mixed than when they were blocked. (2) Participants were significantly worse at estimating the mean size when given small samples than when given the whole display. (3) One plausible suggested strategy, comparing the largest item in each display rather than the mean size, was not, in fact, used. Distributed attention to sets of similar stimuli, enabling a statistical-processing mode, provides a coherent account of these and other phenomena.
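Point (2) can be illustrated numerically: a mean-size estimate built from a few sampled items is inherently noisier than one built from the whole display, so a subsampling strategy predicts worse discrimination. A minimal simulation (display sizes and size distribution are illustrative):

```python
import random
import statistics

random.seed(1)

def mean_size_error(display_size, sample_size, trials=20000):
    """Average absolute error of a mean-size estimate based on a
    random subsample of the display."""
    errors = []
    for _ in range(trials):
        # A display of circles with varying diameters (arbitrary units).
        sizes = [random.gauss(1.0, 0.2) for _ in range(display_size)]
        true_mean = statistics.fmean(sizes)
        sample = random.sample(sizes, sample_size)
        errors.append(abs(statistics.fmean(sample) - true_mean))
    return statistics.fmean(errors)

few = mean_size_error(display_size=16, sample_size=2)
whole = mean_size_error(display_size=16, sample_size=16)
print(f"mean error, 2-item sample:  {few:.4f}")
print(f"mean error, whole display: {whole:.4f}")
```

The gap between the two error rates is the kind of difference the participants' performance was tested against.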

  • Statistical Processing: computing the average size in perceptual groups
    Vision Research, 2005
    Co-Authors: Sang Chul Chong, Anne Treisman
    Abstract:

    This paper explores some structural constraints on computing the mean sizes of sets of elements. Neither number nor density had much effect on judgments of mean size. Intermingled sets of circles segregated only by color gave mean discrimination thresholds for size that were as accurate as sets segregated by location. They were about the same when the relevant color was cued, when it was not cued, and when no distractor set was present. The results suggest that means are computed automatically and in parallel after an initial preattentive segregation by color.
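The segregate-then-average account described above amounts to grouping items by a feature (here, color) and averaging size within each group in a single pass. A minimal sketch, with illustrative item values:

```python
from collections import defaultdict

# Intermingled display: (color, diameter) pairs; values are illustrative.
display = [
    ("red", 1.1), ("blue", 0.8), ("red", 0.9),
    ("blue", 1.2), ("red", 1.0), ("blue", 1.0),
]

def mean_size_by_group(items):
    """Segregate items by color, then compute each group's mean size."""
    groups = defaultdict(list)
    for color, size in items:
        groups[color].append(size)
    return {color: sum(sizes) / len(sizes) for color, sizes in groups.items()}

print(mean_size_by_group(display))
```

The paper's finding is that this per-group averaging is about as accurate whether the relevant color is cued, uncued, or the only set present.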

  • Attentional spread in the Statistical Processing of visual displays
    Perception & Psychophysics, 2005
    Co-Authors: Sang Chul Chong, Anne Treisman
    Abstract:

    We tested the hypothesis that distributing attention over an array of similar items makes its statistical properties automatically available. We found that extracting the mean size of sets of circles was easier to combine with tasks requiring distributed or global attention than with tasks requiring focused attention. One explanation may be that extracting the statistical descriptors requires parallel access to all the information in the array. Consistent with this claim, we found an advantage for simultaneous over successive presentation when the total time available was matched. However, the advantage was small; parallel access facilitates statistical processing without being essential. Evidence that statistical processing is automatic when attention is distributed over a display came from the finding that there was no decrement in accuracy relative to single-task performance when mean judgments were made concurrently with another task that required distributed or global attention.

Tsuneyoshi Hidaka - One of the best experts on this subject based on the ideXlab platform.

  • ISO/IEC JTC1 SC29/WG11; Report on MPEG-2 subjective assessment at Kurihama
    Signal Processing-image Communication, 1993
    Co-Authors: Tsuneyoshi Hidaka, Keiji Ozawa
    Abstract:

    Following the MPEG-1 subjective assessment, the MPEG-2 subjective assessment was performed at the JVC Kurihama Technical Center in November 1991, covering compression of high-quality moving pictures at 5 to 10 Mbit/s. This report describes the subjective assessment procedures, the statistical processing methods used, the results of that processing, the evaluated reliability of the data, and the results of the analysis. It also discusses future problems, centering on subjective assessment tests of high-quality pictures closely resembling their originals and the processing of bulk data.

Pablo Peso Parada - One of the best experts on this subject based on the ideXlab platform.

  • Robust Statistical Processing of TDOA estimates for distant speaker diarization
    European Signal Processing Conference, 2017
    Co-Authors: Pablo Peso Parada, Dushyant Sharma, Toon Van Waterschoot, Patrick A Naylor
    Abstract:

    Speaker diarization systems aim to segment an audio signal into homogeneous sections with only one active speaker, answering the question "who spoke when?" We present a novel approach to speaker diarization that exploits spatial information through robust statistical modeling of Time Difference of Arrival (TDOA) estimates obtained using pairs of microphones. The TDOAs are modeled with Gaussian Mixture Models (GMMs) trained in a robust manner with the expectation-conditional maximization algorithm and a minorization-maximization approach. When multiple microphones are deployed, our method allows for the selection of the best microphone pair as part of the modeling and supports ad hoc microphone placement. Such information can be useful for subsequent speech processing algorithms. We show that our method, which uses only spatial information, achieves up to a 36.1% relative reduction in speaker error time compared to an open-source toolkit using TDOA features, tested on the NIST RT05 multiparty meeting database.