Robust Statistics

The Experts below are selected from a list of 7065 Experts worldwide ranked by ideXlab platform

Thomas Wiegand - One of the best experts on this subject based on the ideXlab platform.

  • Improved video segmentation through Robust Statistics and MPEG-7 features
    2009 IEEE International Conference on Acoustics Speech and Signal Processing, 2009
    Co-Authors: Patrick Ndjiki-nya, Sebastian Gerke, Thomas Wiegand
    Abstract:

    Video segmentation is an important task for a wide range of applications, such as content-based video coding and video retrieval. In this paper, a new spatio-temporal video segmentation framework is presented. It is based upon Robust Statistics, namely an M-estimator, and incorporates an MPEG-7 descriptor for consistent temporal labeling of identified textures. The algorithm is based on assumptions about the geometric modifications a given moving region undergoes over time, as well as on its surface properties. Homogeneously moving segments are described using a parametric motion scheme, which is used to piecewise fit the optical flow field in order to extract rigid motion areas. Robust Statistics are used to carefully constrain split, merge and contour refinement decisions. Experimental results show that regions detected by the proposed method are more reliable than those of state-of-the-art approaches, and that true region boundaries are detected more accurately.
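
The M-estimation at the heart of such a framework can be illustrated in isolation. The sketch below is an assumption for illustration, not the authors' exact motion model: it robustly fits a one-parameter-per-coordinate motion v ≈ a·x + b to flow samples contaminated by outliers, using iteratively reweighted least squares with Huber weights and a MAD-based scale. The helper names `huber_weights` and `m_estimate_affine` are hypothetical.

```python
import numpy as np

def huber_weights(r, k=1.345):
    """Huber weight function: 1 inside the threshold, k/|r| outside."""
    a = np.abs(r)
    w = np.ones_like(a)
    mask = a > k
    w[mask] = k / a[mask]
    return w

def m_estimate_affine(x, y, iters=20, k=1.345):
    """Robustly fit y ~ a*x + b via iteratively reweighted least squares."""
    A = np.column_stack([x, np.ones_like(x)])
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)   # ordinary LS start
    for _ in range(iters):
        r = y - A @ theta
        # robust scale via the median absolute deviation (MAD)
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        w = huber_weights(r / s, k)
        sw = np.sqrt(w)
        theta, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return theta

# Synthetic flow samples with 20% outliers: inliers follow v = 2*x + 1.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
v = 2 * x + 1 + 0.05 * rng.normal(size=200)
v[:40] += rng.uniform(5, 15, 40)                    # contaminate 20%
a, b = m_estimate_affine(x, v)
print(round(a, 2), round(b, 2))                     # close to 2 and 1
```

A plain least-squares fit on the same data would be pulled toward the contaminated samples; the Huber reweighting bounds their influence, which is the property the segmentation decisions rely on.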

M Žura - One of the best experts on this subject based on the ideXlab platform.

  • Using the Robust Statistics for travel time estimation on highways
    IET Intelligent Transport Systems, 2015
    Co-Authors: Jure Pirc, Goran Turk, M Žura
    Abstract:

    Highway operators around the world use automated vehicle identification (AVI)-based techniques as a technological input for travel time estimation on highways. Different AVI technologies provide different travel time measurement samples: some can identify only personal cars (e.g. tolling tags), while others provide mixed samples of all vehicle classes (e.g. license plate matching). Since travel time information should reflect personal cars, the influence of heavy vehicles (HVs) should be eliminated from the samples, which is not feasible with existing travel time estimation algorithms. It was observed that, even during congestion, the travel times of personal cars and HVs remain dispersed. The motivation for the present study was to introduce an algorithm able to exclude the influence of slower HVs from travel time estimation for technologies that provide mixed samples of travel time measurements. This was achieved through the use of Robust Statistics. The results of the study could be used by all highway agencies and operators who encounter unreasonably extended travel time estimates caused by the presence of slow HVs in the traffic flow.
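
As a hedged illustration of the idea rather than the paper's exact algorithm, a median/MAD screen can discard travel time measurements that sit far above the bulk of the sample, i.e. the slow heavy vehicles. The function name `robust_travel_time` and the cutoff `k` are illustrative choices.

```python
import statistics

def robust_travel_time(samples, k=3.0):
    """Estimate car travel time from a mixed sample by discarding
    measurements far above the median (slow heavy vehicles)."""
    med = statistics.median(samples)
    mad = statistics.median(abs(s - med) for s in samples) or 1.0
    # one-sided screen: only unusually *slow* measurements are dropped
    kept = [s for s in samples if (s - med) / (1.4826 * mad) < k]
    return sum(kept) / len(kept)

# 80 cars around 300 s and 20 trucks around 420 s on the same segment.
cars = [300 + (i % 7) - 3 for i in range(80)]
trucks = [420 + (i % 9) for i in range(20)]
mixed = cars + trucks

naive = sum(mixed) / len(mixed)        # inflated by the trucks
robust = robust_travel_time(mixed)
print(round(naive), round(robust))
```

The naive mean is pulled up by roughly a fifth of the truck/car gap, while the robust estimate stays with the car population, which is the behaviour the abstract describes.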

D Ebenezer - One of the best experts on this subject based on the ideXlab platform.

  • Robust Statistics based algorithm to remove salt and pepper noise in images
    World Academy of Science Engineering and Technology International Journal of Computer Electrical Automation Control and Information Engineering, 2009
    Co-Authors: V R Vijaykumar, P T Vanathi, P Kanagasabapathy, D Ebenezer
    Abstract:

    In this paper, a Robust Statistics based filter to remove salt and pepper noise in digital images is presented. The algorithm first detects the corrupted pixels, since impulse noise affects only certain pixels in the image while the remaining pixels are uncorrupted. The corrupted pixels are then replaced by an estimated value computed with the proposed Robust Statistics based filter. The proposed method performs well in removing low to medium density impulse noise, with detail preservation up to a noise density of 70%, compared to the standard median filter, weighted median filter, recursive weighted median filter, progressive switching median filter, signal dependent rank ordered mean filter, adaptive median filter and the recently proposed decision based algorithm. The visual and quantitative results show that the proposed algorithm outperforms these filters in restoring the original image, with superior preservation of edges and better suppression of impulse noise.
    Keywords: image denoising, nonlinear filter, Robust Statistics, salt and pepper noise.
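
A minimal sketch of the detect-then-estimate idea, not the authors' exact robust estimator: flag only the extreme-valued pixels (salt and pepper occupy the ends of the intensity range) and replace each with the median of its uncorrupted neighbours. The helper `remove_salt_pepper` is hypothetical.

```python
import numpy as np

def remove_salt_pepper(img):
    """Replace only extreme-valued (0 or 255) pixels with the median of
    their uncorrupted 3x3 neighbours; clean pixels are left untouched."""
    out = img.astype(np.int32).copy()
    noisy = (img == 0) | (img == 255)
    padded = np.pad(img, 1, mode='edge')
    for i, j in zip(*np.nonzero(noisy)):
        win = padded[i:i + 3, j:j + 3].ravel()
        good = win[(win != 0) & (win != 255)]     # uncorrupted neighbours
        out[i, j] = int(np.median(good)) if good.size else int(np.median(win))
    return out.astype(np.uint8)

# Flat 100-valued image with two injected salt/pepper pixels.
img = np.full((5, 5), 100, dtype=np.uint8)
img[1, 1], img[3, 3] = 255, 0
clean = remove_salt_pepper(img)
print(clean[1, 1], clean[3, 3])   # both restored to 100
```

Restricting the repair to detected pixels is what preserves detail: a plain median filter would also smooth every uncorrupted pixel.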

  • A new Adaptive Decision based Robust Statistics Estimation Filter for high density impulse noise in images and videos
    2009 International Conference on Control Automation Communication and Energy Conservation, 2009
    Co-Authors: V. Jayaraj, D Ebenezer
    Abstract:

    In this paper, an adaptive decision based Robust Statistics estimation filter to remove high density impulse noise is presented. The proposed algorithm detects corrupted pixels at an initial stage and replaces them with estimated image data using the proposed Robust estimation algorithm. The new Robust Statistics based filter shows significantly better image quality than the standard median filter, weighted median filter, recursive weighted median filter, adaptive median filter, decision based algorithm and the recently proposed Robust estimation algorithm. The proposed method removes noise efficiently even at noise levels as high as 90% and preserves edges, yielding better image quality than existing nonlinear filters.
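
The adaptive element can be sketched as follows, under the assumption that the estimation window grows until uncorrupted evidence is found (a plausible reading at 90% noise density, not the paper's exact rule); `adaptive_robust_filter` and `max_radius` are illustrative.

```python
import numpy as np

def adaptive_robust_filter(img, max_radius=5):
    """At high noise density a 3x3 window may contain no clean pixels;
    grow the window around each corrupted pixel until some are found."""
    out = img.astype(np.int32).copy()
    noisy = (img == 0) | (img == 255)
    for i, j in zip(*np.nonzero(noisy)):
        for r in range(1, max_radius + 1):
            win = img[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
            good = win[(win != 0) & (win != 255)]
            if good.size:                      # clean evidence found
                out[i, j] = int(np.median(good))
                break
    return out.astype(np.uint8)

# 90% of pixels corrupted: small windows often fail, larger ones recover.
rng = np.random.default_rng(1)
img = np.full((20, 20), 120, dtype=np.uint8)
mask = rng.random((20, 20)) < 0.9
img[mask] = rng.choice([0, 255], size=mask.sum())
restored = adaptive_robust_filter(img)
print(np.abs(restored.astype(int) - 120).mean())
```

A fixed 3x3 window would leave most corrupted pixels unrepaired at this density, which is why the adaptive variant holds up at 90% noise.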

Cuntai Guan - One of the best experts on this subject based on the ideXlab platform.

  • Artifact correction with Robust Statistics for non-stationary intracranial pressure signal monitoring
    Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), 2012
    Co-Authors: Mengling Feng, Clifton Phua, Feng Zhang, Cuntai Guan
    Abstract:

    To enhance ICP monitoring of Traumatic Brain Injury (TBI) patients, much research effort has been devoted to the development of auto-alarming systems and forecasting methods that predict impending intracranial hypertension episodes. Nevertheless, the performance of the proposed methods is often limited by the presence of artifacts in the ICP signal. To address this bottleneck, we propose novel artifact correction methods. A scale-based filter is proposed to identify the artifacts. For this filter, Robust Statistics, instead of classic Statistics, is employed to estimate the scale parameter; our methods are thus Robust against undesirable influences from artifacts. Since the ICP signal is non-stationary, non-stationary signal processing techniques, namely the empirical mode decomposition (EMD), wavelet transformation and median filtering, are also employed. The effectiveness of the proposed methods is evaluated experimentally, and the results demonstrate that significant performance gains can be achieved with the proposed artifact correction methods.
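
The robust scale estimation can be illustrated with the median absolute deviation (MAD), a standard robust scale estimator; the paper's exact estimator may differ, and `mad_scale`, `flag_artifacts`, and the threshold `k` are assumptions for illustration.

```python
import numpy as np

def mad_scale(x):
    """Robust scale estimate: 1.4826 * median absolute deviation,
    consistent with the standard deviation under Gaussian data."""
    med = np.median(x)
    return 1.4826 * np.median(np.abs(x - med))

def flag_artifacts(signal, k=4.0):
    """Flag samples further than k robust scales from the median."""
    med = np.median(signal)
    s = mad_scale(signal)
    return np.abs(signal - med) > k * s

# Slowly varying ICP-like trace with two spike artifacts injected.
t = np.linspace(0, 10, 500)
icp = 12 + 2 * np.sin(t)
icp[100] += 40
icp[300] -= 35
bad = flag_artifacts(icp)
print(bad.sum(), np.nonzero(bad)[0])
```

The point of the robust scale is that the spikes themselves barely affect the MAD, whereas a classic standard deviation would be inflated by them and could mask the very artifacts being sought.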

Andrew Mcgregor - One of the best experts on this subject based on the ideXlab platform.

  • ICS - Space-Efficient Estimation of Robust Statistics and Distribution Testing
    2010
    Co-Authors: Steve Chien, Katrina Ligett, Andrew Mcgregor
    Abstract:

    The generic problem of estimation and inference given a sequence of i.i.d. samples has been extensively studied in the Statistics, property testing, and learning communities. A natural quantity of interest is the sample complexity of the particular learning or estimation problem being considered. While sample complexity is an important component of the computational efficiency of the task, it is also natural to consider the space complexity: do we need to store all the samples as they are drawn, or is it sufficient to use memory that is significantly sublinear in the sample complexity? Surprisingly, this aspect of the complexity of estimation has received significantly less attention in all but a few specific cases. While space-bounded, sequential computation is the purview of the field of data-stream computation, almost all of the literature on the algorithmic theory of data streams considers only "empirical problems", where the goal is to compute a function of the data present in the stream rather than to infer something about the source of the stream. Our contributions are twofold. First, we provide results connecting space efficiency to the estimation of Robust Statistics from a sequence of i.i.d. samples. Robust Statistics are a particularly interesting class of Statistics in our setting because, by definition, they are resilient to noise or errors in the sampled data. We show that this property is enough to ensure that very space-efficient stream algorithms exist for their estimation. In contrast, the numerical value of a "non-Robust" statistic can change dramatically with additional samples, and this limits the utility of any finite-length sequence of samples. Second, we present a general result that captures a trade-off between sample and space complexity in the context of distributional property testing.
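
One simple way such space efficiency can arise (an illustration of the phenomenon, not the paper's construction): a robust statistic like the median can be estimated well from a small uniform sample of the stream, maintained in O(k) memory with reservoir sampling, even when the stream is heavily contaminated.

```python
import random

def reservoir_sample(stream, k, rng):
    """Keep a uniform random sample of k items from a stream in O(k) memory
    (Algorithm R)."""
    sample = []
    for n, x in enumerate(stream, 1):
        if n <= k:
            sample.append(x)
        else:
            j = rng.randrange(n)
            if j < k:
                sample[j] = x
    return sample

def streaming_median(stream, k=500, seed=0):
    """Estimate the stream's median from a small reservoir sample."""
    rng = random.Random(seed)
    s = sorted(reservoir_sample(stream, k, rng))
    return s[len(s) // 2]

# A 100,000-sample stream from a contaminated source: 95% of samples are
# near 10, the rest are gross outliers.  The median is robust, so a small
# sample suffices; a mean over the same stream would be dominated by the
# outliers.
rng = random.Random(42)
def source():
    for _ in range(100_000):
        yield rng.gauss(10, 1) if rng.random() < 0.95 else 1e6

est = streaming_median(source(), k=500)
print(round(est, 1))   # close to 10
```

A non-robust statistic such as the mean would not admit this shortcut: its value depends on every outlier, so no fixed-size sample pins it down, which mirrors the contrast drawn in the abstract.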
