Sample Correlation Coefficient

The experts below are selected from a list of 65,790 experts worldwide, ranked by the ideXlab platform.

Daniel A Griffith - One of the best experts on this subject based on the ideXlab platform.

  • Effective geographic sample size in the presence of spatial autocorrelation
    Annals of The Association of American Geographers, 2005
    Co-Authors: Daniel A Griffith
    Abstract:

    As the spatial autocorrelation latent in georeferenced data increases, the amount of duplicate information contained in these data also increases. This property suggests the research question of what number of independent observations, say n*, is equivalent to the sample size, n, of a data set. This is the notion of effective sample size. Intuitively speaking, when zero spatial autocorrelation prevails, n* = n; when perfect positive spatial autocorrelation prevails in a univariate regional mean problem, n* = 1. Equations are presented for estimating n* based on the sampling distribution of a sample mean or sample correlation coefficient with the goal of obtaining some predetermined level of precision, using the following spatial statistical model specifications: (1) simultaneous autoregressive, (2) geostatistical semivariogram, and (3) spatial filter. These equations are evaluated with simulation experiments and are illustrated with selected empirical examples found in the literature.
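
The paper's equations are model-specific, but the idea can be illustrated with the textbook special case of a first-order autoregressive process: for estimating the mean of an AR(1) series with lag-1 autocorrelation ρ, the effective sample size is approximately n(1 − ρ)/(1 + ρ). A minimal sketch (this AR(1) formula is a standard large-n approximation, not one of the paper's three specifications):

```python
def effective_sample_size_ar1(n: int, rho: float) -> float:
    """Approximate number of independent observations equivalent to n
    observations of an AR(1) process with lag-1 autocorrelation rho."""
    return n * (1 - rho) / (1 + rho)

# Zero autocorrelation: all n observations carry independent information.
print(effective_sample_size_ar1(100, 0.0))  # 100.0
# Strong positive autocorrelation: much of the information is duplicated.
print(effective_sample_size_ar1(100, 0.8))  # ≈ 11.11
```

As ρ approaches 1, the effective sample size collapses toward the single-observation limit described in the abstract.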

Garfield R Mellema - One of the best experts on this subject based on the ideXlab platform.

  • Correlation based testing for passive sonar picture rationalization
    International Conference on Information Fusion, 2007
    Co-Authors: Garfield R Mellema
    Abstract:

    Modern passive sonar systems employ a high degree of automation to produce a track-level sonar picture. Further refinement of the track-level information is normally performed by a human operator. Providing automated assistance would reduce the operator's workload and is a key enabler for semi- and fully automated sonar systems. The nature of the signals emitted by targets and of the underwater environment typically results in each target being represented by multiple track segments. A tool is required that can numerically describe the relationship between pairs of track segments so that those that apparently share a common origin can be identified automatically. The sample correlation coefficient is a statistical measure of relatedness. This paper describes the application of a test based on that measure to compare tracks produced by a probabilistic data association filter from a set of towed array sonar data.
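
The abstract does not specify the test's form; as a rough sketch of the underlying idea, one can correlate two time-aligned bearing sequences and judge significance through Fisher's variance-stabilising transform (the tracks below are invented for illustration and are not the paper's data):

```python
import math

def pearson_r(x, y):
    """Sample correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def fisher_z(r, n):
    """Fisher-transformed correlation; approximately N(0, 1) under H0: rho = 0."""
    return 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - 3)

# Two hypothetical bearing tracks (degrees) sampled at the same instants;
# near-parallel motion suggests a common origin.
track_a = [10.0, 10.4, 11.1, 11.5, 12.2, 12.6, 13.1, 13.9]
track_b = [20.1, 20.5, 21.0, 21.6, 22.1, 22.7, 23.2, 23.8]
r = pearson_r(track_a, track_b)
z = fisher_z(r, len(track_a))
print(f"r = {r:.3f}, z = {z:.2f}")  # a large z suggests the segments are related
```

A real system would also have to handle unequal segment lengths and gaps, which this sketch ignores.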

  • A Structured Approach to Passive Sonar Track Association
    OCEANS 2007 - Europe, 2007
    Co-Authors: Garfield R Mellema
    Abstract:

    The presence of multiple, apparently independent track segments originating from the same target complicates the track-level passive sonar picture. If some of these segments can be shown to have a common origin, they can be associated into a single composite track, simplifying that picture. The process could also provide additional information about the target, such as range or classification. Track association is typically a manual process, relying on the expertise of a human sonar operator. If a method could be found to reliably apply numerical scores to the degree of apparent relationship between pairs of tracks, those scores could be used to assist an operator in the track association process or as inputs to a fully automated track association process. This paper outlines the construction of a test based on the sample correlation coefficient and describes the result of its application to tracks produced by a probabilistic data association filter (PDAF) from a set of towed array sensor data.
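
As a toy sketch of such numerical scoring (invented bearing segments and an arbitrary threshold, not the paper's PDAF pipeline), every pair of segments can be scored with the sample correlation coefficient and pairs above a threshold flagged for association:

```python
import math
from itertools import combinations

def pearson_r(x, y):
    """Sample correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical time-aligned bearing segments; 0 and 1 drift together,
# while 2 moves in the opposite direction.
segments = {
    0: [30.0, 30.5, 31.1, 31.4, 32.0],
    1: [30.1, 30.6, 31.0, 31.5, 31.9],
    2: [80.0, 79.2, 78.5, 77.9, 77.0],
}
THRESHOLD = 0.95  # arbitrary cut-off for "apparently common origin"
associated = [(i, j) for i, j in combinations(segments, 2)
              if pearson_r(segments[i], segments[j]) > THRESHOLD]
print(associated)  # [(0, 1)]
```

The scores could equally be fed to an operator for review rather than thresholded automatically, as the abstract suggests.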

J. A. Fessler - One of the best experts on this subject based on the ideXlab platform.

  • Intensity-based image registration using robust correlation coefficients
    IEEE Transactions on Medical Imaging, 2004
    Co-Authors: Jeongtae Kim, J. A. Fessler
    Abstract:

    The ordinary sample correlation coefficient is a popular similarity measure for aligning images from the same or similar modalities. However, this measure can be sensitive to the presence of "outlier" objects that appear in one image but not the other, such as surgical instruments, the patient table, etc., which can lead to biased registrations. This paper describes an intensity-based image registration technique that uses a robust correlation coefficient as a similarity measure. Relative to the ordinary sample correlation coefficient, the proposed similarity measure reduces the influence of outliers. We also compared the performance of the proposed method with the mutual information-based method. The robust correlation-based method should be useful for image registration in radiotherapy (keV to MeV X-ray images) and image-guided surgery applications. We have investigated the properties of the proposed method by theoretical analysis, computer simulations, a phantom experiment, and with functional magnetic resonance imaging (fMRI) data.
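
The abstract does not give the authors' estimator; as an illustrative stand-in, an iteratively reweighted correlation with Tukey bisquare weights shows how downweighting outlying intensity pairs removes the bias an "outlier" object induces in the ordinary coefficient (the weighting scheme, constants, and data here are my assumptions, not the paper's method):

```python
import math

def weighted_stats(v, w):
    """Weighted mean and standard deviation."""
    sw = sum(w)
    m = sum(wi * vi for wi, vi in zip(w, v)) / sw
    var = sum(wi * (vi - m) ** 2 for wi, vi in zip(w, v)) / sw
    return m, math.sqrt(var)

def weighted_pearson(x, y, w):
    mx, sx = weighted_stats(x, w)
    my, sy = weighted_stats(y, w)
    cov = sum(wi * (a - mx) * (b - my) for wi, a, b in zip(w, x, y)) / sum(w)
    return cov / (sx * sy)

def robust_pearson(x, y, c=4.685, iters=10):
    """Tukey-bisquare reweighted correlation (illustrative; assumes the
    underlying association is positive)."""
    w = [1.0] * len(x)
    for _ in range(iters):
        mx, sx = weighted_stats(x, w)
        my, sy = weighted_stats(y, w)
        # points far from the diagonal in standardised space are suspect
        d = [abs((a - mx) / sx - (b - my) / sy) for a, b in zip(x, y)]
        w = [(1 - (di / c) ** 2) ** 2 if di < c else 0.0 for di in d]
    return weighted_pearson(x, y, w)

# Two perfectly aligned "intensity profiles", plus one outlier pixel that
# appears in only one image (e.g. an instrument).
x = list(range(20))
y = [2 * a + 1 for a in x]
y[19] = -100
plain = weighted_pearson(x, y, [1.0] * 20)
robust = robust_pearson(x, y)
print(f"ordinary r = {plain:.3f}, robust r = {robust:.3f}")
```

A single gross outlier drives the ordinary coefficient toward zero, while the reweighted version recovers the near-perfect alignment of the remaining pairs.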

José A. Villaseñor-Alva - One of the best experts on this subject based on the ideXlab platform.

  • A Correlation Test for Normality Based on the Lévy Characterization
    Communications in Statistics - Simulation and Computation, 2014
    Co-Authors: José A. Villaseñor-Alva, Elizabeth González-Estrada
    Abstract:

    A powerful test of fit for normal distributions is proposed. Based on the Lévy characterization, the test statistic is the sample correlation coefficient of normal quantiles and sums of pairs of observations from a random sample. Since the test statistic is location-scale invariant, critical values can be obtained by simulation without estimating any parameters. It is proved that this test is consistent. A power comparison study including some directed tests shows that the proposed test is competitive: it is more powerful than the well-known Jarque–Bera test and comparable to the Shapiro–Wilk test against a number of alternatives.
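
A minimal sketch of the statistic as described, correlating sorted sums of disjoint pairs with normal quantiles (the plotting positions, sample sizes, and seed are my choices; real critical values would come from the simulation the paper describes):

```python
import math
import random
from statistics import NormalDist

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def levy_normality_stat(sample):
    """Correlation of sorted sums of disjoint pairs with normal quantiles.
    Location-scale invariant, so no parameters need to be estimated."""
    sums = sorted(sample[i] + sample[i + 1] for i in range(0, len(sample) - 1, 2))
    m = len(sums)
    nd = NormalDist()
    quantiles = [nd.inv_cdf((i - 0.375) / (m + 0.25)) for i in range(1, m + 1)]
    return pearson_r(sums, quantiles)

random.seed(0)
normal_sample = [random.gauss(0.0, 1.0) for _ in range(500)]
expo_sample = [random.expovariate(1.0) for _ in range(500)]
s_norm = levy_normality_stat(normal_sample)
s_expo = levy_normality_stat(expo_sample)
# Near 1 under normality; typically visibly smaller for a skewed alternative,
# since sums of exponential pairs remain skewed.
print(f"normal: {s_norm:.4f}, exponential: {s_expo:.4f}")
```

Rejecting for small values of the statistic, with simulated critical values, gives the test described in the abstract.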

  • On testing the log-gamma distribution hypothesis by bootstrap
    Computational Statistics, 2013
    Co-Authors: Eduardo Gutiérrez González, José A. Villaseñor-Alva, Olga Vladimirovna Panteleeva, Humberto Vaquera Huerta
    Abstract:

    In this paper we propose two bootstrap goodness-of-fit tests for the log-gamma distribution with three parameters: location, scale, and shape. These tests are built using the properties of this distribution family and are based on the sample correlation coefficient, which is invariant with respect to location and scale transformations. Two estimators are proposed for the shape parameter, and both are shown to be asymptotically unbiased and consistent in mean-squared error. The test size and power are estimated by simulation. The power of the two proposed tests against several alternative distributions is compared to that of the Kolmogorov–Smirnov, Anderson–Darling, and chi-square tests. Finally, an application to data from a production process of carbon fibers is presented.
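
The log-gamma quantile function requires the inverse incomplete gamma function, which the standard library lacks; the same parametric-bootstrap recipe can be sketched with an exponential null instead (the statistic below is likewise a location/scale-free sample correlation coefficient, but the distribution, plotting positions, and bootstrap size are my substitutions, not the paper's):

```python
import math
import random

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def corr_stat(sample):
    """Correlation of order statistics with exponential quantiles;
    invariant to scale, so the rate never needs estimating."""
    s = sorted(sample)
    n = len(s)
    q = [-math.log(1 - (i - 0.5) / n) for i in range(1, n + 1)]
    return pearson_r(s, q)

def bootstrap_pvalue(sample, reps=500):
    """Parametric bootstrap: a small p-value (low correlation) rejects the fit."""
    t0 = corr_stat(sample)
    n = len(sample)
    hits = sum(corr_stat([random.expovariate(1.0) for _ in range(n)]) <= t0
               for _ in range(reps))
    return hits / reps

random.seed(1)
data = [random.expovariate(2.0) for _ in range(80)]
t = corr_stat(data)
p = bootstrap_pvalue(data)
print(f"statistic = {t:.4f}, bootstrap p-value = {p:.3f}")
```

For the log-gamma case the bootstrap resamples would additionally be drawn at the estimated shape, which is the part the paper's two shape estimators supply.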

  • A Goodness-of-Fit Test for Location-Scale Max-Stable Distributions
    Communications in Statistics - Simulation and Computation, 2010
    Co-Authors: Elizabeth González-Estrada, José A. Villaseñor-Alva
    Abstract:

    In this article, a technique based on the sample correlation coefficient is proposed for constructing goodness-of-fit tests for max-stable distributions with unknown location and scale parameters and finite second moment. Specific details to test for the Gumbel distribution are given, including critical values for small sample sizes as well as approximate critical values for larger sample sizes obtained using normal quantiles. A comparison by Monte Carlo simulation shows that the proposed test for the Gumbel hypothesis is substantially more powerful than some other known tests against alternative distributions with positive skewness.
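
A sketch of the Gumbel case as described: correlate order statistics with standard Gumbel quantiles and obtain a small-sample critical value by Monte Carlo (the plotting positions, replicate count, and 5% level are my choices, not necessarily the paper's):

```python
import math
import random

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def gumbel_stat(sample):
    """Correlation of order statistics with standard Gumbel quantiles
    -ln(-ln p); invariant to location and scale."""
    s = sorted(sample)
    n = len(s)
    q = [-math.log(-math.log((i - 0.5) / n)) for i in range(1, n + 1)]
    return pearson_r(s, q)

def gumbel_draw():
    """Standard Gumbel variate by inverse-CDF sampling."""
    return -math.log(-math.log(random.random()))

random.seed(2)
n, reps = 30, 2000
null_stats = sorted(gumbel_stat([gumbel_draw() for _ in range(n)])
                    for _ in range(reps))
crit = null_stats[int(0.05 * reps)]  # approximate 5% critical value for n = 30
sample = [gumbel_draw() for _ in range(n)]
s = gumbel_stat(sample)
print(f"critical value = {crit:.4f}, observed = {s:.4f}, reject = {s < crit}")
```

Rejecting when the observed correlation falls below the simulated critical value gives a test of the Gumbel hypothesis at the chosen level.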

Thomas E Nichols - One of the best experts on this subject based on the ideXlab platform.

  • Effective degrees of freedom of the Pearson's correlation coefficient under autocorrelation
    NeuroImage, 2019
    Co-Authors: Soroosh Afyouni, Stephen M Smith, Thomas E Nichols
    Abstract:

    The dependence between pairs of time series is commonly quantified by Pearson's correlation. However, if the time series are themselves dependent (i.e., exhibit temporal autocorrelation), the effective degrees of freedom (EDF) are reduced, the standard error of the sample correlation coefficient is biased, and Fisher's transformation fails to stabilise the variance. Since fMRI time series are notoriously autocorrelated, the issue of biased standard errors – before or after Fisher's transformation – becomes vital in individual-level analysis of resting-state functional connectivity (rsFC) and must be addressed anytime a standardised Z-score is computed. We find that the severity of autocorrelation is highly dependent on spatial characteristics of brain regions, such as the size of regions of interest and the spatial location of those regions. We further show that the available EDF estimators make restrictive assumptions that are not supported by the data, resulting in biased rsFC inferences that lead to distorted topological descriptions of the connectome at the individual level. We propose a practical "xDF" method that accounts not only for distinct autocorrelation in each time series, but also for instantaneous and lagged cross-correlation. We find that the xDF correction varies substantially over node pairs, indicating the limitations of the global EDF corrections used previously. In addition to extensive synthetic and real data validations, we investigate the impact of this correction on rsFC measures in data from the Young Adult Human Connectome Project, showing that accounting for autocorrelation dramatically changes fundamental graph-theoretical measures relative to no correction.
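
The paper's xDF estimator also handles cross-correlation; the classical Bartlett-style correction it generalises can be sketched as follows. For two independent autocorrelated series, var(r) ≈ (1/n)[1 + 2 Σ_k ρx(k) ρy(k)], so the effective degrees of freedom shrink by the bracketed factor (the lag cut-off and AR(1) demo parameters are my choices; this is the textbook correction, not xDF itself):

```python
import random

def sample_autocorr(v, max_lag):
    """Sample autocorrelations rho(1..max_lag) of a single series."""
    n = len(v)
    m = sum(v) / n
    denom = sum((a - m) ** 2 for a in v)
    return [sum((v[t] - m) * (v[t + k] - m) for t in range(n - k)) / denom
            for k in range(1, max_lag + 1)]

def corrected_variance_of_r(x, y, max_lag=20):
    """Bartlett-style variance of the sample correlation of two independent
    autocorrelated series; the naive value is 1/n. Returns (variance, EDF)."""
    n = len(x)
    rx = sample_autocorr(x, max_lag)
    ry = sample_autocorr(y, max_lag)
    inflation = 1 + 2 * sum(a * b for a, b in zip(rx, ry))
    return inflation / n, n / inflation

def ar1(n, phi, rng):
    """Simulate an AR(1) series with unit-variance Gaussian innovations."""
    x = [rng.gauss(0.0, 1.0)]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(0.0, 1.0))
    return x

rng = random.Random(3)
n = 1000
x = ar1(n, 0.8, rng)
y = ar1(n, 0.8, rng)
var_r, edf = corrected_variance_of_r(x, y)
print(f"naive var = {1 / n:.4f}, corrected var = {var_r:.4f}, EDF = {edf:.0f}")
```

With strong positive autocorrelation in both series, the effective degrees of freedom fall far below n, which is exactly why naive standard errors of rsFC estimates are biased; xDF refines this global-style correction per node pair.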