Autocorrelation

The Experts below are selected from a list of 119,559 Experts worldwide ranked by the ideXlab platform.

Michael Fernandez - One of the best experts on this subject based on the ideXlab platform.

  • proteometric modelling of protein conformational stability using amino acid sequence Autocorrelation vectors and genetic algorithm optimised support vector machines
    Molecular Simulation, 2008
    Co-Authors: Julio Caballero, Leyden Fernandez, Michael Fernandez, Pedro Sanchez, Jose Ignacio Abreu
    Abstract:

    The conformational stability of more than 1500 protein mutants was modelled by a proteometric approach using the amino acid sequence Autocorrelation (AASA) vector formalism. The AASA vectors were weighted by 48 amino acid/residue properties selected from the AAindex database. Genetic algorithm-optimised support vector machines (GA-SVMs), trained with subsets of the AASA descriptors, yielded predictive classification and regression models of the unfolding Gibbs free energy change (ΔΔG). Function-mapping and binary SVM models correctly predicted about 50% of the ΔΔG variance and about 80% of the ΔΔG signs in cross-validation experiments, respectively. Test set predictions showed adequate accuracies of about 70% for stable single and double point mutants. Conformational stability depended on medium- and long-range Autocorrelations, along the mutant sequences, of general structural, physico-chemical and thermodynamic properties related to the protein hydration process. A preliminary version of the predictor is available online at http://gibk21.bse.kyutech....

  • amino acid sequence Autocorrelation vectors and bayesian regularized genetic neural networks for modeling protein conformational stability gene v protein mutants
    Proteins, 2007
    Co-Authors: Leyden Fernandez, Julio Caballero, Jose Ignacio Abreu, Michael Fernandez
    Abstract:

    Development of novel computational approaches for modeling protein properties from their primary structure is a main goal in applied proteomics. In this work, we report the extension of the Autocorrelation vector formalism to amino acid sequences for encoding protein structural information for modeling purposes. Amino acid sequence Autocorrelation (AASA) vectors were calculated by measuring, along the protein primary structure, the Autocorrelations at sequence lags ranging from 1 to 15 of 48 amino acid/residue properties selected from the AAindex database. A total of 720 AASA descriptors were tested for building predictive models of the thermal unfolding Gibbs free energy change (ΔΔG) of gene V protein upon mutation. To this end, ensembles of Bayesian-regularized genetic neural networks (BRGNNs) were used to obtain an optimum nonlinear model of the conformational stability. The ensemble predictor described about 88% and 66% of the variance of the data in the training and test sets, respectively. Furthermore, the optimum AASA vector subset not only modeled unfolding stability successfully but also, when used for unsupervised training of competitive neurons, distributed wild-type and mutant gene V proteins well on a stability self-organizing map (SOM). Proteins 2007. © 2007 Wiley-Liss, Inc. (A computational sketch of the AASA descriptor calculation follows this publication list.)

  • amino acid sequence Autocorrelation vectors and ensembles of bayesian regularized genetic neural networks for prediction of conformational stability of human lysozyme mutants
    Journal of Chemical Information and Modeling, 2006
    Co-Authors: Julio Caballero, Leyden Fernandez, Jose Ignacio Abreu, Michael Fernandez
    Abstract:

    Development of novel computational approaches for modeling protein properties from their primary structure is a main goal in applied proteomics. In this work, we report the extension of the Autocorrelation vector formalism to amino acid sequences for encoding protein structural information for modeling purposes. Amino Acid Sequence Autocorrelation (AASA) vectors were calculated by measuring, along the protein primary structure, the Autocorrelations at sequence lags ranging from 1 to 15 of 48 amino acid/residue properties selected from the AAindex database. A total of 720 AASA descriptors were tested for building predictive models of the thermal unfolding Gibbs free energy change of human lysozyme mutants. To this end, ensembles of Bayesian-Regularized Genetic Neural Networks (BRGNNs) were used to obtain an optimum nonlinear model of the conformational stability. The ensemble predictor described about 88% and 68% of the variance of the data in the training and test sets, respectively. Furthermore, the optimum AA...
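
The three abstracts above describe the same underlying descriptor calculation: for each of 48 AAindex property scales, the Autocorrelation of the property profile along the primary sequence is measured at lags 1 to 15, giving 48 × 15 = 720 AASA descriptors per protein. Below is a minimal sketch, assuming a Moreau-Broto-style average-product Autocorrelation of a normalised property profile (the abstracts do not state the exact normalisation) and using a single hydrophobicity scale as an illustrative stand-in for the 48 AAindex entries.

```python
# Sketch of AASA (Amino Acid Sequence Autocorrelation) descriptors.
# Assumption: Moreau-Broto-style average-product autocorrelation of a
# z-score-normalised property profile; the single scale below stands in
# for the 48 AAindex entries used in the papers.
from statistics import mean, pstdev

# Kyte-Doolittle hydrophobicity, keyed by one-letter amino acid code.
HYDROPHOBICITY = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
                  "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
                  "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
                  "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def aasa_descriptors(sequence, scale, max_lag=15):
    """Autocorrelation of one property profile at sequence lags 1..max_lag."""
    values = [scale[aa] for aa in sequence]
    mu, sigma = mean(values), pstdev(values)
    z = [(v - mu) / sigma for v in values]  # normalise the property profile
    n = len(z)
    return [mean(z[i] * z[i + lag] for i in range(n - lag))
            for lag in range(1, max_lag + 1)]

# 15 descriptors for one scale; repeating this over 48 AAindex scales gives 720.
print(aasa_descriptors("MKVLLITGSSGSGKSTLAQALA", HYDROPHOBICITY))
```

Concatenating the 15 lag values from each of the 48 scales reproduces the 720-descriptor vector that the GA-SVM and BRGNN models above take as input.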

Bhaskaran Swaminathan - One of the best experts on this subject based on the ideXlab platform.

  • incomplete information trading costs and cross Autocorrelations in stock returns
    Social Science Research Network, 2004
    Co-Authors: Tarun Chordia, Bhaskaran Swaminathan
    Abstract:

    This paper provides an economic rationale for the cross-Autocorrelation patterns in stock returns in the context of a microstructure model in which investors have incomplete information. The paper shows that in a market in which investors are informed about only a subset of stocks, the emergence of lead-lag cross-Autocorrelations is a function of the cost of trading in other stocks based on information about that subset. If cross-trading costs are high, informed investors will trade only in the subset of stocks they are informed about; if cross-trading costs are moderate, they will randomize between trading and not trading in other stocks; and if cross-trading costs are low, they will trade in all stocks. When informed investors trade only in a subset of stocks, prices of stocks with more informed trading adjust to common-factor information faster than prices of stocks with less informed trading, giving rise to asymmetric lead-lag cross-Autocorrelations. When informed investors trade in all stocks, the asymmetric lead-lag cross-Autocorrelations disappear as a result of their cross-market arbitrage trading. These results provide a number of testable implications for lead-lag cross-Autocorrelation patterns, and the data are consistent with the empirical predictions.

  • incomplete information trading costs and cross Autocorrelations in stock returns
    Economic Notes, 2004
    Co-Authors: Tarun Chordia, Bhaskaran Swaminathan
    Abstract:

    This paper provides an economic rationale for the cross-Autocorrelation patterns in stock returns in the context of a microstructure model in which investors have incomplete information. The paper shows that in a market in which investors are informed about only a subset of stocks, the emergence of lead-lag cross-Autocorrelations is a function of the cost of trading in other stocks based on information about that subset. If cross-trading costs are high, informed investors will trade only in the subset of stocks they are informed about; if cross-trading costs are moderate, they will randomize between trading and not trading in other stocks; and if cross-trading costs are low, they will trade in all stocks. When informed investors trade only in a subset of stocks, prices of stocks with more informed trading adjust to common-factor information faster than prices of stocks with less informed trading, giving rise to asymmetric lead-lag cross-Autocorrelations. When informed investors trade in all stocks, the asymmetric lead-lag cross-Autocorrelations disappear as a result of their cross-market arbitrage trading. These results provide a number of testable implications for lead-lag cross-Autocorrelation patterns, and the data are consistent with the empirical predictions. (JEL: G12, G14.)

  • trading volume and cross Autocorrelations in stock returns
    Journal of Finance, 2000
    Co-Authors: Tarun Chordia, Bhaskaran Swaminathan
    Abstract:

    This paper finds that trading volume is a significant determinant of the lead-lag patterns observed in stock returns. Daily and weekly returns on high-volume portfolios lead returns on low-volume portfolios, controlling for firm size. Neither nonsynchronous trading nor low-volume portfolio Autocorrelations can explain these findings. These patterns arise because returns on low-volume portfolios respond more slowly to information in market returns. The speed of adjustment of individual stocks confirms these findings. Overall, the results indicate that differential speed of adjustment to information is a significant source of the cross-Autocorrelation patterns in short-horizon stock returns. Both academics and practitioners have long been interested in the role played by trading volume in predicting future stock returns. In this paper, we examine the interaction between trading volume and the predictability of short-horizon stock returns, specifically that due to lead-lag cross-Autocorrelations in stock returns. Our investigation indicates that trading volume is a significant determinant of the cross-Autocorrelation patterns in stock returns. We find that daily or weekly returns of stocks with high trading volume lead daily or weekly returns of stocks with low trading volume. Additional tests indicate that this effect is related to the tendency of high-volume stocks to respond rapidly, and low-volume stocks to respond slowly, to marketwide information. (A sketch of the lead-lag cross-Autocorrelation calculation follows this publication list.)

  • trading volume and cross Autocorrelations in stock returns
    1999
    Co-Authors: Tarun Chordia, Bhaskaran Swaminathan
    Abstract:

    This paper finds that trading volume is a significant determinant of the lead-lag patterns observed in stock returns. Daily and weekly returns on high-volume portfolios lead returns on low-volume portfolios, controlling for firm size. Neither nonsynchronous trading nor low-volume portfolio Autocorrelations can explain these findings. These patterns arise because returns on low-volume portfolios respond more slowly to information in market returns. The speed of adjustment of individual stocks confirms these findings. Overall, the results indicate that differential speed of adjustment to information is a significant source of the cross-Autocorrelation patterns in short-horizon stock returns.
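
Each of the four abstracts above turns on the same statistic: the correlation between one portfolio's return last period and another portfolio's return this period, and the asymmetry between the two orderings (high-volume or more-informed stocks leading, low-volume stocks lagging). Below is a minimal sketch of that lead-lag cross-Autocorrelation calculation, assuming simple lag-1 Pearson correlations; the simulated daily return series are hypothetical placeholders for the size-controlled, volume-sorted portfolio returns used in the papers.

```python
# Sketch of the asymmetric lead-lag cross-autocorrelation test:
# corr(r_high[t-1], r_low[t]) versus corr(r_low[t-1], r_high[t]).
import numpy as np

def cross_autocorrelation(leader, laggard, k=1):
    """Correlation between the leader's return at t-k and the laggard's return at t."""
    leader = np.asarray(leader, dtype=float)
    laggard = np.asarray(laggard, dtype=float)
    return np.corrcoef(leader[:-k], laggard[k:])[0, 1]

# Hypothetical data: the low-volume portfolio absorbs part of the common
# factor with a one-period delay, as in the slow-adjustment story above.
rng = np.random.default_rng(0)
common = rng.normal(0.0, 0.01, 501)                        # common-factor news
r_high = common[1:] + rng.normal(0.0, 0.005, 500)          # high volume: reacts at once
r_low = (0.4 * common[1:] + 0.6 * common[:-1]
         + rng.normal(0.0, 0.005, 500))                    # low volume: delayed reaction

print("high leads low:", cross_autocorrelation(r_high, r_low))  # clearly positive
print("low leads high:", cross_autocorrelation(r_low, r_high))  # close to zero
```

The asymmetry between the two printed numbers is what the papers interpret as differential speed of adjustment to marketwide information.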

Julio Caballero - One of the best experts on this subject based on the ideXlab platform.

  • proteometric modelling of protein conformational stability using amino acid sequence Autocorrelation vectors and genetic algorithm optimised support vector machines
    Molecular Simulation, 2008
    Co-Authors: Julio Caballero, Leyden Fernandez, Michael Fernandez, Pedro Sanchez, Jose Ignacio Abreu
    Abstract:

    The conformational stability of more than 1500 protein mutants was modelled by a proteometric approach using the amino acid sequence Autocorrelation (AASA) vector formalism. The AASA vectors were weighted by 48 amino acid/residue properties selected from the AAindex database. Genetic algorithm-optimised support vector machines (GA-SVMs), trained with subsets of the AASA descriptors, yielded predictive classification and regression models of the unfolding Gibbs free energy change (ΔΔG). Function-mapping and binary SVM models correctly predicted about 50% of the ΔΔG variance and about 80% of the ΔΔG signs in cross-validation experiments, respectively. Test set predictions showed adequate accuracies of about 70% for stable single and double point mutants. Conformational stability depended on medium- and long-range Autocorrelations, along the mutant sequences, of general structural, physico-chemical and thermodynamic properties related to the protein hydration process. A preliminary version of the predictor is available online at http://gibk21.bse.kyutech....

  • amino acid sequence Autocorrelation vectors and bayesian regularized genetic neural networks for modeling protein conformational stability gene v protein mutants
    Proteins, 2007
    Co-Authors: Leyden Fernandez, Julio Caballero, Jose Ignacio Abreu, Michael Fernandez
    Abstract:

    Development of novel computational approaches for modeling protein properties from their primary structure is a main goal in applied proteomics. In this work, we report the extension of the Autocorrelation vector formalism to amino acid sequences for encoding protein structural information for modeling purposes. Amino acid sequence Autocorrelation (AASA) vectors were calculated by measuring, along the protein primary structure, the Autocorrelations at sequence lags ranging from 1 to 15 of 48 amino acid/residue properties selected from the AAindex database. A total of 720 AASA descriptors were tested for building predictive models of the thermal unfolding Gibbs free energy change (ΔΔG) of gene V protein upon mutation. To this end, ensembles of Bayesian-regularized genetic neural networks (BRGNNs) were used to obtain an optimum nonlinear model of the conformational stability. The ensemble predictor described about 88% and 66% of the variance of the data in the training and test sets, respectively. Furthermore, the optimum AASA vector subset not only modeled unfolding stability successfully but also, when used for unsupervised training of competitive neurons, distributed wild-type and mutant gene V proteins well on a stability self-organizing map (SOM). Proteins 2007. © 2007 Wiley-Liss, Inc.

  • amino acid sequence Autocorrelation vectors and ensembles of bayesian regularized genetic neural networks for prediction of conformational stability of human lysozyme mutants
    Journal of Chemical Information and Modeling, 2006
    Co-Authors: Julio Caballero, Leyden Fernandez, Jose Ignacio Abreu, Michael Fernandez
    Abstract:

    Development of novel computational approaches for modeling protein properties from their primary structure is a main goal in applied proteomics. In this work, we report the extension of the Autocorrelation vector formalism to amino acid sequences for encoding protein structural information for modeling purposes. Amino Acid Sequence Autocorrelation (AASA) vectors were calculated by measuring, along the protein primary structure, the Autocorrelations at sequence lags ranging from 1 to 15 of 48 amino acid/residue properties selected from the AAindex database. A total of 720 AASA descriptors were tested for building predictive models of the thermal unfolding Gibbs free energy change of human lysozyme mutants. To this end, ensembles of Bayesian-Regularized Genetic Neural Networks (BRGNNs) were used to obtain an optimum nonlinear model of the conformational stability. The ensemble predictor described about 88% and 68% of the variance of the data in the training and test sets, respectively. Furthermore, the optimum AA...

Robert F Whitelaw - One of the best experts on this subject based on the ideXlab platform.

  • partial adjustment or stale prices implications from stock index and futures return Autocorrelations
    Review of Financial Studies, 2002
    Co-Authors: Donghyun Ahn, Jacob Boudoukh, Matthew Richardson, Robert F Whitelaw
    Abstract:

    We investigate the relation between returns on stock indices and their corresponding futures contracts to evaluate potential explanations for the pervasive yet anomalous evidence of positive, short-horizon portfolio Autocorrelations. Using a simple theoretical framework, we generate empirical implications for both microstructure and partial adjustment models. The major findings are (i) return Autocorrelations of indices are generally positive even though futures contracts have Autocorrelations close to zero, and (ii) these Autocorrelation differences are maintained under conditions favorable for spot-futures arbitrage and are most prevalent during low-volume periods. These results point toward microstructure-based explanations and away from explanations based on behavioral models.

  • partial adjustment or stale prices implications from stock index and futures return Autocorrelations
    2000
    Co-Authors: Donghyun Ahn, Jacob Boudoukh, Matthew Richardson, Robert F Whitelaw
    Abstract:

    This paper investigates the relation between returns on stock indices and their corresponding futures contracts in order to evaluate potential explanations for the pervasive yet anomalous evidence of positive, short-horizon portfolio Autocorrelations. Using a simple theoretical framework, we generate empirical implications for both microstructure and partial adjustment models. These implications are then tested using futures data on 24 contracts across 15 countries. The major findings are (i) return Autocorrelations of indices tend to be positive even though their corresponding futures contracts have Autocorrelations close to zero, (ii) these Autocorrelation differences between spot and futures markets are maintained even under conditions favorable for spot-futures arbitrage, and (iii) these Autocorrelation differences are most prevalent during low-volume periods. These results point toward a market microstructure-based explanation for short-horizon Autocorrelations and away from explanations based on currently popular behavioral models. (A sketch comparing index and futures return Autocorrelations follows this publication list.)

  • a tale of three schools insights on Autocorrelations of short horizon stock returns
    Social Science Research Network, 1998
    Co-Authors: Jacob Boudoukh, Matthew Richardson, Robert F Whitelaw
    Abstract:

    This paper reexamines the Autocorrelation patterns of short-horizon stock returns. We document empirical results that imply these Autocorrelations have been overstated in the existing literature. Based on several new insights, we provide support for a market-efficiency-based explanation of the evidence. Our analysis suggests that institutional factors are the most likely source of the Autocorrelation patterns.
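
The stale-prices papers above rest on one comparison: the first-order Autocorrelation of an index's own returns against that of its corresponding futures contract, with the spot index showing positive serial correlation that the continuously traded futures do not. Below is a minimal sketch of that comparison, assuming a simple stale-price (partially adjusted) index alongside a futures series that tracks fundamental value closely; the simulated price series are hypothetical placeholders for the actual contract data used in the studies.

```python
# Sketch: first-order autocorrelation of spot index returns versus futures returns.
import numpy as np

def lag1_autocorrelation(returns):
    """Lag-1 autocorrelation of a return series."""
    r = np.asarray(returns, dtype=float)
    return np.corrcoef(r[:-1], r[1:])[0, 1]

rng = np.random.default_rng(1)
value = np.cumsum(rng.normal(0.0, 0.01, 1000))        # efficient "true" value (random walk)
futures = value[1:] + rng.normal(0.0, 0.001, 999)     # futures track value closely
index = 0.7 * value[1:] + 0.3 * value[:-1]            # stale / partially adjusted spot index

print("index return autocorrelation:  ", lag1_autocorrelation(np.diff(index)))    # positive
print("futures return autocorrelation:", lag1_autocorrelation(np.diff(futures)))  # near zero
```

Positive index Autocorrelation alongside near-zero futures Autocorrelation is the pattern the abstracts above report, which is why they favour microstructure-based explanations over behavioral ones.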

Jose Ignacio Abreu - One of the best experts on this subject based on the ideXlab platform.

  • proteometric modelling of protein conformational stability using amino acid sequence Autocorrelation vectors and genetic algorithm optimised support vector machines
    Molecular Simulation, 2008
    Co-Authors: Julio Caballero, Leyden Fernandez, Michael Fernandez, Pedro Sanchez, Jose Ignacio Abreu
    Abstract:

    The conformational stability of more than 1500 protein mutants was modelled by a proteometric approach using the amino acid sequence Autocorrelation (AASA) vector formalism. The AASA vectors were weighted by 48 amino acid/residue properties selected from the AAindex database. Genetic algorithm-optimised support vector machines (GA-SVMs), trained with subsets of the AASA descriptors, yielded predictive classification and regression models of the unfolding Gibbs free energy change (ΔΔG). Function-mapping and binary SVM models correctly predicted about 50% of the ΔΔG variance and about 80% of the ΔΔG signs in cross-validation experiments, respectively. Test set predictions showed adequate accuracies of about 70% for stable single and double point mutants. Conformational stability depended on medium- and long-range Autocorrelations, along the mutant sequences, of general structural, physico-chemical and thermodynamic properties related to the protein hydration process. A preliminary version of the predictor is available online at http://gibk21.bse.kyutech....

  • amino acid sequence Autocorrelation vectors and bayesian regularized genetic neural networks for modeling protein conformational stability gene v protein mutants
    Proteins, 2007
    Co-Authors: Leyden Fernandez, Julio Caballero, Jose Ignacio Abreu, Michael Fernandez
    Abstract:

    Development of novel computational approaches for modeling protein properties from their primary structure is a main goal in applied proteomics. In this work, we report the extension of the Autocorrelation vector formalism to amino acid sequences for encoding protein structural information for modeling purposes. Amino acid sequence Autocorrelation (AASA) vectors were calculated by measuring, along the protein primary structure, the Autocorrelations at sequence lags ranging from 1 to 15 of 48 amino acid/residue properties selected from the AAindex database. A total of 720 AASA descriptors were tested for building predictive models of the thermal unfolding Gibbs free energy change (ΔΔG) of gene V protein upon mutation. To this end, ensembles of Bayesian-regularized genetic neural networks (BRGNNs) were used to obtain an optimum nonlinear model of the conformational stability. The ensemble predictor described about 88% and 66% of the variance of the data in the training and test sets, respectively. Furthermore, the optimum AASA vector subset not only modeled unfolding stability successfully but also, when used for unsupervised training of competitive neurons, distributed wild-type and mutant gene V proteins well on a stability self-organizing map (SOM). Proteins 2007. © 2007 Wiley-Liss, Inc.

  • amino acid sequence Autocorrelation vectors and ensembles of bayesian regularized genetic neural networks for prediction of conformational stability of human lysozyme mutants
    Journal of Chemical Information and Modeling, 2006
    Co-Authors: Julio Caballero, Leyden Fernandez, Jose Ignacio Abreu, Michael Fernandez
    Abstract:

    Development of novel computational approaches for modeling protein properties from their primary structure is a main goal in applied proteomics. In this work, we report the extension of the Autocorrelation vector formalism to amino acid sequences for encoding protein structural information for modeling purposes. Amino Acid Sequence Autocorrelation (AASA) vectors were calculated by measuring, along the protein primary structure, the Autocorrelations at sequence lags ranging from 1 to 15 of 48 amino acid/residue properties selected from the AAindex database. A total of 720 AASA descriptors were tested for building predictive models of the thermal unfolding Gibbs free energy change of human lysozyme mutants. To this end, ensembles of Bayesian-Regularized Genetic Neural Networks (BRGNNs) were used to obtain an optimum nonlinear model of the conformational stability. The ensemble predictor described about 88% and 68% of the variance of the data in the training and test sets, respectively. Furthermore, the optimum AA...