Orthonormal Transformation

The experts below are selected from a list of 1,128 experts worldwide, ranked by the ideXlab platform.

Dionisio Doering - One of the best experts on this subject based on the ideXlab platform.

  • Efficient gamma-ray signal decomposition analysis based on Orthonormal Transformation and fixed poles
    2010 IEEE International Conference on Acoustics Speech and Signal Processing, 2010
    Co-Authors: Sergio Zimmermann, Dionisio Doering
    Abstract:

    Gamma-ray energy tracking is a new technique for the detection of gamma radiation. In this scheme, each individual interaction of a gamma ray with the germanium detectors is described by its energy, position, and interaction time. Signal decomposition is the procedure used to estimate the three-dimensional positions of the interactions through pulse-shape analysis of the signals on the two-dimensional segments deposited on the faces of the detector. The present signal decomposition algorithm is computationally intensive: for GRETINA, a detector being built on this concept that covers just a quarter of a sphere, 140 quad-processors are required to decompose 20,000 gamma interactions per second. To reduce the computational cost, we model the segment waveforms as being generated by amplitude-modulated discrete-time unit impulses. Projecting these waveforms onto a more suitable basis reduces the computational cost while optimizing the same cost function. In this article we describe the framework for such a projection and provide an example in which the computational cost is reduced by a factor of five.
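
As a rough illustration of the projection idea (a minimal numpy sketch, not the authors' GRETINA code), the snippet below orthonormalizes a fixed dictionary of simulated segment waveforms once via QR, after which each per-event fit reduces to a single projection; the signal sizes and dictionary are illustrative assumptions.

```python
import numpy as np

# Hypothetical dictionary of simulated segment waveforms, one column per
# candidate interaction response; sizes are illustrative assumptions.
rng = np.random.default_rng(0)
n_samples, n_basis = 512, 8
dictionary = rng.standard_normal((n_samples, n_basis))

# Offline, once: orthonormalize the dictionary (reduced QR).
Q, R = np.linalg.qr(dictionary)

# Online, per event: fitting a measured waveform is now a single
# projection onto the orthonormal basis instead of a full solve.
measured = (dictionary @ rng.standard_normal(n_basis)
            + 0.01 * rng.standard_normal(n_samples))
coeffs = Q.T @ measured                    # projection
amplitudes = np.linalg.solve(R, coeffs)    # amplitudes in the original basis

# An orthonormal transformation preserves Euclidean distances, so the
# residual norm below equals the cost of the direct least-squares fit.
residual = measured - Q @ coeffs
print(np.linalg.norm(residual))
```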

Sergio Zimmermann - One of the best experts on this subject based on the ideXlab platform.

  • Efficient gamma-ray signal decomposition analysis based on Orthonormal Transformation and fixed poles
    2010 IEEE International Conference on Acoustics Speech and Signal Processing, 2010
    Co-Authors: Sergio Zimmermann, Dionisio Doering
    Abstract:

    Gamma-ray energy tracking is a new technique for the detection of gamma radiation. In this scheme, each individual interaction of a gamma ray with the germanium detectors is described by its energy, position, and interaction time. Signal decomposition is the procedure used to estimate the three-dimensional positions of the interactions through pulse-shape analysis of the signals on the two-dimensional segments deposited on the faces of the detector. The present signal decomposition algorithm is computationally intensive: for GRETINA, a detector being built on this concept that covers just a quarter of a sphere, 140 quad-processors are required to decompose 20,000 gamma interactions per second. To reduce the computational cost, we model the segment waveforms as being generated by amplitude-modulated discrete-time unit impulses. Projecting these waveforms onto a more suitable basis reduces the computational cost while optimizing the same cost function. In this article we describe the framework for such a projection and provide an example in which the computational cost is reduced by a factor of five.

Chris Sherlock - One of the best experts on this subject based on the ideXlab platform.

  • Optimal scaling of the random walk Metropolis: general criteria for the 0.234 acceptance rule
    Journal of Applied Probability, 2013
    Co-Authors: Chris Sherlock
    Abstract:

    Scaling of proposals for Metropolis algorithms is an important practical problem in Markov chain Monte Carlo (MCMC) implementation. Analyses of the random walk Metropolis for high-dimensional targets with specific functional forms have shown that in many cases the optimal scaling is achieved when the acceptance rate is approximately 0.234, but that there are exceptions. We present a general set of sufficient conditions which are invariant to Orthonormal Transformation of the coordinate axes and which ensure that the limiting optimal acceptance rate is 0.234. The criteria are shown to hold for the joint distribution of successive elements of a stationary pth-order multivariate Markov process.
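
As a concrete illustration of the 0.234 rule (a minimal sketch, not the paper's analysis), the random walk Metropolis below targets a standard Gaussian; with the classical proposal scale 2.38/sqrt(d), the empirical acceptance rate should land near 0.234 for moderately large d. The target, dimension, and scales are assumptions.

```python
import numpy as np

def rwm_acceptance(log_pi, x0, scale, n_iters=20000, seed=0):
    """Run random walk Metropolis with an isotropic Gaussian proposal
    and return the empirical acceptance rate."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    lp = log_pi(x)
    accepts = 0
    for _ in range(n_iters):
        proposal = x + scale * rng.standard_normal(x.shape)
        lp_prop = log_pi(proposal)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept step
            x, lp = proposal, lp_prop
            accepts += 1
    return accepts / n_iters

d = 50
log_pi = lambda x: -0.5 * (x @ x)   # standard Gaussian target in d dimensions
for scale in (0.5, 2.38, 5.0):
    rate = rwm_acceptance(log_pi, np.zeros(d), scale / np.sqrt(d))
    print(f"scale {scale}/sqrt(d): acceptance {rate:.3f}")
```

Note that the isotropic proposal makes the acceptance rate invariant to an orthonormal rotation of the target's coordinate axes, which is the invariance the paper's sufficient conditions respect.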

Massoud Pedram - One of the best experts on this subject based on the ideXlab platform.

  • Efficient representation, stratification, and compression of variational CSM library waveforms using robust principal component analysis
    Design Automation and Test in Europe, 2010
    Co-Authors: Safar Hatami, Massoud Pedram
    Abstract:

    In deep sub-micron technology, accurate modeling of the output waveforms of library cells under different input slew and load capacitance values is crucial for precise timing and noise analysis of VLSI circuits. Constructing a compact and efficient model of such waveforms becomes even more challenging when manufacturing process and environmental variations are considered. This paper introduces a rigorous and robust foundation for mathematically modeling output waveforms under sources of variability and for compressing the library data. The proposed approach is suitable for today's current source model (CSM) based ASIC libraries. It employs an Orthonormal Transformation to represent the output waveforms as a linear combination of appropriately derived basis waveforms. More significantly, Robust Principal Component Analysis (RPCA) is used to stratify the library waveforms into a small number of groups, for each of which a different set of principal components is calculated. This stratification results in a very high compression ratio for the variational CSM library while meeting a maximum error tolerance. Interpolation and further compression are obtained by representing the coefficients as signomial functions of various parameters, e.g., input slew, load capacitance, supply voltage, and temperature. We propose a procedure to calculate the coefficients and powers of the signomial functions. Experimental results demonstrate the effectiveness of the proposed variational CSM modeling framework and the stratification-based compression approach.
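
The basis-and-coefficients mechanics can be sketched in a few lines of numpy. In the sketch below, plain PCA via the SVD stands in for the paper's robust PCA, the stratification step is omitted, and the waveform library is synthetic.

```python
import numpy as np

# Synthetic library: one output waveform per input-slew corner, all
# sampled on a common time grid (a stand-in for real CSM data).
t = np.linspace(0.0, 1.0, 200)
slews = np.linspace(0.05, 0.5, 40)
library = np.array([1.0 / (1.0 + np.exp(-(t - 0.5) / s)) for s in slews])

# Orthonormal basis waveforms from the SVD (plain PCA; the paper uses
# robust PCA and stratifies the library into groups first).
mean = library.mean(axis=0)
U, S, Vt = np.linalg.svd(library - mean, full_matrices=False)

k = 3                                     # components kept for the tolerance
coeffs = (library - mean) @ Vt[:k].T      # compressed representation
recon = mean + coeffs @ Vt[:k]            # reconstruction from the basis

max_err = np.abs(recon - library).max()
ratio = library.size / (coeffs.size + Vt[:k].size + mean.size)
print(f"max reconstruction error {max_err:.2e}, compression {ratio:.1f}x")
```

In the paper, the retained coefficients are additionally fitted as signomial functions of slew, load, voltage, and temperature, which is what enables interpolation between library corners.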

Uri Erez - One of the best experts on this subject based on the ideXlab platform.

  • Achievability Performance Bounds for Integer-Forcing Source Coding
    IEEE Transactions on Information Theory, 2020
    Co-Authors: Elad Domanovitz, Uri Erez
    Abstract:

    Integer-forcing source coding has been proposed as a low-complexity method for compression of distributed correlated Gaussian sources. In this scheme, each encoder quantizes its observation using the same fine lattice and reduces the result modulo a coarse lattice. Rather than directly recovering the individual quantized signals, the decoder first recovers a full-rank set of judiciously chosen integer linear combinations of the quantized signals, and then inverts it. It has been observed that the method works very well for “most” but not all source covariance matrices. The present work quantifies the measure of bad covariance matrices by studying the probability that integer-forcing source coding fails as a function of the allocated rate, where the probability is with respect to a random Orthonormal Transformation that is applied to the sources prior to quantization. For the important case where the signals to be compressed correspond to the antenna inputs of relays in an i.i.d. Rayleigh fading environment, this Orthonormal Transformation can be viewed as being performed by nature. The scheme is also studied in the context of a non-distributed system. Here, the goal is to arrive at a universal, yet practical, compression method using equal-rate quantizers with provable performance guarantees. The scheme is universal in the sense that the covariance matrix need only be learned at the decoder but not at the encoder. The goal is accomplished by replacing the random Orthonormal Transformation by Transformations corresponding to number-theoretic space-time codes.
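
The random Orthonormal Transformation at the heart of the analysis is easy to exhibit on its own (a sketch only; the integer-forcing encoder and decoder are not implemented here, and the sizes and covariance are assumptions): a Haar-distributed orthogonal matrix can be drawn via the QR decomposition of a Gaussian matrix and applied to the sources before quantization.

```python
import numpy as np

def haar_orthogonal(n, rng):
    """Draw an orthogonal matrix from the Haar measure via QR of a
    Gaussian matrix, with the standard sign correction on R's diagonal."""
    Z = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))

rng = np.random.default_rng(0)
n = 4                                  # number of correlated sources (assumed)
A = rng.standard_normal((n, n))
K = A @ A.T                            # an arbitrary source covariance

# Rotating the sources changes the covariance the encoders see to
# Q K Q^T, while the total source energy (the trace) is preserved, so
# the failure probability becomes a question about random rotations.
Q = haar_orthogonal(n, rng)
K_rot = Q @ K @ Q.T
print(np.allclose(np.trace(K), np.trace(K_rot)))   # True
```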

  • Achievability Performance Bounds for Integer-Forcing Source Coding
    arXiv: Information Theory, 2017
    Co-Authors: Elad Domanovitz, Uri Erez
    Abstract:

    Integer-forcing source coding has been proposed as a low-complexity method for compression of distributed correlated Gaussian sources. In this scheme, each encoder quantizes its observation using the same fine lattice and reduces the result modulo a coarse lattice. Rather than directly recovering the individual quantized signals, the decoder first recovers a full-rank set of judiciously chosen integer linear combinations of the quantized signals, and then inverts it. It has been observed that the method works very well for "most" but not all source covariance matrices. The present work quantifies the measure of bad covariance matrices by studying the probability that integer-forcing source coding fails as a function of the allocated rate, where the probability is with respect to a random Orthonormal Transformation that is applied to the sources prior to quantization. For the important case where the signals to be compressed correspond to the antenna inputs of relays in an i.i.d. Rayleigh fading environment, this Orthonormal Transformation can be viewed as being performed by nature. Hence, the results provide performance guarantees for distributed source coding via integer forcing in this scenario.