The Experts below are selected from a list of 5772 Experts worldwide ranked by the ideXlab platform
S Boussakta - One of the best experts on this subject based on the ideXlab platform.
-
Fast Walsh-Hadamard-Fourier Transform Algorithm
IEEE Transactions on Signal Processing, 2011. Co-Authors: Mounir Taha Hamood, S. Boussakta. Abstract: An efficient fast Walsh-Hadamard-Fourier transform algorithm that combines the calculation of the Walsh-Hadamard transform (WHT) and the discrete Fourier transform (DFT) is introduced. It can be used in Walsh-Hadamard precoded orthogonal frequency-division multiplexing (WHT-OFDM) systems to increase speed and reduce implementation cost. The algorithm is developed through sparse-matrix factorization using the Kronecker product technique and is implemented in an integrated butterfly structure. The proposed algorithm has significantly lower arithmetic complexity, shorter delays, and simpler indexing schemes than existing algorithms based on concatenating the WHT and FFT, and it saves about 70% to 36% of computer run-time as the transform length grows from 16 to 4096.
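The butterfly structure mentioned in the abstract is easiest to see on the WHT half of the computation. Below is a minimal sketch of the standard O(N log N) fast Walsh-Hadamard transform, checked against a direct O(N²) Hadamard-matrix product; it does not reproduce the paper's fused Walsh-Hadamard-Fourier algorithm, and the function names are illustrative, not from the paper.

```python
# Minimal in-place fast Walsh-Hadamard transform (Sylvester/Hadamard order).
# Sketches only the WHT half of the combined WHT-DFT computation.

def fwht(x):
    """Return the Walsh-Hadamard transform of x; len(x) must be a power of 2."""
    x = list(x)
    n = len(x)
    h = 1
    while h < n:
        for start in range(0, n, 2 * h):
            for i in range(start, start + h):
                a, b = x[i], x[i + h]
                x[i], x[i + h] = a + b, a - b   # 2-point butterfly
        h *= 2                                  # double the butterfly distance
    return x

def naive_wht(x):
    """O(n^2) reference: multiply by H_n with H_n[i][j] = (-1)^popcount(i & j)."""
    n = len(x)
    return [sum(x[j] * (-1) ** bin(i & j).count("1") for j in range(n))
            for i in range(n)]
```

Each stage applies N/2 two-point butterflies (a, b) ↦ (a + b, a − b) at doubling distances; because the WHT has no twiddle factors, the stages are identical, which is what makes fusing it with an FFT stage structure attractive.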
Markus Puschel - One of the best experts on this subject based on the ideXlab platform.
-
In Search of the Optimal Walsh-Hadamard Transform for Streamed Parallel Processing
International Conference on Acoustics, Speech and Signal Processing, 2019. Co-Authors: Francois Serre, Markus Puschel. Abstract: The Walsh-Hadamard transform (WHT) is computed using a network of butterflies, similar to the fast Fourier transform. The network is not unique but can be modified in exponentially many ways by properly changing the permutations between butterfly stages. Our first contribution is the exact characterization of all possible WHT networks. Then we aim to find the optimal networks for streaming implementations, in which the input is fed in chunks over several cycles and the hardware cost is thus reduced in proportion. To find the optimal network, we systematically search through all possibilities for small sizes and discover novel networks that are thus proven optimal. The results can be used to extrapolate the optimal hardware cost for all sizes, but the associated algorithms still remain elusive.
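One way to make the network's non-uniqueness concrete: the WHT factors into log₂ n butterfly stages of the form I ⊗ H₂ ⊗ I, and these stage matrices commute, so entire stages may be applied in any order. This is a simpler degree of freedom than the inter-stage permutations the paper actually searches over, but it illustrates why many distinct networks compute the same transform (names below are illustrative):

```python
# The WHT of size n factors into log2(n) commuting butterfly stages,
# so applying the stages in any order yields the same transform.
# (The paper's search space is the richer set of permutations *between*
# stages; this sketch only demonstrates stage-order freedom.)

def wht_stage(x, h):
    """Apply one butterfly stage pairing elements at distance h."""
    x = list(x)
    for start in range(0, len(x), 2 * h):
        for i in range(start, start + h):
            a, b = x[i], x[i + h]
            x[i], x[i + h] = a + b, a - b
    return x

def wht_with_stage_order(x, order):
    """Compute the WHT by applying the stages in the given order of distances."""
    for h in order:
        x = wht_stage(x, h)
    return x
```

For n = 8 the stage distances are 1, 2, 4; any permutation of them gives the same output because (I ⊗ H₂)(H₂ ⊗ I) = H₂ ⊗ H₂ = (H₂ ⊗ I)(I ⊗ H₂), and likewise for larger sizes.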
-
In Search of the Optimal Walsh-Hadamard Transform
International Conference on Acoustics, Speech and Signal Processing, 2000. Co-Authors: Jeremy Johnson, Markus Puschel. Abstract: This paper describes an approach to implementing and optimizing fast signal transforms. Algorithms for computing signal transforms are expressed as symbolic expressions, which can be automatically generated and translated into programs. Optimizing an implementation involves searching for the fastest program obtained from one of the possible expressions. We apply this methodology to the implementation of the Walsh-Hadamard transform (WHT). An environment, accessible from MATLAB, is provided for generating and timing WHT algorithms. These tools are used to search for the fastest WHT algorithm, and the fastest algorithm found is substantially faster than standard approaches to implementing the WHT. The work reported in this paper is part of SPIRAL, an ongoing project whose goal is to automate the implementation and optimization of signal processing algorithms.
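The space of WHT algorithms such a search explores comes from the recursion WHT_{2^{k+m}} = (WHT_{2^k} ⊗ I_{2^m})(I_{2^k} ⊗ WHT_{2^m}): every choice of split, applied recursively, is a correct algorithm with different memory behavior, and the tuner times candidates to pick the fastest. A hedged sketch in plain Python (not SPIRAL's generated code; the `split` parameter is a hypothetical stand-in for a rule tree):

```python
# Recursive WHT parameterized by the split choice. Any valid split policy
# produces the same transform; implementations differ only in access pattern.

def wht_recursive(x, split=None):
    """split(n) -> k chooses how to break WHT_n into WHT_k (x) WHT_(n//k)."""
    n = len(x)
    if n == 1:
        return list(x)
    if n == 2:
        return [x[0] + x[1], x[0] - x[1]]
    k = 2 if split is None else split(n)
    m = n // k
    # Right factor I_k (x) WHT_m: transform each contiguous block of length m.
    x = [v for i in range(k) for v in wht_recursive(x[i * m:(i + 1) * m], split)]
    # Left factor WHT_k (x) I_m: transform each stride-m subsequence.
    out = list(x)
    for r in range(m):
        col = wht_recursive(x[r::m], split)
        for i in range(k):
            out[r + i * m] = col[i]
    return out
```

A search like the one described would enumerate split policies (and deeper rule trees), time each, and keep the fastest; here different policies can at least be checked to agree on the output.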
Mounir Taha Hamood - One of the best experts on this subject based on the ideXlab platform.
-
Fast Walsh-Hadamard-Fourier Transform Algorithm
IEEE Transactions on Signal Processing, 2011. Co-Authors: Mounir Taha Hamood, S. Boussakta. Abstract: An efficient fast Walsh-Hadamard-Fourier transform algorithm that combines the calculation of the Walsh-Hadamard transform (WHT) and the discrete Fourier transform (DFT) is introduced. It can be used in Walsh-Hadamard precoded orthogonal frequency-division multiplexing (WHT-OFDM) systems to increase speed and reduce implementation cost. The algorithm is developed through sparse-matrix factorization using the Kronecker product technique and is implemented in an integrated butterfly structure. The proposed algorithm has significantly lower arithmetic complexity, shorter delays, and simpler indexing schemes than existing algorithms based on concatenating the WHT and FFT, and it saves about 70% to 36% of computer run-time as the transform length grows from 16 to 4096.
Piotr Indyk - One of the best experts on this subject based on the ideXlab platform.
-
Nearly Optimal Deterministic Algorithm for Sparse Walsh-Hadamard Transform
ACM Transactions on Algorithms, 2017. Co-Authors: Mahdi Cheraghchi, Piotr Indyk. Abstract: For every fixed constant α > 0, we design an algorithm for computing the k-sparse Walsh-Hadamard transform (i.e., discrete Fourier transform over the Boolean cube) of an N-dimensional vector x ∈ R^N in time k^{1+α} (log N)^{O(1)}. Specifically, the algorithm is given query access to x and computes a k-sparse x̃ ∈ R^N satisfying ‖x̃ − x̂‖₁ ≤ c ‖x̂ − H_k(x̂)‖₁ for an absolute constant c > 0, where x̂ is the transform of x and H_k(x̂) is its best k-sparse approximation. Our algorithm is fully deterministic and only uses nonadaptive queries to x (i.e., all queries are determined and performed in parallel when the algorithm starts). An important technical tool that we use is a construction of nearly optimal and linear lossless condensers, which is a careful instantiation of the GUV condenser (Guruswami et al. [2009]). Moreover, we design a deterministic and nonadaptive ℓ₁/ℓ₁ compressed sensing scheme based on general lossless condensers that is equipped with a fast reconstruction algorithm running in time k^{1+α} (log N)^{O(1)} (for the GUV-based condenser) and is of independent interest. Our scheme significantly simplifies and improves an earlier expander-based construction due to Berinde, Gilbert, Indyk, Karloff, and Strauss [Berinde et al. 2008]. Our methods use linear lossless condensers in a black-box fashion; therefore, any future improvement on explicit constructions of such condensers would immediately translate to improved parameters in our framework (potentially leading to k (log N)^{O(1)} reconstruction time with a reduced exponent in the poly-logarithmic factor, and eliminating the extra parameter α). By allowing the algorithm to use randomness while still using nonadaptive queries, the running time of the algorithm can be improved to Õ(k log³ N).
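To make the ℓ₁/ℓ₁ guarantee concrete: with full access to x one can compute x̂ outright and keep its k largest-magnitude coefficients, which satisfies the bound with c = 1 since the error is then exactly the tail ‖x̂ − H_k(x̂)‖₁. The sketch below is only this brute-force reference point, not the paper's sublinear-time algorithm, which achieves the guarantee while querying x far fewer than N times:

```python
# Brute-force reference for the k-sparse WHT guarantee (NOT the paper's
# sublinear algorithm): compute the full transform, then keep the top-k
# coefficients H_k(x_hat).

def fwht(x):
    """Full O(N log N) fast Walsh-Hadamard transform."""
    x = list(x)
    n = len(x)
    h = 1
    while h < n:
        for s in range(0, n, 2 * h):
            for i in range(s, s + h):
                a, b = x[i], x[i + h]
                x[i], x[i + h] = a + b, a - b
        h *= 2
    return x

def top_k(v, k):
    """H_k: zero out all but the k largest-magnitude coefficients."""
    keep = set(sorted(range(len(v)), key=lambda i: -abs(v[i]))[:k])
    return [v[i] if i in keep else 0 for i in range(len(v))]

x = [3, 1, 4, 1, 5, 9, 2, 6]
x_hat = fwht(x)
x_tilde = top_k(x_hat, 2)   # satisfies the l1/l1 bound above with c = 1
```

The point of the paper is precisely to get close to this output without paying the Θ(N log N) cost of the full transform.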
-
Nearly Optimal Deterministic Algorithm for Sparse Walsh-Hadamard Transform
Symposium on Discrete Algorithms, 2016. Co-Authors: Mahdi Cheraghchi, Piotr Indyk. Abstract: For every fixed constant α > 0, we design an algorithm for computing the k-sparse Walsh-Hadamard transform (i.e., discrete Fourier transform over the Boolean cube) of an N-dimensional vector x ∈ R^N in time k^{1+α} (log N)^{O(1)}. Specifically, the algorithm is given query access to x and computes a k-sparse x̃ ∈ R^N satisfying ‖x̃ − x̂‖₁ ≤ c ‖x̂ − H_k(x̂)‖₁, for an absolute constant c > 0, where x̂ is the transform of x and H_k(x̂) is its best k-sparse approximation. Our algorithm is fully deterministic and only uses non-adaptive queries to x (i.e., all queries are determined and performed in parallel when the algorithm starts). An important technical tool that we use is a construction of nearly optimal and linear lossless condensers, which is a careful instantiation of the GUV condenser (Guruswami, Umans, Vadhan, JACM 2009). Moreover, we design a deterministic and non-adaptive ℓ₁/ℓ₁ compressed sensing scheme based on general lossless condensers that is equipped with a fast reconstruction algorithm running in time k^{1+α} (log N)^{O(1)} (for the GUV-based condenser) and is of independent interest. Our scheme significantly simplifies and improves an earlier expander-based construction due to Berinde, Gilbert, Indyk, Karloff, Strauss (Allerton 2008). Our methods use linear lossless condensers in a black-box fashion; therefore, any future improvement on explicit constructions of such condensers would immediately translate to improved parameters in our framework (potentially leading to k (log N)^{O(1)} reconstruction time with a reduced exponent in the poly-logarithmic factor, and eliminating the extra parameter α). By allowing the algorithm to use randomness, while still using non-adaptive queries, the running time of the algorithm can be improved to Õ(k log³ N).
-
Nearly Optimal Deterministic Algorithm for Sparse Walsh-Hadamard Transform
arXiv: Information Theory, 2015. Co-Authors: Mahdi Cheraghchi, Piotr Indyk. Abstract: For every fixed constant $\alpha > 0$, we design an algorithm for computing the $k$-sparse Walsh-Hadamard transform of an $N$-dimensional vector $x \in \mathbb{R}^N$ in time $k^{1+\alpha} (\log N)^{O(1)}$. Specifically, the algorithm is given query access to $x$ and computes a $k$-sparse $\tilde{x} \in \mathbb{R}^N$ satisfying $\|\tilde{x} - \hat{x}\|_1 \leq c \|\hat{x} - H_k(\hat{x})\|_1$, for an absolute constant $c > 0$, where $\hat{x}$ is the transform of $x$ and $H_k(\hat{x})$ is its best $k$-sparse approximation. Our algorithm is fully deterministic and only uses non-adaptive queries to $x$ (i.e., all queries are determined and performed in parallel when the algorithm starts). An important technical tool that we use is a construction of nearly optimal and linear lossless condensers, which is a careful instantiation of the GUV condenser (Guruswami, Umans, Vadhan, JACM 2009). Moreover, we design a deterministic and non-adaptive $\ell_1/\ell_1$ compressed sensing scheme based on general lossless condensers that is equipped with a fast reconstruction algorithm running in time $k^{1+\alpha} (\log N)^{O(1)}$ (for the GUV-based condenser) and is of independent interest. Our scheme significantly simplifies and improves an earlier expander-based construction due to Berinde, Gilbert, Indyk, Karloff, Strauss (Allerton 2008). Our methods use linear lossless condensers in a black box fashion; therefore, any future improvement on explicit constructions of such condensers would immediately translate to improved parameters in our framework (potentially leading to $k (\log N)^{O(1)}$ reconstruction time with a reduced exponent in the poly-logarithmic factor, and eliminating the extra parameter $\alpha$). Finally, by allowing the algorithm to use randomness, while still using non-adaptive queries, the running time of the algorithm can be improved to $\tilde{O}(k \log^3 N)$.
Mahdi Cheraghchi - One of the best experts on this subject based on the ideXlab platform.
-
Nearly Optimal Deterministic Algorithm for Sparse Walsh-Hadamard Transform
ACM Transactions on Algorithms, 2017. Co-Authors: Mahdi Cheraghchi, Piotr Indyk. Abstract: For every fixed constant α > 0, we design an algorithm for computing the k-sparse Walsh-Hadamard transform (i.e., discrete Fourier transform over the Boolean cube) of an N-dimensional vector x ∈ R^N in time k^{1+α} (log N)^{O(1)}. Specifically, the algorithm is given query access to x and computes a k-sparse x̃ ∈ R^N satisfying ‖x̃ − x̂‖₁ ≤ c ‖x̂ − H_k(x̂)‖₁ for an absolute constant c > 0, where x̂ is the transform of x and H_k(x̂) is its best k-sparse approximation. Our algorithm is fully deterministic and only uses nonadaptive queries to x (i.e., all queries are determined and performed in parallel when the algorithm starts). An important technical tool that we use is a construction of nearly optimal and linear lossless condensers, which is a careful instantiation of the GUV condenser (Guruswami et al. [2009]). Moreover, we design a deterministic and nonadaptive ℓ₁/ℓ₁ compressed sensing scheme based on general lossless condensers that is equipped with a fast reconstruction algorithm running in time k^{1+α} (log N)^{O(1)} (for the GUV-based condenser) and is of independent interest. Our scheme significantly simplifies and improves an earlier expander-based construction due to Berinde, Gilbert, Indyk, Karloff, and Strauss [Berinde et al. 2008]. Our methods use linear lossless condensers in a black-box fashion; therefore, any future improvement on explicit constructions of such condensers would immediately translate to improved parameters in our framework (potentially leading to k (log N)^{O(1)} reconstruction time with a reduced exponent in the poly-logarithmic factor, and eliminating the extra parameter α). By allowing the algorithm to use randomness while still using nonadaptive queries, the running time of the algorithm can be improved to Õ(k log³ N).
-
Nearly Optimal Deterministic Algorithm for Sparse Walsh-Hadamard Transform
Symposium on Discrete Algorithms, 2016. Co-Authors: Mahdi Cheraghchi, Piotr Indyk. Abstract: For every fixed constant α > 0, we design an algorithm for computing the k-sparse Walsh-Hadamard transform (i.e., discrete Fourier transform over the Boolean cube) of an N-dimensional vector x ∈ R^N in time k^{1+α} (log N)^{O(1)}. Specifically, the algorithm is given query access to x and computes a k-sparse x̃ ∈ R^N satisfying ‖x̃ − x̂‖₁ ≤ c ‖x̂ − H_k(x̂)‖₁, for an absolute constant c > 0, where x̂ is the transform of x and H_k(x̂) is its best k-sparse approximation. Our algorithm is fully deterministic and only uses non-adaptive queries to x (i.e., all queries are determined and performed in parallel when the algorithm starts). An important technical tool that we use is a construction of nearly optimal and linear lossless condensers, which is a careful instantiation of the GUV condenser (Guruswami, Umans, Vadhan, JACM 2009). Moreover, we design a deterministic and non-adaptive ℓ₁/ℓ₁ compressed sensing scheme based on general lossless condensers that is equipped with a fast reconstruction algorithm running in time k^{1+α} (log N)^{O(1)} (for the GUV-based condenser) and is of independent interest. Our scheme significantly simplifies and improves an earlier expander-based construction due to Berinde, Gilbert, Indyk, Karloff, Strauss (Allerton 2008). Our methods use linear lossless condensers in a black-box fashion; therefore, any future improvement on explicit constructions of such condensers would immediately translate to improved parameters in our framework (potentially leading to k (log N)^{O(1)} reconstruction time with a reduced exponent in the poly-logarithmic factor, and eliminating the extra parameter α). By allowing the algorithm to use randomness, while still using non-adaptive queries, the running time of the algorithm can be improved to Õ(k log³ N).
-
Nearly Optimal Deterministic Algorithm for Sparse Walsh-Hadamard Transform
arXiv: Information Theory, 2015. Co-Authors: Mahdi Cheraghchi, Piotr Indyk. Abstract: For every fixed constant $\alpha > 0$, we design an algorithm for computing the $k$-sparse Walsh-Hadamard transform of an $N$-dimensional vector $x \in \mathbb{R}^N$ in time $k^{1+\alpha} (\log N)^{O(1)}$. Specifically, the algorithm is given query access to $x$ and computes a $k$-sparse $\tilde{x} \in \mathbb{R}^N$ satisfying $\|\tilde{x} - \hat{x}\|_1 \leq c \|\hat{x} - H_k(\hat{x})\|_1$, for an absolute constant $c > 0$, where $\hat{x}$ is the transform of $x$ and $H_k(\hat{x})$ is its best $k$-sparse approximation. Our algorithm is fully deterministic and only uses non-adaptive queries to $x$ (i.e., all queries are determined and performed in parallel when the algorithm starts). An important technical tool that we use is a construction of nearly optimal and linear lossless condensers, which is a careful instantiation of the GUV condenser (Guruswami, Umans, Vadhan, JACM 2009). Moreover, we design a deterministic and non-adaptive $\ell_1/\ell_1$ compressed sensing scheme based on general lossless condensers that is equipped with a fast reconstruction algorithm running in time $k^{1+\alpha} (\log N)^{O(1)}$ (for the GUV-based condenser) and is of independent interest. Our scheme significantly simplifies and improves an earlier expander-based construction due to Berinde, Gilbert, Indyk, Karloff, Strauss (Allerton 2008). Our methods use linear lossless condensers in a black box fashion; therefore, any future improvement on explicit constructions of such condensers would immediately translate to improved parameters in our framework (potentially leading to $k (\log N)^{O(1)}$ reconstruction time with a reduced exponent in the poly-logarithmic factor, and eliminating the extra parameter $\alpha$). Finally, by allowing the algorithm to use randomness, while still using non-adaptive queries, the running time of the algorithm can be improved to $\tilde{O}(k \log^3 N)$.