Hamming Weight

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 4116 Experts worldwide ranked by the ideXlab platform

Tomohiko Uyematsu - One of the best experts on this subject based on the ideXlab platform.

  • Secret sharing schemes based on linear codes can be precisely characterized by the relative generalized Hamming Weight
    IEICE Transactions on Fundamentals of Electronics Communications and Computer Sciences, 2012
    Co-Authors: Jun Kurihara, Ryutaroh Matsumoto, Tomohiko Uyematsu
    Abstract:

    SUMMARY This paper precisely characterizes secret sharing schemes based on arbitrary linear codes by using the relative dimension/length profile (RDLP) and the relative generalized Hamming Weight (RGHW). We first describe the equivocation Δm of the secret vector s = (s1, ..., sl) given m shares in terms of the RDLP of linear codes. We also characterize two thresholds t1 and t2 in the secret sharing schemes by the RGHW of linear codes. One shows that any set of at most t1 shares leaks no information about s, and the other shows that any set of at least t2 shares uniquely determines s. It is clarified that both characterizations for t1 and t2 are better than Chen et al.'s ones derived by the regular minimum Hamming Weight. Moreover, this paper characterizes the strong security in secret sharing schemes based on linear codes by generalizing the definition of strongly-secure threshold ramp schemes. We define a secret sharing scheme achieving the α-strong security as one such that the mutual information between any r elements of (s1, ..., sl) and any α − r + 1 shares is always zero. Then, it is clarified that secret sharing schemes based on linear codes can always achieve the α-strong security, where the value α is precisely characterized by the RGHW.
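
As a minimal illustration of the quantity these characterizations rest on (a sketch, not the authors' construction): the r-th generalized Hamming Weight d_r(C) of a linear code C is the smallest support size of any r-dimensional subcode, and for a toy binary code it can be found by brute force:

```python
from itertools import combinations

def span(rows):
    """All GF(2) linear combinations of the given row vectors (as bitmask ints)."""
    words = {0}
    for r in rows:
        words |= {w ^ r for w in words}
    return words

def support_size(words):
    """Number of coordinates where some word in the set is nonzero."""
    mask = 0
    for w in words:
        mask |= w
    return bin(mask).count("1")

def generalized_hamming_weight(gen_rows, r):
    """Brute-force d_r(C): minimum support size over all r-dimensional subcodes."""
    code = sorted(span(gen_rows) - {0})
    best = None
    for basis in combinations(code, r):
        sub = span(basis)
        if len(sub) == 2 ** r:  # basis is linearly independent
            s = support_size(sub)
            best = s if best is None else min(best, s)
    return best

# Toy [4,2] binary code with generator rows 1100 and 0011
G = [0b1100, 0b0011]
print(generalized_hamming_weight(G, 1))  # → 2 (the minimum Hamming Weight d_1)
print(generalized_hamming_weight(G, 2))  # → 4 (d_2: the whole code's support)
```

The RGHW used in the paper is a relative refinement of this quantity for a code pair C2 ⊂ C1, but the same "smallest support of a subspace" idea underlies both.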

  • Strongly secure secret sharing based on linear codes can be characterized by generalized Hamming Weight
    Allerton Conference on Communication Control and Computing, 2011
    Co-Authors: Jun Kurihara, Tomohiko Uyematsu
    Abstract:

    A secret sharing scheme is an important tool for the management of secret information. For secret sharing schemes based on linear block codes, the amount of information leaked to adversaries has not been investigated. Hence, in existing constructions of secret sharing schemes based on arbitrary linear codes, some elements of a secret vector S = [s1, …, sl] might leak out deterministically from a non-qualified set. In this paper, we first define an anti-access set J as a special non-qualified set. For J, no information about any (t + 1)-tuple of s1, …, sl leaks out from any subset of J with cardinality |J| − t. We also introduce conditions on a linear code and its dual code such that a specified set becomes an anti-access set in the secret sharing scheme using the code. Then, we propose a secret sharing scheme based on a linear code C. The proposed secret sharing scheme realizes an access structure similar to threshold access structures, and every non-qualified set whose cardinality is less than or equal to α is an anti-access set. Further, we show that the proposed scheme can be completely characterized by the generalized Hamming Weight of C⊥.
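
To make the leakage question concrete, here is a toy linear-code secret sharing scheme over GF(2) (an illustrative sketch with assumed parameters, not the paper's construction): the secret bit s and a random bit u form the message, shares are the coordinates of the resulting codeword, and brute-force enumeration shows which share subsets determine s:

```python
import itertools
import random

# Generator matrix of a toy [4,2] binary code; row 0 carries the secret bit s,
# row 1 carries the random bit u. Shares are the codeword coordinates.
G = [[1, 0, 1, 1],
     [0, 1, 1, 0]]

def shares(s, u):
    return [(s * G[0][j] + u * G[1][j]) % 2 for j in range(4)]

def consistent_secrets(observed):
    """All secrets s consistent with an observed subset {position: value}."""
    ok = set()
    for s, u in itertools.product((0, 1), repeat=2):
        w = shares(s, u)
        if all(w[j] == v for j, v in observed.items()):
            ok.add(s)
    return ok

s, u = 1, random.randint(0, 1)
w = shares(s, u)
print(consistent_secrets({1: w[1]}))           # share 1 alone → {0, 1}: no leak
print(consistent_secrets({0: w[0], 2: w[2]}))  # shares 0 and 2 → uniquely determines s
```

Position 1 carries only the randomness u, so it forms a (trivial) anti-access set, while position 0 equals s and leaks it deterministically — exactly the phenomenon the paper's dual-code conditions are designed to rule out.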

Jing Huang - One of the best experts on this subject based on the ideXlab platform.

  • Controversy Corner: Efficient Hamming Weight-based side-channel cube attacks on PRESENT
    Journal of Systems and Software, 2013
    Co-Authors: Xinjie Zhao, Shize Guo, Fan Zhang, Tao Wang, Zhijie Shi, Huiying Liu, Jing Huang
    Abstract:

    The side-channel cube attack (SCCA) is a powerful cryptanalysis technique that combines the side-channel and cube attacks. This paper proposes several advanced techniques to improve the Hamming Weight-based SCCA (HW-SCCA) on the block cipher PRESENT. The new techniques utilize non-linear equations and an iterative scheme to extract more information from leakage. The new attacks need only 2^8.95 chosen plaintexts to recover 72 key bits of PRESENT-80 and 2^9.78 chosen plaintexts to recover 121 key bits of PRESENT-128. To the best of our knowledge, these are the most efficient SCCAs on PRESENT-80/128. To show the feasibility of the proposed techniques, real attacks have been conducted on PRESENT on an 8-bit microcontroller; these are the first SCCAs on PRESENT on a real device. The proposed HW-SCCA can successfully break PRESENT implementations even if they have countermeasures such as random delay and masking.
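
The leakage model behind HW-SCCA can be sketched in a few lines. The S-box below is the real PRESENT S-box; the single-nibble recovery loop is a simplified illustration of Hamming Weight leakage, not the paper's cube-attack machinery:

```python
# The 4-bit PRESENT S-box.
PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def hamming_weight(x):
    return bin(x).count("1")

def leak(plaintext_nibble, key_nibble):
    """Side-channel observation: Hamming Weight of the S-box output."""
    return hamming_weight(PRESENT_SBOX[plaintext_nibble ^ key_nibble])

def recover_key_nibble(observations):
    """Toy attack: keep every key nibble consistent with all HW leakages."""
    return [k for k in range(16)
            if all(leak(p, k) == hw for p, hw in observations)]

secret_key = 0xA
obs = [(p, leak(p, secret_key)) for p in range(16)]
print(recover_key_nibble(obs))  # → [10]: the Hamming Weight traces pin down 0xA
```

The 16 chosen plaintexts here uniquely determine the key nibble because HW(S(5)) = 0 occurs for exactly one S-box input; the paper's contribution is extracting far more key material from far fewer traces via cube techniques.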

Jun Kurihara - One of the best experts on this subject based on the ideXlab platform.

  • Secret sharing schemes based on linear codes can be precisely characterized by the relative generalized Hamming Weight
    IEICE Transactions on Fundamentals of Electronics Communications and Computer Sciences, 2012
    Co-Authors: Jun Kurihara, Ryutaroh Matsumoto, Tomohiko Uyematsu
    Abstract:

    SUMMARY This paper precisely characterizes secret sharing schemes based on arbitrary linear codes by using the relative dimension/length profile (RDLP) and the relative generalized Hamming Weight (RGHW). We first describe the equivocation Δm of the secret vector s = (s1, ..., sl) given m shares in terms of the RDLP of linear codes. We also characterize two thresholds t1 and t2 in the secret sharing schemes by the RGHW of linear codes. One shows that any set of at most t1 shares leaks no information about s, and the other shows that any set of at least t2 shares uniquely determines s. It is clarified that both characterizations for t1 and t2 are better than Chen et al.'s ones derived by the regular minimum Hamming Weight. Moreover, this paper characterizes the strong security in secret sharing schemes based on linear codes by generalizing the definition of strongly-secure threshold ramp schemes. We define a secret sharing scheme achieving the α-strong security as one such that the mutual information between any r elements of (s1, ..., sl) and any α − r + 1 shares is always zero. Then, it is clarified that secret sharing schemes based on linear codes can always achieve the α-strong security, where the value α is precisely characterized by the RGHW.

  • Strongly secure secret sharing based on linear codes can be characterized by generalized Hamming Weight
    Allerton Conference on Communication Control and Computing, 2011
    Co-Authors: Jun Kurihara, Tomohiko Uyematsu
    Abstract:

    A secret sharing scheme is an important tool for the management of secret information. For secret sharing schemes based on linear block codes, the amount of information leaked to adversaries has not been investigated. Hence, in existing constructions of secret sharing schemes based on arbitrary linear codes, some elements of a secret vector S = [s1, …, sl] might leak out deterministically from a non-qualified set. In this paper, we first define an anti-access set J as a special non-qualified set. For J, no information about any (t + 1)-tuple of s1, …, sl leaks out from any subset of J with cardinality |J| − t. We also introduce conditions on a linear code and its dual code such that a specified set becomes an anti-access set in the secret sharing scheme using the code. Then, we propose a secret sharing scheme based on a linear code C. The proposed secret sharing scheme realizes an access structure similar to threshold access structures, and every non-qualified set whose cardinality is less than or equal to α is an anti-access set. Further, we show that the proposed scheme can be completely characterized by the generalized Hamming Weight of C⊥.

Jeong San Kim - One of the best experts on this subject based on the ideXlab platform.

  • Hamming Weight and tight constraints of multi-qubit entanglement in terms of unified entropy
    Scientific Reports, 2018
    Co-Authors: Jeong San Kim
    Abstract:

    We establish a characterization of multi-qubit entanglement constraints in terms of non-negative powers of entanglement measures based on unified-(q, s) entropy. Using the Hamming Weight of the binary vector related to the distribution of subsystems, we establish a class of tight monogamy inequalities of multi-qubit entanglement based on the αth power of unified-(q, s) entanglement for α ≥ 1. For 0 ≤ β ≤ 1, we establish a class of tight polygamy inequalities of multi-qubit entanglement in terms of the βth power of unified-(q, s) entanglement of assistance. Thus, our results characterize the monogamy and polygamy of multi-qubit entanglement for the full range of non-negative powers of unified entanglement.
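
The role of the Hamming Weight here is purely combinatorial: each term in the multi-qubit inequalities is indexed by a binary vector describing which subsystems are involved, and the exponent applied to that term is graded by the vector's Hamming Weight. A sketch of the indexing only (the inequalities themselves are in the paper):

```python
def hamming_weight(j):
    """Hamming Weight of the binary vector (j_{n-1}, ..., j_0) for index j."""
    return bin(j).count("1")

# For three subsystems, the indices 0..7, their binary vectors, and the
# weights omega_H(j) that grade the powers in the monogamy/polygamy bounds:
for j in range(8):
    print(j, format(j, "03b"), hamming_weight(j))
```

For example, index j = 5 corresponds to the vector 101 (first and third subsystems) and has Hamming Weight 2.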

  • Hamming Weight and tight constraints of multi-qubit entanglement in terms of unified entropy
    arXiv: Quantum Physics, 2018
    Co-Authors: Jeong San Kim
    Abstract:

    We establish a characterization of multi-qubit entanglement constraints in terms of non-negative powers of entanglement measures based on unified-$(q,s)$ entropy. Using the Hamming Weight of the binary vector related to the distribution of subsystems, we establish a class of tight monogamy inequalities of multi-qubit entanglement based on the $\alpha$th power of unified-$(q,s)$ entanglement for $\alpha \geq 1$. For $0 \leq \beta \leq 1$, we establish a class of tight polygamy inequalities of multi-qubit entanglement in terms of the $\beta$th power of unified-$(q,s)$ entanglement of assistance. Thus, our results characterize the monogamy and polygamy of multi-qubit entanglement for the full range of non-negative powers of unified entanglement.

Avesta Sasan - One of the best experts on this subject based on the ideXlab platform.

  • NESTA: Hamming Weight compression-based neural proc engine
    Asia and South Pacific Design Automation Conference, 2020
    Co-Authors: Ali Mirzaeian, Houman Homayoun, Avesta Sasan
    Abstract:

    In this paper, we present NESTA, a specialized neural engine that significantly accelerates the computation of convolution layers in a deep convolutional neural network while reducing the computational energy. NESTA reformats convolutions into 3 × 3 batches and uses a hierarchy of Hamming Weight compressors to process each batch. Moreover, when processing the convolution across multiple channels, NESTA, rather than computing the precise result of a convolution per channel, quickly computes an approximation of its partial sum and a residual value that, if added to the approximate partial sum, generates the accurate output. Then, instead of immediately adding the residual, it uses (consumes) the residual when processing the next batch in the Hamming Weight compressors with available capacity. This mechanism shortens the critical path by avoiding the need to propagate carry signals during each round of computation and speeds up the convolution of each channel. In the last stage of computation, when the partial sum of the last channel is computed, NESTA terminates by adding the residual bits to the approximate output to generate the correct result.
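
The core trick — deferring carry propagation and consuming the residual later — is the classic carry-save (3:2) compression from which Hamming Weight compressor trees are built. A minimal sketch of that idea (an illustration of the arithmetic, not NESTA's actual datapath):

```python
def compress_3_to_2(a, b, c):
    """Bitwise full adder across whole words: three operands become a
    'sum' word and a 'carry' word with no carry propagation."""
    s = a ^ b ^ c                        # per-bit XOR: the sum bit
    carry = (a & b) | (b & c) | (a & c)  # per-bit majority: the carry bit
    return s, carry

def approx_then_residual(operands):
    """Reduce many operands carry-free, deferring the carry word as a
    residual; sum + residual equals the exact total."""
    s, carry = operands[0], 0
    for x in operands[1:]:
        s, carry = compress_3_to_2(s, carry << 1, x)
    return s, carry << 1  # approximate sum, deferred residual

ops = [13, 7, 21, 9, 3]
s, residual = approx_then_residual(ops)
print(s + residual == sum(ops))  # → True: adding the residual restores exactness
```

Because each 3:2 step is a purely bitwise operation, its delay is independent of the word length; only the single final addition of the residual pays the carry-propagation cost, which is what shortens the critical path.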

  • NESTA: Hamming Weight compression-based neural proc engine
    arXiv: Learning, 2019
    Co-Authors: Ali Mirzaeian, Houman Homayoun, Avesta Sasan
    Abstract:

    In this paper, we present NESTA, a specialized neural engine that significantly accelerates the computation of convolution layers in a deep convolutional neural network while reducing the computational energy. NESTA reformats convolutions into $3 \times 3$ batches and uses a hierarchy of Hamming Weight compressors to process each batch. Moreover, when processing the convolution across multiple channels, NESTA, rather than computing the precise result of a convolution per channel, quickly computes an approximation of its partial sum and a residual value that, if added to the approximate partial sum, generates the accurate output. Then, instead of immediately adding the residual, it uses (consumes) the residual when processing the next batch in the Hamming Weight compressors with available capacity. This mechanism shortens the critical path by avoiding the need to propagate carry signals during each round of computation and speeds up the convolution of each channel. In the last stage of computation, when the partial sum of the last channel is computed, NESTA terminates by adding the residual bits to the approximate output to generate the correct result.