Quantization Interval

The experts below are selected from a list of 273 experts worldwide, ranked by the ideXlab platform.

Chang Kyu Choi - One of the best experts on this subject based on the ideXlab platform.

  • CVPR - Learning to Quantize Deep Networks by Optimizing Quantization Intervals With Task Loss
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
    Co-Authors: Sangil Jung, Seohyung Lee, Youngjun Kwak, Jaejoon Han, Changyong Son, Jinwoo Son, Sung Ju Hwang, Chang Kyu Choi
    Abstract:

    Reducing the bit-widths of the activations and weights of deep networks makes them efficient to compute and store in memory, which is crucial for deployment on resource-limited devices such as mobile phones. However, decreasing bit-widths through Quantization generally yields drastically degraded accuracy. To tackle this problem, we propose to learn to quantize activations and weights via a trainable quantizer that transforms and discretizes them. Specifically, we parameterize the Quantization Intervals and obtain their optimal values by directly minimizing the task loss of the network. This Quantization-Interval-Learning (QIL) allows the quantized networks to maintain the accuracy of the full-precision (32-bit) networks at bit-widths as low as 4 bits and minimizes the accuracy degradation under further bit-width reduction (i.e., 3 and 2 bits). Moreover, our quantizer can be trained on a heterogeneous dataset and thus can be used to quantize pretrained networks without access to their training data. We demonstrate the effectiveness of our trainable quantizer on the ImageNet dataset with various network architectures such as ResNet-18, ResNet-34, and AlexNet, on which it outperforms existing methods and achieves state-of-the-art accuracy.
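
    The Interval learning above is easy to sketch in code. The following is a minimal illustration, not the authors' implementation: a uniform quantizer whose Interval endpoints are trainable parameters updated by the task loss, with a straight-through estimator for the rounding step. The class and parameter names (TrainableQuantizer, lo, hi, n_bits) are illustrative assumptions.

      # Minimal sketch of a QIL-style trainable quantizer (illustrative only).
      import torch
      import torch.nn as nn

      class TrainableQuantizer(nn.Module):
          def __init__(self, n_bits: int = 4):
              super().__init__()
              self.levels = 2 ** n_bits - 1
              # Learnable Quantization Interval endpoints, updated by the task loss.
              self.lo = nn.Parameter(torch.tensor(-1.0))
              self.hi = nn.Parameter(torch.tensor(1.0))

          def forward(self, x: torch.Tensor) -> torch.Tensor:
              # Clip into the current Interval and normalize to [0, 1].
              x = torch.clamp(x, self.lo, self.hi)
              x01 = (x - self.lo) / (self.hi - self.lo)
              # Round to discrete levels; the straight-through estimator lets
              # gradients pass through the non-differentiable rounding.
              q = torch.round(x01 * self.levels) / self.levels
              x01 = x01 + (q - x01).detach()
              # Map back to the original range.
              return x01 * (self.hi - self.lo) + self.lo

    Training such a module jointly with the network weights is what lets the Interval adapt to the task loss rather than to a local reconstruction error.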

  • Joint Training of Low Precision Neural Network with Quantization Interval Parameters
    2018
    Co-Authors: Sangil Jung, Changyong Son, Seohyung Lee, Jinwoo Son, Youngjun Kwak, Jaejoon Han, Chang Kyu Choi
    Abstract:

    Reducing the bit-widths of the activations and weights of deep networks makes them efficient to compute and store in memory, which is crucial for deployment on resource-limited devices such as mobile phones. However, decreasing bit-widths through Quantization generally yields drastically degraded accuracy. To tackle this problem, we propose to learn to quantize activations and weights via a trainable quantizer that transforms and discretizes them. Specifically, we parameterize the Quantization Intervals and obtain their optimal values by directly minimizing the task loss of the network. This Quantization-Interval-Learning (QIL) allows the quantized networks to maintain the accuracy of the full-precision (32-bit) networks at bit-widths as low as 4 bits and minimizes the accuracy degradation under further bit-width reduction (i.e., 3 and 2 bits). Moreover, our quantizer can be trained on a heterogeneous dataset and thus can be used to quantize pretrained networks without access to their training data. We demonstrate the effectiveness of our trainable quantizer on the ImageNet dataset with various network architectures such as ResNet-18, ResNet-34, and AlexNet, on which it outperforms existing methods and achieves state-of-the-art accuracy.

  • Learning to Quantize Deep Networks by Optimizing Quantization Intervals with Task Loss
    arXiv: Computer Vision and Pattern Recognition, 2018
    Co-Authors: Sangil Jung, Seohyung Lee, Youngjun Kwak, Jaejoon Han, Changyong Son, Jinwoo Son, Sung Ju Hwang, Chang Kyu Choi
    Abstract:

    Reducing the bit-widths of the activations and weights of deep networks makes them efficient to compute and store in memory, which is crucial for deployment on resource-limited devices such as mobile phones. However, decreasing bit-widths through Quantization generally yields drastically degraded accuracy. To tackle this problem, we propose to learn to quantize activations and weights via a trainable quantizer that transforms and discretizes them. Specifically, we parameterize the Quantization Intervals and obtain their optimal values by directly minimizing the task loss of the network. This Quantization-Interval-Learning (QIL) allows the quantized networks to maintain the accuracy of the full-precision (32-bit) networks at bit-widths as low as 4 bits and minimizes the accuracy degradation under further bit-width reduction (i.e., 3 and 2 bits). Moreover, our quantizer can be trained on a heterogeneous dataset and thus can be used to quantize pretrained networks without access to their training data. We demonstrate the effectiveness of our trainable quantizer on the ImageNet dataset with various network architectures such as ResNet-18, ResNet-34, and AlexNet, on which it outperforms existing methods and achieves state-of-the-art accuracy.

Tim Fingscheidt - One of the best experts on this subject based on the ideXlab platform.

  • An Improved ADPCM Decoder by Adaptively Controlled Quantization Interval Centroids
    European Signal Processing Conference, 2015
    Co-Authors: Sai Han, Tim Fingscheidt
    Abstract:

    Adaptive differential pulse code modulation (ADPCM) has been standardized in ITU-T Recommendations G.726 and G.722 and is widely used in IP and cordless telephony. Although adaptive Quantization and adaptive prediction are employed in ADPCM using a fixed scalar Quantization codebook/lookup table, residual correlation of the quantizer input samples is still observed. Exploiting this source correlation, it has been shown that scalar Quantization performance can be improved by a time-variant Quantization Interval centroid, leading to an adaptive codebook in the decoder. Using a standard ADPCM encoder and applying this principle to an ADPCM decoder with its own adaptive Quantization and prediction, the mean opinion score (MOS) of the perceptual evaluation of speech quality (PESQ) measure is shown to improve by about 0.15 points for low-bit-rate ADPCM under error-free transmission conditions.
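
    The centroid idea can be stated concretely: instead of a fixed codebook value, the decoder reconstructs with the conditional mean of the Quantization Interval under a source model it adapts over time. The sketch below is an illustration under assumed names and a hypothetical Laplacian source model, not the standardized G.726/G.722 algorithm.

      # Minimal sketch of a Quantization Interval centroid reconstruction
      # (illustrative; the Laplacian model and its scale are assumptions).
      import numpy as np

      def interval_centroid(a: float, b: float, pdf) -> float:
          """Centroid E[x | a <= x < b] of the Interval under the model pdf."""
          x = np.linspace(a, b, 1001)
          p = pdf(x)
          mass = p.sum()  # Riemann sum; the grid spacing cancels in the ratio
          if mass <= 0.0:
              return 0.5 * (a + b)  # degenerate model: fall back to the midpoint
          return float((x * p).sum() / mass)

      scale = 0.3  # hypothetical source-model parameter adapted by the decoder
      laplace_pdf = lambda x: np.exp(-np.abs(x) / scale) / (2.0 * scale)
      print(interval_centroid(0.25, 0.5, laplace_pdf))  # reconstruction value

    As the adapted scale changes over time, the reconstruction value shifts within the fixed Interval, which is what makes the decoder codebook adaptive.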

  • EUSIPCO - An Improved ADPCM Decoder by Adaptively Controlled Quantization Interval Centroids
    2015 23rd European Signal Processing Conference (EUSIPCO), 2015
    Co-Authors: Sai Han, Tim Fingscheidt
    Abstract:

    Adaptive differential pulse code modulation (ADPCM) has been standardized in ITU-T Recommendations G.726 and G.722 and is widely used in IP and cordless telephony. Although adaptive Quantization and adaptive prediction are employed in ADPCM using a fixed scalar Quantization codebook/lookup table, residual correlation of the quantizer input samples is still observed. Exploiting this source correlation, it has been shown that scalar Quantization performance can be improved by a time-variant Quantization Interval centroid, leading to an adaptive codebook in the decoder. Using a standard ADPCM encoder and applying this principle to an ADPCM decoder with its own adaptive Quantization and prediction, the mean opinion score (MOS) of the perceptual evaluation of speech quality (PESQ) measure is shown to improve by about 0.15 points for low-bit-rate ADPCM under error-free transmission conditions.

Kenji Sawada - One of the best experts on this subject based on the ideXlab platform.

  • Integrated Design of Filter and Interval in Dynamic Quantizer under Communication Rate Constraint
    IFAC Proceedings Volumes, 2011
    Co-Authors: Hiroshi Okajima, Kenji Sawada, Nobutomo Matsunaga
    Abstract:

    This paper proposes a design method for feedback-type dynamic quantizers under communication rate constraints. It is well known that feedback-type dynamic quantizers such as the Delta-Sigma modulator are effective for encoding high-resolution data into lower-resolution data. A dynamic quantizer consists of a filter and a static quantizer. When control must be performed under a communication rate constraint, the data size of the signal should be reduced appropriately by Quantization. In the field of control engineering, many dynamic quantizer design methods have been proposed in terms of the filter design. However, the design of the static quantizer part has not been considered, even though it is also important for satisfying the data-size constraint. The Quantization Interval of the static quantizer part is strongly related to the data size. In this paper, an integrated design method for the filter and the Quantization Interval is proposed under the communication rate constraint. The proposed method builds on our previous work, which designs the Interval so as to guarantee the communication rate constraint. The resulting quantizer output satisfies the communication rate constraint while giving good performance. The effectiveness of the proposed quantizer is shown by numerical examples.
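
    To make the filter-plus-static-quantizer structure concrete, the sketch below shows a first-order error-feedback (Delta-Sigma-style) quantizer. It illustrates the general feedback-type structure only, not the integrated design of this paper, and the signal names (u, v, d) are assumptions.

      # Minimal sketch of a feedback-type dynamic quantizer: a static uniform
      # quantizer with Quantization Interval d inside an error-feedback loop.
      import numpy as np

      def dynamic_quantize(u: np.ndarray, d: float) -> np.ndarray:
          e = 0.0                          # quantization error to feed back
          v = np.empty_like(u)
          for k, uk in enumerate(u):
              x = uk + e                   # filter part: here, add the previous error
              v[k] = d * np.round(x / d)   # static quantizer with Interval d
              e = x - v[k]                 # error fed back at the next step
          return v

      t = np.linspace(0.0, 1.0, 200)
      v = dynamic_quantize(np.sin(2.0 * np.pi * 3.0 * t), d=0.25)

    The filter shapes where the Quantization error appears, while the Interval d determines how many output levels, and hence how many bits per sample, the channel must carry; this coupling is what the integrated design exploits.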

  • CDC - Optimal Quantization Interval Design of Dynamic Quantizers Which Satisfy the Communication Rate Constraints
    49th IEEE Conference on Decision and Control (CDC), 2010
    Co-Authors: Hiroshi Okajima, Nobutomo Matsunaga, Kenji Sawada
    Abstract:

    This paper proposes a design method for dynamic quantizers in networked control systems. It is well known that dynamic quantizers, which consist of a filter and a static quantizer, are effective for compressing data while keeping the Quantization error of the control small. Many methods for designing dynamic quantizers have been proposed from the perspective of filter design. When control is performed over a network, the data size of the signal should be reduced appropriately by the quantizer because of the communication rate constraint. Since the Quantization Interval (the distance between two adjacent quantizer outputs) directly affects the data rate, its determination is an important matter in dynamic quantizer design. However, an explicit design method for the Quantization Interval has not been proposed in past research on dynamic quantizers. In this paper, we propose a design method for the smallest Quantization Interval that satisfies the communication rate constraints. The design method is derived as an LMI problem based on invariant-set analysis. With the proposed method, the Quantization Interval guarantees that the signals are quantized appropriately within the given data size. The effectiveness is illustrated by numerical examples.
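
    The arithmetic behind the rate constraint can be sketched as follows: if invariant-set analysis certifies a bound |x| <= M on the quantizer input, a uniform quantizer with Interval d needs about 2M/d + 1 output levels, so an R-bit-per-sample channel admits Intervals down to roughly 2M / (2^R - 1). The sketch below checks only this counting step; the names M, R, and d are illustrative assumptions, and the paper's LMI machinery for bounding M is omitted.

      # Minimal sketch of the level-counting arithmetic behind the rate
      # constraint (the invariant-set bound M is assumed given).
      import math

      def min_interval(M: float, R: int) -> float:
          """Smallest uniform Quantization Interval representable with R bits."""
          return 2.0 * M / (2 ** R - 1)

      def levels(M: float, d: float) -> int:
          """Output levels needed to cover [-M, M] with Interval d."""
          return math.floor(2.0 * M / d) + 1

      d = min_interval(M=1.5, R=4)        # 4 bits per sample -> 16 levels available
      assert levels(1.5, d) <= 2 ** 4     # the communication rate constraint holds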

  • Optimal Quantization Interval Design of Dynamic Quantizers Which Satisfy the Communication Rate Constraints
    Conference on Decision and Control, 2010
    Co-Authors: Hiroshi Okajima, Nobutomo Matsunaga, Kenji Sawada
    Abstract:

    This paper proposes a design method for dynamic quantizers in networked control systems. It is well known that dynamic quantizers, which consist of a filter and a static quantizer, are effective for compressing data while keeping the Quantization error of the control small. Many methods for designing dynamic quantizers have been proposed from the perspective of filter design. When control is performed over a network, the data size of the signal should be reduced appropriately by the quantizer because of the communication rate constraint. Since the Quantization Interval (the distance between two adjacent quantizer outputs) directly affects the data rate, its determination is an important matter in dynamic quantizer design. However, an explicit design method for the Quantization Interval has not been proposed in past research on dynamic quantizers. In this paper, we propose a design method for the smallest Quantization Interval that satisfies the communication rate constraints. The design method is derived as an LMI problem based on invariant-set analysis. With the proposed method, the Quantization Interval guarantees that the signals are quantized appropriately within the given data size. The effectiveness is illustrated by numerical examples.

Sangil Jung - One of the best experts on this subject based on the ideXlab platform.

  • CVPR - Learning to Quantize Deep Networks by Optimizing Quantization Intervals With Task Loss
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
    Co-Authors: Sangil Jung, Seohyung Lee, Youngjun Kwak, Jaejoon Han, Changyong Son, Jinwoo Son, Sung Ju Hwang, Chang Kyu Choi
    Abstract:

    Reducing the bit-widths of the activations and weights of deep networks makes them efficient to compute and store in memory, which is crucial for deployment on resource-limited devices such as mobile phones. However, decreasing bit-widths through Quantization generally yields drastically degraded accuracy. To tackle this problem, we propose to learn to quantize activations and weights via a trainable quantizer that transforms and discretizes them. Specifically, we parameterize the Quantization Intervals and obtain their optimal values by directly minimizing the task loss of the network. This Quantization-Interval-Learning (QIL) allows the quantized networks to maintain the accuracy of the full-precision (32-bit) networks at bit-widths as low as 4 bits and minimizes the accuracy degradation under further bit-width reduction (i.e., 3 and 2 bits). Moreover, our quantizer can be trained on a heterogeneous dataset and thus can be used to quantize pretrained networks without access to their training data. We demonstrate the effectiveness of our trainable quantizer on the ImageNet dataset with various network architectures such as ResNet-18, ResNet-34, and AlexNet, on which it outperforms existing methods and achieves state-of-the-art accuracy.

  • Joint Training of Low Precision Neural Network with Quantization Interval Parameters
    2018
    Co-Authors: Sangil Jung, Changyong Son, Seohyung Lee, Jinwoo Son, Youngjun Kwak, Jaejoon Han, Chang Kyu Choi
    Abstract:

    Reducing the bit-widths of the activations and weights of deep networks makes them efficient to compute and store in memory, which is crucial for deployment on resource-limited devices such as mobile phones. However, decreasing bit-widths through Quantization generally yields drastically degraded accuracy. To tackle this problem, we propose to learn to quantize activations and weights via a trainable quantizer that transforms and discretizes them. Specifically, we parameterize the Quantization Intervals and obtain their optimal values by directly minimizing the task loss of the network. This Quantization-Interval-Learning (QIL) allows the quantized networks to maintain the accuracy of the full-precision (32-bit) networks at bit-widths as low as 4 bits and minimizes the accuracy degradation under further bit-width reduction (i.e., 3 and 2 bits). Moreover, our quantizer can be trained on a heterogeneous dataset and thus can be used to quantize pretrained networks without access to their training data. We demonstrate the effectiveness of our trainable quantizer on the ImageNet dataset with various network architectures such as ResNet-18, ResNet-34, and AlexNet, on which it outperforms existing methods and achieves state-of-the-art accuracy.

  • Learning to Quantize Deep Networks by Optimizing Quantization Intervals with Task Loss
    arXiv: Computer Vision and Pattern Recognition, 2018
    Co-Authors: Sangil Jung, Seohyung Lee, Youngjun Kwak, Jaejoon Han, Changyong Son, Jinwoo Son, Sung Ju Hwang, Chang Kyu Choi
    Abstract:

    Reducing the bit-widths of the activations and weights of deep networks makes them efficient to compute and store in memory, which is crucial for deployment on resource-limited devices such as mobile phones. However, decreasing bit-widths through Quantization generally yields drastically degraded accuracy. To tackle this problem, we propose to learn to quantize activations and weights via a trainable quantizer that transforms and discretizes them. Specifically, we parameterize the Quantization Intervals and obtain their optimal values by directly minimizing the task loss of the network. This Quantization-Interval-Learning (QIL) allows the quantized networks to maintain the accuracy of the full-precision (32-bit) networks at bit-widths as low as 4 bits and minimizes the accuracy degradation under further bit-width reduction (i.e., 3 and 2 bits). Moreover, our quantizer can be trained on a heterogeneous dataset and thus can be used to quantize pretrained networks without access to their training data. We demonstrate the effectiveness of our trainable quantizer on the ImageNet dataset with various network architectures such as ResNet-18, ResNet-34, and AlexNet, on which it outperforms existing methods and achieves state-of-the-art accuracy.

Sai Han - One of the best experts on this subject based on the ideXlab platform.

  • An Improved ADPCM Decoder by Adaptively Controlled Quantization Interval Centroids
    European Signal Processing Conference, 2015
    Co-Authors: Sai Han, Tim Fingscheidt
    Abstract:

    Adaptive differential pulse code modulation (ADPCM) has been standardized in ITU-T Recommendations G.726 and G.722 and is widely used in IP and cordless telephony. Although adaptive Quantization and adaptive prediction are employed in ADPCM using a fixed scalar Quantization codebook/lookup table, residual correlation of the quantizer input samples is still observed. Exploiting this source correlation, it has been shown that scalar Quantization performance can be improved by a time-variant Quantization Interval centroid, leading to an adaptive codebook in the decoder. Using a standard ADPCM encoder and applying this principle to an ADPCM decoder with its own adaptive Quantization and prediction, the mean opinion score (MOS) of the perceptual evaluation of speech quality (PESQ) measure is shown to improve by about 0.15 points for low-bit-rate ADPCM under error-free transmission conditions.

  • EUSIPCO - An Improved ADPCM Decoder by Adaptively Controlled Quantization Interval Centroids
    2015 23rd European Signal Processing Conference (EUSIPCO), 2015
    Co-Authors: Sai Han, Tim Fingscheidt
    Abstract:

    Adaptive differential pulse code modulation (ADPCM) has been standardized in ITU-T Recommendations G.726 and G.722 and is widely used in IP and cordless telephony. Although adaptive Quantization and adaptive prediction are employed in ADPCM using a fixed scalar Quantization codebook/lookup table, residual correlation of the quantizer input samples is still observed. Exploiting this source correlation, it has been shown that scalar Quantization performance can be improved by a time-variant Quantization Interval centroid, leading to an adaptive codebook in the decoder. Using a standard ADPCM encoder and applying this principle to an ADPCM decoder with its own adaptive Quantization and prediction, the mean opinion score (MOS) of the perceptual evaluation of speech quality (PESQ) measure is shown to improve by about 0.15 points for low-bit-rate ADPCM under error-free transmission conditions.