The Experts below are selected from a list of 309 Experts worldwide, ranked by the ideXlab platform.

Junru Shao - One of the best experts on this subject based on the ideXlab platform.

  • Generalized Residual Vector Quantization and Aggregating Tree for Large Scale Search
    IEEE Transactions on Multimedia, 2017
    Co-Authors: Shicong Liu, Junru Shao
    Abstract:

    Vector quantization is an essential tool for tasks involving large-scale data, for example large-scale similarity search, which is crucial for content-based information retrieval and analysis. In this paper, we propose a novel vector quantization framework that iteratively minimizes quantization error. First, we provide a detailed review of a relevant vector quantization method, residual vector quantization (RVQ). Next, we propose generalized residual vector quantization (GRVQ) to further improve over RVQ. Many vector quantization methods can be viewed as special cases of our proposed method. To enable GRVQ on billion-scale data, we introduce a non-exhaustive search scheme named the aggregating tree (A-Tree) for high-dimensional data, which uses GRVQ encodings to build a radix tree and performs nearest-neighbor search by beam search. To search accurately and efficiently, VQ encodings should satisfy a locally aggregating encoding criterion: for any node of the corresponding A-Tree, neighboring vectors should aggregate in few subtrees so that beam search remains efficient. We show that the proposed GRVQ encodings best satisfy this criterion, and that the joint use of GRVQ and A-Tree performs significantly better on billion-scale datasets. Our methods are validated on several standard benchmark datasets. Experimental results and empirical analysis show the superior efficiency and effectiveness of our proposed methods compared to the state of the art for large-scale search.
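
    GRVQ builds on plain residual vector quantization, in which each stage quantizes what the previous stages failed to explain and a vector is reconstructed as the sum of the selected codewords. The sketch below shows that RVQ baseline; the stage count, codebook size, and the use of k-means are illustrative assumptions, not the authors' exact training procedure.

        # Minimal RVQ sketch: train one codebook per stage on residuals,
        # encode greedily, decode by summing the chosen codewords.
        import numpy as np
        from sklearn.cluster import KMeans

        def train_rvq(X, n_stages=4, k=256):
            codebooks, residual = [], X.copy()
            for _ in range(n_stages):
                km = KMeans(n_clusters=k, n_init=4).fit(residual)
                codebooks.append(km.cluster_centers_)
                residual = residual - km.cluster_centers_[km.predict(residual)]
            return codebooks

        def encode_rvq(x, codebooks):
            codes, residual = [], x.copy()
            for C in codebooks:
                i = int(np.argmin(np.sum((C - residual) ** 2, axis=1)))
                codes.append(i)
                residual = residual - C[i]
            return codes

        def decode_rvq(codes, codebooks):
            # a vector is reconstructed as the sum of the selected codewords
            return sum(C[i] for i, C in zip(codes, codebooks))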

  • Generalized residual Vector Quantization for large scale data
    arXiv: Multimedia, 2016
    Co-Authors: Shicong Liu, Junru Shao
    Abstract:

    Vector quantization is an essential tool for tasks involving large-scale data, for example large-scale similarity search, which is crucial for content-based information retrieval and analysis. In this paper, we propose a novel vector quantization framework that iteratively minimizes quantization error. First, we provide a detailed review of a relevant vector quantization method, residual vector quantization (RVQ). Next, we propose generalized residual vector quantization (GRVQ) to further improve over RVQ. Many vector quantization methods can be viewed as special cases of our proposed framework. We evaluate GRVQ on several large-scale benchmark datasets for large-scale search, classification, and object retrieval, comparing GRVQ with existing methods in detail. Extensive experiments demonstrate that our GRVQ framework substantially outperforms existing methods in terms of quantization accuracy and computational efficiency.
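
    The key idea in the abstract is iterative minimization of quantization error. Below is a hedged sketch of one such refinement step, assuming a k-means re-fit of a single stage while all other stages stay frozen; the paper specifies its own update rules.

        # Re-fit one stage's codebook and assignments on whatever the other
        # stages fail to explain; sweeping this over all stages for a few
        # rounds typically lowers the total quantization error.
        import numpy as np
        from sklearn.cluster import KMeans

        def refine_stage(X, codebooks, codes, stage, k=256):
            # codes is an (N, n_stages) integer array of per-stage indices
            others = sum(C[codes[:, s]] for s, C in enumerate(codebooks) if s != stage)
            target = X - others
            km = KMeans(n_clusters=k, n_init=2).fit(target)
            codebooks[stage] = km.cluster_centers_
            codes[:, stage] = km.predict(target)
            return codebooks, codes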

  • Generalized residual Vector Quantization for large scale data
    2016 IEEE International Conference on Multimedia and Expo (ICME), 2016
    Co-Authors: Shicong Liu, Junru Shao
    Abstract:

    Vector quantization is an essential tool for tasks involving large-scale data, for example large-scale similarity search, which is crucial for content-based information retrieval and analysis. In this paper, we propose a novel vector quantization framework that iteratively minimizes quantization error. First, we provide a detailed review of a relevant vector quantization method, residual vector quantization (RVQ). Next, we propose generalized residual vector quantization (GRVQ) to further improve over RVQ. Many vector quantization methods can be viewed as special cases of our proposed framework. We evaluate GRVQ on several large-scale benchmark datasets for large-scale search, classification, and object retrieval, comparing GRVQ with existing methods in detail. Extensive experiments demonstrate that our GRVQ framework substantially outperforms existing methods in terms of quantization accuracy and computational efficiency.

Jianping Pan - One of the best experts on this subject based on the ideXlab platform.

  • Vector Quantization-lattice Vector Quantization of speech LPC coefficients
    Proceedings of ICASSP '94, IEEE International Conference on Acoustics, Speech, and Signal Processing, 1994
    Co-Authors: Jianping Pan, Thomas R. Fischer
    Abstract:

    Two-stage vector quantization-lattice vector quantization (VQ-LVQ) is used to encode the speech line spectrum pair (LSP) parameters. VQ-LVQ has lower implementation complexity and requires less memory than split vector quantization (SVQ) and multi-stage vector quantization (MSVQ) with unstructured codebooks. Based on the authors' speech database and the same spectral measure, VQ-LVQ saves about 3 bits/frame compared to SVQ and about 2 to 3 bits/frame compared to unstructured-codebook MSVQ, depending on the number of stages and the survivor-path search complexity. The paper also discusses some factors influencing the evaluation of LSP encoding performance.
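
    The "spectral measure" typically used to compare LSP coding schemes is the spectral distortion (SD) in dB between the original and quantized LPC spectral envelopes, with an average near 1 dB as the usual transparency target. A generic sketch of that measure follows (not necessarily the paper's exact variant).

        # RMS difference, over frequency, of two LPC log power spectra.
        import numpy as np
        from scipy.signal import freqz

        def spectral_distortion_db(a_orig, a_quant, n_freq=512):
            # a_orig / a_quant are LPC polynomials [1, a1, ..., ap]
            _, h1 = freqz(1.0, a_orig, worN=n_freq)
            _, h2 = freqz(1.0, a_quant, worN=n_freq)
            diff = 20.0 * np.log10(np.abs(h1)) - 20.0 * np.log10(np.abs(h2))
            return float(np.sqrt(np.mean(diff ** 2)))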

  • Two-stage Vector Quantization-pyramidal lattice Vector Quantization and application to speech LSP coding
    1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings, 1996
    Co-Authors: Jianping Pan
    Abstract:

    Motivated by the two-stage vector quantization-(spherical) lattice vector quantization introduced by Pan and Fischer (1995), two-stage vector quantization-pyramidal lattice vector quantization (VQ-PLVQ) is proposed, which has a lower computational requirement and slightly superior encoding performance. The complexity of VQ-PLVQ is further reduced by introducing tree structures into the first-stage VQ. It is found that the efficiency of the tree search depends on the source distribution. Tree-structured VQ-PLVQ performs very close to VQ-PLVQ for sources with memory. These two quantization schemes are then applied to the encoding of speech line spectrum pair (LSP) parameters. Tree-structured VQ-PLVQ is an attractive approach, achieving the 1 dB spectral distortion target with a significant reduction in implementation complexity.
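
    The pyramidal codebook used by the second stage consists of integer points with a fixed L1 norm. Below is a hedged sketch of quantizing a residual onto such a pyramid by scale-round-repair; Fischer's pyramid VQ specifies exact point enumeration and indexing, so this greedy projection is a simplification and assumes a nonzero input.

        import numpy as np

        def pyramid_quantize(x, K):
            # nearest-ish point of the pyramid {q in Z^n : sum(|q_i|) == K}
            y = x * (K / np.sum(np.abs(x)))   # project onto the L1 sphere
            q = np.rint(y)
            err = np.abs(y) - np.abs(q)       # > 0 where magnitude was rounded down
            gap = int(K - np.sum(np.abs(q)))  # how far the L1 sum is off
            order = np.argsort(-err) if gap > 0 else np.argsort(err)
            step = 1.0 if gap > 0 else -1.0
            for i in order[:abs(gap)]:
                s = np.sign(y[i]) if y[i] != 0 else 1.0
                q[i] += step * s              # grow or shrink |q_i| by one
            return q.astype(int)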

Shicong Liu - One of the best experts on this subject based on the ideXlab platform.

  • Generalized Residual Vector Quantization and Aggregating Tree for Large Scale Search
    IEEE Transactions on Multimedia, 2017
    Co-Authors: Shicong Liu, Junru Shao
    Abstract:

    Vector quantization is an essential tool for tasks involving large-scale data, for example large-scale similarity search, which is crucial for content-based information retrieval and analysis. In this paper, we propose a novel vector quantization framework that iteratively minimizes quantization error. First, we provide a detailed review of a relevant vector quantization method, residual vector quantization (RVQ). Next, we propose generalized residual vector quantization (GRVQ) to further improve over RVQ. Many vector quantization methods can be viewed as special cases of our proposed method. To enable GRVQ on billion-scale data, we introduce a non-exhaustive search scheme named the aggregating tree (A-Tree) for high-dimensional data, which uses GRVQ encodings to build a radix tree and performs nearest-neighbor search by beam search. To search accurately and efficiently, VQ encodings should satisfy a locally aggregating encoding criterion: for any node of the corresponding A-Tree, neighboring vectors should aggregate in few subtrees so that beam search remains efficient. We show that the proposed GRVQ encodings best satisfy this criterion, and that the joint use of GRVQ and A-Tree performs significantly better on billion-scale datasets. Our methods are validated on several standard benchmark datasets. Experimental results and empirical analysis show the superior efficiency and effectiveness of our proposed methods compared to the state of the art for large-scale search.
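
    The A-Tree performs non-exhaustive search by walking a radix tree whose levels correspond to successive VQ codes, expanding only a small beam of promising subtrees at each level. A hedged sketch of that traversal follows, assuming a dictionary-based tree where an internal prefix maps to its non-empty child codes and a full-length prefix maps to the database ids stored in that leaf; the paper's node layout and distance bound differ in detail.

        # Beam search over VQ code prefixes: keep the `beam` partial
        # reconstructions closest to the query at every tree level.
        import heapq
        import numpy as np

        def beam_search(query, codebooks, tree, beam=8):
            frontier = [(0.0, ())]                    # (score, code prefix)
            for _ in codebooks:                       # one tree level per stage
                children = []
                for _, prefix in frontier:
                    for c in tree.get(prefix, ()):    # only non-empty subtrees
                        ext = prefix + (c,)
                        recon = sum(cb[p] for cb, p in zip(codebooks, ext))
                        children.append((float(np.sum((query - recon) ** 2)), ext))
                frontier = heapq.nsmallest(beam, children)
            # candidate neighbors live in the leaves of the surviving prefixes
            return [v for _, prefix in frontier for v in tree.get(prefix, ())]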

  • Generalized residual Vector Quantization for large scale data
    arXiv: Multimedia, 2016
    Co-Authors: Shicong Liu, Junru Shao
    Abstract:

    Vector quantization is an essential tool for tasks involving large-scale data, for example large-scale similarity search, which is crucial for content-based information retrieval and analysis. In this paper, we propose a novel vector quantization framework that iteratively minimizes quantization error. First, we provide a detailed review of a relevant vector quantization method, residual vector quantization (RVQ). Next, we propose generalized residual vector quantization (GRVQ) to further improve over RVQ. Many vector quantization methods can be viewed as special cases of our proposed framework. We evaluate GRVQ on several large-scale benchmark datasets for large-scale search, classification, and object retrieval, comparing GRVQ with existing methods in detail. Extensive experiments demonstrate that our GRVQ framework substantially outperforms existing methods in terms of quantization accuracy and computational efficiency.

  • Generalized residual Vector Quantization for large scale data
    2016 IEEE International Conference on Multimedia and Expo (ICME), 2016
    Co-Authors: Shicong Liu, Junru Shao
    Abstract:

    Vector quantization is an essential tool for tasks involving large-scale data, for example large-scale similarity search, which is crucial for content-based information retrieval and analysis. In this paper, we propose a novel vector quantization framework that iteratively minimizes quantization error. First, we provide a detailed review of a relevant vector quantization method, residual vector quantization (RVQ). Next, we propose generalized residual vector quantization (GRVQ) to further improve over RVQ. Many vector quantization methods can be viewed as special cases of our proposed framework. We evaluate GRVQ on several large-scale benchmark datasets for large-scale search, classification, and object retrieval, comparing GRVQ with existing methods in detail. Extensive experiments demonstrate that our GRVQ framework substantially outperforms existing methods in terms of quantization accuracy and computational efficiency.

T R Fischer - One of the best experts on this subject based on the ideXlab platform.

  • Two-stage Vector Quantization-lattice Vector Quantization
    International Symposium on Information Theory, 1994
    Co-Authors: T R Fischer
    Abstract:

    A two-stage vector quantizer is introduced that uses an unstructured first-stage codebook and a second-stage lattice codebook. Jointly optimum two-stage encoding is accomplished by exhaustive search of the parent codebook of the two-stage product code. Owing to the relative ease of lattice vector quantization, optimum encoding is feasible for moderate-to-large encoding rates and vector dimensions, provided the first-stage codebook size is kept reasonable. For memoryless Gaussian and Laplacian sources, encoding rates of 2 to 3 b/sample, and vector dimensions of 8 to 35, the signal-to-noise ratio performance is comparable or superior to previously reported equivalent-delay encoding results. For Gaussian sources with memory, the effectiveness of the encoding method depends on the feasibility of using a first-stage vector quantizer codebook large enough to exploit most of the source memory.
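
    The joint optimum two-stage encoding described above tries every first-stage codeword, lattice-quantizes each residual, and keeps the pair with the least total error. Below is a hedged sketch using the standard Conway-Sloane nearest-point rule for the D_n lattice; using the unbounded D_n lattice (rather than a rate-constrained, truncated codebook) is a simplification.

        import numpy as np

        def nearest_dn(x):
            # nearest point of D_n = {z in Z^n : sum(z) is even}
            f = np.rint(x)
            if int(np.sum(f)) % 2 != 0:
                i = int(np.argmax(np.abs(x - f)))     # worst-rounded coordinate
                f[i] += 1.0 if x[i] > f[i] else -1.0  # round it the other way
            return f

        def encode_vq_lvq(x, first_stage_codebook):
            best = None
            for i, c in enumerate(first_stage_codebook):
                q = nearest_dn(x - c)                 # second-stage lattice point
                err = float(np.sum((x - c - q) ** 2))
                if best is None or err < best[0]:
                    best = (err, i, q)
            return best[1], best[2]                   # stage-1 index, lattice point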

J. Pan - One of the best experts on this subject based on the ideXlab platform.

  • Extension of two-stage Vector Quantization-lattice Vector Quantization
    IEEE Transactions on Communications, 1997
    Co-Authors: J. Pan
    Abstract:

    This paper extends the two-stage vector quantization-(spherical) lattice vector quantization (VQ-(S)LVQ) recently introduced by Pan and Fischer (see IEEE Trans. Inform. Theory, vol. 41, p. 155, 1995). First, following high-resolution quantization theory, generalized vector quantization-lattice vector quantization (G-VQ-LVQ) is formulated to remove the spherical-boundary constraint on the second-stage lattice vector quantization (LVQ), opening the way to improving this kind of two-stage unstructured/structured quantizer with more efficient LVQ. Second, within the G-VQ-LVQ family, vector quantization-pyramidal lattice vector quantization (VQ-PLVQ) is developed, which is comparable or slightly superior to VQ-(S)LVQ in performance but has much lower complexity. Simulation results show that, for memoryless sources, VQ-PLVQ achieves rate-distortion performance among the best fixed-rate quantization results we found in the literature; VQ-PLVQ is therefore an attractive alternative to VQ-(S)LVQ in practice. Third, transform VQ-PLVQ (T-VQ-PLVQ) is proposed for sources with memory. For encoding 16-D vectors of the Gauss-Markov source, T-VQ-PLVQ has an advantage of close to 1.0 dB over VQ-PLVQ and is about 0.5 dB better than VQ-(S)LVQ.
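
    T-VQ-PLVQ's gain on the Gauss-Markov source comes from transform decorrelation: an orthonormal transform concentrates the energy of a correlated source into a few coefficients, which the quantizer then exploits. A hedged illustration of that energy compaction with a DCT follows; the 0.9 correlation and 16-D blocks are illustrative choices, and the paper's transform and bit allocation may differ.

        import numpy as np
        from scipy.fft import dct

        rng = np.random.default_rng(0)
        rho, n, dim = 0.9, 100_000, 16
        x = np.zeros(n)
        for t in range(1, n):              # AR(1): x_t = rho * x_{t-1} + w_t
            x[t] = rho * x[t - 1] + rng.standard_normal()
        blocks = x[: n - n % dim].reshape(-1, dim)
        coeffs = dct(blocks, axis=1, norm="ortho")
        var = coeffs.var(axis=0)
        print("fraction of energy in top 4 of 16 DCT coefficients:",
              np.sort(var)[::-1][:4].sum() / var.sum())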