Hashing Function

The Experts below are selected from a list of 34,737 Experts worldwide, ranked by the ideXlab platform

Zi Huang - One of the best experts on this subject based on the ideXlab platform.

  • Scalable Supervised Asymmetric Hashing With Semantic and Latent Factor Embedding
    IEEE Transactions on Image Processing, 2019
    Co-Authors: Zheng Zhang, Zi Huang, Wai Keung Wong, Ling Shao
    Abstract:

    Compact hash code learning has been widely applied to fast similarity search owing to its significantly reduced storage and highly efficient query speed. However, it is still challenging to learn discriminative binary codes that fully preserve the pairwise similarities embedded in high-dimensional real-valued features, such that promising performance can be guaranteed. To overcome this difficulty, in this paper we propose a novel scalable supervised asymmetric Hashing (SSAH) method, which skillfully approximates the full pairwise similarity matrix based on the maximum asymmetric inner product of two different non-binary embeddings. In particular, to comprehensively explore the semantic information of the data, the supervised label information and the refined latent feature embedding are simultaneously considered to construct a high-quality Hashing Function and boost the discriminability of the learned binary codes. Specifically, SSAH learns two distinct Hashing Functions in conjunction with minimizing the regression loss on the semantic label alignment and the encoding loss on the refined latent features. More importantly, instead of using only part of the similarity correlations of the data, the full pairwise similarity matrix is directly utilized to avoid information loss and performance degeneration, and its cumbersome computational complexity on the $n \times n$ matrix is handled dexterously during the optimization phase. Furthermore, an efficient alternating optimization scheme with guaranteed convergence is designed to address the resulting discrete optimization problem. Encouraging experimental results on diverse benchmark datasets demonstrate the superiority of the proposed SSAH method in comparison with many recently proposed Hashing algorithms.
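
    The approach described above — approximating the label-derived pairwise similarity matrix with the inner product of binary codes and a real-valued embedding — can be illustrated by a minimal sketch. The toy data, variable names, and the 1/r scaling are assumptions for illustration, not the authors' exact SSAH formulation.

      # Minimal sketch of asymmetric similarity approximation: the n x n label
      # similarity S is approximated by (1/r) * B @ U.T, where B holds binary
      # codes in {-1, +1} and U is a real-valued (non-binary) embedding.
      import numpy as np

      rng = np.random.default_rng(0)
      n, r, c = 6, 8, 3                                    # samples, code length, classes

      labels = rng.integers(0, c, size=n)                  # toy single-label supervision
      S = np.where(labels[:, None] == labels[None, :], 1.0, -1.0)   # +1 same class, -1 otherwise

      U = rng.standard_normal((n, r))                      # real-valued embedding
      B = np.sign(U + 0.1 * rng.standard_normal((n, r)))   # binary codes derived from U

      S_hat = (B @ U.T) / r                                # asymmetric inner-product approximation
      print("relative approximation error:", np.linalg.norm(S - S_hat) / np.linalg.norm(S))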

  • SADIH: Semantic-Aware DIscrete Hashing
    2019
    Co-Authors: Zheng Zhang, Guo-Sen Xie, Yang Li, Sheng Li, Zi Huang
    Abstract:

    Due to its low storage cost and fast query speed, Hashing has been widely recognized as an effective way to accomplish similarity search in large-scale multimedia retrieval applications. In particular, supervised Hashing has recently received considerable research attention by leveraging label information to preserve the pairwise similarities of data points in the Hamming space. However, two crucial bottlenecks remain: 1) learning with full pairwise similarity preservation is computationally unaffordable and does not scale to big data; 2) the available category information of the data is not well explored for learning discriminative hash Functions. To overcome these challenges, we propose a unified Semantic-Aware DIscrete Hashing (SADIH) framework, which aims to directly embed the transformed semantic information into both the asymmetric similarity approximation and discriminative Hashing Function learning. Specifically, a semantic-aware latent embedding is introduced to asymmetrically preserve the full pairwise similarities while skillfully handling the cumbersome n×n pairwise similarity matrix. Meanwhile, a semantic-aware autoencoder is developed to jointly preserve the data structures in the discriminative latent semantic space and perform data reconstruction. Moreover, an efficient alternating optimization algorithm is proposed to solve the resulting discrete optimization problem. Extensive experimental results on multiple large-scale datasets demonstrate that SADIH clearly outperforms state-of-the-art baselines with the additional benefit of lower computational costs. Comment: Accepted by the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19).
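
    The abstract stresses that the full n×n similarity matrix is handled without materializing it. One common way to do this — an illustrative assumption here, not necessarily the exact SADIH derivation — is to exploit a low-rank label factorization such as S = 2YY^T - 11^T, so that any product S·U costs O(ncr) instead of O(n^2 r):

      # Hypothetical illustration: if S = 2*Y@Y.T - 1 with Y an n x c one-hot
      # label matrix, then S @ U can be computed without forming S.
      import numpy as np

      rng = np.random.default_rng(1)
      n, c, r = 1000, 10, 32
      Y = np.eye(c)[rng.integers(0, c, size=n)]            # one-hot labels
      U = rng.standard_normal((n, r))                      # real-valued embedding

      # implicit product: O(n*c*r) time, no n x n matrix in memory
      SU_implicit = 2.0 * Y @ (Y.T @ U) - np.ones((n, 1)) * U.sum(axis=0)

      # explicit product, for verification only (O(n^2) memory)
      S = 2.0 * Y @ Y.T - 1.0
      assert np.allclose(S @ U, SU_implicit)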

  • SADIH: Semantic-Aware DIscrete Hashing
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Zheng Zhang, Yang Li, Sheng Li, Zi Huang
    Abstract:

    Due to its low storage cost and fast query speed, Hashing has been widely recognized as an effective way to accomplish similarity search in large-scale multimedia retrieval applications. In particular, supervised Hashing has recently received considerable research attention by leveraging label information to preserve the pairwise similarities of data points in the Hamming space. However, two crucial bottlenecks remain: 1) learning with full pairwise similarity preservation is computationally unaffordable and does not scale to big data; 2) the available category information of the data is not well explored for learning discriminative hash Functions. To overcome these challenges, we propose a unified Semantic-Aware DIscrete Hashing (SADIH) framework, which aims to directly embed the transformed semantic information into both the asymmetric similarity approximation and discriminative Hashing Function learning. Specifically, a semantic-aware latent embedding is introduced to asymmetrically preserve the full pairwise similarities while skillfully handling the cumbersome n×n pairwise similarity matrix. Meanwhile, a semantic-aware autoencoder is developed to jointly preserve the data structures in the discriminative latent semantic space and perform data reconstruction. Moreover, an efficient alternating optimization algorithm is proposed to solve the resulting discrete optimization problem. Extensive experimental results on multiple large-scale datasets demonstrate that SADIH clearly outperforms state-of-the-art baselines with the additional benefit of lower computational costs.

  • AAAI - SADIH: Semantic-Aware DIscrete Hashing
    Proceedings of the AAAI Conference on Artificial Intelligence, 2019
    Co-Authors: Zheng Zhang, Yang Li, Sheng Li, Zi Huang
    Abstract:

    Due to its low storage cost and fast query speed, Hashing has been widely recognized as an effective way to accomplish similarity search in large-scale multimedia retrieval applications. In particular, supervised Hashing has recently received considerable research attention by leveraging label information to preserve the pairwise similarities of data points in the Hamming space. However, two crucial bottlenecks remain: 1) learning with full pairwise similarity preservation is computationally unaffordable and does not scale to big data; 2) the available category information of the data is not well explored for learning discriminative hash Functions. To overcome these challenges, we propose a unified Semantic-Aware DIscrete Hashing (SADIH) framework, which aims to directly embed the transformed semantic information into both the asymmetric similarity approximation and discriminative Hashing Function learning. Specifically, a semantic-aware latent embedding is introduced to asymmetrically preserve the full pairwise similarities while skillfully handling the cumbersome n×n pairwise similarity matrix. Meanwhile, a semantic-aware autoencoder is developed to jointly preserve the data structures in the discriminative latent semantic space and perform data reconstruction. Moreover, an efficient alternating optimization algorithm is proposed to solve the resulting discrete optimization problem. Extensive experimental results on multiple large-scale datasets demonstrate that SADIH clearly outperforms state-of-the-art baselines with the additional benefit of lower computational costs.
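
    In the alternating optimization mentioned above, the binary-code update is the step that needs the most care. A generic pattern in discrete Hashing (shown here as an assumption, not SADIH's exact subproblem) is that, with the continuous variables fixed, the code subproblem reduces up to constants to maximizing trace(B^T Z) over B in {-1,+1}^{n x r}, whose closed-form maximizer is the elementwise sign of Z:

      # One hypothetical alternating step: maximize trace(B.T @ Z) over binary B.
      import numpy as np

      rng = np.random.default_rng(2)
      n, r = 8, 4
      Z = rng.standard_normal((n, r))     # term assembled from the fixed continuous variables

      B = np.sign(Z)                      # closed-form discrete update
      B[B == 0] = 1                       # break ties toward +1

      # sanity check: no random sign pattern beats the closed-form maximizer
      B_rand = np.sign(rng.standard_normal((n, r)))
      assert np.trace(B.T @ Z) >= np.trace(B_rand.T @ Z)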

Zheng Zhang - One of the best experts on this subject based on the ideXlab platform.

  • Scalable Supervised Asymmetric Hashing With Semantic and Latent Factor Embedding
    IEEE Transactions on Image Processing, 2019
    Co-Authors: Zheng Zhang, Zi Huang, Wai Keung Wong, Ling Shao
    Abstract:

    Compact hash code learning has been widely applied to fast similarity search owing to its significantly reduced storage and highly efficient query speed. However, it is still challenging to learn discriminative binary codes that fully preserve the pairwise similarities embedded in high-dimensional real-valued features, such that promising performance can be guaranteed. To overcome this difficulty, in this paper we propose a novel scalable supervised asymmetric Hashing (SSAH) method, which skillfully approximates the full pairwise similarity matrix based on the maximum asymmetric inner product of two different non-binary embeddings. In particular, to comprehensively explore the semantic information of the data, the supervised label information and the refined latent feature embedding are simultaneously considered to construct a high-quality Hashing Function and boost the discriminability of the learned binary codes. Specifically, SSAH learns two distinct Hashing Functions in conjunction with minimizing the regression loss on the semantic label alignment and the encoding loss on the refined latent features. More importantly, instead of using only part of the similarity correlations of the data, the full pairwise similarity matrix is directly utilized to avoid information loss and performance degeneration, and its cumbersome computational complexity on the $n \times n$ matrix is handled dexterously during the optimization phase. Furthermore, an efficient alternating optimization scheme with guaranteed convergence is designed to address the resulting discrete optimization problem. Encouraging experimental results on diverse benchmark datasets demonstrate the superiority of the proposed SSAH method in comparison with many recently proposed Hashing algorithms.

  • SADIH: Semantic-Aware DIscrete Hashing
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Zheng Zhang, Yang Li, Sheng Li, Zi Huang
    Abstract:

    Due to its low storage cost and fast query speed, Hashing has been widely recognized as an effective way to accomplish similarity search in large-scale multimedia retrieval applications. In particular, supervised Hashing has recently received considerable research attention by leveraging label information to preserve the pairwise similarities of data points in the Hamming space. However, two crucial bottlenecks remain: 1) learning with full pairwise similarity preservation is computationally unaffordable and does not scale to big data; 2) the available category information of the data is not well explored for learning discriminative hash Functions. To overcome these challenges, we propose a unified Semantic-Aware DIscrete Hashing (SADIH) framework, which aims to directly embed the transformed semantic information into both the asymmetric similarity approximation and discriminative Hashing Function learning. Specifically, a semantic-aware latent embedding is introduced to asymmetrically preserve the full pairwise similarities while skillfully handling the cumbersome n×n pairwise similarity matrix. Meanwhile, a semantic-aware autoencoder is developed to jointly preserve the data structures in the discriminative latent semantic space and perform data reconstruction. Moreover, an efficient alternating optimization algorithm is proposed to solve the resulting discrete optimization problem. Extensive experimental results on multiple large-scale datasets demonstrate that SADIH clearly outperforms state-of-the-art baselines with the additional benefit of lower computational costs.

  • AAAI - SADIH: Semantic-Aware DIscrete Hashing
    Proceedings of the AAAI Conference on Artificial Intelligence, 2019
    Co-Authors: Zheng Zhang, Yang Li, Sheng Li, Zi Huang
    Abstract:

    Due to its low storage cost and fast query speed, Hashing has been widely recognized as an effective way to accomplish similarity search in large-scale multimedia retrieval applications. In particular, supervised Hashing has recently received considerable research attention by leveraging label information to preserve the pairwise similarities of data points in the Hamming space. However, two crucial bottlenecks remain: 1) learning with full pairwise similarity preservation is computationally unaffordable and does not scale to big data; 2) the available category information of the data is not well explored for learning discriminative hash Functions. To overcome these challenges, we propose a unified Semantic-Aware DIscrete Hashing (SADIH) framework, which aims to directly embed the transformed semantic information into both the asymmetric similarity approximation and discriminative Hashing Function learning. Specifically, a semantic-aware latent embedding is introduced to asymmetrically preserve the full pairwise similarities while skillfully handling the cumbersome n×n pairwise similarity matrix. Meanwhile, a semantic-aware autoencoder is developed to jointly preserve the data structures in the discriminative latent semantic space and perform data reconstruction. Moreover, an efficient alternating optimization algorithm is proposed to solve the resulting discrete optimization problem. Extensive experimental results on multiple large-scale datasets demonstrate that SADIH clearly outperforms state-of-the-art baselines with the additional benefit of lower computational costs.

Sheng Li - One of the best experts on this subject based on the ideXlab platform.

  • SADIH: Semantic-Aware DIscrete Hashing
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Zheng Zhang, Yang Li, Sheng Li, Zi Huang
    Abstract:

    Due to its low storage cost and fast query speed, Hashing has been widely recognized as an effective way to accomplish similarity search in large-scale multimedia retrieval applications. In particular, supervised Hashing has recently received considerable research attention by leveraging label information to preserve the pairwise similarities of data points in the Hamming space. However, two crucial bottlenecks remain: 1) learning with full pairwise similarity preservation is computationally unaffordable and does not scale to big data; 2) the available category information of the data is not well explored for learning discriminative hash Functions. To overcome these challenges, we propose a unified Semantic-Aware DIscrete Hashing (SADIH) framework, which aims to directly embed the transformed semantic information into both the asymmetric similarity approximation and discriminative Hashing Function learning. Specifically, a semantic-aware latent embedding is introduced to asymmetrically preserve the full pairwise similarities while skillfully handling the cumbersome n×n pairwise similarity matrix. Meanwhile, a semantic-aware autoencoder is developed to jointly preserve the data structures in the discriminative latent semantic space and perform data reconstruction. Moreover, an efficient alternating optimization algorithm is proposed to solve the resulting discrete optimization problem. Extensive experimental results on multiple large-scale datasets demonstrate that SADIH clearly outperforms state-of-the-art baselines with the additional benefit of lower computational costs.

  • AAAI - SADIH: Semantic-Aware DIscrete Hashing
    Proceedings of the AAAI Conference on Artificial Intelligence, 2019
    Co-Authors: Zheng Zhang, Yang Li, Sheng Li, Zi Huang
    Abstract:

    Due to its low storage cost and fast query speed, Hashing has been widely recognized as an effective way to accomplish similarity search in large-scale multimedia retrieval applications. In particular, supervised Hashing has recently received considerable research attention by leveraging label information to preserve the pairwise similarities of data points in the Hamming space. However, two crucial bottlenecks remain: 1) learning with full pairwise similarity preservation is computationally unaffordable and does not scale to big data; 2) the available category information of the data is not well explored for learning discriminative hash Functions. To overcome these challenges, we propose a unified Semantic-Aware DIscrete Hashing (SADIH) framework, which aims to directly embed the transformed semantic information into both the asymmetric similarity approximation and discriminative Hashing Function learning. Specifically, a semantic-aware latent embedding is introduced to asymmetrically preserve the full pairwise similarities while skillfully handling the cumbersome n×n pairwise similarity matrix. Meanwhile, a semantic-aware autoencoder is developed to jointly preserve the data structures in the discriminative latent semantic space and perform data reconstruction. Moreover, an efficient alternating optimization algorithm is proposed to solve the resulting discrete optimization problem. Extensive experimental results on multiple large-scale datasets demonstrate that SADIH clearly outperforms state-of-the-art baselines with the additional benefit of lower computational costs.

Yang Li - One of the best experts on this subject based on the ideXlab platform.

  • SADIH: Semantic-Aware DIscrete Hashing
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Zheng Zhang, Yang Li, Sheng Li, Zi Huang
    Abstract:

    Due to its low storage cost and fast query speed, Hashing has been widely recognized as an effective way to accomplish similarity search in large-scale multimedia retrieval applications. In particular, supervised Hashing has recently received considerable research attention by leveraging label information to preserve the pairwise similarities of data points in the Hamming space. However, two crucial bottlenecks remain: 1) learning with full pairwise similarity preservation is computationally unaffordable and does not scale to big data; 2) the available category information of the data is not well explored for learning discriminative hash Functions. To overcome these challenges, we propose a unified Semantic-Aware DIscrete Hashing (SADIH) framework, which aims to directly embed the transformed semantic information into both the asymmetric similarity approximation and discriminative Hashing Function learning. Specifically, a semantic-aware latent embedding is introduced to asymmetrically preserve the full pairwise similarities while skillfully handling the cumbersome n×n pairwise similarity matrix. Meanwhile, a semantic-aware autoencoder is developed to jointly preserve the data structures in the discriminative latent semantic space and perform data reconstruction. Moreover, an efficient alternating optimization algorithm is proposed to solve the resulting discrete optimization problem. Extensive experimental results on multiple large-scale datasets demonstrate that SADIH clearly outperforms state-of-the-art baselines with the additional benefit of lower computational costs.

  • AAAI - SADIH: Semantic-Aware DIscrete Hashing
    Proceedings of the AAAI Conference on Artificial Intelligence, 2019
    Co-Authors: Zheng Zhang, Yang Li, Sheng Li, Zi Huang
    Abstract:

    Due to its low storage cost and fast query speed, Hashing has been widely recognized as an effective way to accomplish similarity search in large-scale multimedia retrieval applications. In particular, supervised Hashing has recently received considerable research attention by leveraging label information to preserve the pairwise similarities of data points in the Hamming space. However, two crucial bottlenecks remain: 1) learning with full pairwise similarity preservation is computationally unaffordable and does not scale to big data; 2) the available category information of the data is not well explored for learning discriminative hash Functions. To overcome these challenges, we propose a unified Semantic-Aware DIscrete Hashing (SADIH) framework, which aims to directly embed the transformed semantic information into both the asymmetric similarity approximation and discriminative Hashing Function learning. Specifically, a semantic-aware latent embedding is introduced to asymmetrically preserve the full pairwise similarities while skillfully handling the cumbersome n×n pairwise similarity matrix. Meanwhile, a semantic-aware autoencoder is developed to jointly preserve the data structures in the discriminative latent semantic space and perform data reconstruction. Moreover, an efficient alternating optimization algorithm is proposed to solve the resulting discrete optimization problem. Extensive experimental results on multiple large-scale datasets demonstrate that SADIH clearly outperforms state-of-the-art baselines with the additional benefit of lower computational costs.

Wai Keung Wong - One of the best experts on this subject based on the ideXlab platform.

  • Scalable Supervised Asymmetric Hashing With Semantic and Latent Factor Embedding
    IEEE Transactions on Image Processing, 2019
    Co-Authors: Zheng Zhang, Zi Huang, Wai Keung Wong, Ling Shao
    Abstract:

    Compact hash code learning has been widely applied to fast similarity search owing to its significantly reduced storage and highly efficient query speed. However, it is still challenging to learn discriminative binary codes that fully preserve the pairwise similarities embedded in high-dimensional real-valued features, such that promising performance can be guaranteed. To overcome this difficulty, in this paper we propose a novel scalable supervised asymmetric Hashing (SSAH) method, which skillfully approximates the full pairwise similarity matrix based on the maximum asymmetric inner product of two different non-binary embeddings. In particular, to comprehensively explore the semantic information of the data, the supervised label information and the refined latent feature embedding are simultaneously considered to construct a high-quality Hashing Function and boost the discriminability of the learned binary codes. Specifically, SSAH learns two distinct Hashing Functions in conjunction with minimizing the regression loss on the semantic label alignment and the encoding loss on the refined latent features. More importantly, instead of using only part of the similarity correlations of the data, the full pairwise similarity matrix is directly utilized to avoid information loss and performance degeneration, and its cumbersome computational complexity on the $n \times n$ matrix is handled dexterously during the optimization phase. Furthermore, an efficient alternating optimization scheme with guaranteed convergence is designed to address the resulting discrete optimization problem. Encouraging experimental results on diverse benchmark datasets demonstrate the superiority of the proposed SSAH method in comparison with many recently proposed Hashing algorithms.

  • Supervised discrete discriminant Hashing for image retrieval
    Pattern Recognition, 2018
    Co-Authors: Yan Cui, Jielin Jiang, Zhihui Lai, Wai Keung Wong
    Abstract:

    Most existing Hashing methods focus only on constructing the hash Function, rather than learning discrete hash codes directly; the hash Function learned in this way may therefore fail to produce ideal discrete hash codes. To enable the learned hash Function to achieve ideal approximated discrete hash codes, in this paper we propose a novel supervised discrete discriminant Hashing learning method, which learns discrete hash codes and the Hashing Function simultaneously. To make the learned discrete hash codes optimal for classification, the framework learns a robust similarity metric that simultaneously maximizes the similarity of same-class discrete hash codes and minimizes the similarity of different-class discrete hash codes, so that the discriminant information of the training data is incorporated into the learning framework. Meanwhile, the hash Functions are constructed to fit the directly learned binary hash codes. Experimental results clearly demonstrate that the proposed method achieves leading performance compared with state-of-the-art semi-supervised classification methods.
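
    The last step described above — constructing hash Functions that fit directly learned binary codes — is often realized with a simple regularized regression. The sketch below assumes a linear hash Function and uses random stand-in codes; it illustrates the fitting step only, not the paper's full joint learning procedure.

      # Minimal sketch: fit a linear hash Function W to pre-learned binary codes B
      # by ridge regression, then encode new samples with sign(x @ W).
      import numpy as np

      rng = np.random.default_rng(3)
      n, d, r, lam = 500, 64, 16, 1.0
      X = rng.standard_normal((n, d))                      # training features
      B = np.sign(rng.standard_normal((n, r)))             # stand-in for learned codes

      # W = argmin ||B - X W||^2 + lam * ||W||^2  (closed form)
      W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ B)

      def hash_fn(x):
          # encode samples into {-1, +1} codes with the learned linear map
          return np.sign(x @ W)

      query = rng.standard_normal((1, d))
      print(hash_fn(query))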