Offline Learning

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 360 Experts worldwide ranked by ideXlab platform

Wen Gao - One of the best experts on this subject based on the ideXlab platform.

  • Learning compact visual descriptors for low bit rate mobile landmark search
    Ai Magazine, 2013
    Co-Authors: Lingyu Duan, Jie Chen, Tiejun Huang, Wen Gao
    Abstract:

    With the ever-growing computational power of mobile devices, mobile visual search has undergone an evolution in techniques and applications. A significant trend is low bit rate visual search, where compact visual descriptors are extracted directly on a mobile device and delivered as queries, rather than raw images, to reduce query transmission latency. In this article, we introduce our work on low bit rate mobile landmark search, in which a compact yet discriminative landmark image descriptor is extracted by using location context such as GPS, crowd-sourced WLAN hotspots, and cell tower locations. The compactness originates from the bag-of-words image representation, with Offline Learning from geotagged photos on online photo-sharing websites including Flickr and Panoramio. The Learning process involves segmenting the landmark photo collection into discrete geographical regions using a Gaussian mixture model, and then boosting a ranking-sensitive vocabulary within each region, with an “entropy”-based descriptor compactness feedback to refine both phases iteratively. In online search, when entering a geographical region, the codebook in a mobile device is downstream adapted to generate extremely compact descriptors with promising discriminative ability. We have deployed landmark search apps on both HTC and iPhone mobile phones, working over a database of million-scale images in typical areas such as Beijing, New York, and Barcelona. Our descriptor outperforms alternative compact descriptors (Chen et al. 2009; Chen et al. 2010; Chandrasekhar et al. 2009a; Chandrasekhar et al. 2009b) by significant margins. Beyond landmark search, this article also summarizes the MPEG standardization progress of the compact descriptor for visual search (CDVS) (Yuri et al. 2010; Yuri et al. 2011) towards application interoperability.
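The geographic segmentation step described in this abstract can be sketched as follows. This is a minimal illustration using synthetic coordinates and scikit-learn's `GaussianMixture`, not the authors' actual data, features, or implementation:

```python
# Sketch of the offline region-segmentation step: cluster geotagged photo
# coordinates with a Gaussian mixture model, so each geographic region can
# later train its own compact codebook. All coordinates and the component
# count are invented toy values.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic "geotags" (lat, lon): two well-separated landmark clusters.
region_a = rng.normal(loc=[39.90, 116.40], scale=0.01, size=(200, 2))  # Beijing-like
region_b = rng.normal(loc=[41.38, 2.17], scale=0.01, size=(200, 2))    # Barcelona-like
coords = np.vstack([region_a, region_b])

gmm = GaussianMixture(n_components=2, random_state=0).fit(coords)
labels = gmm.predict(coords)

# Photos in the same geographic area share a mixture component; the number
# of distinct components found is the number of regions to build codebooks for.
n_regions = len(set(labels))
```

In the paper this clustering is further constrained by a prior derived from the entropy-based compactness feedback; the sketch above shows only the plain mixture-model step.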

  • Learning multiple codebooks for low bit rate mobile visual search
    International Conference on Acoustics Speech and Signal Processing, 2012
    Co-Authors: Jie Lin, Lingyu Duan, Jie Chen, Siwei Luo, Wen Gao
    Abstract:

    Compressing a query image's signature via vocabulary coding is an effective approach to low bit rate mobile visual search. State-of-the-art methods concentrate on Offline Learning of a codebook from an initial large vocabulary. Over a large heterogeneous reference database, Learning a single codebook may not suffice to maximally remove redundant codewords for a vocabulary-based compact descriptor. In this paper, we propose to learn multiple codebooks (m-Codebooks) for extreme compression of image signatures. A query-specific codebook (q-Codebook) is generated online at both the client and server sides by adaptively weighting the offline-learned multiple codebooks. The q-Codebook is subsequently employed to quantize the query image, producing compact, discriminative, and scalable descriptors. As the q-Codebook may be simultaneously generated at both sides without transmitting the entire vocabulary, only a small overhead (e.g., codebook ID and codeword 0/1 index) is incurred to reconstruct the query signature at the server end. To realize the m-Codebooks and q-Codebook, we adopt a Bi-layer Sparse Coding method to learn the sparse relationships of codewords vs. codebooks as well as codebooks vs. query images via l1 regularization. Experiments on benchmark datasets have demonstrated the extremely small descriptor's superior performance in image retrieval.
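The adaptive codebook weighting can be illustrated with l1 regularization. This toy sketch uses scikit-learn's `Lasso` on synthetic data and summarizes each codebook by a single mean vector, which is a strong simplification of the paper's Bi-layer Sparse Coding; all dimensions and values are invented:

```python
# Hypothetical sketch of the q-Codebook idea: express a query signature as
# a sparse (l1-regularized) combination of several offline-learned
# codebooks, then keep only the codebooks that receive nonzero weight.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
dim, n_codebooks = 32, 4

# Each "codebook" is summarized here as one representative vector
# (a simplification -- the paper works at the codeword level too).
codebook_means = rng.normal(size=(n_codebooks, dim))

# A query that resembles codebook 2, plus a little noise.
query = 0.9 * codebook_means[2] + 0.05 * rng.normal(size=dim)

# l1 regularization drives the weights of irrelevant codebooks to zero.
lasso = Lasso(alpha=0.1, positive=True, fit_intercept=False)
lasso.fit(codebook_means.T, query)   # columns = codebooks
weights = lasso.coef_

selected = [i for i, w in enumerate(weights) if w > 1e-6]
```

Because the same weighting can be recomputed at the server, only the selected codebook IDs (and codeword indices) would need to be transmitted, which is the source of the small overhead the abstract mentions.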

  • Learning compact visual descriptor for low bit rate mobile landmark search
    International Joint Conference on Artificial Intelligence, 2011
    Co-Authors: Lingyu Duan, Jie Chen, Hongxun Yao, Tiejun Huang, Wen Gao
    Abstract:

    In this paper, we propose to extract a compact yet discriminative visual descriptor directly on the mobile device, which tackles the wireless query transmission latency in mobile landmark search. This descriptor originates from Offline Learning of the location contexts of geo-tagged Web photos from both Flickr and Panoramio in two phases: First, we segment the landmark photo collections into discrete geographical regions using a Gaussian Mixture Model [Stauffer et al., 2000]. Second, a ranking-sensitive vocabulary boosting is introduced to learn a compact codebook within each region. To tackle the locally optimal descriptor Learning caused by imprecise geographical segmentation, we further iterate the above phases, incorporating the feedback of an "entropy"-based descriptor compactness into a prior distribution to constrain the Gaussian mixture modeling. Consequently, when entering a specific geographical region, the codebook in the mobile device is downstream adapted, which ensures efficient extraction of compact descriptors, their low bit rate transmission, as well as promising discrimination ability. We deployed descriptors on both HTC and iPhone mobile phones, testing landmark search over one million images in typical areas such as Beijing, New York, and Barcelona. Our descriptor outperforms alternative compact descriptors [Chen et al., 2009; Chen et al., 2010; Chandrasekhar et al., 2009a; Chandrasekhar et al., 2009b] by a large margin.
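The "entropy"-based compactness feedback amounts to measuring how concentrated codeword usage is: a peaked usage histogram (low entropy) suggests the vocabulary can be pruned further, while a flat one (high entropy) suggests every codeword carries information. A minimal sketch, with invented histogram values:

```python
# Shannon entropy of a codeword-usage histogram, as a rough compactness
# signal: low entropy -> few codewords dominate -> more pruning possible.
import numpy as np

def codeword_entropy(counts):
    """Entropy in bits of a codeword-usage histogram (zeros ignored)."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0]
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

peaked = codeword_entropy([97, 1, 1, 1])   # a few codewords dominate
flat = codeword_entropy([25, 25, 25, 25])  # uniform usage
```

In the paper this signal is folded into a prior on the Gaussian mixture segmentation; the sketch only shows the entropy computation itself.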

Lingyu Duan - One of the best experts on this subject based on the ideXlab platform.

  • Learning compact visual descriptors for low bit rate mobile landmark search
    Ai Magazine, 2013
    Co-Authors: Lingyu Duan, Jie Chen, Tiejun Huang, Wen Gao
    Abstract:

    With the ever-growing computational power of mobile devices, mobile visual search has undergone an evolution in techniques and applications. A significant trend is low bit rate visual search, where compact visual descriptors are extracted directly on a mobile device and delivered as queries, rather than raw images, to reduce query transmission latency. In this article, we introduce our work on low bit rate mobile landmark search, in which a compact yet discriminative landmark image descriptor is extracted by using location context such as GPS, crowd-sourced WLAN hotspots, and cell tower locations. The compactness originates from the bag-of-words image representation, with Offline Learning from geotagged photos on online photo-sharing websites including Flickr and Panoramio. The Learning process involves segmenting the landmark photo collection into discrete geographical regions using a Gaussian mixture model, and then boosting a ranking-sensitive vocabulary within each region, with an “entropy”-based descriptor compactness feedback to refine both phases iteratively. In online search, when entering a geographical region, the codebook in a mobile device is downstream adapted to generate extremely compact descriptors with promising discriminative ability. We have deployed landmark search apps on both HTC and iPhone mobile phones, working over a database of million-scale images in typical areas such as Beijing, New York, and Barcelona. Our descriptor outperforms alternative compact descriptors (Chen et al. 2009; Chen et al. 2010; Chandrasekhar et al. 2009a; Chandrasekhar et al. 2009b) by significant margins. Beyond landmark search, this article also summarizes the MPEG standardization progress of the compact descriptor for visual search (CDVS) (Yuri et al. 2010; Yuri et al. 2011) towards application interoperability.

  • Learning multiple codebooks for low bit rate mobile visual search
    International Conference on Acoustics Speech and Signal Processing, 2012
    Co-Authors: Jie Lin, Lingyu Duan, Jie Chen, Siwei Luo, Wen Gao
    Abstract:

    Compressing a query image's signature via vocabulary coding is an effective approach to low bit rate mobile visual search. State-of-the-art methods concentrate on Offline Learning of a codebook from an initial large vocabulary. Over a large heterogeneous reference database, Learning a single codebook may not suffice to maximally remove redundant codewords for a vocabulary-based compact descriptor. In this paper, we propose to learn multiple codebooks (m-Codebooks) for extreme compression of image signatures. A query-specific codebook (q-Codebook) is generated online at both the client and server sides by adaptively weighting the offline-learned multiple codebooks. The q-Codebook is subsequently employed to quantize the query image, producing compact, discriminative, and scalable descriptors. As the q-Codebook may be simultaneously generated at both sides without transmitting the entire vocabulary, only a small overhead (e.g., codebook ID and codeword 0/1 index) is incurred to reconstruct the query signature at the server end. To realize the m-Codebooks and q-Codebook, we adopt a Bi-layer Sparse Coding method to learn the sparse relationships of codewords vs. codebooks as well as codebooks vs. query images via l1 regularization. Experiments on benchmark datasets have demonstrated the extremely small descriptor's superior performance in image retrieval.

  • Learning compact visual descriptor for low bit rate mobile landmark search
    International Joint Conference on Artificial Intelligence, 2011
    Co-Authors: Lingyu Duan, Jie Chen, Hongxun Yao, Tiejun Huang, Wen Gao
    Abstract:

    In this paper, we propose to extract a compact yet discriminative visual descriptor directly on the mobile device, which tackles the wireless query transmission latency in mobile landmark search. This descriptor originates from Offline Learning of the location contexts of geo-tagged Web photos from both Flickr and Panoramio in two phases: First, we segment the landmark photo collections into discrete geographical regions using a Gaussian Mixture Model [Stauffer et al., 2000]. Second, a ranking-sensitive vocabulary boosting is introduced to learn a compact codebook within each region. To tackle the locally optimal descriptor Learning caused by imprecise geographical segmentation, we further iterate the above phases, incorporating the feedback of an "entropy"-based descriptor compactness into a prior distribution to constrain the Gaussian mixture modeling. Consequently, when entering a specific geographical region, the codebook in the mobile device is downstream adapted, which ensures efficient extraction of compact descriptors, their low bit rate transmission, as well as promising discrimination ability. We deployed descriptors on both HTC and iPhone mobile phones, testing landmark search over one million images in typical areas such as Beijing, New York, and Barcelona. Our descriptor outperforms alternative compact descriptors [Chen et al., 2009; Chen et al., 2010; Chandrasekhar et al., 2009a; Chandrasekhar et al., 2009b] by a large margin.

Jie Chen - One of the best experts on this subject based on the ideXlab platform.

  • Learning compact visual descriptors for low bit rate mobile landmark search
    Ai Magazine, 2013
    Co-Authors: Lingyu Duan, Jie Chen, Tiejun Huang, Wen Gao
    Abstract:

    With the ever-growing computational power of mobile devices, mobile visual search has undergone an evolution in techniques and applications. A significant trend is low bit rate visual search, where compact visual descriptors are extracted directly on a mobile device and delivered as queries, rather than raw images, to reduce query transmission latency. In this article, we introduce our work on low bit rate mobile landmark search, in which a compact yet discriminative landmark image descriptor is extracted by using location context such as GPS, crowd-sourced WLAN hotspots, and cell tower locations. The compactness originates from the bag-of-words image representation, with Offline Learning from geotagged photos on online photo-sharing websites including Flickr and Panoramio. The Learning process involves segmenting the landmark photo collection into discrete geographical regions using a Gaussian mixture model, and then boosting a ranking-sensitive vocabulary within each region, with an “entropy”-based descriptor compactness feedback to refine both phases iteratively. In online search, when entering a geographical region, the codebook in a mobile device is downstream adapted to generate extremely compact descriptors with promising discriminative ability. We have deployed landmark search apps on both HTC and iPhone mobile phones, working over a database of million-scale images in typical areas such as Beijing, New York, and Barcelona. Our descriptor outperforms alternative compact descriptors (Chen et al. 2009; Chen et al. 2010; Chandrasekhar et al. 2009a; Chandrasekhar et al. 2009b) by significant margins. Beyond landmark search, this article also summarizes the MPEG standardization progress of the compact descriptor for visual search (CDVS) (Yuri et al. 2010; Yuri et al. 2011) towards application interoperability.

  • Learning multiple codebooks for low bit rate mobile visual search
    International Conference on Acoustics Speech and Signal Processing, 2012
    Co-Authors: Jie Lin, Lingyu Duan, Jie Chen, Siwei Luo, Wen Gao
    Abstract:

    Compressing a query image's signature via vocabulary coding is an effective approach to low bit rate mobile visual search. State-of-the-art methods concentrate on Offline Learning of a codebook from an initial large vocabulary. Over a large heterogeneous reference database, Learning a single codebook may not suffice to maximally remove redundant codewords for a vocabulary-based compact descriptor. In this paper, we propose to learn multiple codebooks (m-Codebooks) for extreme compression of image signatures. A query-specific codebook (q-Codebook) is generated online at both the client and server sides by adaptively weighting the offline-learned multiple codebooks. The q-Codebook is subsequently employed to quantize the query image, producing compact, discriminative, and scalable descriptors. As the q-Codebook may be simultaneously generated at both sides without transmitting the entire vocabulary, only a small overhead (e.g., codebook ID and codeword 0/1 index) is incurred to reconstruct the query signature at the server end. To realize the m-Codebooks and q-Codebook, we adopt a Bi-layer Sparse Coding method to learn the sparse relationships of codewords vs. codebooks as well as codebooks vs. query images via l1 regularization. Experiments on benchmark datasets have demonstrated the extremely small descriptor's superior performance in image retrieval.

  • Learning compact visual descriptor for low bit rate mobile landmark search
    International Joint Conference on Artificial Intelligence, 2011
    Co-Authors: Lingyu Duan, Jie Chen, Hongxun Yao, Tiejun Huang, Wen Gao
    Abstract:

    In this paper, we propose to extract a compact yet discriminative visual descriptor directly on the mobile device, which tackles the wireless query transmission latency in mobile landmark search. This descriptor originates from Offline Learning of the location contexts of geo-tagged Web photos from both Flickr and Panoramio in two phases: First, we segment the landmark photo collections into discrete geographical regions using a Gaussian Mixture Model [Stauffer et al., 2000]. Second, a ranking-sensitive vocabulary boosting is introduced to learn a compact codebook within each region. To tackle the locally optimal descriptor Learning caused by imprecise geographical segmentation, we further iterate the above phases, incorporating the feedback of an "entropy"-based descriptor compactness into a prior distribution to constrain the Gaussian mixture modeling. Consequently, when entering a specific geographical region, the codebook in the mobile device is downstream adapted, which ensures efficient extraction of compact descriptors, their low bit rate transmission, as well as promising discrimination ability. We deployed descriptors on both HTC and iPhone mobile phones, testing landmark search over one million images in typical areas such as Beijing, New York, and Barcelona. Our descriptor outperforms alternative compact descriptors [Chen et al., 2009; Chen et al., 2010; Chandrasekhar et al., 2009a; Chandrasekhar et al., 2009b] by a large margin.

Frank Kirchner - One of the best experts on this subject based on the ideXlab platform.

  • Evolving neural networks for online reinforcement Learning
    Parallel Problem Solving from Nature, 2008
    Co-Authors: Jan Hendrik Metzen, Yohannes Kassahun, Mark Edgington, Frank Kirchner
    Abstract:

    For many complex Reinforcement Learning problems with large and continuous state spaces, neuroevolution (the evolution of artificial neural networks) has achieved promising results. This is especially true when there is noise in sensor and/or actuator signals. These results have mainly been obtained in Offline Learning settings, where the training and evaluation phases of the system are separated. In contrast, in online Reinforcement Learning tasks, where the actual performance of the system during its Learning phase matters, the results of neuroevolution are significantly impaired by its purely exploratory nature, meaning that it does not use (i.e., exploit) its knowledge of the performance of single individuals in order to improve its performance during Learning. In this paper we describe modifications which significantly improve the online performance of the neuroevolutionary method Evolutionary Acquisition of Neural Topologies (EANT) and discuss the results obtained on two benchmark problems.
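The exploration/exploitation tension this abstract describes can be illustrated with a toy (1+λ)-style hill-climber, where each "episode" either exploits the incumbent genome or explores a mutated one. This is a generic sketch on an artificial fitness function, not EANT; all names and settings are invented:

```python
# Toy online neuroevolution loop: a hill-climber that spends most episodes
# exploiting the best genome found so far, and only a fraction exploring
# mutants. Online reward is collected every episode, so pure exploration
# (always evaluating mutants) would drag the online performance down.
import random

random.seed(0)

def fitness(genome):
    # Stand-in for episode return; optimum at genome == [1.0, 1.0, 1.0].
    return -sum((g - 1.0) ** 2 for g in genome)

def mutate(genome, sigma=0.3):
    return [g + random.gauss(0.0, sigma) for g in genome]

best = [0.0, 0.0, 0.0]
best_fit = fitness(best)
online_rewards = []

for episode in range(200):
    if random.random() < 0.3:          # explore: evaluate a mutant
        cand = mutate(best)
        cand_fit = fitness(cand)
        online_rewards.append(cand_fit)
        if cand_fit > best_fit:        # keep improvements
            best, best_fit = cand, cand_fit
    else:                              # exploit: rerun the incumbent
        online_rewards.append(best_fit)

final_fit = best_fit
```

Raising the exploration rate speeds up discovery of better genomes but lowers the average online reward, which is exactly the trade-off the EANT modifications aim to manage.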

  • Towards efficient online reinforcement Learning using neuroevolution
    Genetic and Evolutionary Computation Conference, 2008
    Co-Authors: Jan Hendrik Metzen, Frank Kirchner, Mark Edgington, Yohannes Kassahun
    Abstract:

    For many complex Reinforcement Learning (RL) problems with large and continuous state spaces, neuroevolution has achieved promising results. This is especially true when there is noise in sensor and/or actuator signals. These results have mainly been obtained in Offline Learning settings, where the training and the evaluation phases of the systems are separated. In contrast, for online RL tasks, the actual performance of a system matters during its Learning phase. In these tasks, neuroevolutionary systems are often impaired by their purely exploratory nature, meaning that they usually do not use (i.e. exploit) their knowledge of a single individual's performance to improve performance during Learning. In this paper we describe modifications that significantly improve the online performance of the neuroevolutionary method Evolutionary Acquisition of Neural Topologies and discuss the results obtained in the Mountain Car benchmark.

Tiejun Huang - One of the best experts on this subject based on the ideXlab platform.

  • Learning compact visual descriptors for low bit rate mobile landmark search
    Ai Magazine, 2013
    Co-Authors: Lingyu Duan, Jie Chen, Tiejun Huang, Wen Gao
    Abstract:

    With the ever-growing computational power of mobile devices, mobile visual search has undergone an evolution in techniques and applications. A significant trend is low bit rate visual search, where compact visual descriptors are extracted directly on a mobile device and delivered as queries, rather than raw images, to reduce query transmission latency. In this article, we introduce our work on low bit rate mobile landmark search, in which a compact yet discriminative landmark image descriptor is extracted by using location context such as GPS, crowd-sourced WLAN hotspots, and cell tower locations. The compactness originates from the bag-of-words image representation, with Offline Learning from geotagged photos on online photo-sharing websites including Flickr and Panoramio. The Learning process involves segmenting the landmark photo collection into discrete geographical regions using a Gaussian mixture model, and then boosting a ranking-sensitive vocabulary within each region, with an “entropy”-based descriptor compactness feedback to refine both phases iteratively. In online search, when entering a geographical region, the codebook in a mobile device is downstream adapted to generate extremely compact descriptors with promising discriminative ability. We have deployed landmark search apps on both HTC and iPhone mobile phones, working over a database of million-scale images in typical areas such as Beijing, New York, and Barcelona. Our descriptor outperforms alternative compact descriptors (Chen et al. 2009; Chen et al. 2010; Chandrasekhar et al. 2009a; Chandrasekhar et al. 2009b) by significant margins. Beyond landmark search, this article also summarizes the MPEG standardization progress of the compact descriptor for visual search (CDVS) (Yuri et al. 2010; Yuri et al. 2011) towards application interoperability.

  • Learning compact visual descriptor for low bit rate mobile landmark search
    International Joint Conference on Artificial Intelligence, 2011
    Co-Authors: Lingyu Duan, Jie Chen, Hongxun Yao, Tiejun Huang, Wen Gao
    Abstract:

    In this paper, we propose to extract a compact yet discriminative visual descriptor directly on the mobile device, which tackles the wireless query transmission latency in mobile landmark search. This descriptor originates from Offline Learning of the location contexts of geo-tagged Web photos from both Flickr and Panoramio in two phases: First, we segment the landmark photo collections into discrete geographical regions using a Gaussian Mixture Model [Stauffer et al., 2000]. Second, a ranking-sensitive vocabulary boosting is introduced to learn a compact codebook within each region. To tackle the locally optimal descriptor Learning caused by imprecise geographical segmentation, we further iterate the above phases, incorporating the feedback of an "entropy"-based descriptor compactness into a prior distribution to constrain the Gaussian mixture modeling. Consequently, when entering a specific geographical region, the codebook in the mobile device is downstream adapted, which ensures efficient extraction of compact descriptors, their low bit rate transmission, as well as promising discrimination ability. We deployed descriptors on both HTC and iPhone mobile phones, testing landmark search over one million images in typical areas such as Beijing, New York, and Barcelona. Our descriptor outperforms alternative compact descriptors [Chen et al., 2009; Chen et al., 2010; Chandrasekhar et al., 2009a; Chandrasekhar et al., 2009b] by a large margin.