Generic Representation

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies


The Experts below are selected from a list of 336 Experts worldwide ranked by ideXlab platform

Tomas Lozano-Perez - One of the best experts on this subject based on the ideXlab platform.

  • learning to guide task and motion planning using score space Representation
    arXiv: Robotics, 2018
    Co-Authors: Beomjoon Kim, Zi Wang, Leslie Pack Kaelbling, Tomas Lozano-Perez
    Abstract:

    In this paper, we propose a learning algorithm that speeds up the search in task and motion planning problems. Our algorithm proposes solutions to three different challenges that arise in learning to improve planning efficiency: what to predict, how to represent a planning problem instance, and how to transfer knowledge from one problem instance to another. We propose a method that predicts constraints on the search space based on a Generic Representation of a planning problem instance, called score-space, where we represent a problem instance in terms of the performance of a set of solutions attempted so far. Using this Representation, we transfer knowledge, in the form of constraints, from previous problems based on the similarity in score space. We design a sequential algorithm that efficiently predicts these constraints, and evaluate it in three different challenging task and motion planning problems. Results indicate that our approach performs orders of magnitude faster than an unguided planner.
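    The score-space idea above can be sketched in a few lines: a problem instance is represented by the scores of a fixed set of solutions attempted on it, and constraints are transferred from the most similar past instance. The planner interface, the fixed candidate-solution set, and the nearest-neighbour transfer rule below are illustrative assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def score_vector(instance, candidate_solutions, evaluate):
    # Represent a planning instance by the scores of a fixed set of
    # candidate solutions attempted on it (the "score space").
    return np.array([evaluate(instance, s) for s in candidate_solutions])

def transfer_constraints(new_instance, library, candidate_solutions, evaluate):
    # library: list of (score_vector, constraints) pairs from past instances.
    # Transfer constraints from the past instance whose score vector is
    # closest to the new instance's, i.e. similarity measured in score space.
    v = score_vector(new_instance, candidate_solutions, evaluate)
    best = min(library, key=lambda entry: np.linalg.norm(entry[0] - v))
    return best[1]
```

    The paper's sequential algorithm chooses which solutions to evaluate adaptively rather than scoring a fixed set up front; the nearest-neighbour rule here only illustrates the transfer-by-score-similarity principle.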

  • learning to guide task and motion planning using score space Representation
    International Conference on Robotics and Automation, 2017
    Co-Authors: Beomjoon Kim, Leslie Pack Kaelbling, Tomas Lozano-Perez
    Abstract:

    In this paper, we propose a learning algorithm that speeds up the search in task and motion planning problems. Our algorithm proposes solutions to three different challenges that arise in learning to improve planning efficiency: what to predict, how to represent a planning problem instance, and how to transfer knowledge from one problem instance to another. We propose a method that predicts constraints on the search space based on a Generic Representation of a planning problem instance, called score space, where we represent a problem instance in terms of the performance of a set of solutions attempted so far. Using this Representation, we transfer knowledge, in the form of constraints, from previous problems based on the similarity in score space. We design a sequential algorithm that efficiently predicts these constraints, and evaluate it in three different challenging task and motion planning problems. Results indicate that our approach performs orders of magnitude faster than an unguided planner.

Il Yong Lee - One of the best experts on this subject based on the ideXlab platform.

  • local Generic Representation for face recognition with single sample per person
    Asian Conference on Computer Vision, 2014
    Co-Authors: Pengfei Zhu, Meng Yang, Lei Zhang, Il Yong Lee
    Abstract:

    Face recognition with single sample per person (SSPP) is a very challenging task because in such a scenario it is difficult to predict the facial variations of a query sample from the gallery samples. Considering the fact that different parts of human faces have different importance to face recognition, and the fact that the intra-class facial variations can be shared across different subjects, we propose a local Generic Representation (LGR) based framework for face recognition with SSPP. A local gallery dictionary is built by extracting the neighboring patches from the gallery dataset, while an intra-class variation dictionary is built by using an external Generic dataset to predict the possible facial variations (e.g., illumination, pose, expression and disguise). LGR minimizes the total Representation residual of the query sample over the local gallery dictionary and the Generic variation dictionary, and it uses correntropy to measure the Representation residual of each patch. Half-quadratic analysis is adopted to solve the optimization problem. LGR takes advantage of patch-based local Representation and Generic variation Representation, showing leading performance in face recognition with SSPP.
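    A minimal sketch of the representation step described above: a query patch is coded over the concatenation of the local gallery dictionary and the generic variation dictionary, and the representation residual is what drives recognition. For clarity, this sketch uses a ridge-regularized least-squares coding, whereas the paper measures the residual with correntropy and solves it by half-quadratic optimization; `lgr_residual` and its parameters are hypothetical names.

```python
import numpy as np

def lgr_residual(query_patch, gallery_dict, variation_dict, lam=0.01):
    # Code a query patch over [gallery dictionary | variation dictionary]:
    # the gallery part explains identity, the variation part explains
    # illumination/pose/expression changes borrowed from a generic dataset.
    D = np.hstack([gallery_dict, variation_dict])   # (d, n_g + n_v)
    # Ridge-regularized least squares stands in for the paper's
    # correntropy-based objective.
    A = D.T @ D + lam * np.eye(D.shape[1])
    alpha = np.linalg.solve(A, D.T @ query_patch)   # coding vector
    residual = query_patch - D @ alpha
    return alpha, float(np.linalg.norm(residual))
```

    Classification would then assign the query patch to the gallery subject whose columns absorb most of the coding energy, with the smallest residual.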

Beomjoon Kim - One of the best experts on this subject based on the ideXlab platform.

  • learning to guide task and motion planning using score space Representation
    arXiv: Robotics, 2018
    Co-Authors: Beomjoon Kim, Zi Wang, Leslie Pack Kaelbling, Tomas Lozano-Perez
    Abstract:

    In this paper, we propose a learning algorithm that speeds up the search in task and motion planning problems. Our algorithm proposes solutions to three different challenges that arise in learning to improve planning efficiency: what to predict, how to represent a planning problem instance, and how to transfer knowledge from one problem instance to another. We propose a method that predicts constraints on the search space based on a Generic Representation of a planning problem instance, called score-space, where we represent a problem instance in terms of the performance of a set of solutions attempted so far. Using this Representation, we transfer knowledge, in the form of constraints, from previous problems based on the similarity in score space. We design a sequential algorithm that efficiently predicts these constraints, and evaluate it in three different challenging task and motion planning problems. Results indicate that our approach performs orders of magnitude faster than an unguided planner.

  • learning to guide task and motion planning using score space Representation
    International Conference on Robotics and Automation, 2017
    Co-Authors: Beomjoon Kim, Leslie Pack Kaelbling, Tomas Lozano-Perez
    Abstract:

    In this paper, we propose a learning algorithm that speeds up the search in task and motion planning problems. Our algorithm proposes solutions to three different challenges that arise in learning to improve planning efficiency: what to predict, how to represent a planning problem instance, and how to transfer knowledge from one problem instance to another. We propose a method that predicts constraints on the search space based on a Generic Representation of a planning problem instance, called score space, where we represent a problem instance in terms of the performance of a set of solutions attempted so far. Using this Representation, we transfer knowledge, in the form of constraints, from previous problems based on the similarity in score space. We design a sequential algorithm that efficiently predicts these constraints, and evaluate it in three different challenging task and motion planning problems. Results indicate that our approach performs orders of magnitude faster than an unguided planner.

Nicholas J Kuhn - One of the best experts on this subject based on the ideXlab platform.

  • Generic Representation theory of finite fields in nondescribing characteristic
    Advances in Mathematics, 2015
    Co-Authors: Nicholas J Kuhn
    Abstract:

    Let Rep(F;K) denote the category of functors from finite dimensional F-vector spaces to K-modules, where F is a field and K is a commutative ring. We prove that, if F is a finite field, and char F is invertible in K, then the K-linear abelian category Rep(F;K) is equivalent to the product, over all n ≥ 0, of the categories of K[GL_n(F)]-modules. As a consequence, if K is also a field, then small projectives are also injective in Rep(F;K), and will have finite length. Even more is true if char K = 0: the category Rep(F;K) will be semisimple. In the last section, we briefly discuss 'q = 1' analogues and consider Representations of various categories of finite sets. The main result follows from a 1992 result by L.G. Kovacs about the semigroup ring K[M_n(F)].
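    The main equivalence in the abstract can be stated compactly (notation as in the abstract):

```latex
\[
\operatorname{Rep}(F;K)\;\simeq\;\prod_{n\ge 0} K[\mathrm{GL}_n(F)]\text{-mod}
\qquad\text{for } F \text{ finite and } \operatorname{char} F \text{ invertible in } K.
\]
```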

  • Generic Representation theory of finite fields in nondescribing characteristic
    arXiv: Representation Theory, 2014
    Co-Authors: Nicholas J Kuhn
    Abstract:

    Let Rep(F;K) denote the category of functors from finite dimensional F-vector spaces to K-modules, where F is a field and K is a commutative ring. We prove that, if F is a finite field, and char F is invertible in K, then the K-linear abelian category Rep(F;K) is equivalent to the product, over all k = 0, 1, 2, ..., of the categories of K[GL(k,F)]-modules. As a consequence, if K is also a field, then small projectives are also injective in Rep(F;K), and will have finite length. Even more is true if char K = 0: the category Rep(F;K) will be semisimple. In a last section, we briefly discuss "q=1" analogues and consider Representations of various categories of finite sets. The main result follows from a 1992 result by L.G. Kovacs about the semigroup ring K[M_n(F)].

  • a stratification of Generic Representation theory and generalized schur algebras
    K-theory, 2002
    Co-Authors: Nicholas J Kuhn
    Abstract:

    If F_q is the finite field of characteristic p and order q = p^s, let F(F_q) be the category whose objects are functors from finite dimensional F_q-vector spaces to F_q-vector spaces, and with morphisms the natural transformations between such functors. We define an infinite lattice of thick subcategories of F(F_q). Our main result then identifies various subquotients as categories of modules over products of symmetric groups, via recollement diagrams. Our lattice of thick subcategories is a refinement of the Eilenberg-MacLane polynomial degree filtration F_0(F_q) ⊆ F_1(F_q) ⊆ F_2(F_q) ⊆ ... of F(F_q), which has been extensively studied and used in the algebraic K-theory literature. Our main theorem implies a description of F_d(F_q)/F_{d-1}(F_q) that refines and extends earlier results of Pirashvili and others. If q ≥ r, one of the subcategories is the Friedlander-Suslin category P_r of 'strict polynomial functors of degree r', equivalent to the category of modules over the Schur algebra S(n, r) with n ≥ r. Our results can thus also be viewed as refining and extending the classic relationship between S(n, r)-modules and Σ_r-modules via the Schur functor. In fact, (essentially) all our subcategories are equivalent to categories of modules over various finite dimensional algebras, and our lattice can be interpreted in terms of lattices of idempotent two-sided ideals in these generalized Schur algebras. Applications include a simple proof, free of algebraic group theory, of a generalized Steinberg Tensor Product Theorem. This then implies the classic theorem for GL_n(F_q), shedding some new light on this classic result. Our tensor product theorem is then used to study when various of our generalized Schur algebras are Morita equivalent. We use two technical tricks which may be of some independent interest. Firstly, we 'twist' by an action of the Galois group Gal(F_q/F_p) to be able to work entirely with vector spaces over the prime field F_p, making discussions of 'base change' unnecessary. Secondly, we systematically use 'functors with product', a.k.a. lax symmetric monoidal functors, to define our subcategories.

  • the Generic Representation theory of finite fields a survey of basic structure
    2000
    Co-Authors: Nicholas J Kuhn
    Abstract:

    If F_q is the finite field of characteristic p and order q = p^s, let F(q) be the category whose objects are functors from finite dimensional F_q-vector spaces to F_q-vector spaces, and with morphisms the natural transformations between such functors. We survey the basic structure of this category and its close connections to the finite general linear groups, the symmetric groups, classical Schur algebras, algebraic K-theory, and the Steenrod algebra.

  • rational cohomology and cohomological stability in Generic Representation theory
    American Journal of Mathematics, 1998
    Co-Authors: Nicholas J Kuhn
    Abstract:

    With F_q a finite field of characteristic p, let F(q) be the category whose objects are functors from finite dimensional F_q-vector spaces to F_q-vector spaces. Friedlander and Suslin have introduced a category P of "strict polynomial functors" which has the same relationship to F(q) that the category of rational GL_m-modules has to the category of GL_m(F_q)-modules. Our main theorem says that, for all finite objects F, G ∈ P, and all s, the natural restriction map from Ext^s_P(F^(k), G^(k)) to Ext^s_{F(q)}(F, G) is an isomorphism for all large enough k and q. Here F^(k) denotes F twisted by the Frobenius k times. This combines with an analogous theorem of Cline, Parshall, Scott, and van der Kallen to show that, for all finite F, G ∈ F(q), and all s, evaluation on an m-dimensional vector space V_m induces an isomorphism from Ext^s_{F(q)}(F, G) to Ext^s_{GL_m(F_q)}(F(V_m), G(V_m)) for all large enough m and q. Thus group cohomology of the finite general linear groups can often be identified with MacLane (or Topological Hochschild) cohomology.
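    In symbols (with P the Friedlander-Suslin category of strict polynomial functors and F^(k) the k-fold Frobenius twist), the two stability statements in the abstract read:

```latex
\[
\operatorname{Ext}^{s}_{\mathcal P}\bigl(F^{(k)},G^{(k)}\bigr)
\;\xrightarrow{\ \sim\ }\;
\operatorname{Ext}^{s}_{\mathcal F(q)}(F,G)
\quad\text{for } k, q \gg 0,
\]
\[
\operatorname{Ext}^{s}_{\mathcal F(q)}(F,G)
\;\xrightarrow{\ \sim\ }\;
\operatorname{Ext}^{s}_{\mathrm{GL}_m(\mathbb F_q)}\bigl(F(V_m),G(V_m)\bigr)
\quad\text{for } m, q \gg 0.
\]
```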

Philippe Véry - One of the best experts on this subject based on the ideXlab platform.

  • toward a Generic Representation of random variables for machine learning
    2016
    Co-Authors: Gautier Marti, Philippe Véry, Philippe Donnat
    Abstract:

    This paper presents a pre-processing and a distance which improve the performance of machine learning algorithms working on independent and identically distributed stochastic processes. We introduce a novel non-parametric approach to represent random variables which splits apart dependency and distribution without losing any information. We also propound an associated metric leveraging this Representation and its statistical estimate. Besides experiments on synthetic datasets, the benefits of our contribution are illustrated through the example of clustering financial time series, for instance prices from the credit default swaps market. Results are available on the website www.datagrapple.com and an IPython Notebook tutorial is available at www.datagrapple.com/Tech for reproducible research.
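    One way to sketch the "split apart dependency and distribution" idea: keep the normalized ranks of a sample (an empirical copula observation, carrying only dependency) and its sorted values (empirical quantiles, carrying only the marginal distribution). The function names and the theta-weighted blend of the two distance terms below are illustrative assumptions, not the paper's exact metric.

```python
import numpy as np

def split_representation(x):
    # Dependency part: normalized ranks in [0, 1] (empirical copula
    # observations). Distribution part: sorted values (empirical quantiles).
    # Together they determine the sample, so no information is lost.
    x = np.asarray(x, dtype=float)
    ranks = np.argsort(np.argsort(x)) / (len(x) - 1)
    quantiles = np.sort(x)
    return ranks, quantiles

def mixed_distance(x, y, theta=0.5):
    # Illustrative metric: blend a rank-based dependency term with an
    # L2 distance between empirical quantile functions.
    rx, qx = split_representation(x)
    ry, qy = split_representation(y)
    d_dep = np.sqrt(np.mean((rx - ry) ** 2))
    d_dist = np.sqrt(np.mean((qx - qy) ** 2))
    return theta * d_dep + (1 - theta) * d_dist
```

    With theta = 1 the metric ignores marginals entirely, so two samples related by any increasing transform are at distance zero; with theta = 0 it compares only the marginal distributions.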

  • toward a Generic Representation of random variables for machine learning
    Pattern Recognition Letters, 2016
    Co-Authors: Philippe Donnat, Gautier Marti, Philippe Véry
    Abstract:

    Highlights: we introduce a non-parametric Representation of i.i.d. stochastic processes; the presented pre-processing boosts the performance of algorithms; clusterings of financial time series become more stable; price clustering allows one to recover idiosyncratic risk; experimental results are available at www.datagrapple.com.

    This paper presents a pre-processing and a distance which improve the performance of machine learning algorithms working on independent and identically distributed stochastic processes. We introduce a novel non-parametric approach to represent random variables which splits apart dependency and distribution without losing any information. We also propound an associated metric leveraging this Representation and its statistical estimate. Besides experiments on synthetic datasets, the benefits of our contribution are illustrated through the example of clustering financial time series, for instance prices from the credit default swaps market. Results are available on the website http://www.datagrapple.com and an IPython Notebook tutorial is available at http://www.datagrapple.com/Tech for reproducible research.

  • toward a Generic Representation of random variables for machine learning
    arXiv: Learning, 2015
    Co-Authors: Gautier Marti, Philippe Véry, Philippe Donnat
    Abstract:

    This paper presents a pre-processing and a distance which improve the performance of machine learning algorithms working on independent and identically distributed stochastic processes. We introduce a novel non-parametric approach to represent random variables which splits apart dependency and distribution without losing any information. We also propound an associated metric leveraging this Representation and its statistical estimate. Besides experiments on synthetic datasets, the benefits of our contribution are illustrated through the example of clustering financial time series, for instance prices from the credit default swaps market. Results are available on the website this http URL and an IPython Notebook tutorial is available at this http URL for reproducible research.