Function Definition

The Experts below are selected from a list of 136,926 Experts worldwide, ranked by the ideXlab platform.

John R Koza - One of the best experts on this subject based on the ideXlab platform.

  • Classifying Protein Segments as Transmembrane Domains Using Genetic Programming
    2013
    Co-Authors: John R Koza
    Abstract:

    The recently developed genetic programming paradigm is used to evolve a computer program that classifies a given protein segment as a transmembrane domain or a non-transmembrane area of the protein. Genetic programming starts with a primordial ooze of randomly generated computer programs composed of the available programmatic ingredients and then genetically breeds the population of programs using the Darwinian principle of survival of the fittest and an analog of the naturally occurring genetic operation of crossover (sexual recombination). Automatic Function Definition enables genetic programming to create subroutines dynamically during the run. Genetic programming is given a training set of differently sized protein segments and their correct classifications (but no biochemical knowledge, such as hydrophobicity values). Correlation is used as the fitness measure to drive the evolutionary process. The best genetically evolved program achieves an out-of-sample correlation of 0.968 and an out-of-sample error rate of 1.6%. This error rate is better than those reported for four other algorithms presented at the First International Conference on Intelligent Systems for Molecular Biology. Our genetically evolved program is an instance of an algorithm discovered by an automated learning paradigm that outperforms the algorithms written by human investigators.
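
    For a two-class problem, the correlation fitness described here is the quantity now commonly called the Matthews correlation coefficient. The sketch below is a minimal illustrative computation from confusion-matrix counts, not Koza's LISP implementation:

```python
import math

def correlation_fitness(predictions, labels):
    """Correlation between binary predictions and true labels.

    For two-valued outcomes this equals the Matthews correlation
    coefficient: 1.0 is perfect, 0.0 is chance, -1.0 is inverted.
    """
    tp = sum(p and l for p, l in zip(predictions, labels))
    tn = sum(not p and not l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    fn = sum(not p and l for p, l in zip(predictions, labels))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Example: 4 of 5 segments classified correctly.
print(correlation_fitness([True, True, False, False, True],
                          [True, True, False, False, False]))
```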

  • Simultaneous Discovery of Reusable Detectors and Subroutines Using Genetic Programming
    international conference on Genetic algorithms, 1993
    Co-Authors: John R Koza
    Abstract:

    This paper describes an approach for automatically decomposing a problem into subproblems, automatically discovering reusable subroutines, and assembling the results produced by those subroutines in order to solve the overall problem. The approach uses genetic programming with automatic Function Definition. Genetic programming provides a way to genetically breed a computer program to solve a problem. Automatic Function Definition enables genetic programming to define potentially useful subroutines dynamically during a run. The approach is applied to an illustrative problem. Genetic programming with automatic Function Definition reduced the computational effort required to learn a solution by a factor of 2.0 compared to genetic programming without automatic Function Definition. Similarly, the average structural complexity of the solution was reduced by about 21%.
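
    To make the mechanism concrete, the sketch below (a hypothetical Python analogue; the original work evolves LISP S-expressions) shows how an individual carries an automatically defined function alongside its result-producing main branch, so the subroutine can be reused wherever the main branch calls it:

```python
# A hypothetical, minimal rendering of an individual that carries an
# automatically defined function (ADF0) alongside its main branch.

def make_individual(adf0_body, main_body):
    """adf0_body: f(a, b) -> value; main_body: f(inputs, adf0) -> value."""
    def evaluate(inputs):
        def adf0(a, b):                    # the evolved, reusable subroutine
            return adf0_body(a, b)
        return main_body(inputs, adf0)     # main branch may call ADF0 repeatedly
    return evaluate

# Example: ADF0 plays the role of a "difference" detector that the
# main program reuses on two regions of its input vector.
ind = make_individual(
    adf0_body=lambda a, b: a - b,
    main_body=lambda x, adf0: adf0(x[0], x[1]) * adf0(x[2], x[3]),
)
print(ind([5, 2, 7, 3]))  # (5-2) * (7-3) = 12
```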

  • Performance Improvement of Machine Learning via Automatic Discovery of Facilitating Functions as Applied to a Problem of Symbolic System Identification
    IEEE International Conference on Neural Networks, 1993
    Co-Authors: John R Koza, Martin A Keane, James P Rice
    Abstract:

    The recently developed genetic programming paradigm provides a way to genetically breed a population of computer programs to solve problems. The technique of automatic Function Definition enables genetic programming to define potentially useful Functions dynamically during a run, much as a human programmer writing a computer program creates subroutines to perform groups of steps that are needed in more than one place in the main program. An approximation to the impulse response Function of a linear time-invariant system is found in symbolic form. The value of automatic Function Definition in enabling genetic programming to accelerate the solution of this illustrative problem is demonstrated.
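
    As a rough sketch of the fitness evaluation such symbolic system identification implies (an illustrative reconstruction, not the paper's actual setup), a candidate impulse response can be scored by convolving it with the input signal and comparing the result with the observed output:

```python
import numpy as np

def fitness(candidate_h, t, u, y_observed):
    """Sum of absolute errors between the observed output and the output
    predicted by convolving input u with the candidate impulse response."""
    h = candidate_h(t)                       # sample the symbolic expression
    y_pred = np.convolve(u, h)[: len(u)]     # discrete approximation of (u * h)(t)
    return np.sum(np.abs(y_observed - y_pred))

t = np.linspace(0.0, 10.0, 100)
u = np.sin(t)                                # excitation signal
true_h = lambda t: np.exp(-t)                # "unknown" system, used to fake data
y = np.convolve(u, true_h(t))[: len(u)]

print(fitness(lambda t: np.exp(-t), t, u, y))        # near 0: good candidate
print(fitness(lambda t: np.exp(-2.0 * t), t, u, y))  # larger error
```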

  • Discovery of a Main Program and Reusable Subroutines Using Genetic Programming
    SPIE, 1993
    Co-Authors: John R Koza
    Abstract:

    This paper describes an approach for automatically decomposing a problem into subproblems, automatically creating reusable subroutines to solve the subproblems, and automatically assembling the results produced by the subroutines in order to solve the problem. The approach uses genetic programming with the recently developed facility of automatic Function Definition. Genetic programming provides a way to genetically breed a computer program to solve a problem, and automatic Function Definition enables genetic programming to create reusable subroutines dynamically during a run. The approach is applied to an illustrative problem containing a considerable amount of regularity. Solutions produced with automatic Function Definition are considerably smaller and require the processing of far fewer individuals than solutions produced without it. Specifically, the average size of a solution without automatic Function Definition is 3.65 times larger than with it, and the number of individuals that must be processed to yield a solution with 99% probability is 9.09 times larger without automatic Function Definition than with it.
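
    The "99% probability" figure refers to Koza's standard measure of computational effort: from the cumulative probability of success by a given generation, one computes how many individuals must be processed across independent runs to find a solution with 99% confidence. A sketch of the calculation, with hypothetical population size and success rates, follows:

```python
import math

def individuals_to_process(M, i, P, z=0.99):
    """Individuals that must be processed to find a solution with
    probability z, given population size M, generation i (0-indexed),
    and P = cumulative success probability by generation i, estimated
    over many independent runs."""
    runs_needed = math.ceil(math.log(1.0 - z) / math.log(1.0 - P))
    return M * (i + 1) * runs_needed

# Hypothetical numbers: population 4000, success by generation 20
# in 30% of runs without ADFs vs. 75% of runs with ADFs.
print(individuals_to_process(4000, 20, 0.30))  # without ADFs
print(individuals_to_process(4000, 20, 0.75))  # with ADFs
```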

Simon Jones - One of the best experts on this subject based on the ideXlab platform.

  • GADTs Meet Their Match: Pattern-Matching Warnings That Account for GADTs, Guards, and Laziness
    International Conference on Functional Programming, 2015
    Co-Authors: Georgios Karachalias, Tom Schrijvers, Dimitrios Vytiniotis, Simon Jones
    Abstract:

    For ML and Haskell, accurate warnings when a Function Definition has redundant or missing patterns are mission critical. But today's compilers generate bogus warnings when the programmer uses guards (even simple ones), GADTs, pattern guards, or view patterns. We give the first algorithm that handles all these cases in a single, uniform framework, together with an implementation in GHC, and evidence of its utility in practice.
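
    To make "missing" and "redundant" patterns concrete, here is a toy coverage checker for a flat pattern language over a closed constructor set, written in Python for illustration. It is a drastic simplification: the paper's algorithm additionally handles guards, GADTs, view patterns, and laziness, none of which appear here:

```python
CONSTRUCTORS = {"Nothing", "Just"}   # a closed, Maybe-like type

def check_patterns(clauses):
    """Return (missing constructors, redundant clauses) for a list of
    flat patterns, where "_" is a wildcard matching every constructor."""
    uncovered = set(CONSTRUCTORS)    # constructors no earlier clause matched
    redundant = []
    for pat in clauses:
        matches = set(uncovered) if pat == "_" else {pat} & uncovered
        if not matches:              # nothing left for this clause to match
            redundant.append(pat)
        uncovered -= matches
    return sorted(uncovered), redundant

print(check_patterns(["Just"]))                 # (['Nothing'], []): missing case
print(check_patterns(["Just", "_", "Nothing"])) # ([], ['Nothing']): dead clause
```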

  • Scrap Your Boilerplate with Class: Extensible Generic Functions
    International Conference on Functional Programming, 2005
    Co-Authors: Ralf Lämmel, Simon Jones
    Abstract:

    The 'Scrap your boilerplate' approach to generic programming allows the programmer to write generic Functions that can traverse arbitrary data structures, and yet have type-specific cases. However, the original approach required all the type-specific cases to be supplied at once, when the recursive knot of the generic Function Definition is tied. Hence, generic Functions were closed. In contrast, Haskell's type classes support open, or extensible, Functions that can be extended with new type-specific cases as new data types are defined. In this paper, we extend the 'Scrap your boilerplate' approach to support this open style. On the way, we demonstrate the desirability of abstraction over type classes and the usefulness of recursive dictionaries.
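
    The open, extensible style that the paper imports from type classes has a loose analogue in Python's functools.singledispatch, where new type-specific cases can be registered after the generic function is defined. The sketch below illustrates only that openness, not the paper's Haskell mechanism:

```python
from functools import singledispatch

@singledispatch
def gsize(value):
    return 1                      # default case for unknown types

@gsize.register
def _(value: list):               # type-specific case, added separately
    return 1 + sum(gsize(v) for v in value)

print(gsize(42))                  # 1
print(gsize([1, [2, 3]]))         # 1 + 1 + (1 + 1 + 1) = 5

@gsize.register                   # the function stays open: extend it later,
def _(value: str):                # e.g. when a new "data type" appears
    return 1 + len(value)

print(gsize("abc"))               # 4
```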

Pelamatti Julien - One of the best experts on this subject based on the ideXlab platform.

  • Bayesian Optimization of Mixed-Variable Problems: Application to the Design of Space Vehicles
    2020
    Co-Authors: Pelamatti Julien
    Abstract:

    Within the framework of complex system design, such as aircraft and launch vehicles, the presence of computationally intensive objective and/or constraint Functions (e.g., finite element models and multidisciplinary analyses), coupled with the dependence on discrete and unordered technological design choices, results in challenging optimization problems. Furthermore, part of these technological choices is associated with a number of specific continuous and discrete design variables which must be taken into consideration only if specific technological and/or architectural choices are made. As a result, the optimization problem which must be solved in order to determine the optimal system design presents a dynamically varying search space and feasibility domain. The few existing algorithms which allow solving this particular type of problem tend to require a large number of Function evaluations in order to converge to the feasible optimum, and are therefore inadequate when dealing with the computationally intensive problems often encountered in the design of complex systems. For this reason, this thesis explores the possibility of performing constrained mixed-variable and variable-size design space optimization by relying on surrogate model-based design optimization performed with the help of Gaussian processes, also known as Bayesian optimization. More specifically, three main axes are discussed. First, the Gaussian process surrogate modeling of mixed continuous/discrete Functions and the associated challenges are extensively discussed. A unifying formalism is proposed in order to facilitate the description and comparison of the existing kernels allowing Gaussian processes to be adapted to the presence of discrete unordered variables. Furthermore, the actual modeling performance of these various kernels is tested and compared on a set of analytical and design-related benchmarks with different characteristics and parameterizations. In the second part of the thesis, the possibility of extending the mixed continuous/discrete surrogate modeling to a context of Bayesian optimization is discussed. The theoretical feasibility of this extension, in terms of objective/constraint Function modeling as well as acquisition Function Definition and optimization, is shown. Different possible alternatives are considered and described. Finally, the performance of the proposed optimization algorithm, with various kernel parameterizations and different initializations, is tested on a number of analytical and design-related test cases and compared to reference algorithms. In the last part of this manuscript, two alternative ways of adapting the previously discussed mixed continuous/discrete Bayesian optimization algorithms to solve variable-size design space problems (i.e., problems characterized by a dynamically varying design space) are proposed. The first adaptation is based on the parallel optimization of several sub-problems, coupled with a computational budget allocation based on the information provided by the surrogate models. The second adaptation is instead based on the Definition of a kernel allowing the covariance between samples belonging to partially different search spaces to be computed, based on the hierarchical grouping of design variables. Finally, the two alternatives are tested and compared on a set of analytical and design-related benchmarks. Overall, it is shown that the proposed optimization methods converge to the various constrained problem optimum neighborhoods considerably faster than the reference methods, thus representing a promising tool for the design of complex systems.
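
    A minimal sketch of the kind of mixed continuous/categorical Gaussian process kernel the thesis surveys, combined with an expected improvement acquisition function, is shown below. All parameter values and the "exchangeable" categorical kernel choice are illustrative assumptions, not the thesis implementation:

```python
import numpy as np
from scipy.stats import norm

def mixed_kernel(x1, c1, x2, c2, length=1.0, rho=0.3):
    """RBF over the continuous part times an 'exchangeable' categorical
    kernel that assigns correlation rho to unequal category levels."""
    k_cont = np.exp(-0.5 * np.sum((x1 - x2) ** 2) / length ** 2)
    k_cat = 1.0 if c1 == c2 else rho
    return k_cont * k_cat

def gp_posterior(X, C, y, xq, cq, noise=1e-8):
    """Gaussian-process posterior mean and variance at query point (xq, cq)."""
    K = np.array([[mixed_kernel(xi, ci, xj, cj) for xj, cj in zip(X, C)]
                  for xi, ci in zip(X, C)]) + noise * np.eye(len(y))
    k = np.array([mixed_kernel(xi, ci, xq, cq) for xi, ci in zip(X, C)])
    mean = k @ np.linalg.solve(K, y)
    var = mixed_kernel(xq, cq, xq, cq) - k @ np.linalg.solve(K, k)
    return mean, var

def expected_improvement(mean, var, best):
    """EI acquisition under a minimization convention."""
    s = np.sqrt(max(var, 1e-12))
    z = (best - mean) / s
    return s * (z * norm.cdf(z) + norm.pdf(z))

# Toy data: one continuous design variable plus one categorical choice.
X = [np.array([0.1]), np.array([0.5]), np.array([0.9])]
C = ["A", "A", "B"]
y = np.array([1.2, 0.4, 0.8])

mean, var = gp_posterior(X, C, y, np.array([0.6]), "B")
print(mean, var, expected_improvement(mean, var, y.min()))
```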


Martin R Carter - One of the best experts on this subject based on the ideXlab platform.

  • Soil Quality for Sustainable Land Management: Organic Matter and Aggregation Interactions That Maintain Soil Functions
    Agronomy Journal, 2002
    Co-Authors: Martin R Carter
    Abstract:

    Soil quality concepts are commonly used to evaluate sustainable land management in agroecosystems. The objectives of this review were to trace the importance of soil organic matter (SOM) in Canadian sustainable land management studies and to illustrate the role of SOM and aggregation in sustaining soil Functions. Canadian studies on soil quality were initiated in the early 1980s and showed that loss of SOM and soil aggregate stability were standard features of nonsustainable land use. Subsequent studies have evaluated SOM quality using the following logical sequence: soil purpose and Function, processes, properties and indicators, and methodology. Limiting steps in this soil quality framework are the questions of critical limits and standardization for soil properties. At present, critical limits for SOM are selected using a commonly accepted reference value or based on empirically derived relations between SOM and a specific soil process or Function (e.g., soil fertility, productivity, or erodibility). Organic matter fractions (e.g., macro-organic matter, light fraction, microbial biomass, and mineralizable C) describe the quality of SOM. These fractions have biological significance for several soil Functions and processes and are sensitive indicators of changes in total SOM. Total SOM influences soil compactibility, friability, and soil water-holding capacity, while aggregated SOM has major implications for the Functioning of soil in regulating air and water infiltration, conserving nutrients, and influencing soil permeability and erodibility. Overall, organic matter inputs, the dynamics of the sand-sized macro-organic matter, and the soil aggregation process are important factors in maintaining and regulating organic matter Functioning in soil.


Jose C Principe - One of the best experts on this subject based on the ideXlab platform.

  • Generalized Correlation Function: Definition, Properties, and Application to Blind Equalization
    IEEE Transactions on Signal Processing, 2006
    Co-Authors: Ignacio Santamaria, Puskal Prasad Pokharel, Jose C Principe
    Abstract:

    With an abundance of tools based on kernel methods and information theoretic learning, a void still exists in incorporating both the time structure and the statistical distribution of a time series in the same Functional measure. In this paper, a new generalized correlation measure is developed that includes information about both the distribution and the time structure of a stochastic process. It is shown how this measure can be interpreted from a kernel-methods as well as from an information theoretic learning point of view, and some relevant properties are demonstrated. To underscore the effectiveness of the new measure, a simple blind equalization problem is considered using a coded signal.
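
    A sketch of estimating such a generalized correlation (correntropy) function for a discrete-time series follows: the product x[n]·x[n−m] of ordinary autocorrelation is replaced by a Gaussian kernel evaluation, so that higher-order moments of the distribution enter the measure. The kernel width sigma is an arbitrary illustrative choice:

```python
import numpy as np

def correntropy(x, max_lag, sigma=1.0):
    """Estimate V(m) = E[k(x[n], x[n-m])] for lags m = 0 .. max_lag,
    with a Gaussian kernel k of width sigma."""
    x = np.asarray(x, dtype=float)
    norm_const = 1.0 / (np.sqrt(2.0 * np.pi) * sigma)
    V = []
    for m in range(max_lag + 1):
        diff = x[m:] - x[: len(x) - m] if m else np.zeros(len(x))
        V.append(norm_const * np.mean(np.exp(-diff ** 2 / (2.0 * sigma ** 2))))
    return np.array(V)

rng = np.random.default_rng(0)
signal = np.sin(0.2 * np.arange(500)) + 0.1 * rng.standard_normal(500)
print(correntropy(signal, max_lag=5))
```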