Parent Structure

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies


The Experts below are selected from a list of 3432 Experts worldwide ranked by the ideXlab platform

M P Saka - One of the best experts on this subject based on the ideXlab platform.

  • The theorems of structural variation for rectangular finite elements for plate flexure
    Computers & Structures, 2005
    Co-Authors: M P Saka
    Abstract:

    The theorems of structural variation predict the forces and displacements throughout a Structure without the need for fresh analysis when the physical properties of one or more members are altered, or even when its topology is changed by the removal of one or more of its elements. It has been shown that a single linear elastic analysis of a Parent Structure under the applied loads and a set of unit-loading cases is sufficient to determine the elastic, non-linear elastic and even elastic-plastic response of a number of related frames. These theorems were later extended to triangular, quadrilateral and solid cubic finite element Structures. In this paper, the theorems of structural variation are extended to cover rectangular finite elements for plate flexure. The unit-loading cases required to study the modification of a single element are derived. The displacements and nodal forces obtained from these unit-loading cases are used to calculate the variation factors. Multiplying the response of the Parent Structure by these variation factors simply yields the response of the new Structures in which one or more members are altered or totally removed. Two examples are included to demonstrate the application of these theorems.
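
The procedure described above — one parent analysis plus unit-loading cases, combined through variation factors — can be illustrated with a minimal sketch. The toy system below (a hypothetical 3-DOF spring chain, not taken from the paper) treats a member modification as a rank-one stiffness update; the variation factor then follows from a single unit-load response, which is the linear-algebra backbone of the approach:

```python
import numpy as np

# Toy 3-DOF spring system: parent stiffness K, applied load f.
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
f = np.array([0.0, 0.0, 1.0])

u_parent = np.linalg.solve(K, f)   # single analysis of the Parent Structure

# Alter one member: its stiffness change is a rank-one update
# K_new = K + alpha * v v^T, where v picks out the member's DOFs.
alpha = -0.5                        # member softened by half its stiffness
v = np.array([0.0, 1.0, -1.0])      # member connecting DOFs 2 and 3

# Unit-loading case: response of the Parent Structure to the pattern v.
u_unit = np.linalg.solve(K, v)

# Variation factor from the parent and unit-load responses.
factor = alpha * (v @ u_parent) / (1.0 + alpha * (v @ u_unit))

# Response of the modified Structure -- no fresh full analysis needed.
u_modified = u_parent - factor * u_unit

# Check against a direct re-analysis of the altered Structure.
K_new = K + alpha * np.outer(v, v)
assert np.allclose(u_modified, np.linalg.solve(K_new, f))
```

Because the small correction reuses `u_parent` and `u_unit`, any number of further modifications of the same member (any `alpha`, including total removal) can be evaluated without assembling or factoring a new stiffness matrix.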

  • The theorems of structural variation for grillage systems
    Computers & Structures, 2000
    Co-Authors: M P Saka
    Abstract:

    The theorems of structural variation predict the forces and displacements throughout a Structure without the need for fresh analysis when the physical properties of one or more of its members are altered. It is only necessary to carry out a single linear analysis of a Parent Structure under the applied loads and a set of unit loadings in order to study the effect of an infinite number of changes in the cross-sectional properties, or even the total removal of a member of the Structure. The member end forces obtained in this analysis are used to calculate the variation factors. The displacements and member end forces of the modified Structure are then obtained by simply multiplying the displacements and member end forces of the Parent Structure by these factors. The earlier applications of these theorems cover rigid frames and some finite element Structures. In this study, the theorems of structural variation are extended to grillage Structures. The unit loading cases required to study the modification of grillage elements are determined. A number of examples are included to demonstrate the simplicity of the method.

  • The theorems of structural variation for solid cubic finite elements
    Computers & Structures, 1998
    Co-Authors: M P Saka
    Abstract:

    The theorems of structural variation predict the forces and displacements throughout a Structure without the need for fresh analysis when the physical properties of one or more of its elements are altered. It has been shown that, by means of these theorems, the elastic, non-linear elastic and elastic–plastic analyses of a number of related frame Structures can be obtained from a simple elastic analysis of a Parent Structure. They were later extended to cover triangular and quadrilateral finite element Structures. In this paper, it is shown that these theorems can also be applied to three-dimensional finite element Structures. For this purpose, eight-noded solid cubic element Structures are considered. The unit loading cases required to study the modification of a single element are derived, and are then used to obtain the variation factors. These factors are utilized to predict the behavior of a cubic element Structure when one or more of its elements are modified or totally removed. It is verified that the accuracy of the results is the same as that of the original finite element discretization of the Parent Structure.

Nir Friedman - One of the best experts on this subject based on the ideXlab platform.

  • "Ideal Parent" Structure Learning for Continuous Variable Networks
    arXiv: Learning, 2012
    Co-Authors: Iftach Nachman, Gal Elidan, Nir Friedman
    Abstract:

    In recent years, there has been growing interest in learning Bayesian networks with continuous variables. Learning the Structure of such networks is a computationally expensive procedure, which limits most applications to parameter learning. This problem is even more acute when learning networks with hidden variables. We present a general method for significantly speeding up the Structure search algorithm for continuous variable networks with common parametric distributions. Importantly, our method facilitates the efficient addition of new hidden variables into the network Structure. We demonstrate the method on several data sets, both for learning Structure on fully observable data and for introducing new hidden variables during Structure search.

  • Ideal Parent Structure Learning for Continuous Variable Bayesian Networks
    Journal of Machine Learning Research, 2007
    Co-Authors: Gal Elidan, Iftach Nachman, Nir Friedman
    Abstract:

    Bayesian networks in general, and continuous variable networks in particular, have become increasingly popular in recent years, largely due to advances in methods that facilitate automatic learning from data. Yet, despite these advances, the key task of learning the Structure of such models remains a computationally intensive procedure, which limits most applications to parameter learning. This problem is even more acute when learning networks in the presence of missing values or hidden variables, a scenario that is part of many real-life problems. In this work we present a general method for speeding up Structure search for continuous variable networks with common parametric distributions. We efficiently evaluate the approximate merit of candidate Structure modifications and apply time-consuming (exact) computations only to the most promising ones, thereby achieving a significant improvement in the running time of the search algorithm. Our method also naturally and efficiently facilitates the addition of useful new hidden variables into the network Structure, a task that is typically considered both conceptually difficult and computationally prohibitive. We demonstrate our method on synthetic and real-life data sets, both for learning Structure on fully and partially observable data, and for introducing new hidden variables during Structure search.
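
The core idea — rank candidate Structure modifications with a cheap approximate score and reserve the exact score for the best few — can be sketched in a toy form. This is not the paper's actual scoring function; it is a hypothetical stand-in that assumes linear Gaussian CPDs, uses the child's residual as an "ideal parent" profile, and ranks candidates by cosine similarity with it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of 6 candidate parent variables and one child.
n, k = 200, 6
Z = rng.normal(size=(n, k))                        # candidate parents
child = 2.0 * Z[:, 3] + 0.1 * rng.normal(size=n)   # true parent: Z[:, 3]

current_pred = np.zeros(n)        # the child currently has no parents
residual = child - current_pred   # an "ideal parent" would explain this

# Cheap approximate merit: similarity between each candidate and the
# residual profile (a stand-in for the ideal-parent similarity score).
def cosine(a, b):
    return abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

scores = [cosine(Z[:, j], residual) for j in range(k)]
ranked = np.argsort(scores)[::-1]

# Exact (expensive) scoring only for the top-ranked candidates; here,
# negated residual variance after a one-parameter least-squares fit.
def exact_score(j):
    beta = (Z[:, j] @ child) / (Z[:, j] @ Z[:, j])
    return -np.var(child - beta * Z[:, j])

best = max(ranked[:2], key=exact_score)
print(best)   # the true parent, Z[:, 3], should win
```

The speedup comes from the asymmetry: the similarity pass is one dot product per candidate, while the exact score (a full CPD fit in the real method) runs only on the short list.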

  • UAI - Ideal Parent Structure learning for continuous variable networks
    2004
    Co-Authors: Iftach Nachman, Gal Elidan, Nir Friedman
    Abstract:

    In recent years, there has been growing interest in learning Bayesian networks with continuous variables. Learning the Structure of such networks is a computationally expensive procedure, which limits most applications to parameter learning. This problem is even more acute when learning networks with hidden variables. We present a general method for significantly speeding up the Structure search algorithm for continuous variable networks with common parametric distributions. Importantly, our method facilitates the efficient addition of new hidden variables into the network Structure. We demonstrate the method on several data sets, both for learning Structure on fully observable data and for introducing new hidden variables during Structure search.

Gal Elidan - One of the best experts on this subject based on the ideXlab platform.

  • "Ideal Parent" Structure Learning for Continuous Variable Networks
    arXiv: Learning, 2012
    Co-Authors: Iftach Nachman, Gal Elidan, Nir Friedman
    Abstract:

    In recent years, there has been growing interest in learning Bayesian networks with continuous variables. Learning the Structure of such networks is a computationally expensive procedure, which limits most applications to parameter learning. This problem is even more acute when learning networks with hidden variables. We present a general method for significantly speeding up the Structure search algorithm for continuous variable networks with common parametric distributions. Importantly, our method facilitates the efficient addition of new hidden variables into the network Structure. We demonstrate the method on several data sets, both for learning Structure on fully observable data and for introducing new hidden variables during Structure search.

  • Ideal Parent Structure Learning for Continuous Variable Bayesian Networks
    Journal of Machine Learning Research, 2007
    Co-Authors: Gal Elidan, Iftach Nachman, Nir Friedman
    Abstract:

    Bayesian networks in general, and continuous variable networks in particular, have become increasingly popular in recent years, largely due to advances in methods that facilitate automatic learning from data. Yet, despite these advances, the key task of learning the Structure of such models remains a computationally intensive procedure, which limits most applications to parameter learning. This problem is even more acute when learning networks in the presence of missing values or hidden variables, a scenario that is part of many real-life problems. In this work we present a general method for speeding up Structure search for continuous variable networks with common parametric distributions. We efficiently evaluate the approximate merit of candidate Structure modifications and apply time-consuming (exact) computations only to the most promising ones, thereby achieving a significant improvement in the running time of the search algorithm. Our method also naturally and efficiently facilitates the addition of useful new hidden variables into the network Structure, a task that is typically considered both conceptually difficult and computationally prohibitive. We demonstrate our method on synthetic and real-life data sets, both for learning Structure on fully and partially observable data, and for introducing new hidden variables during Structure search.

  • UAI - Ideal Parent Structure learning for continuous variable networks
    2004
    Co-Authors: Iftach Nachman, Gal Elidan, Nir Friedman
    Abstract:

    In recent years, there has been growing interest in learning Bayesian networks with continuous variables. Learning the Structure of such networks is a computationally expensive procedure, which limits most applications to parameter learning. This problem is even more acute when learning networks with hidden variables. We present a general method for significantly speeding up the Structure search algorithm for continuous variable networks with common parametric distributions. Importantly, our method facilitates the efficient addition of new hidden variables into the network Structure. We demonstrate the method on several data sets, both for learning Structure on fully observable data and for introducing new hidden variables during Structure search.

Van Der Thijs Hulst - One of the best experts on this subject based on the ideXlab platform.

  • Spin alignment of dark matter halos in filaments and walls
    The Astrophysical Journal, 2007
    Co-Authors: Miguel A. Aragon-calvo, Bernard J. T. Jones, Rien Van De Weygaert, Van Der Thijs Hulst
    Abstract:

    The MMF technique is used to segment the cosmic web as seen in a cosmological N-body simulation into wall-like and filament-like Structures. We find that the spins and shapes of dark matter halos are significantly correlated with each other and with the orientation of their host Structures. The shape orientation is such that the halo minor axes tend to lie perpendicular to the host Structure, be it a wall or filament. The orientation of the halo spin vector is mass-dependent. Low-mass halos in walls and filaments have a tendency to have their spins oriented within the Parent Structure, while higher mass halos in filaments have spins that tend to lie perpendicular to the Parent Structure.

  • Spin alignment of dark matter haloes in filaments and walls
    2006
    Co-Authors: Miguel A. Aragon-calvo, Van De Marinus Weijgaert, Bernard J. T. Jones, Van Der Thijs Hulst
    Abstract:

    The MMF technique is used to segment the cosmic web as seen in a cosmological N-body simulation into wall-like and filament-like Structures. We find that the spins and shapes of dark matter haloes are significantly correlated with each other and with the orientation of their host Structures. The shape orientation is such that the halo minor axes tend to lie perpendicular to the host Structure, be it a wall or filament. The orientation of the halo spin vector is mass-dependent. Low-mass haloes in walls and filaments have a tendency to have their spins oriented within the Parent Structure, while higher mass haloes in filaments have spins that tend to lie perpendicular to the Parent Structure.