Rule Constraint

The experts below are selected from a list of 16,791 experts worldwide, ranked by the ideXlab platform.

Hai Zhuge - One of the best experts on this subject based on the ideXlab platform.

  • The schema theory for semantic link network
    Future Generation Computer Systems, 2010
    Co-Authors: Hai Zhuge, Yunchuan Sun
    Abstract:

    The Semantic Link Network (SLN) is a loosely coupled semantic data model for managing Web resources. Its nodes can be any type of resource and its edges can be any semantic relation. Potential semantic links can be derived according to reasoning rules on semantic relations. This paper proposes a schema theory for the SLN, including the concepts, rule-constraint normal forms, and relevant algorithms. The theory provides the basis for the normalized management of semantic link networks. A case study demonstrates the proposed theory. (A toy sketch of rule-based link derivation appears after this list.)

  • Communities and Emerging Semantics in Semantic Link Network: Discovery and Learning
    IEEE Transactions on Knowledge and Data Engineering, 2009
    Co-Authors: Hai Zhuge
    Abstract:

    The World Wide Web provides plentiful content for Web-based learning, but its hyperlink-based architecture connects Web resources for browsing freely rather than for effective learning. To support effective learning, an e-learning system should be able to discover and make use of the semantic communities and the emerging semantic relations in a dynamic complex network of learning resources. Previous graph-based community discovery approaches are limited in their ability to discover semantic communities. This paper first suggests the semantic link network (SLN), a loosely coupled semantic data model that can semantically link resources and derive implicit semantic links according to a set of relational reasoning rules. By studying the intrinsic relationship between semantic communities and the semantic space of the SLN, approaches to discovering reasoning-constraint, rule-constraint, and classification-constraint semantic communities are proposed (a toy version of the rule-constraint case is included in the sketch after this list). Further, the approaches, principles, and strategies for discovering emerging semantics in dynamic SLNs are studied. The basic laws of semantic link network motion are revealed for the first time. An e-learning environment incorporating the proposed approaches, principles, and strategies to support effective discovery and learning is suggested.

  • Schema theory for semantic link network
    Semantics Knowledge and Grid, 2008
    Co-Authors: Hai Zhuge, Yunchuan Sun, Junsheng Zhang
    Abstract:

    The semantic link network (SLN) is a loosely coupled semantic data model for managing Web resources. Its nodes can be any type of resource and its edges can be any semantic relation. Potential semantic links can be derived according to reasoning rules on semantic relations. This paper proposes a schema theory for the SLN, including the concepts, rule-constraint normal forms, and relevant algorithms. The theory provides the basis for the normalized management of SLNs and their applications. A case study demonstrates the proposed theory.

  • SKG - Schema Theory for Semantic Link Network
    2008 Fourth International Conference on Semantics Knowledge and Grid, 2008
    Co-Authors: Hai Zhuge, Yunchuan Sun, Junsheng Zhang
    Abstract:

    The semantic link network (SLN) is a loosely coupled semantic data model for managing Web resources. Its nodes can be any type of resource and its edges can be any semantic relation. Potential semantic links can be derived according to reasoning rules on semantic relations. This paper proposes a schema theory for the SLN, including the concepts, rule-constraint normal forms, and relevant algorithms. The theory provides the basis for the normalized management of SLNs and their applications. A case study demonstrates the proposed theory.
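
    The abstracts above describe two mechanisms: deriving potential semantic links from explicit ones by applying reasoning rules, and grouping resources into rule-constraint semantic communities. The Python sketch below is a minimal toy illustration of both ideas, assuming an SLN stored as labelled edges; the node names, relation labels, and composition rules are hypothetical examples and are not taken from the papers.

        # Toy Semantic Link Network (SLN): explicit links plus reasoning rules
        # that derive potential semantic links. All names are illustrative.
        from itertools import product

        links = {                               # (source, relation, target)
            ("section1", "partOf", "chapter1"),
            ("chapter1", "partOf", "book"),
            ("book", "sameTopicAs", "survey"),
            ("slides", "cites", "survey"),
        }

        # Reasoning rules on relations: (r1, r2) -> r3 means that
        # a --r1--> b and b --r2--> c together imply a --r3--> c.
        rules = {
            ("partOf", "partOf"): "partOf",          # partOf is transitive
            ("partOf", "sameTopicAs"): "relatedTo",  # hypothetical composition
        }

        def derive(links, rules):
            """Apply the rules until no new semantic link can be derived."""
            derived = set(links)
            while True:
                new = {(a, rules[r1, r2], c)
                       for (a, r1, b), (b2, r2, c) in product(derived, repeat=2)
                       if b == b2 and (r1, r2) in rules}
                if new <= derived:
                    return derived
                derived |= new

        def rule_constraint_community(node, links, rules, allowed):
            """Resources reachable from `node` through derived links whose
            relation is in `allowed` -- a toy rule-constraint community."""
            edges = {(a, c) for a, r, c in derive(links, rules) if r in allowed}
            community, frontier = {node}, {node}
            while frontier:
                frontier = {c for a, c in edges if a in frontier} - community
                community |= frontier
            return community

        print(derive(links, rules) - links)
        # derives e.g. ('section1', 'partOf', 'book')
        print(rule_constraint_community("section1", links, rules,
                                        {"partOf", "relatedTo"}))
        # {'section1', 'chapter1', 'book', 'survey'}; 'slides' is excluded

    The fixed-point loop mirrors the idea that the schema and its rules determine which links are implicitly present, while the community function keeps only resources reachable through links derivable under a chosen subset of relations.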

Yunchuan Sun - One of the best experts on this subject based on the ideXlab platform.

Mitsuru Ishizuka - One of the best experts on this subject based on the ideXlab platform.

  • Fast hypothetical reasoning by parallel processing
    Pacific Rim International Conference on Artificial Intelligence, 2000
    Co-Authors: Yutaka Matsuo, Mitsuru Ishizuka
    Abstract:

    Cost-based hypothetical reasoning is an important framework for knowledge-based systems because it is theoretically well founded and useful for many practical problems. Basically, it tries to find the minimum-cost set of hypotheses that is sufficient for proving a given goal. However, since the inference time of hypothetical reasoning grows exponentially with problem size, slow inference often becomes the most crucial problem in practice. We have developed a new method to efficiently find a near-optimal solution, i.e., a solution whose cost is nearly minimal. This method uses parallel software processors to search for a solution. In order to grasp the search intuitively, we emulate parallel processing on a single processor and develop efficient algorithms. We assume that each variable and each Horn rule (constraint) is a processor that behaves as follows: a variable processor sends messages to lower the cost of a solution, and a constraint processor sends messages to satisfy itself. Our approach is realized mathematically by the augmented Lagrangian method, an efficient parallel computation method. A variable processor holds a (primal) variable taking a value in [0,1]; a constraint processor holds a (dual) variable representing how strongly the constraint should be considered. The messages and updating procedures are defined by mathematical formulae. This approach has two major advantages. First, we can generalize related methods such as our earlier SL method, the breakout method, and Gu's nonlinear optimization method for SAT problems: they are all variants of our approach, obtained by modifying part of the message to be sent. Second, we can design new algorithms superior to previous ones such as the SL method. We experiment with seven different algorithms based on the parallel-processor model and find that two of them stand out in terms of inference time and solution cost. One is similar to the breakout method except that it takes the cost of the solution into account. The other is an entirely new algorithm that adds a new processor for a problematic Horn rule when the search gets stuck in a local minimum. Using this algorithm, we obtain solutions whose average cost is lower than that of any other algorithm compared in our experiments. (A brute-force sketch of the underlying minimum-cost problem appears after this list.)

  • PRICAI - Fast hypothetical reasoning by parallel processing
    PRICAI 2000 Topics in Artificial Intelligence, 2000
    Co-Authors: Yutaka Matsuo, Mitsuru Ishizuka
    Abstract:

    Cost-based hypothetical reasoning is an important framework for knowledge-based systems because it is theoretically well founded and useful for many practical problems. Basically, it tries to find the minimum-cost set of hypotheses that is sufficient for proving a given goal. However, since the inference time of hypothetical reasoning grows exponentially with problem size, slow inference often becomes the most crucial problem in practice. We have developed a new method to efficiently find a near-optimal solution, i.e., a solution whose cost is nearly minimal. This method uses parallel software processors to search for a solution. In order to grasp the search intuitively, we emulate parallel processing on a single processor and develop efficient algorithms. We assume that each variable and each Horn rule (constraint) is a processor that behaves as follows: a variable processor sends messages to lower the cost of a solution, and a constraint processor sends messages to satisfy itself. Our approach is realized mathematically by the augmented Lagrangian method, an efficient parallel computation method. A variable processor holds a (primal) variable taking a value in [0,1]; a constraint processor holds a (dual) variable representing how strongly the constraint should be considered. The messages and updating procedures are defined by mathematical formulae. This approach has two major advantages. First, we can generalize related methods such as our earlier SL method, the breakout method, and Gu's nonlinear optimization method for SAT problems: they are all variants of our approach, obtained by modifying part of the message to be sent. Second, we can design new algorithms superior to previous ones such as the SL method. We experiment with seven different algorithms based on the parallel-processor model and find that two of them stand out in terms of inference time and solution cost. One is similar to the breakout method except that it takes the cost of the solution into account. The other is an entirely new algorithm that adds a new processor for a problematic Horn rule when the search gets stuck in a local minimum. Using this algorithm, we obtain solutions whose average cost is lower than that of any other algorithm compared in our experiments.
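
    To make the problem formulation concrete, the Python sketch below sets up a toy instance of cost-based hypothetical reasoning: a few Horn rules, a set of assumable hypotheses with costs, and a brute-force search for the cheapest hypothesis set that proves the goal. This naive enumeration only illustrates the problem the paper addresses; it is not the parallel augmented-Lagrangian method described above, and all rules, costs, and names are invented for illustration.

        # Toy cost-based hypothetical reasoning: find the cheapest set of
        # hypotheses that, together with the Horn rules, proves the goal.
        # Brute-force enumeration only; illustrative data.
        from itertools import combinations

        rules = [                       # frozenset of body atoms -> head atom
            (frozenset({"h1", "h2"}), "a"),
            (frozenset({"h3"}), "b"),
            (frozenset({"a", "b"}), "goal"),
        ]
        costs = {"h1": 2.0, "h2": 1.0, "h3": 4.0}   # assumable hypotheses

        def proves(hypotheses, goal):
            """Forward-chain the Horn rules from the chosen hypotheses."""
            facts = set(hypotheses)
            changed = True
            while changed:
                changed = False
                for body, head in rules:
                    if body <= facts and head not in facts:
                        facts.add(head)
                        changed = True
            return goal in facts

        def min_cost_solution(goal):
            best, best_cost = None, float("inf")
            for r in range(len(costs) + 1):
                for subset in combinations(costs, r):
                    c = sum(costs[h] for h in subset)
                    if c < best_cost and proves(subset, goal):
                        best, best_cost = set(subset), c
            return best, best_cost

        print(min_cost_solution("goal"))   # ({'h1', 'h2', 'h3'}, 7.0)

    The abstract's method replaces this exponential enumeration by relaxing each hypothesis to a variable processor holding a value in [0,1], attaching a dual multiplier to each Horn-rule constraint, and letting the two kinds of processors exchange augmented-Lagrangian updates until a near-optimal assignment emerges.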

B. Moussallam - One of the best experts on this subject based on the ideXlab platform.

  • Analyticity of $\eta\pi$ isospin-violating form factors and the $\tau \to \eta\pi\nu$ second-class decay
    The European Physical Journal C, 2014
    Co-Authors: S. Descotes-genon, B. Moussallam
    Abstract:

    We consider the evaluation of the $\eta\pi$ isospin-violating vector and scalar form factors relying on a systematic application of analyticity and unitarity, combined with chiral expansion results. It is argued that the usual analyticity properties do hold (i.e. no anomalous thresholds are present) in spite of the instability of the $\eta$ meson in QCD. Unitarity relates the vector form factor to the $\eta\pi \to \pi\pi$ amplitude: we exploit progress in formulating and solving the Khuri–Treiman equations for $\eta \to 3\pi$ and in experimental measurements of the Dalitz plot parameters to evaluate the shape of the $\rho$-meson peak. Observing this peak in the energy distribution of the $\tau \to \eta\pi\nu$ decay would be a background-free signature of a second-class amplitude. The scalar form factor is also estimated from a phase dispersive representation using a plausible model for the $\eta\pi$ elastic scattering $S$-wave phase shift and a sum rule constraint in the inelastic region. We indicate how a possibly exotic nature of the $a_0(980)$ scalar meson manifests itself in a dispersive approach. A remark is finally made on a second-class amplitude in the $\tau \to \pi\pi\nu$ decay.

  • Analyticity of $\eta\pi$ isospin-violating form factors and the $\tau \to \eta\pi\nu$ second-class decay
    European Physical Journal C: Particles and Fields, 2014
    Co-Authors: S. Descotes-genon, B. Moussallam
    Abstract:

    We consider the evaluation of the $\eta\pi$ isospin-violating vector and scalar form factors relying on a systematic application of analyticity and unitarity, combined with chiral expansion results. It is argued that the usual analyticity properties do hold (i.e. no anomalous thresholds are present) in spite of the instability of the $\eta$ meson in QCD. Unitarity relates the vector form factor to the $\eta\pi \to \pi\pi$ amplitude: we exploit progress in formulating and solving the Khuri–Treiman equations for $\eta \to 3\pi$ and in experimental measurements of the Dalitz plot parameters to evaluate the shape of the $\rho$-meson peak. Observing this peak in the energy distribution of the $\tau \to \eta\pi\nu$ decay would be a background-free signature of a second-class amplitude. The scalar form factor is also estimated from a phase dispersive representation using a plausible model for the $\eta\pi$ elastic scattering $S$-wave phase shift and a sum rule constraint in the inelastic region. We indicate how a possibly exotic nature of the $a_0(980)$ scalar meson manifests itself in a dispersive approach. A remark is finally made on a second-class amplitude in the $\tau \to \pi\pi\nu$ decay. (A generic form of such a dispersive representation is recalled below.)
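
    For orientation only, and not as a formula taken from the paper: a form factor whose phase coincides with the elastic scattering phase $\delta(s)$ below the inelastic threshold admits a standard Omnès-type phase dispersive representation,
    $$ f(s) = P(s)\,\exp\!\left[\frac{s}{\pi}\int_{s_{\rm th}}^{\infty}\frac{\delta(s')}{s'\,(s'-s-i\epsilon)}\,\mathrm{d}s'\right], $$
    with $P(s)$ a polynomial. Matching the known normalization and the expected asymptotic fall-off of $f(s)$ then yields a sum rule that constrains the behaviour of the phase in the inelastic region, which is the kind of sum rule constraint referred to in the abstract.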

Yutaka Matsuo - One of the best experts on this subject based on the ideXlab platform.

  • Fast hypothetical reasoning by parallel processing
    Pacific Rim International Conference on Artificial Intelligence, 2000
    Co-Authors: Yutaka Matsuo, Mitsuru Ishizuka
    Abstract:

    Cost-based hypothetical reasoning is an important framework for knowledge-based systems because it is theoretically well founded and useful for many practical problems. Basically, it tries to find the minimum-cost set of hypotheses that is sufficient for proving a given goal. However, since the inference time of hypothetical reasoning grows exponentially with problem size, slow inference often becomes the most crucial problem in practice. We have developed a new method to efficiently find a near-optimal solution, i.e., a solution whose cost is nearly minimal. This method uses parallel software processors to search for a solution. In order to grasp the search intuitively, we emulate parallel processing on a single processor and develop efficient algorithms. We assume that each variable and each Horn rule (constraint) is a processor that behaves as follows: a variable processor sends messages to lower the cost of a solution, and a constraint processor sends messages to satisfy itself. Our approach is realized mathematically by the augmented Lagrangian method, an efficient parallel computation method. A variable processor holds a (primal) variable taking a value in [0,1]; a constraint processor holds a (dual) variable representing how strongly the constraint should be considered. The messages and updating procedures are defined by mathematical formulae. This approach has two major advantages. First, we can generalize related methods such as our earlier SL method, the breakout method, and Gu's nonlinear optimization method for SAT problems: they are all variants of our approach, obtained by modifying part of the message to be sent. Second, we can design new algorithms superior to previous ones such as the SL method. We experiment with seven different algorithms based on the parallel-processor model and find that two of them stand out in terms of inference time and solution cost. One is similar to the breakout method except that it takes the cost of the solution into account. The other is an entirely new algorithm that adds a new processor for a problematic Horn rule when the search gets stuck in a local minimum. Using this algorithm, we obtain solutions whose average cost is lower than that of any other algorithm compared in our experiments.

  • PRICAI - Fast hypothetical reasoning by parallel processing
    PRICAI 2000 Topics in Artificial Intelligence, 2000
    Co-Authors: Yutaka Matsuo, Mitsuru Ishizuka
    Abstract:

    Cost-based hypothetical reasoning is an important framework for knowledge-based systems because it is theoretically well founded and useful for many practical problems. Basically, it tries to find the minimum-cost set of hypotheses that is sufficient for proving a given goal. However, since the inference time of hypothetical reasoning grows exponentially with problem size, slow inference often becomes the most crucial problem in practice. We have developed a new method to efficiently find a near-optimal solution, i.e., a solution whose cost is nearly minimal. This method uses parallel software processors to search for a solution. In order to grasp the search intuitively, we emulate parallel processing on a single processor and develop efficient algorithms. We assume that each variable and each Horn rule (constraint) is a processor that behaves as follows: a variable processor sends messages to lower the cost of a solution, and a constraint processor sends messages to satisfy itself. Our approach is realized mathematically by the augmented Lagrangian method, an efficient parallel computation method. A variable processor holds a (primal) variable taking a value in [0,1]; a constraint processor holds a (dual) variable representing how strongly the constraint should be considered. The messages and updating procedures are defined by mathematical formulae. This approach has two major advantages. First, we can generalize related methods such as our earlier SL method, the breakout method, and Gu's nonlinear optimization method for SAT problems: they are all variants of our approach, obtained by modifying part of the message to be sent. Second, we can design new algorithms superior to previous ones such as the SL method. We experiment with seven different algorithms based on the parallel-processor model and find that two of them stand out in terms of inference time and solution cost. One is similar to the breakout method except that it takes the cost of the solution into account. The other is an entirely new algorithm that adds a new processor for a problematic Horn rule when the search gets stuck in a local minimum. Using this algorithm, we obtain solutions whose average cost is lower than that of any other algorithm compared in our experiments.