Large Knowledge Base

The experts below are selected from a list of 300 experts worldwide, ranked by the ideXlab platform.

Taylan Altan - One of the best experts on this subject based on the ideXlab platform.

  • Tube hydroforming: current research, applications and need for training
    Journal of Materials Processing Technology, 2000
    Co-Authors: Mustafa A Ahmetoglu, K Sutter, X J Li, Taylan Altan
    Abstract:

    Tube hydroforming is a relatively new technology compared to conventional stamping. Thus, there is no large “knowledge base” that can be utilized for process and die design. To remedy this situation, considerable research is now being conducted by various institutions on significant aspects of tube hydroforming technology, including material selection, friction, pre-form design, hydroforming process and tool design, die materials and coatings. ERC/NSM is also conducting R&D in tube hydroforming in association with its industrial partners. This paper summarizes some of the early results in hydroforming of low carbon steel and aluminum alloy 6061-T9 tubes.

  • Tube hydroforming: State-of-the-art and future trends
    Journal of Materials Processing Technology, 2000
    Co-Authors: M. Ahmetoglu, Taylan Altan
    Abstract:

    With the availability of advanced machine designs and controls, tube hydroforming has become an economic alternative to various stamping processes. The technology is relatively new, so there is no large “knowledge base” to assist the product and process designers. This paper reviews the fundamentals of tube hydroforming technology and discusses how various parameters, such as tube material properties, pre-form geometry, lubrication and process control, affect product design and quality. In addition, relations between process variables and achievable part geometry are discussed. Finally, using examples, the status of the current technology and critical issues for future development are reviewed.

William W. Cohen - One of the best experts on this subject based on the ideXlab platform.

  • Efficient inference and learning in a large knowledge base
    Machine Learning, 2015
    Co-Authors: William Yang Wang, Kathryn Mazaitis, Ni Lao, William W. Cohen
    Abstract:

    One important challenge for probabilistic logics is reasoning with very large knowledge bases (KBs) of imperfect information, such as those produced by modern web-scale information extraction systems. One scalability problem shared by many probabilistic logics is that answering queries involves “grounding” the query (i.e., mapping it to a propositional representation), and the size of a “grounding” grows with database size. To address this bottleneck, we present a first-order probabilistic language called ProPPR in which approximate “local groundings” can be constructed in time independent of database size. Technically, ProPPR is an extension to stochastic logic programs that is biased towards short derivations; it is also closely related to an earlier relational learning algorithm called the path ranking algorithm. We show that the problem of constructing proofs for this logic is related to computation of personalized PageRank on a linearized version of the proof space, and based on this connection, we develop a provably-correct approximate grounding scheme, based on the PageRank-Nibble algorithm. Building on this, we develop a fast and easily-parallelized weight-learning algorithm for ProPPR. In our experiments, we show that learning for ProPPR is orders of magnitude faster than learning for Markov logic networks; that allowing mutual recursion (joint learning) in KB inference leads to improvements in performance; and that ProPPR can learn weights for a mutually recursive program with hundreds of clauses defining scores of interrelated predicates over a KB containing one million entities.
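
    The local grounding described in this abstract can be pictured with a push-style approximation of personalized PageRank, the idea behind the PageRank-Nibble algorithm mentioned above. Below is a minimal, self-contained Python sketch under toy assumptions: the proof graph, the function name approximate_ppr, and the parameter values are illustrative, not ProPPR's actual data structures or implementation.

```python
from collections import defaultdict, deque

def approximate_ppr(graph, seed, alpha=0.15, eps=1e-4):
    """Push-style approximate personalized PageRank from a single seed node.

    graph: dict mapping node -> list of neighbour nodes (a toy proof graph).
    seed:  the query node; all restart mass starts here.
    alpha: restart (teleport) probability.
    eps:   per-degree residual tolerance; the amount of work depends on
           alpha and eps rather than on the total size of the graph.
    """
    scores = defaultdict(float)     # approximate PPR scores (the "local grounding")
    residual = defaultdict(float)   # mass not yet pushed
    residual[seed] = 1.0
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        deg = len(graph.get(u, [])) or 1
        if residual[u] / deg < eps:
            continue                # nothing significant left to push here
        # Push: keep an alpha fraction at u, spread the rest over neighbours.
        scores[u] += alpha * residual[u]
        share = (1.0 - alpha) * residual[u] / deg
        residual[u] = 0.0
        for v in graph.get(u, []):
            residual[v] += share
            if residual[v] / (len(graph.get(v, [])) or 1) >= eps:
                queue.append(v)
    return dict(scores)

# Toy proof graph: nodes are partially evaluated queries, edges are inference steps.
proof_graph = {
    "query": ["step1", "step2"],
    "step1": ["answer1"],
    "step2": ["answer1", "answer2"],
    "answer1": [],
    "answer2": [],
}
print(approximate_ppr(proof_graph, "query"))
```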

  • Structure learning via parameter learning
    Conference on Information and Knowledge Management, 2014
    Co-Authors: William Yang Wang, Kathryn Mazaitis, William W. Cohen
    Abstract:

    A key challenge in information and knowledge management is to automatically discover the underlying structures and patterns from large collections of extracted information. This paper presents a novel structure-learning method for a new, scalable probabilistic logic called ProPPR. Our approach builds on the recent success of meta-interpretive learning methods in Inductive Logic Programming (ILP), and further extends it to a framework that enables robust and efficient structure learning of logic programs on graphs: using an abductive second-order probabilistic logic, we show how first-order theories can be automatically generated via parameter learning. To learn better theories, we then propose an iterated structural gradient approach that incrementally refines the hypothesized space of learned first-order structures. In experiments, we show that the proposed method further improves the results, outperforming competitive baselines such as Markov Logic Networks (MLNs) and FOIL on multiple datasets with various settings; and that the proposed approach can learn structures in a large knowledge base in a tractable fashion.
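
    To make the loop described above easier to follow, here is a toy Python sketch of structure learning via parameter learning: a second-order template is instantiated into candidate first-order rules, stand-in weights are fitted, and only well-scoring rules survive into the next round. The relations, facts, scoring stub and threshold are all invented for illustration; the paper's system fits weights with ProPPR's gradient-based learner rather than the simple counting used here.

```python
# Toy knowledge base and training pairs for a target relation "grandparent".
PARENT = {("ann", "bob"), ("bob", "carl"), ("bob", "dana")}
RELATIONS = ["parent", "spouse", "sibling"]
FACTS = {"parent": PARENT, "spouse": set(), "sibling": {("carl", "dana")}}

def instantiate_templates(target):
    # Second-order template  target(X, Y) :- R(X, Z), S(Z, Y)  instantiated
    # with every pair of known relations R, S.
    return [(target, r, s) for r in RELATIONS for s in RELATIONS]

def derives(rule, x, y):
    # Does the rule's body R(x, z), S(z, y) succeed in the toy KB?
    _, r, s = rule
    return any((x, z) in FACTS[r] and (z, y) in FACTS[s]
               for z in {b for _, b in FACTS[r]})

def fit_weights(rules, train):
    # Stand-in for gradient-based weight learning: a rule's "weight" is simply
    # the fraction of training pairs its body derives.
    return {rule: sum(derives(rule, x, y) for x, y in train) / len(train)
            for rule in rules}

def iterated_structural_gradient(target, train, rounds=2, threshold=0.5):
    theory = []
    for _ in range(rounds):
        candidates = [c for c in instantiate_templates(target) if c not in theory]
        weights = fit_weights(theory + candidates, train)
        # Keep only candidates whose fitted weight indicates they explain the data.
        theory += [c for c in candidates if weights[c] > threshold]
    return theory

print(iterated_structural_gradient("grandparent", [("ann", "carl"), ("ann", "dana")]))
```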

  • Efficient Inference and Learning in a Large Knowledge Base: Reasoning with Extracted Information using a Locally Groundable First-Order Probabilistic Logic
    arXiv: Artificial Intelligence, 2014
    Co-Authors: William Yang Wang, Kathryn Mazaitis, Ni Lao, Tom M Mitchell, William W. Cohen
    Abstract:

    One important challenge for probabilistic logics is reasoning with very large knowledge bases (KBs) of imperfect information, such as those produced by modern web-scale information extraction systems. One scalability problem shared by many probabilistic logics is that answering queries involves "grounding" the query (i.e., mapping it to a propositional representation), and the size of a "grounding" grows with database size. To address this bottleneck, we present a first-order probabilistic language called ProPPR in which approximate "local groundings" can be constructed in time independent of database size. Technically, ProPPR is an extension to stochastic logic programs (SLPs) that is biased towards short derivations; it is also closely related to an earlier relational learning algorithm called the path ranking algorithm (PRA). We show that the problem of constructing proofs for this logic is related to computation of personalized PageRank (PPR) on a linearized version of the proof space, and based on this connection, we develop a provably-correct approximate grounding scheme, based on the PageRank-Nibble algorithm. Building on this, we develop a fast and easily-parallelized weight-learning algorithm for ProPPR. In experiments, we show that learning for ProPPR is orders of magnitude faster than learning for Markov logic networks; that allowing mutual recursion (joint learning) in KB inference leads to improvements in performance; and that ProPPR can learn weights for a mutually recursive program with hundreds of clauses, which define scores of interrelated predicates, over a KB containing one million entities.
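
    For contrast with the local push approximation sketched earlier in this section, the quantity it approximates, personalized PageRank on a linearized proof space, can be computed exactly by power iteration when the graph is small. The four-node transition matrix and restart probability below are toy assumptions, not the paper's grounding construction.

```python
import numpy as np

def personalized_pagerank(P, seed, alpha=0.15, iters=200):
    """Exact personalized PageRank by power iteration.

    P:     row-stochastic transition matrix over proof-graph nodes.
    seed:  index of the query node (all restart mass goes there).
    alpha: restart probability, corresponding to the bias towards short derivations.
    """
    n = P.shape[0]
    restart = np.zeros(n)
    restart[seed] = 1.0
    ppr = restart.copy()
    for _ in range(iters):
        ppr = alpha * restart + (1.0 - alpha) * (ppr @ P)
    return ppr

# Four-node toy proof graph: query -> two derivation steps -> one shared answer;
# the answer node returns the walk to the query so the matrix stays stochastic.
P = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0, 0.0],
])
print(personalized_pagerank(P, seed=0))
```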

  • EMNLP-CoNLL - Reading The Web with Learned Syntactic-Semantic Inference Rules
    2012
    Co-Authors: Ni Lao, Amarnag Subramanya, Fernando Pereira, William W. Cohen
    Abstract:

    We study how to extend a large knowledge base (Freebase) by reading relational information from a large Web text corpus. Previous studies on extracting relational knowledge from text show the potential of syntactic patterns for extraction, but they do not exploit background knowledge of other relations in the knowledge base. We describe a distributed, Web-scale implementation of a path-constrained random walk model that learns syntactic-semantic inference rules for binary relations from a graph representation of the parsed text and the knowledge base. Experiments show significant accuracy improvements in binary relation prediction over methods that consider only text, or only the existing knowledge base.
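
    The path-constrained random walk model referred to above assigns, to each relation path, a feature value equal to the probability of reaching the target entity from the source entity by following edges labelled with exactly those relations, in order. A minimal Python sketch follows; the toy graph, relation names and entities are illustrative, not Freebase data or the paper's distributed implementation.

```python
from collections import defaultdict

def path_probability(graph, source, target, path):
    """Probability of reaching `target` from `source` by a random walk that
    follows edges labelled with the relations in `path`, in order.

    graph: dict mapping (node, relation) -> list of neighbour nodes.
    """
    dist = {source: 1.0}
    for relation in path:
        nxt = defaultdict(float)
        for node, prob in dist.items():
            neighbours = graph.get((node, relation), [])
            for nb in neighbours:
                nxt[nb] += prob / len(neighbours)   # uniform choice among edges
        dist = nxt
    return dist.get(target, 0.0)

# Toy graph mixing forward edges and their inverses; nothing like real Freebase.
graph = {
    ("obama", "profession"): ["politician"],
    ("politician", "profession_of"): ["obama", "lincoln"],
}

# Feature value of the relation path <profession, profession_of> for the pair
# (obama, lincoln): 1.0 * 1/2 = 0.5.
print(path_probability(graph, "obama", "lincoln", ["profession", "profession_of"]))
```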

Mustafa A Ahmetoglu - One of the best experts on this subject based on the ideXlab platform.

  • Tube hydroforming: current research, applications and need for training
    Journal of Materials Processing Technology, 2000
    Co-Authors: Mustafa A Ahmetoglu, K Sutter, X J Li, Taylan Altan
    Abstract:

    Tube hydroforming is a relatively new technology compared to conventional stamping. Thus, there is no large “knowledge base” that can be utilized for process and die design. To remedy this situation, considerable research is now being conducted by various institutions on significant aspects of tube hydroforming technology, including material selection, friction, pre-form design, hydroforming process and tool design, die materials and coatings. ERC/NSM is also conducting R&D in tube hydroforming in association with its industrial partners. This paper summarizes some of the early results in hydroforming of low carbon steel and aluminum alloy 6061-T9 tubes.

Gerhard Weikum - One of the best experts on this subject based on the ideXlab platform.

  • International Semantic Web Conference (2) - YAGO: A Multilingual Knowledge Base from Wikipedia, WordNet, and GeoNames
    Lecture Notes in Computer Science, 2016
    Co-Authors: Thomas Rebele, Fabian M. Suchanek, Johannes Hoffart, Joanna Biega, Erdal Kuzey, Gerhard Weikum
    Abstract:

    YAGO is a large knowledge base that is built automatically from Wikipedia, WordNet and GeoNames. The project combines information from Wikipedias in 10 different languages into a coherent whole, thus giving the knowledge a multilingual dimension. It also attaches spatial and temporal information to many facts, and thus allows the user to query the data over space and time. YAGO focuses on extraction quality and achieves a manually evaluated precision of 95%. In this paper, we explain how YAGO is built from its sources, how its quality is evaluated, how a user can access it, and how other projects utilize it.

Sumit Sengupta - One of the best experts on this subject based on the ideXlab platform.

  • EXSENSEL: A rule-based approach to selection of sensors for process variables
    Chemical Engineering & Technology, 1996
    Co-Authors: Alok Barua, Sumit Sengupta
    Abstract:

    A novel expert-system-based method for selection of sensors for process variables is presented. EXSENSEL (expert-system-based sensor selection) deals with 12 process variables (gas chromatography, conductivity, humidity, level, moisture, nuclear radiation, oxygen content, pH, pollution, reaction products, strain, viscosity) and currently has 94 rules in its knowledge base. Despite its large knowledge base, users have to answer only a few questions about the particular process variable, which is selected from a menu of 12 variables. A general description of the chosen process variable can be viewed before invoking the rules. Once a sensor has been selected, a brief write-up on that particular sensor is also available at the user's request. EXSENSEL is the successor to TRANSELEX, a single expert system for the selection of transducers in the area of temperature, pressure, and flow measurement.
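
    The question-and-rule flow described above can be pictured with a small Python sketch: the user picks a process variable from the menu, answers a short list of questions, and if-then rules map the answers to a recommended sensor. The variable, questions, rules and sensor names below are placeholders, not EXSENSEL's actual 94-rule knowledge base.

```python
# Hypothetical process variable, questions, rules and sensor names for one
# menu entry ("level"); the real system covers 12 variables with 94 rules.

QUESTIONS = {
    "level": [
        "Is the process medium electrically conductive?",
        "Is the vessel pressurised?",
    ],
}

RULES = {
    # (process variable, tuple of yes/no answers) -> recommended sensor
    ("level", ("yes", "no")):  "conductive level probe",
    ("level", ("yes", "yes")): "guided-wave radar level transmitter",
    ("level", ("no", "no")):   "ultrasonic level sensor",
    ("level", ("no", "yes")):  "differential-pressure level transmitter",
}

def select_sensor(variable, answers):
    """Match the user's answers for one process variable against the rules."""
    return RULES.get((variable, tuple(answers)),
                     "no matching rule; consult a measurement specialist")

# Example session: the user picked "level" from the menu and answered yes / no.
for q in QUESTIONS["level"]:
    print("Q:", q)
print("Recommended sensor:", select_sensor("level", ["yes", "no"]))
```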