The experts below are selected from a list of 1,278 experts worldwide ranked by the ideXlab platform.
Susumi Hatakeyama - One of the best experts on this subject based on the ideXlab platform.
-
Formal [4 + 1] Cycloaddition of Homopropargyl Alcohols to Diazo Dicarbonyl Compounds Giving Substituted Tetrahydrofurans
ChemInform, 2014. Co-Authors: Fumiya Urabe, Shohei Miyamoto, Keisuke Takahashi, Jun Ishihara, Susumi Hatakeyama. Abstract: This novel ring-closure reaction involves tandem O–H insertion/Conia-ene cyclization under cooperative Rh(II)/Zn(II) catalysis, providing access to substituted tetrahydrofurans with complete (E)-selectivity in the case of nonterminal alkynes (XII).
-
Formal [4 + 1] Cycloaddition of Homopropargyl Alcohols to Diazo Dicarbonyl Compounds Giving Substituted Tetrahydrofurans
Organic Letters, 2014. Co-Authors: Fumiya Urabe, Shohei Miyamoto, Keisuke Takahashi, Jun Ishihara, Susumi Hatakeyama. Abstract: A novel formal [4 + 1]-cycloaddition of readily available homopropargyl alcohols with diazo dicarbonyl compounds is described, which involves tandem O–H insertion/Conia-ene cyclization under cooperative Rh(II)/Zn(II) catalysis. This reaction provides easy access to various substituted tetrahydrofurans and exhibits complete E-selectivity in the case of nonterminal alkynes.
Kosaburo Hashiguchi - One of the best experts on this subject based on the ideXlab platform.
-
Algorithms for Determining the Smallest Number of Nonterminals (States) Sufficient for Generating (Accepting) a Regular Language R with R1 ⊆ R ⊆ R2 for Given Regular Languages R1 and R2
Theoretical Computer Science, 2002. Co-Authors: Kosaburo Hashiguchi. Abstract: Given two regular languages R1 and R2 with R1 ⊆ R2, one can effectively determine the number of nonterminals in a nonterminal-minimal (generalized) right-linear grammar generating a regular language R with R1 ⊆ R ⊆ R2, and the number of states in a state-minimal (generalized) nondeterministic finite automaton accepting a regular language R with R1 ⊆ R ⊆ R2.
-
Algorithms for Determining the Smallest Number of Nonterminals (States) Sufficient for Generating (Accepting) a Regular Language
International Colloquium on Automata, Languages and Programming, 1991. Co-Authors: Kosaburo Hashiguchi. Abstract: There exist algorithms for determining the number of nonterminals in a nonterminal-minimal (generalized) right-linear grammar generating R, and the number of states in a state-minimal (generalized) nondeterministic finite automaton accepting R, for any given regular language R.
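The sandwich condition R1 ⊆ R ⊆ R2 at the heart of these results can be illustrated with a small, self-contained sketch. This is not Hashiguchi's algorithm: the toy DFA, the bound languages, and all names below are ours, and containment is only spot-checked on strings up to a length bound rather than decided exactly.

```python
from itertools import product

ALPHABET = "ab"

def run_dfa(delta, start, finals, w):
    """Run a DFA given as a transition dict; return True if w is accepted."""
    state = start
    for ch in w:
        state = delta[(state, ch)]
    return state in finals

def in_R1(w):          # lower-bound language: a*
    return all(c == "a" for c in w)

def in_R2(w):          # upper-bound language: strings without the factor "bb"
    return "bb" not in w

# A 2-state candidate DFA whose language is exactly a*:
# state 0 is accepting; any 'b' sends the run to the dead state 1.
delta = {(0, "a"): 0, (0, "b"): 1, (1, "a"): 1, (1, "b"): 1}
start, finals = 0, {0}

def sandwiched(max_len=6):
    """Spot-check R1 ⊆ L(DFA) ⊆ R2 on all strings up to max_len."""
    for n in range(max_len + 1):
        for w in map("".join, product(ALPHABET, repeat=n)):
            acc = run_dfa(delta, start, finals, w)
            if in_R1(w) and not acc:
                return False      # violates R1 ⊆ L(DFA)
            if acc and not in_R2(w):
                return False      # violates L(DFA) ⊆ R2
    return True

print(sandwiched())  # True: L(DFA) = a* lies between R1 and R2
```

Hashiguchi's results say the *minimal* number of states (or nonterminals) admitting such a sandwiched R is effectively computable; a real decision procedure would use product-automaton containment checks rather than bounded enumeration.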
Chihiro Shibata - One of the best experts on this subject based on the ideXlab platform.
-
Learning (k, l)-Context-Sensitive Probabilistic Grammars with a Nonparametric Bayesian Approach
Machine Learning, 2021. Co-Authors: Chihiro Shibata. Abstract: Inferring formal grammars with a nonparametric Bayesian approach is one of the most powerful approaches for achieving high accuracy from unsupervised data. In this paper, mildly context-sensitive probabilities, called (k, l)-context-sensitive probabilities, are defined on context-free grammars (CFGs). Inferring CFGs whose rule probabilities are identified from contexts can be seen as a kind of dual approach to distributional learning, in which the contexts characterize the substrings. The data sparsity of the context-sensitive probabilities is handled by the smoothing effect of hierarchical nonparametric Bayesian models such as Pitman–Yor processes (PYPs). We define the hierarchy of PYPs naturally by augmenting the infinite PCFGs. Blocked Gibbs sampling is known to be effective for inferring PCFGs. We show that, by modifying the inside probabilities, blocked Gibbs sampling can be applied to the (k, l)-context-sensitive probabilistic grammars. At the same time, we show that the time complexity for the (k, l)-context-sensitive probabilities of a CFG is $$O(|V|^{l+3}|w|^3)$$ for each sentence w, where V is the set of nonterminals. Since it is computationally too expensive to iterate sufficiently many times, especially when |V| is not small, alternative sampling algorithms are required. Therefore, we propose a new sampling method called composite sampling, in which the sampling procedure is separated into sub-procedures for nonterminals and for derivation trees. Finally, we demonstrate that the inferred (k, 0)-context-sensitive probabilistic grammars can achieve lower perplexities than other probabilistic language models such as PCFGs, n-grams, and HMMs.
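The $$O(|V|^{l+3}|w|^3)$$ bound generalizes the familiar O(|V|^3 |w|^3) cost of computing inside probabilities for a plain PCFG in Chomsky normal form (the l = 0 case). Below is a minimal sketch of that ordinary inside computation; the (k, l)-context-sensitive modification itself is not reproduced, and the toy grammar and probabilities are invented for illustration.

```python
from collections import defaultdict

# Toy PCFG in Chomsky normal form over nonterminals V = {S, A, B}.
binary = {                 # P(X -> Y Z)
    ("S", "A", "B"): 1.0,
    ("A", "A", "A"): 0.5,
    ("B", "B", "B"): 0.5,
}
lexical = {                # P(X -> terminal)
    ("A", "a"): 0.5,
    ("B", "b"): 0.5,
}

def inside(w):
    """CYK-style inside probabilities: three nested nonterminal choices
    per (i, k, j) split give the O(|V|^3 * |w|^3) cost for plain PCFGs;
    conditioning rules on l letters of context raises |V|^3 to |V|^(l+3)."""
    n = len(w)
    chart = defaultdict(float)   # chart[(i, j, X)] = P(X =>* w[i:j])
    for i, ch in enumerate(w):                       # width-1 spans
        for (X, t), p in lexical.items():
            if t == ch:
                chart[(i, i + 1, X)] += p
    for span in range(2, n + 1):                     # wider spans, bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):                # split point
                for (X, Y, Z), p in binary.items():
                    chart[(i, j, X)] += p * chart[(i, k, Y)] * chart[(k, j, Z)]
    return chart[(0, n, "S")]

print(inside("ab"))   # 0.25 = P(S -> A B) * P(A -> a) * P(B -> b)
```

Blocked sampling repeatedly recomputes such charts, which is why the extra |V|^l factor from context-sensitive probabilities quickly becomes prohibitive and motivates the composite sampling method proposed in the paper.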
-
Inferring (k, l)-context-sensitive probabilistic context-free grammars using hierarchical Pitman-Yor processes
2015. Co-Authors: Chihiro Shibata, Makoto Kanazawa, Er Clark, Ryo Yoshinaka. Abstract: Motivated by the idea of applying nonparametric Bayesian models to dual approaches for distributional learning, we define (k, l)-context-sensitive probabilistic context-free grammars (PCFGs) using hierarchical Pitman–Yor processes (PYPs). The data sparseness problem that occurs when inferring context-sensitive probabilities for rules is handled by the smoothing effect of hierarchical PYPs. Many possible definitions or constructions of PYP hierarchies can be used to represent the context sensitivity of derivations of CFGs in Chomsky normal form. In this study, we use the definition that is considered the most natural as an extension of the infinite PCFGs defined in previous studies. A Markov chain Monte Carlo method called blocked Metropolis–Hastings (MH) sampling is known to be effective for inferring PCFGs from unsupervised sentences. Blocked MH sampling is applicable to (k, l)-context-sensitive PCFGs by modifying their so-called inside probabilities. We show that the computational cost of blocked MH sampling for (k, l)-context-sensitive PCFGs is O(|V|^{l+3}|s|^3) for each sentence s, where V is the set of nonterminals. This cost is too high to iterate sufficiently many sampling rounds, especially when l ≠ 0, so we propose an alternative sampling method that separates the sampling procedure into pointwise sampling for nonterminals and blocked sampling for rules. The computational cost of this sampling method is O(min{|s|^l, |V|^l}(|V||s|^2 + |s|^3)).
Gyorgy Vaszil - One of the best experts on this subject based on the ideXlab platform.
-
On the Size Complexity of Non-Returning Context-Free PC Grammar Systems
2013. Co-Authors: Erzsébet Csuhaj-Varjú, Gyorgy Vaszil. Abstract: Improving the previously known best bound, we show that any recursively enumerable language can be generated by a non-returning parallel communicating (PC) grammar system having six context-free components. We also present a non-returning universal PC grammar system generating unary languages, that is, a system where not only the number of components but also the number of productions and the number of nonterminals are limited by certain constants, and these size parameters do not depend on the generated language.
-
Scattered Context Grammars Generate Any Recursively Enumerable Language with Two Nonterminals
Information Processing Letters, 2010. Co-Authors: Erzsébet Csuhaj-Varjú, Gyorgy Vaszil. Abstract: By showing that two nonterminals are sufficient, we present the optimal lower bound on the number of nonterminals needed for scattered context grammars to generate any recursively enumerable language.
Min Zhang - One of the best experts on this subject based on the ideXlab platform.
-
Learning Semantic Representations for Nonterminals in Hierarchical Phrase-Based Translation
Empirical Methods in Natural Language Processing, 2015. Co-Authors: Xing Wang, Deyi Xiong, Min Zhang. Abstract: In hierarchical phrase-based translation, coarse-grained nonterminal Xs may generate inappropriate translations due to the lack of sufficient information for phrasal substitution. In this paper we propose a framework to refine nonterminals in hierarchical translation rules with real-valued semantic representations. The semantic representations are learned via a weighted-mean and a minimum-distance method using phrase vector representations obtained from a large-scale monolingual corpus. Based on the learned semantic vectors, we build a semantic nonterminal refinement model to measure semantic similarities between phrasal substitutions and nonterminal Xs in translation rules. Experimental results on Chinese-English translation show that the proposed model significantly improves translation quality on NIST test sets.
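The weighted-mean representation and similarity scoring described above can be sketched as follows. This is a hedged illustration only: the toy vectors, counts, and all function names are invented, not taken from the paper.

```python
import math

def weighted_mean(vectors, weights):
    """Represent a nonterminal by the count-weighted mean of the
    vectors of the phrases observed filling it."""
    total = sum(weights)
    dim = len(vectors[0])
    return [sum(w * v[d] for v, w in zip(vectors, weights)) / total
            for d in range(dim)]

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy 3-d phrase vectors observed filling nonterminal X in some rule,
# with counts of how often each phrase was seen in that slot.
phrases = [[1.0, 0.0, 1.0], [0.8, 0.2, 0.9]]
counts = [3.0, 1.0]
x_vec = weighted_mean(phrases, counts)

# Score a candidate phrasal substitution against the nonterminal's vector;
# a higher score means a semantically more appropriate substitution.
candidate = [0.9, 0.1, 1.0]
print(round(cosine(x_vec, candidate), 3))
```

In the paper's framework such similarity scores enter the translation model as features, so that decoding prefers substitutions that are semantically close to what the nonterminal slot typically covers.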