Universal Language

The experts below are selected from a list of 4,059 experts worldwide, ranked by the ideXlab platform.

Cleber Zanchettin - One of the best experts on this subject based on the ideXlab platform.

  • Improving Universal Language Model Fine-Tuning using Attention Mechanism
    2019 International Joint Conference on Neural Networks (IJCNN), 2019
    Co-Authors: Flávio A. O. Santos, K. L. Ponce-Guevara, David Macêdo, Cleber Zanchettin
    Abstract:

    Inductive transfer learning is widespread in computer vision applications; however, it remains an under-explored area in natural language processing (NLP). The most common transfer learning method in NLP is the use of pre-trained word embeddings. Universal Language Model Fine-Tuning (ULMFiT) is a recent approach that proposes to train a language model and transfer its knowledge to a final classifier. During the classification step, ULMFiT uses max and average pooling layers to select the useful information from an embedding sequence. We propose to replace the max and average pooling layers with a soft attention mechanism. The goal is to learn the most important information in the embedding sequence rather than assuming it corresponds to the maximum and average values. We evaluate the proposed approach on six datasets and achieve the best performance on all of them compared with approaches from the literature.

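The pooling change described in the abstract above is small and easy to sketch. Below is a minimal, hypothetical PyTorch example (not the authors' code): a concat-pooling helper in the style ULMFiT uses for its classifier head (last hidden state plus max and mean over time) and a soft attention pooling module that instead learns a weighted average over the time steps. All names, the hidden size of 400, and the toy shapes are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAttentionPooling(nn.Module):
    """Learns a weighted average over the time steps of an embedding sequence."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)  # one relevance score per time step

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim)
        scores = self.score(hidden_states).squeeze(-1)       # (batch, seq_len)
        weights = F.softmax(scores, dim=-1)                   # soft attention distribution over time steps
        pooled = torch.bmm(weights.unsqueeze(1), hidden_states).squeeze(1)
        return pooled                                         # (batch, hidden_dim)

def concat_pooling(hidden_states: torch.Tensor) -> torch.Tensor:
    """ULMFiT-style pooling: concatenate the last state with max and mean over time."""
    last = hidden_states[:, -1, :]
    max_pool = hidden_states.max(dim=1).values
    mean_pool = hidden_states.mean(dim=1)
    return torch.cat([last, max_pool, mean_pool], dim=1)

if __name__ == "__main__":
    h = torch.randn(2, 10, 400)                    # (batch, seq_len, hidden_dim); sizes are illustrative
    print(concat_pooling(h).shape)                 # torch.Size([2, 1200])
    print(SoftAttentionPooling(400)(h).shape)      # torch.Size([2, 400])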

Flávio A. O. Santos - One of the best experts on this subject based on the ideXlab platform.

  • Improving Universal Language Model Fine-Tuning using Attention Mechanism
    2019 International Joint Conference on Neural Networks (IJCNN), 2019
    Co-Authors: Flávio A. O. Santos, K. L. Ponce-Guevara, David Macêdo, Cleber Zanchettin
    Abstract:

    Inductive transfer learning is widespread in computer vision applications; however, it remains an under-explored area in natural language processing (NLP). The most common transfer learning method in NLP is the use of pre-trained word embeddings. Universal Language Model Fine-Tuning (ULMFiT) is a recent approach that proposes to train a language model and transfer its knowledge to a final classifier. During the classification step, ULMFiT uses max and average pooling layers to select the useful information from an embedding sequence. We propose to replace the max and average pooling layers with a soft attention mechanism. The goal is to learn the most important information in the embedding sequence rather than assuming it corresponds to the maximum and average values. We evaluate the proposed approach on six datasets and achieve the best performance on all of them compared with approaches from the literature.


K. L. Ponce-Guevara - One of the best experts on this subject based on the ideXlab platform.

  • Improving Universal Language Model Fine-Tuning using Attention Mechanism
    2019 International Joint Conference on Neural Networks (IJCNN), 2019
    Co-Authors: Flávio A. O. Santos, K. L. Ponce-Guevara, David Macêdo, Cleber Zanchettin
    Abstract:

    Inductive transfer learning is widespread in computer vision applications; however, it remains an under-explored area in natural language processing (NLP). The most common transfer learning method in NLP is the use of pre-trained word embeddings. Universal Language Model Fine-Tuning (ULMFiT) is a recent approach that proposes to train a language model and transfer its knowledge to a final classifier. During the classification step, ULMFiT uses max and average pooling layers to select the useful information from an embedding sequence. We propose to replace the max and average pooling layers with a soft attention mechanism. The goal is to learn the most important information in the embedding sequence rather than assuming it corresponds to the maximum and average values. We evaluate the proposed approach on six datasets and achieve the best performance on all of them compared with approaches from the literature.


David Macêdo - One of the best experts on this subject based on the ideXlab platform.

  • Improving Universal Language Model Fine-Tuning using Attention Mechanism
    2019 International Joint Conference on Neural Networks (IJCNN), 2019
    Co-Authors: Flávio A. O. Santos, K. L. Ponce-Guevara, David Macêdo, Cleber Zanchettin
    Abstract:

    Inductive transfer learning is widespread in computer vision applications; however, it remains an under-explored area in natural language processing (NLP). The most common transfer learning method in NLP is the use of pre-trained word embeddings. Universal Language Model Fine-Tuning (ULMFiT) is a recent approach that proposes to train a language model and transfer its knowledge to a final classifier. During the classification step, ULMFiT uses max and average pooling layers to select the useful information from an embedding sequence. We propose to replace the max and average pooling layers with a soft attention mechanism. The goal is to learn the most important information in the embedding sequence rather than assuming it corresponds to the maximum and average values. We evaluate the proposed approach on six datasets and achieve the best performance on all of them compared with approaches from the literature.


Joseph L. Subbiondo - One of the best experts on this subject based on the ideXlab platform.

  • Competing models for a 17th century Universal Language: A study of the dispute between George Dalgarno and John Wilkins
    2007
    Co-Authors: Joseph L. Subbiondo
    Abstract:

    Many 17th-century philosophers, theologians, and educators were engaged in developing a Universal Language to remedy the confusion caused by the multiplicity of languages. George Dalgarno (c.1619-1687), who published Ars Signorum (The Art of Signs) in 1661, and John Wilkins (1614-1672), who published An Essay Toward a Real Character, and a Philosophical Language in 1668, emerged as the two leading theorists and practitioners of the Universal Language movement in England. Early in their work, Dalgarno and Wilkins collaborated, but they soon parted ways and worked separately. In Dalgarno’s “Treatise”, evidence emerges that educational reform may well have been the critical issue that eventually divided Dalgarno and Wilkins. While the scholarly debate about who was the more original language designer is interesting, the most relevant point of the Dalgarno-Wilkins dispute for the history of the 17th-century Universal Language movement is that it sheds light on the movement's failure. Their unresolved struggle, as reflected in their critiques of each other, suggests why a movement that drew significant scholarly attention, and even a sense of urgency, in the early-to-mid 17th century had largely dissolved by the end of that century.

  • Universal Language Schemes and Seventeenth-Century Britain
    Concise History of the Language Sciences, 1995
    Co-Authors: Joseph L. Subbiondo
    Abstract:

    This chapter discusses Universal Language schemes in 17th-century Britain. Seventeenth-century Universal Language schemes in Britain contribute much to an understanding of the history of linguistics and language philosophy: the advocates as well as the practitioners were engaged in a broad range of diverse issues. The Port-Royal Grammar was not restricted to a study of Latin or of any one language; rather, it was concerned with explaining the underlying principles common to all languages. Throughout, its authors used examples drawn from a variety of languages; and, notably, the Grammar was written in French instead of Latin, as was still customary for that period. Seventeenth-century British intellectuals responded almost immediately to the Universal Language project: Oxford became the center of the movement as William Holder, John Wallis, John Ray, Francis Willoughby, Robert Hooke, Samuel Pepys, Seth Ward, George Dalgarno, Francis Lodowyck, and John Wilkins worked there to make the inspiration of Descartes, Bacon, and Comenius a reality.