Language Translator

14,000,000 Leading Edge Experts on the ideXlab platform

The experts below were selected from a list of 31,239 experts worldwide, ranked by the ideXlab platform

David Carter - One of the best experts on this subject based on the ideXlab platform.

  • the spoken Language Translator
    2000
    Co-Authors: Manny Rayner, David Carter, Pierrette Bouillon, Vassilis Digalakis, Mats Wiren
    Abstract:

    This original volume describes the Spoken Language Translator (SLT), one of the first major automatic speech translation projects. The SLT system can translate between English, French, and Swedish in the domain of air travel planning, using a vocabulary of about 1,500 words and with an accuracy of about 75%. The authors detail the language-processing components, largely built on top of the SRI Core Language Engine, using a combination of general grammars and techniques that allow them to be rapidly customized to specific domains. They base speech recognition on Hidden Markov Model technology, and use versions of the SRI DECIPHER system. This account of SLT is an essential resource for researchers interested in knowing what is achievable in spoken-language translation today.

  • hybrid Language processing in the spoken Language Translator
    International Conference on Acoustics Speech and Signal Processing, 1997
    Co-Authors: Manny Rayner, David Carter
    Abstract:

    We present an overview of the Spoken Language Translator (SLT) system's hybrid language-processing architecture, focusing on the way in which rule-based and statistical methods are combined to achieve robust and efficient performance within a linguistically motivated framework. In general, we argue that rules are desirable in order to encode domain-independent linguistic constraints and achieve high-quality grammatical output, while corpus-derived statistics are needed if systems are to be efficient and robust; further, that hybrid architectures are superior from the point of view of portability to architectures which only make use of one type of information. We address the topics of "multi-engine" strategies for robust translation; robust bottom-up parsing using pruning and grammar specialization; rational development of linguistic rule-sets using balanced domain corpora; and efficient supervised training by interactive disambiguation. All work described is fully implemented in the current version of the SLT-2 system.
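
The "multi-engine" strategy described above can be sketched in miniature: a high-precision rule-based engine is tried first, and a more robust corpus-style engine takes over when the rules do not cover the input. The engines, lexicon, and sentences below are toy stand-ins invented for illustration, not the actual SLT implementation.

```python
# Toy rule-based engine: only answers when the whole sentence is covered.
RULES = {
    "show flights to paris": "montrez les vols pour paris",
}

def rule_based(sentence):
    """Return a high-quality translation, or None if the rules don't apply."""
    return RULES.get(sentence)

def statistical_fallback(sentence):
    """Stand-in for a robust corpus-trained engine: crude word-by-word gloss."""
    lexicon = {"show": "montrez", "flights": "vols", "to": "pour",
               "paris": "paris", "cheap": "pas chers"}
    return " ".join(lexicon.get(word, word) for word in sentence.split())

def translate(sentence):
    """Multi-engine strategy: prefer rules, fall back to the robust engine."""
    return rule_based(sentence) or statistical_fallback(sentence)

print(translate("show flights to paris"))        # covered by the rules
print(translate("show cheap flights to paris"))  # rules fail, fallback fires
```

The point of the design is the asymmetry: the rule engine may refuse to answer but never produces ungrammatical output, while the fallback always answers at lower quality.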

  • hybrid Language processing in the spoken Language Translator
    arXiv: Computation and Language, 1997
    Co-Authors: Manny Rayner, David Carter
    Abstract:

    The paper presents an overview of the Spoken Language Translator (SLT) system's hybrid language-processing architecture, focusing on the way in which rule-based and statistical methods are combined to achieve robust and efficient performance within a linguistically motivated framework. In general, we argue that rules are desirable in order to encode domain-independent linguistic constraints and achieve high-quality grammatical output, while corpus-derived statistics are needed if systems are to be efficient and robust; further, that hybrid architectures are superior from the point of view of portability to architectures which only make use of one type of information. We address the topics of "multi-engine" strategies for robust translation; robust bottom-up parsing using pruning and grammar specialization; rational development of linguistic rule-sets using balanced domain corpora; and efficient supervised training by interactive disambiguation. All work described is fully implemented in the current version of the SLT-2 system.

Manny Rayner - One of the best experts on this subject based on the ideXlab platform.

  • rapid construction of a web enabled medical speech to sign Language Translator using recorded video
    International Workshop on Future and Emerging Trends in Language Technology, 2016
    Co-Authors: Farhia Ahmed, Manny Rayner, Pierrette Bouillon, Nikos Tsourakis, Chelle Destefano, Johanna Gerlach, Angela Hooper, Irene Strasly, Catherine Weiss
    Abstract:

    We describe an experiment in which sign language output in Swiss French Sign Language (LSF-CH) and Australian Sign Language (Auslan) was added to a limited-domain medical speech translation system using a recorded-video method. By constructing a suitable web tool to manage the recording procedure, the overhead involved in creating and manipulating the large set of files involved could be made easily manageable, allowing us to focus on the interesting and non-trivial problems which arise at the translation level. Initial experiences with the system suggest that the recorded videos, despite their unprofessional appearance, are readily comprehensible to Deaf informants, and that the method is promising as a simple short-term solution for this type of application.

  • comparing two different bidirectional versions of the limited domain medical spoken Language Translator medslt
    Proceedings of the 12th Annual conference of the European Association for Machine Translation, 2008
    Co-Authors: Marianne Starlander, Manny Rayner, Pierrette Bouillon, Glenn Flores, Nikos Tsourakis
    Abstract:

    This paper reports preliminary results of an evaluation in which two different bidirectional versions of the limited-domain medical spoken language translator MedSLT were compared in a hospital setting. The more restricted version (V.1) only allows yes/no answers and short elliptical sentences, while the less restricted version (V.2) allows yes/no answers, short elliptical sentences and full sentences. Although WER is marginally better for V.1, task performance is marginally worse. There appear to be two main reasons for this disparity: short sentences are often badly recognised, and patients tend to find it difficult to limit themselves to ellipsis, even if they receive clear instructions about not using full sentences.

  • the spoken Language Translator
    2000
    Co-Authors: Manny Rayner, David Carter, Pierrette Bouillon, Vassilis Digalakis, Mats Wiren
    Abstract:

    This original volume describes the Spoken Language Translator (SLT), one of the first major automatic speech translation projects. The SLT system can translate between English, French, and Swedish in the domain of air travel planning, using a vocabulary of about 1,500 words and with an accuracy of about 75%. The authors detail the language-processing components, largely built on top of the SRI Core Language Engine, using a combination of general grammars and techniques that allow them to be rapidly customized to specific domains. They base speech recognition on Hidden Markov Model technology, and use versions of the SRI DECIPHER system. This account of SLT is an essential resource for researchers interested in knowing what is achievable in spoken-language translation today.

  • hybrid Language processing in the spoken Language Translator
    International Conference on Acoustics Speech and Signal Processing, 1997
    Co-Authors: Manny Rayner, David Carter
    Abstract:

    We present an overview of the Spoken Language Translator (SLT) system's hybrid language-processing architecture, focusing on the way in which rule-based and statistical methods are combined to achieve robust and efficient performance within a linguistically motivated framework. In general, we argue that rules are desirable in order to encode domain-independent linguistic constraints and achieve high-quality grammatical output, while corpus-derived statistics are needed if systems are to be efficient and robust; further, that hybrid architectures are superior from the point of view of portability to architectures which only make use of one type of information. We address the topics of "multi-engine" strategies for robust translation; robust bottom-up parsing using pruning and grammar specialization; rational development of linguistic rule-sets using balanced domain corpora; and efficient supervised training by interactive disambiguation. All work described is fully implemented in the current version of the SLT-2 system.

  • hybrid Language processing in the spoken Language Translator
    arXiv: Computation and Language, 1997
    Co-Authors: Manny Rayner, David Carter
    Abstract:

    The paper presents an overview of the Spoken Language Translator (SLT) system's hybrid language-processing architecture, focusing on the way in which rule-based and statistical methods are combined to achieve robust and efficient performance within a linguistically motivated framework. In general, we argue that rules are desirable in order to encode domain-independent linguistic constraints and achieve high-quality grammatical output, while corpus-derived statistics are needed if systems are to be efficient and robust; further, that hybrid architectures are superior from the point of view of portability to architectures which only make use of one type of information. We address the topics of "multi-engine" strategies for robust translation; robust bottom-up parsing using pruning and grammar specialization; rational development of linguistic rule-sets using balanced domain corpora; and efficient supervised training by interactive disambiguation. All work described is fully implemented in the current version of the SLT-2 system.

José Manuel Pardo - One of the best experts on this subject based on the ideXlab platform.

  • speech to sign Language translation system for spanish
    Speech Communication, 2008
    Co-Authors: Ruben Sansegundo, Ricardo De Córdoba, Javier Ferreiros, Juan Manuel Montero, R Barra, Luis Fernando Dharo, F Fernandez, J M Lucas, Javier Maciasguarasa, José Manuel Pardo
    Abstract:

    This paper describes the development of, and first experiments with, a Spanish to sign language translation system in a real domain. The developed system focuses on the sentences spoken by an official when assisting people applying for, or renewing, their Identity Card. The system translates official explanations into Spanish Sign Language (LSE: Lengua de Signos Española) for Deaf people. The translation system is made up of a speech recognizer (for decoding the spoken utterance into a word sequence), a natural language translator (for converting a word sequence into a sequence of signs belonging to the sign language), and a 3D avatar animation module (for playing back the hand movements). Two proposals for natural language translation have been evaluated: a rule-based translation module (which computes sign confidence measures from the word confidence measures obtained in the speech recognition module) and a statistical translation module (in this case, parallel corpora were used for training the statistical model). The best configuration reported 31.6% SER (Sign Error Rate) and 0.5780 BLEU (BiLingual Evaluation Understudy). The paper also describes the eSIGN 3D avatar animation module (considering the sign confidence), and the limitations found when implementing a strategy for reducing the delay between the spoken utterance and the sign sequence animation.
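
The Sign Error Rate (SER) quoted above is, like WER, conventionally computed as the edit distance between the hypothesis and reference sign sequences divided by the reference length. A minimal sketch (the sign glosses below are invented examples, not from the paper's corpus):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences, via dynamic programming."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # i deletions turn a prefix of ref into an empty sequence
    for j in range(n + 1):
        d[0][j] = j  # j insertions build a prefix of hyp from nothing
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # match or substitution
    return d[m][n]

def sign_error_rate(reference, hypothesis):
    """SER = (substitutions + insertions + deletions) / reference length."""
    return edit_distance(reference, hypothesis) / len(reference)

# Invented example: the translator dropped one sign from the reference.
ref = ["DNI", "RENOVAR", "NECESITAR", "FOTO"]
hyp = ["DNI", "RENOVAR", "FOTO"]
print(sign_error_rate(ref, hyp))  # 1 deletion / 4 reference signs = 0.25
```

Note that, unlike BLEU, SER can exceed 1.0 when the hypothesis contains many spurious insertions.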

Panlong Yang - One of the best experts on this subject based on the ideXlab platform.

  • signspeaker a real time high precision smartwatch based sign Language Translator
    ACM IEEE International Conference on Mobile Computing and Networking, 2019
    Co-Authors: Jiahui Hou, Peide Zhu, Zefan Wang, Yu Wang, Jianwei Qian, Panlong Yang
    Abstract:

    Sign language is a natural and fully formed communication method for deaf or hearing-impaired people. Unfortunately, most state-of-the-art sign recognition technologies are limited by either high energy consumption or expensive device costs, and have a difficult time providing a real-time service in a daily-life environment. Inspired by previous works on motion detection with wearable devices, we propose SignSpeaker, a real-time, robust, and user-friendly American Sign Language recognition (ASLR) system built on affordable and portable commodity mobile devices. SignSpeaker is deployed on a smartwatch along with a smartphone; the smartwatch collects the sign signals and the smartphone outputs the translation through an inbuilt loudspeaker. We implement a prototype system and run a series of experiments that demonstrate the promising performance of our system. For example, the average translation time is approximately 1.1 seconds for a sentence with eleven words. The average detection ratio and reliability of sign recognition are 99.2% and 99.5%, respectively, and the average word error rate of continuous sentence recognition is 1.04%.

Ruben Sansegundo - One of the best experts on this subject based on the ideXlab platform.

  • design development and field evaluation of a spanish into sign Language translation system
    Pattern Analysis and Applications, 2012
    Co-Authors: Ruben Sansegundo, Ricardo De Córdoba, Juan Manuel Montero, Luis Fernando Dharo, F Fernandez, Valentin Sama, Veronica Lopezludena, D Sanchez, A Garcia
    Abstract:

    This paper describes the design, development and field evaluation of a machine translation system from Spanish to Spanish Sign Language (LSE: Lengua de Signos Española). The developed system focuses on helping Deaf people when they want to renew their Driver’s License. The system is made up of a speech recognizer (for decoding the spoken utterance into a word sequence), a natural language translator (for converting a word sequence into a sequence of signs belonging to the sign language), and a 3D avatar animation module (for playing back the signs). For the natural language translator, three technological approaches have been implemented and evaluated: an example-based strategy, a rule-based translation method and a statistical translator. For the final version, the implemented language translator combines all the alternatives into a hierarchical structure. This paper includes a detailed description of the field evaluation, which was carried out in the Local Traffic Office in Toledo and involved real government employees and Deaf people. The evaluation includes objective measurements from the system and subjective information from questionnaires. The paper details the main problems found and discusses how to solve them (some of them specific to LSE).
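
One simple way to realize the hierarchical combination of translators described above is to consult the engines in order of expected precision, falling through to the next engine when one declines to answer. The sketch below is purely illustrative (toy data and invented engine behavior under that assumption), not the system's actual combination logic:

```python
# Example-based engine: exact matches remembered from previously seen data.
TRANSLATION_MEMORY = {
    "renovar el carnet": ["CARNET", "RENOVAR"],
}

# Rule-based engine: a word-to-sign lexicon applied compositionally.
RULES = {
    "necesito": ["YO", "NECESITAR"],
    "una": [],          # function word with no sign of its own
    "foto": ["FOTO"],
}

def example_based(utterance):
    return TRANSLATION_MEMORY.get(utterance)

def rule_based(utterance):
    words = utterance.split()
    if all(w in RULES for w in words):  # only fire on full lexical coverage
        return [sign for w in words for sign in RULES[w]]
    return None

def statistical(utterance):
    # Stand-in for a trained model: gloss every word as itself.
    return [w.upper() for w in utterance.split()]

def translate(utterance):
    """Hierarchical combination: most precise engine first, robust engine last."""
    for engine in (example_based, rule_based, statistical):
        signs = engine(utterance)
        if signs is not None:
            return signs

print(translate("renovar el carnet"))  # served from translation memory
print(translate("necesito una foto"))  # served by the rules
print(translate("hola mundo"))         # only the statistical engine answers
```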

  • speech to sign Language translation system for spanish
    Speech Communication, 2008
    Co-Authors: Ruben Sansegundo, Ricardo De Córdoba, Javier Ferreiros, Juan Manuel Montero, R Barra, Luis Fernando Dharo, F Fernandez, J M Lucas, Javier Maciasguarasa, José Manuel Pardo
    Abstract:

    This paper describes the development of, and first experiments with, a Spanish to sign language translation system in a real domain. The developed system focuses on the sentences spoken by an official when assisting people applying for, or renewing, their Identity Card. The system translates official explanations into Spanish Sign Language (LSE: Lengua de Signos Española) for Deaf people. The translation system is made up of a speech recognizer (for decoding the spoken utterance into a word sequence), a natural language translator (for converting a word sequence into a sequence of signs belonging to the sign language), and a 3D avatar animation module (for playing back the hand movements). Two proposals for natural language translation have been evaluated: a rule-based translation module (which computes sign confidence measures from the word confidence measures obtained in the speech recognition module) and a statistical translation module (in this case, parallel corpora were used for training the statistical model). The best configuration reported 31.6% SER (Sign Error Rate) and 0.5780 BLEU (BiLingual Evaluation Understudy). The paper also describes the eSIGN 3D avatar animation module (considering the sign confidence), and the limitations found when implementing a strategy for reducing the delay between the spoken utterance and the sign sequence animation.