Articulatory Phonetics

The Experts below are selected from a list of 231 Experts worldwide, ranked by the ideXlab platform

Hermes Irineu Del Monego – 1st expert on this subject based on the ideXlab platform

  • Consonantal recognition using SVM and a hierarchical decision structure based in the Articulatory Phonetics
    International Conference on Neural Information Processing (ICONIP), 2012
    Co-Authors: A. De Andrade Bresolin, Hermes Irineu Del Monego

    Abstract:

    A new approach to consonantal recognition is proposed in this work, in which sub-word units (phonemes and syllables) are used to perform word recognition. The approach is implemented as a hierarchical decision structure based on Articulatory Phonetics and SVM. The speech features used were MFCC and WPT. Eighteen consonantal phonemes were used in the recognition task, and the database was a set of two-syllable words of the Brazilian Portuguese language. The experimental results showed a success rate of 98.41% for the speaker-dependent case; the focus on the speaker-dependent setting was chosen in order to validate the new proposal.
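
    The abstract describes a two-level design: an articulatory-phonetics grouping of the consonants drives a first decision, and SVMs then resolve the phoneme inside the chosen group. A minimal sketch of that idea follows, assuming scikit-learn SVMs, librosa MFCCs, and a hypothetical manner-of-articulation grouping; none of these choices is taken from the paper itself, which also uses WPT features.

    # Minimal sketch (assumptions flagged above) of a two-level hierarchical
    # SVM consonant classifier: level 1 predicts an articulatory class,
    # level 2 picks the phoneme inside that class.
    import numpy as np
    import librosa
    from sklearn.svm import SVC

    # Hypothetical grouping of a few consonants by manner of articulation;
    # the paper itself uses 18 Brazilian Portuguese consonantal phonemes.
    ARTICULATORY_CLASSES = {
        "plosive":   ["p", "b", "t", "d", "k", "g"],
        "fricative": ["f", "v", "s", "z"],
        "nasal":     ["m", "n"],
    }
    PHONEME_TO_CLASS = {p: c for c, ps in ARTICULATORY_CLASSES.items() for p in ps}

    def mfcc_features(wav_path, sr=16000, n_mfcc=13):
        """Mean MFCC vector for one utterance (stand-in for the MFCC+WPT features)."""
        y, _ = librosa.load(wav_path, sr=sr)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

    class HierarchicalConsonantClassifier:
        def __init__(self):
            self.class_svm = SVC(kernel="rbf")   # level 1: articulatory class
            self.phoneme_svms = {}               # level 2: one SVM per class

        def fit(self, X, phonemes):
            # Assumes every articulatory class is represented by at least two
            # different phonemes in the training data.
            X = np.asarray(X)
            phonemes = np.asarray(phonemes)
            classes = np.array([PHONEME_TO_CLASS[p] for p in phonemes])
            self.class_svm.fit(X, classes)
            for c in ARTICULATORY_CLASSES:
                idx = classes == c
                self.phoneme_svms[c] = SVC(kernel="rbf").fit(X[idx], phonemes[idx])
            return self

        def predict(self, X):
            X = np.asarray(X)
            level1 = self.class_svm.predict(X)
            return [self.phoneme_svms[c].predict(x.reshape(1, -1))[0]
                    for c, x in zip(level1, X)]

    Grouping by manner of articulation is only one possible articulatory criterion; the paper's hierarchy may branch on place of articulation or voicing instead.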

Zhen-hua Ling – 2nd expert on this subject based on the ideXlab platform

  • Target-filtering model based Articulatory movement prediction for Articulatory control of HMM-based speech synthesis
    IEEE 11th International Conference on Signal Processing, 2012
    Co-Authors: Zhen-hua Ling

    Abstract:

    This paper presents a target-filtering model that predicts articulator movements for Articulatory control of hidden Markov model (HMM) based speech synthesis. The model is a bidirectional filtering process applied to the time-aligned sequence of articulation targets, so that it captures both anticipatory and carryover (regressive) coarticulation. Because every parameter of the model has a definite physical meaning, the generation of Articulatory features can be controlled flexibly under the guidance of Articulatory Phonetics. The Articulatory features produced by the target-filtering model can then be fed to a multiple regression HMM (MRHMM) based parametric speech synthesis system, so the pronunciation of vowels is controlled through Articulatory features rather than through the set of context features. Experimental results show that the pronunciation can be shifted among /Θ/, /ɛ/ and /æ/ effectively simply by modifying the articulation targets.
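
    The core mechanism is a filter run over a step-wise, time-aligned target track in both directions, so the smoothed trajectory reflects carryover as well as anticipatory coarticulation. Below is a minimal numeric sketch that assumes a first-order exponential smoother as the filter; the paper's actual filter form and its physically meaningful parameters may differ.

    # Minimal sketch: bidirectional filtering of a time-aligned articulation
    # target sequence. Forward smoothing models carryover (regressive)
    # coarticulation, backward smoothing models anticipatory coarticulation.
    # The exponential smoother and the alpha value are illustrative assumptions.
    import numpy as np

    def expand_targets(targets, durations):
        """Repeat each phone's articulatory target over its duration in frames."""
        return np.concatenate([np.full(d, t, dtype=float)
                               for t, d in zip(targets, durations)])

    def bidirectional_filter(track, alpha=0.1):
        """Smooth a step-wise target track in both directions and average."""
        def one_pass(x):
            y = np.empty_like(x)
            y[0] = x[0]
            for n in range(1, len(x)):
                y[n] = y[n - 1] + alpha * (x[n] - y[n - 1])
            return y
        forward = one_pass(track)                 # carryover coarticulation
        backward = one_pass(track[::-1])[::-1]    # anticipatory coarticulation
        return 0.5 * (forward + backward)

    # Hypothetical example: three phones with tongue-height-like targets
    # (arbitrary units) and durations in frames taken from an HMM alignment.
    targets = [0.2, 0.8, 0.4]
    durations = [20, 30, 25]
    movement = bidirectional_filter(expand_targets(targets, durations))

    In this sketch a smaller alpha makes the simulated articulator approach its target more slowly, which illustrates the kind of physically interpretable control the abstract refers to.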

A. De Andrade Bresolin – 3rd expert on this subject based on the ideXlab platform

  • Consonantal recognition using SVM and a hierarchical decision structure based in the Articulatory Phonetics
    International Conference on Neural Information Processing (ICONIP), 2012
    Co-Authors: A. De Andrade Bresolin, Hermes Irineu Del Monego

    Abstract:

    A new approach to consonantal recognition is proposed in this work, in which sub-word units (phonemes and syllables) are used to perform word recognition. The approach is implemented as a hierarchical decision structure based on Articulatory Phonetics and SVM. The speech features used were MFCC and WPT. Eighteen consonantal phonemes were used in the recognition task, and the database was a set of two-syllable words of the Brazilian Portuguese language. The experimental results showed a success rate of 98.41% for the speaker-dependent case; the focus on the speaker-dependent setting was chosen in order to validate the new proposal.

  • Consonantal Recognition Using SVM and New Hierarchical Decision Structure Based in the Articulatory Phonetics
    Tenth IEEE International Symposium on Multimedia (ISM), 2008
    Co-Authors: A. De Andrade Bresolin, Adrião Duarte Dória Neto, Pablo Javier Alsina

    Abstract:

    In this work, a new approach to consonantal recognition is proposed, in which several sub-word units (phonemes, diphones and syllables) are used to perform word recognition. The approach is implemented as a new hierarchical decision structure based on Articulatory Phonetics and SVMs (support vector machines). The speech features used were MFCCs (mel-frequency cepstral coefficients) and the WPT (wavelet packet transform). Eighteen consonantal phonemes were used in the recognition task, and the database was a set of two-syllable words of the Brazilian Portuguese language. The experimental results showed a success rate of 98.41% for the speaker-dependent case.
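
    Like the 2012 ICONIP paper above, this work pairs MFCCs with wavelet packet transform (WPT) features. The sketch below shows one common way to turn a WPT into a fixed-length feature vector, assuming PyWavelets, a db4 wavelet, a depth-4 decomposition, and a log-energy summary; these are illustrative choices, not the authors' settings.

    # Minimal sketch of wavelet-packet-transform (WPT) energy features.
    # Wavelet family, depth, and the log-energy summary are assumptions.
    import numpy as np
    import pywt

    def wpt_energy_features(frame, wavelet="db4", maxlevel=4):
        """Log energy of each terminal wavelet-packet node for one speech frame."""
        wp = pywt.WaveletPacket(data=frame, wavelet=wavelet, maxlevel=maxlevel)
        leaves = wp.get_level(maxlevel, order="freq")      # low -> high frequency
        energies = np.array([np.sum(np.square(node.data)) for node in leaves])
        return np.log(energies + 1e-12)                    # log-compress, avoid log(0)

    # Example on a synthetic frame; a real system would use windowed speech.
    frame = np.random.randn(1024)
    features = wpt_energy_features(frame)                  # 2**maxlevel = 16 values

    In a full system these WPT energies would be computed per analysis frame and combined with the MFCCs before the hierarchical SVM stage described in the abstract.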