Synthesis Software

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 1299 Experts worldwide ranked by ideXlab platform

T. Hase - One of the best experts on this subject based on the ideXlab platform.

  • SPEECH Synthesis Software WITH A VARIABLE SPEAKING RATE AND ITS IMPLEMENTATION ON A 32-BIT MICROPROCESSOR
    2020
    Co-Authors: T. Ebihara, Y. Ishikawa, T. Sakamoto, Yasuhisa Kisuki, T. Hase
    Abstract:

    This paper describes a new speech Synthesis system that produces speech at a controllable rate. The method is based on the oscillator model, in which output speech of a desired length can be obtained without extracting pitch-synchronous positions. This model has been applied to a residual-excited vocoder to improve the sound quality of synthesized speech. The proposed method is based on the duration of phonemes in natural speech. Phonemes are classified for the time-scale modification algorithm, and this made it possible to easily control the duration of phonemes at various speaking rates. Sound quality evaluation tests confirmed that the quality of sound produced by this new method is better than that produced by existing methods. The method was verified by implementing it as real-time Synthesis Software. The Software required 8 MIPS of CPU power and ran on a 32-bit microprocessor.

  • Speech Synthesis Software with variable speaking rate and its implementation on a 32-bit microprocessor
    2000 Digest of Technical Papers. International Conference on Consumer Electronics. Nineteenth in the Series (Cat. No.00CH37102), 2000
    Co-Authors: T. Ebihara, Y. Ishikawa, Y. Kisuki, T. Sakamoto, T. Hase
    Abstract:

    We proposed a speech Synthesis method with a variable speaking rate based on the phoneme duration features of natural speech. Real-time Synthesis Software requiring 8 MIPS of CPU power was achieved on a 32-bit microprocessor.

  • Speech Synthesis Software with a variable speaking rate and its implementation on a 32-bit microprocessor
    IEEE Transactions on Consumer Electronics, 2000
    Co-Authors: T. Ebihara, Y. Ishikawa, Y. Kisuki, T. Sakamoto, T. Hase
    Abstract:

    This paper describes a new speech Synthesis system that produces speech at a controllable rate. The method is based on the oscillator model, in which output speech of a desired length can be obtained without extracting pitch-synchronous positions. This model has been applied to a residual-excited vocoder to improve the sound quality of synthesized speech. The proposed method is based on the duration of phonemes in natural speech. Phonemes are classified for the time-scale modification algorithm, and this made it possible to easily control the duration of phonemes at various speaking rates. Sound quality evaluation tests confirmed that the quality of sound produced by this new method is better than that produced by existing methods. The method was verified by implementing it as real-time Synthesis Software. The Software required 8 MIPS of CPU power and ran on a 32-bit microprocessor.
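    The phoneme-classified duration control described above can be sketched as follows. This is a minimal illustration only: the vowel/consonant split and the stretch weights are assumptions for the example, not the paper's actual phoneme classification or time-scale modification algorithm.

```python
# Hypothetical sketch of phoneme-classified duration scaling for a
# variable speaking rate. Classes and weights are illustrative only.

VOWELS = set("aiueo")  # assumed vowel set for the example

def scaled_durations(phonemes, rate):
    """Scale phoneme durations for a target speaking rate.

    phonemes: list of (symbol, duration_ms) pairs
    rate: speaking-rate factor (>1 means faster speech, shorter durations)

    Vowels absorb most of the change while consonants stay closer to
    their natural length, mimicking how durations behave in natural speech.
    """
    out = []
    for sym, dur in phonemes:
        weight = 1.0 if sym in VOWELS else 0.3  # consonants resist scaling
        factor = 1.0 + weight * (1.0 / rate - 1.0)
        out.append((sym, dur * factor))
    return out
```

    For example, at double speed (rate 2.0) a 100 ms vowel shrinks to 50 ms while a 100 ms consonant only shrinks to 85 ms under these assumed weights.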

  • Speech Synthesis Software for a 32-bit microprocessor
    IEEE Transactions on Consumer Electronics, 1998
    Co-Authors: Y. Ishikawa, Y. Kisuki, T. Sakamoto, T. Hase
    Abstract:

    This paper describes the development of a new speech Synthesis algorithm with reduced computation and memory consumption, together with implementation technology centered on a specialized embedded MCU architecture. To cope with these challenges, a method using 1-pitch waveforms with a common phase is proposed. In addition, data relocation in memory was used to improve the cache hit rate. Floating-point computation was replaced by 16-bit fixed-point computation to reduce the load on a CPU not equipped with an FPU. The performance of the speech Synthesis Software was verified and evaluated on a model system incorporating all the aforementioned features. The memory size shown to be necessary by the evaluation test was 420 kbytes. Data processing time was also reduced by 25% by relocating code and data in memory. Fixed-point computation produced a processing speed forty times that of floating-point computation. As a result, the total computational load of this speech Synthesis Software was shown to be 1 to 1.5 MIPS.
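    The replacement of floating-point by 16-bit fixed-point arithmetic can be illustrated with a Q15-style format (1 sign bit, 15 fractional bits). The paper does not specify its exact fixed-point format, so Q15 here is an assumption for the sketch.

```python
# Illustrative Q15 fixed-point arithmetic; the actual format used by
# the paper's implementation is not specified.

SCALE = 1 << 15  # Q15: values in [-1, 1) stored as 16-bit integers

def to_q15(x):
    """Quantize a float in [-1, 1) to a Q15 integer, with saturation."""
    return max(-SCALE, min(SCALE - 1, int(round(x * SCALE))))

def q15_mul(a, b):
    """Multiply two Q15 values; the 32-bit product is shifted back to Q15."""
    return (a * b) >> 15

def from_q15(q):
    """Convert a Q15 integer back to a float for inspection."""
    return q / SCALE
```

    On a CPU without an FPU, every operation here is an integer multiply and shift, which is why the trade of precision for speed pays off so heavily.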

T. Ebihara - One of the best experts on this subject based on the ideXlab platform.

  • SPEECH Synthesis Software WITH A VARIABLE SPEAKING RATE AND ITS IMPLEMENTATION ON A 32-BIT MICROPROCESSOR
    2020
    Co-Authors: T. Ebihara, Y. Ishikawa, T. Sakamoto, Yasuhisa Kisuki, T. Hase
    Abstract:

    This paper describes a new speech Synthesis system that produces speech at a controllable rate. The method is based on the oscillator model, in which output speech of a desired length can be obtained without extracting pitch-synchronous positions. This model has been applied to a residual-excited vocoder to improve the sound quality of synthesized speech. The proposed method is based on the duration of phonemes in natural speech. Phonemes are classified for the time-scale modification algorithm, and this made it possible to easily control the duration of phonemes at various speaking rates. Sound quality evaluation tests confirmed that the quality of sound produced by this new method is better than that produced by existing methods. The method was verified by implementing it as real-time Synthesis Software. The Software required 8 MIPS of CPU power and ran on a 32-bit microprocessor.

  • Speech Synthesis Software with variable speaking rate and its implementation on a 32-bit microprocessor
    2000 Digest of Technical Papers. International Conference on Consumer Electronics. Nineteenth in the Series (Cat. No.00CH37102), 2000
    Co-Authors: T. Ebihara, Y. Ishikawa, Y. Kisuki, T. Sakamoto, T. Hase
    Abstract:

    We proposed a speech Synthesis method with a variable speaking rate based on the phoneme duration features of natural speech. Real-time Synthesis Software requiring 8 MIPS of CPU power was achieved on a 32-bit microprocessor.

  • Speech Synthesis Software with a variable speaking rate and its implementation on a 32-bit microprocessor
    IEEE Transactions on Consumer Electronics, 2000
    Co-Authors: T. Ebihara, Y. Ishikawa, Y. Kisuki, T. Sakamoto, T. Hase
    Abstract:

    This paper describes a new speech Synthesis system that produces speech at a controllable rate. The method is based on the oscillator model, in which output speech of a desired length can be obtained without extracting pitch-synchronous positions. This model has been applied to a residual-excited vocoder to improve the sound quality of synthesized speech. The proposed method is based on the duration of phonemes in natural speech. Phonemes are classified for the time-scale modification algorithm, and this made it possible to easily control the duration of phonemes at various speaking rates. Sound quality evaluation tests confirmed that the quality of sound produced by this new method is better than that produced by existing methods. The method was verified by implementing it as real-time Synthesis Software. The Software required 8 MIPS of CPU power and ran on a 32-bit microprocessor.

Y. Ishikawa - One of the best experts on this subject based on the ideXlab platform.

  • SPEECH Synthesis Software WITH A VARIABLE SPEAKING RATE AND ITS IMPLEMENTATION ON A 32-BIT MICROPROCESSOR
    2020
    Co-Authors: T. Ebihara, Y. Ishikawa, T. Sakamoto, Yasuhisa Kisuki, T. Hase
    Abstract:

    This paper describes a new speech Synthesis system that produces speech at a controllable rate. The method is based on the oscillator model, in which output speech of a desired length can be obtained without extracting pitch-synchronous positions. This model has been applied to a residual-excited vocoder to improve the sound quality of synthesized speech. The proposed method is based on the duration of phonemes in natural speech. Phonemes are classified for the time-scale modification algorithm, and this made it possible to easily control the duration of phonemes at various speaking rates. Sound quality evaluation tests confirmed that the quality of sound produced by this new method is better than that produced by existing methods. The method was verified by implementing it as real-time Synthesis Software. The Software required 8 MIPS of CPU power and ran on a 32-bit microprocessor.

  • Speech Synthesis Software with variable speaking rate and its implementation on a 32-bit microprocessor
    2000 Digest of Technical Papers. International Conference on Consumer Electronics. Nineteenth in the Series (Cat. No.00CH37102), 2000
    Co-Authors: T. Ebihara, Y. Ishikawa, Y. Kisuki, T. Sakamoto, T. Hase
    Abstract:

    We proposed a speech Synthesis method with a variable speaking rate based on the phoneme duration features of natural speech. Real-time Synthesis Software requiring 8 MIPS of CPU power was achieved on a 32-bit microprocessor.

  • Speech Synthesis Software with a variable speaking rate and its implementation on a 32-bit microprocessor
    IEEE Transactions on Consumer Electronics, 2000
    Co-Authors: T. Ebihara, Y. Ishikawa, Y. Kisuki, T. Sakamoto, T. Hase
    Abstract:

    This paper describes a new speech Synthesis system that produces speech at a controllable rate. The method is based on the oscillator model, in which output speech of a desired length can be obtained without extracting pitch-synchronous positions. This model has been applied to a residual-excited vocoder to improve the sound quality of synthesized speech. The proposed method is based on the duration of phonemes in natural speech. Phonemes are classified for the time-scale modification algorithm, and this made it possible to easily control the duration of phonemes at various speaking rates. Sound quality evaluation tests confirmed that the quality of sound produced by this new method is better than that produced by existing methods. The method was verified by implementing it as real-time Synthesis Software. The Software required 8 MIPS of CPU power and ran on a 32-bit microprocessor.

  • Speech Synthesis Software for a 32-bit microprocessor
    IEEE Transactions on Consumer Electronics, 1998
    Co-Authors: Y. Ishikawa, Y. Kisuki, T. Sakamoto, T. Hase
    Abstract:

    This paper describes the development of a new speech Synthesis algorithm with reduced computation and memory consumption, together with implementation technology centered on a specialized embedded MCU architecture. To cope with these challenges, a method using 1-pitch waveforms with a common phase is proposed. In addition, data relocation in memory was used to improve the cache hit rate. Floating-point computation was replaced by 16-bit fixed-point computation to reduce the load on a CPU not equipped with an FPU. The performance of the speech Synthesis Software was verified and evaluated on a model system incorporating all the aforementioned features. The memory size shown to be necessary by the evaluation test was 420 kbytes. Data processing time was also reduced by 25% by relocating code and data in memory. Fixed-point computation produced a processing speed forty times that of floating-point computation. As a result, the total computational load of this speech Synthesis Software was shown to be 1 to 1.5 MIPS.

T. Sakamoto - One of the best experts on this subject based on the ideXlab platform.

  • SPEECH Synthesis Software WITH A VARIABLE SPEAKING RATE AND ITS IMPLEMENTATION ON A 32-BIT MICROPROCESSOR
    2020
    Co-Authors: T. Ebihara, Y. Ishikawa, T. Sakamoto, Yasuhisa Kisuki, T. Hase
    Abstract:

    This paper describes a new speech Synthesis system that produces speech at a controllable rate. The method is based on the oscillator model, in which output speech of a desired length can be obtained without extracting pitch-synchronous positions. This model has been applied to a residual-excited vocoder to improve the sound quality of synthesized speech. The proposed method is based on the duration of phonemes in natural speech. Phonemes are classified for the time-scale modification algorithm, and this made it possible to easily control the duration of phonemes at various speaking rates. Sound quality evaluation tests confirmed that the quality of sound produced by this new method is better than that produced by existing methods. The method was verified by implementing it as real-time Synthesis Software. The Software required 8 MIPS of CPU power and ran on a 32-bit microprocessor.

  • Speech Synthesis Software with variable speaking rate and its implementation on a 32-bit microprocessor
    2000 Digest of Technical Papers. International Conference on Consumer Electronics. Nineteenth in the Series (Cat. No.00CH37102), 2000
    Co-Authors: T. Ebihara, Y. Ishikawa, Y. Kisuki, T. Sakamoto, T. Hase
    Abstract:

    We proposed a speech Synthesis method with a variable speaking rate based on the phoneme duration features of natural speech. Real-time Synthesis Software requiring 8 MIPS of CPU power was achieved on a 32-bit microprocessor.

  • Speech Synthesis Software with a variable speaking rate and its implementation on a 32-bit microprocessor
    IEEE Transactions on Consumer Electronics, 2000
    Co-Authors: T. Ebihara, Y. Ishikawa, Y. Kisuki, T. Sakamoto, T. Hase
    Abstract:

    This paper describes a new speech Synthesis system that produces speech at a controllable rate. The method is based on the oscillator model, in which output speech of a desired length can be obtained without extracting pitch-synchronous positions. This model has been applied to a residual-excited vocoder to improve the sound quality of synthesized speech. The proposed method is based on the duration of phonemes in natural speech. Phonemes are classified for the time-scale modification algorithm, and this made it possible to easily control the duration of phonemes at various speaking rates. Sound quality evaluation tests confirmed that the quality of sound produced by this new method is better than that produced by existing methods. The method was verified by implementing it as real-time Synthesis Software. The Software required 8 MIPS of CPU power and ran on a 32-bit microprocessor.

  • Speech Synthesis Software for a 32-bit microprocessor
    IEEE Transactions on Consumer Electronics, 1998
    Co-Authors: Y. Ishikawa, Y. Kisuki, T. Sakamoto, T. Hase
    Abstract:

    This paper describes the development of a new speech Synthesis algorithm with reduced computation and memory consumption, together with implementation technology centered on a specialized embedded MCU architecture. To cope with these challenges, a method using 1-pitch waveforms with a common phase is proposed. In addition, data relocation in memory was used to improve the cache hit rate. Floating-point computation was replaced by 16-bit fixed-point computation to reduce the load on a CPU not equipped with an FPU. The performance of the speech Synthesis Software was verified and evaluated on a model system incorporating all the aforementioned features. The memory size shown to be necessary by the evaluation test was 420 kbytes. Data processing time was also reduced by 25% by relocating code and data in memory. Fixed-point computation produced a processing speed forty times that of floating-point computation. As a result, the total computational load of this speech Synthesis Software was shown to be 1 to 1.5 MIPS.

David Worrall - One of the best experts on this subject based on the ideXlab platform.

  • Sonipy : the design of an extendable Software framework for sonification research and auditory display
    2020
    Co-Authors: David Worrall, Michael Bylstra, Stephen Barrass, Roger T. Dean
    Abstract:

    The need for better Software tools was highlighted in the 1997 Sonification Report [1]. It included some general proposals for adapting sound Synthesis Software to the needs of sonification research. Now, a decade later, it is evident that the demands on Software by sonification research are greater than those afforded by music composition and sound Synthesis Software. This paper compares some major contributions made towards achieving the Report’s proposals with current sonification demands and outlines SoniPy, a broader and more robust model which can integrate the expertise and prior development of Software components using a public-domain community-development approach.

  • OVERCOMING Software INERTIA IN DATA SONIFICATION RESEARCH USING THE SoniPy FRAMEWORK
    2020
    Co-Authors: David Worrall
    Abstract:

    It has been assumed that the much-needed development of data sonification Software would occur from the adaptation of sound Synthesis Software, principally that developed for computer music. As the Software demands of data sonification research grow, some limitations of this approach are becoming evident. This paper outlines an extendable Software framework, called SoniPy, which attempts to redress some of those limitations.

  • Towards a Data Sonification Design Framework
    Human–Computer Interaction Series, 2019
    Co-Authors: David Worrall
    Abstract:

    The need for better Software tools for data sonification was highlighted in the 1997 Sonification Report, the first comprehensive status review of the field, which included some general proposals for adapting sound Synthesis Software to the needs of sonification research. It outlined the reasons the demands placed on Software by sonification research are greater than those met by music composition and sound Synthesis Software alone. As its Sample Research Proposal acknowledged, the development of a comprehensive sonification shell is not easy, and the depth and breadth of knowledge and skills required to effect such a project are easily underestimated. Although many of the tools developed to date have various degrees of flexibility and power for the integration of sound Synthesis and data processing, a complete heterogeneous Data Sonification Design Framework (DSDF) for research and auditory display has not yet emerged. This chapter outlines the requirements for such a comprehensive framework, and proposes an integration of various existing independent components, such as those for data acquisition, storage and analysis, together with a means to include new work on cognitive and perceptual mappings, and on user interface and control, by encapsulating them, or control of them, as Python libraries, as well as wrappers for new initiatives, which together form the basis of SoniPy, a comprehensive toolkit for computational sonification design.
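    The kind of parameter-mapping component such a framework would integrate can be sketched in a few lines of Python. This is a hypothetical illustration only: `map_to_pitch` and its frequency range are assumptions for the example, not part of the SoniPy API.

```python
# Minimal parameter-mapping sonification sketch: data values are mapped
# onto a pitch range so that equal data steps give equal musical intervals.
# Function name and defaults are illustrative, not from SoniPy.

def map_to_pitch(values, f_lo=220.0, f_hi=880.0):
    """Map data values onto frequencies (Hz) between f_lo and f_hi.

    The mapping is exponential in frequency, so the data range is spread
    evenly across the perceived (logarithmic) pitch scale.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid dividing by zero for constant data
    return [f_lo * (f_hi / f_lo) ** ((v - lo) / span) for v in values]
```

    With the defaults above, the smallest value maps to 220 Hz, the largest to 880 Hz, and the data midpoint to 440 Hz, one octave above the bottom of the range; a full framework would then hand these frequencies to a Synthesis back end.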