Microphone

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies


The Experts below are selected from a list of 70,932 Experts worldwide ranked by the ideXlab platform

Ruth A. Bentler - One of the best experts on this subject based on the ideXlab platform.

  • Predictive measures of directional benefit. Part 1: Estimating the directivity index on a manikin
    Ear and Hearing, 2007
    Co-Authors: Andrew Dittberner, Ruth A. Bentler
    Abstract:

OBJECTIVE: In this investigation, a method for computing a directivity index (DI) on a manikin for directional microphones in hearing aids was proposed and compared with other conventional methods. DESIGN: Test devices included first- and second-order directional microphones. Signal presentation, implemented in an anechoic chamber, involved a single noise source rotated completely around the directional microphone in a hearing aid, in free field and on a manikin, at a defined radius. The area covered was equivalent to the approximate surface area of a sphere. It was anticipated that an equal angular resolution of 10 degrees (elevation and azimuth) would effectively estimate the DI of first-, second-, and higher-order directional microphone systems located in a hearing aid on a manikin. A total of 450 spatially varied presentation points were analyzed, each weighted in reference to direction of arrival on the directional microphone. RESULTS: Empirical differences between the DI derived from the 3D-DI method proposed in this investigation and the conventionally derived 2D-DI on a manikin were as large as 3.8 dB in the higher frequencies, depending on the device under test. CONCLUSIONS: The magnitude of these differences depended on the microphone location of the device under test: the further the microphone was placed into the ear of the manikin, the larger the empirical differences.
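
As a sketch of how such a weighted spherical average turns into a DI, the snippet below estimates the index from gains measured on an equal-angle grid, weighting each point by the solid angle it covers (the cosine of its elevation). The function name, interface, and the free-field power-averaging model are our assumptions, not the paper's implementation:

```python
import numpy as np

def directivity_index(gains_db, elevations_deg):
    """Estimate a directivity index (DI) from gains measured on a sphere.

    gains_db: array of shape (n_elevations, n_azimuths) holding the measured
    magnitude response in dB at each point of an equal-angle grid (e.g. the
    10-degree resolution used in the study). Each grid point is weighted by
    cos(elevation), proportional to the solid angle it covers, so points
    near the poles contribute less to the diffuse-field average.
    """
    power = 10.0 ** (np.asarray(gains_db, dtype=float) / 10.0)
    weights = np.cos(np.radians(np.asarray(elevations_deg, dtype=float)))
    # weighted average over elevation, uniform average over azimuth
    mean_power = np.average(power, axis=0, weights=weights).mean()
    on_axis = power.max()  # assume the loudest direction is on-axis
    return 10.0 * np.log10(on_axis / mean_power)
```

For an omnidirectional response (0 dB everywhere) the estimate is 0 dB, as expected; a first-order cardioid measured on the same grid should come out near its theoretical value of about 4.8 dB.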

  • Evaluation of a second-order directional microphone hearing aid: I. Speech perception outcomes
    Journal of The American Academy of Audiology, 2006
    Co-Authors: Ruth A. Bentler, Catherine V Palmer, Gustav H Mueller
    Abstract:

This clinical trial was undertaken to evaluate the benefit obtained from hearing aids employing second-order adaptive directional microphone technology, used in conjunction with digital noise reduction. Data were collected for 49 subjects across two sites. New and experienced hearing aid users were fit bilaterally with behind-the-ear hearing aids using the National Acoustic Laboratories Nonlinear version 1 (NAL-NL1) prescriptive method, with manufacturer default settings for various signal-processing parameters (e.g., noise reduction, compression). Laboratory results indicated that (1) in the stationary noise environment, directional microphones provided better speech perception than omnidirectional microphones, regardless of the number of microphones; and (2) in the moving noise environment, the three-microphone option (whether in adaptive or fixed mode) and the two-microphone option in its adaptive mode resulted in better performance than the two-microphone fixed mode or the omnidirectional modes.

  • Method for calculating the directivity index of a directional microphone in a hearing aid on a manikin
    Journal of the Acoustical Society of America, 2005
    Co-Authors: Andrew Dittberner, Ruth A. Bentler
    Abstract:

A method for computing a directivity index (DI) on a manikin for directional microphones in hearing aids is proposed and investigated. Test devices included first- and second-order directional microphones in hearing aids. Signal presentation involved a single noise source rotated completely around the directional microphone, in free field and on a manikin, at a defined radius. The area covered was equivalent to the approximate surface area of a sphere. It was anticipated that an equal angular resolution of 10 degrees (elevation and azimuth) would effectively estimate the DI of first-, second-, and higher-order directional microphone systems located in a hearing aid on a manikin. A total of 450 spatially varied presentation points were analyzed, each weighted in reference to direction of arrival on the directional microphone. The absolute difference between the directivity index derived from the modified method proposed in this investigation and the conventionally derived directivity index on a manikin was as ...

Paris Smaragdis - One of the best experts on this subject based on the ideXlab platform.

  • Communication-cost aware microphone selection for neural speech enhancement with ad hoc microphone arrays
    International Conference on Acoustics Speech and Signal Processing, 2021
    Co-Authors: Jonah Casebeer, Jamshed Kaikaus, Paris Smaragdis
    Abstract:

In this paper, we present a method for jointly learning a microphone selection mechanism and a speech enhancement network for multi-channel speech enhancement with an ad hoc microphone array. The attention-based microphone selection mechanism is trained to reduce communication costs through a penalty term which represents a task-performance/communication-cost trade-off. Within this trade-off, our method can intelligently stream from more microphones in lower-SNR scenes and fewer microphones in higher-SNR scenes. We evaluate the model in complex echoic acoustic scenes with moving sources and show that it matches the performance of models that stream from a fixed number of microphones while reducing communication costs.
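
The penalty idea can be illustrated with a toy gate: each microphone gets a "stream or not" probability, the training objective adds the expected fraction of streamed channels scaled by a trade-off weight, and at inference the gate is thresholded. This is a hedged sketch of the trade-off term only (the names, the sigmoid gating, and the threshold are our assumptions), not the paper's network:

```python
import numpy as np

def communication_penalty(gate_logits, cost_weight):
    """Penalty added to the enhancement loss: cost_weight times the expected
    fraction of microphones that will be streamed (sigmoid of the
    per-microphone gate logits). A larger cost_weight pushes the selector
    toward streaming from fewer microphones."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(gate_logits, dtype=float)))
    return cost_weight * probs.mean()

def select_microphones(gate_logits, threshold=0.5):
    """Hard selection at inference: stream only the microphones whose gate
    probability exceeds the threshold."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(gate_logits, dtype=float)))
    return np.flatnonzero(probs > threshold)
```

In a low-SNR scene the task loss keeps many gate logits high despite the penalty; in a high-SNR scene the penalty dominates and most gates close, which is the adaptive behavior the abstract describes.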

P A Chou - One of the best experts on this subject based on the ideXlab platform.

  • Energy-based sound source localization and gain normalization for ad hoc microphone arrays
    International Conference on Acoustics Speech and Signal Processing, 2007
    Co-Authors: Zhengyou Zhang, Liwei He, P A Chou
    Abstract:

We present an energy-based technique to estimate both microphone and speaker/talker locations from an ad hoc network of microphones. An example of such an ad hoc microphone network is the set of microphones built into the laptops that meeting participants bring into a meeting room. Compared with traditional sound source localization approaches based on time of flight, our technique does not require accurate synchronization, nor does it require each laptop to emit special signals. We estimate the meeting participants' positions based on the average energies of their speech signals. In addition, we present a technique, independent of the speakers' volumes, to estimate the relative gains of the microphones. This is crucial for aggregating the various audio channels from the ad hoc microphone network into a single stream for audio conferencing.
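
The volume-independence trick can be sketched in the log-energy domain. Assuming the common free-field model E = S * g^2 / d^2 (speaker energy S, microphone gain g, distance d; our notation, not necessarily the paper's exact formulation), subtracting each speaker's mean log energy cancels the unknown source energies, leaving the relative gains:

```python
import numpy as np

def relative_mic_gains(energies, distances):
    """Estimate relative microphone gains, independent of speaker volume.

    energies, distances: arrays of shape (n_speakers, n_mics), where
    energies[j, i] is the average speech energy of speaker j at mic i and
    distances[j, i] the corresponding distance. Under the assumed model
    E[j, i] = S_j * g_i**2 / d[j, i]**2, working in the log domain and
    removing each speaker's row mean cancels the unknown S_j; what remains
    after compensating the known distances is 2*log(g_i) up to one global
    constant, fixed here by normalizing mic 0's gain to 1.
    """
    log_e = np.log(np.asarray(energies, dtype=float))
    log_d = np.log(np.asarray(distances, dtype=float))
    resid = log_e + 2.0 * log_d                  # remove the 1/d^2 term
    resid -= resid.mean(axis=1, keepdims=True)   # cancel per-speaker volume
    log_g = 0.5 * resid.mean(axis=0)             # average over speakers
    return np.exp(log_g - log_g[0])              # gain of mic 0 set to 1
```

With synthetic energies generated from that model, the estimator recovers the true gain ratios exactly, no matter how loud each speaker is.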

  • Energy-Based Position Estimation of Microphones and Speakers for Ad Hoc Microphone Arrays
    2007 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2007
    Co-Authors: Minghua Chen, P A Chou, Zicheng Liu, Zhengyou Zhang
    Abstract:

We present a novel energy-based algorithm to estimate the positions of microphones and speakers in an ad hoc microphone array setting. Compared to traditional time-of-flight based approaches, the energy-based approach has the advantage that it does not require accurate time synchronization. This property is particularly useful for ad hoc microphone arrays, because highly accurate synchronization across microphones may be difficult to obtain since these microphones usually belong to different devices. This new algorithm extends our previous energy-based position estimation algorithm [1] in that it does not assume the speakers are in the same positions as their corresponding microphones. In fact, the new algorithm estimates the positions of both the microphones and the speakers simultaneously. Experimental results demonstrate its performance improvement over the previous approach in [1] and evaluate its robustness against time synchronization errors.

Simon Doclo - One of the best experts on this subject based on the ideXlab platform.

  • Relative transfer function estimation exploiting spatially separated microphones in a diffuse noise field
    International Workshop on Acoustic Signal Enhancement, 2018
    Co-Authors: Nico Gößling, Simon Doclo
    Abstract:

Many multi-microphone speech enhancement algorithms require the relative transfer function (RTF) vector of the desired speech source, relating the acoustic transfer functions of all array microphones to a reference microphone. In this paper, we propose a computationally efficient method to estimate the RTF vector in a diffuse noise field, which requires an additional microphone that is spatially separated from the microphone array, such that the spatial coherence between the noise components in the microphone array signals and the additional microphone signal is low. Assuming this spatial coherence to be zero, we show that an unbiased estimate of the RTF vector can be obtained. Based on real-world recordings, experimental results show that the proposed RTF estimator outperforms state-of-the-art estimators using only the microphone array signals in terms of estimation accuracy and noise reduction performance.
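
The zero-coherence assumption makes the estimator very simple: at each frequency, cross-correlating every array channel with the external microphone suppresses the noise, so the ratio of each channel's cross-correlation to the reference channel's yields the RTF. The sketch below shows this for one STFT bin; the variable names and interface are our assumptions:

```python
import numpy as np

def rtf_estimate(array_stft, external_stft, ref=0):
    """RTF vector estimate at a single frequency bin.

    array_stft: complex array of shape (n_mics, n_frames), one STFT bin of
    the microphone array signals. external_stft: complex array of shape
    (n_frames,), the same bin of the spatially separated microphone.
    Assuming zero spatial coherence between the noise in the array signals
    and the external signal, the time-averaged cross-correlation of each
    array channel with the external channel contains only the desired speech
    component, so the ratio to the reference channel is an unbiased RTF
    estimate.
    """
    cross = (np.asarray(array_stft) * np.conj(np.asarray(external_stft))).mean(axis=1)
    return cross / cross[ref]
```

In a quick simulation with uncorrelated noise at the array and at the external microphone, the estimate converges to the true RTF vector as the number of frames grows.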

  • RTF-based binaural MVDR beamformer exploiting an external microphone in a diffuse noise field
    arXiv: Audio and Speech Processing, 2018
    Co-Authors: Nico Gößling, Simon Doclo
    Abstract:

Besides suppressing all undesired sound sources, an important objective of a binaural noise reduction algorithm for hearing devices is the preservation of the binaural cues, aiming at preserving the spatial perception of the acoustic scene. A well-known binaural noise reduction algorithm is the binaural minimum variance distortionless response (MVDR) beamformer, which can be steered using the relative transfer function (RTF) vector of the desired source, relating the acoustic transfer functions between the desired source and all microphones to a reference microphone. In this paper, we propose a computationally efficient method to estimate the RTF vector in a diffuse noise field, requiring an additional microphone that is spatially separated from the head-mounted microphones. Assuming that the spatial coherence between the noise components in the head-mounted microphone signals and the additional microphone signal is zero, we show that an unbiased estimate of the RTF vector can be obtained. Based on real-world recordings, experimental results for several reverberation times show that the proposed RTF estimator outperforms the widely used RTF estimator based on covariance whitening, as well as a simple biased RTF estimator, in terms of noise reduction and binaural cue preservation performance.
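
For reference, the RTF-steered MVDR beamformer itself has the standard closed form w = R^{-1} h / (h^H R^{-1} h), where R is the noise covariance matrix and h the RTF vector; it is distortionless toward the desired source (w^H h = 1), and in the binaural case the same filter is evaluated once per ear with that ear's reference microphone. A minimal single-bin sketch (standard textbook form, not code from the paper):

```python
import numpy as np

def mvdr_weights(noise_cov, rtf):
    """MVDR beamformer weights for one frequency bin, steered by an RTF
    vector: w = R^{-1} h / (h^H R^{-1} h). noise_cov must be a Hermitian
    positive-definite noise covariance matrix."""
    h = np.asarray(rtf, dtype=complex)
    r_inv_h = np.linalg.solve(np.asarray(noise_cov, dtype=complex), h)
    return r_inv_h / (h.conj() @ r_inv_h)
```

The distortionless constraint can be checked directly: applying the weights to the RTF vector returns exactly 1, so the desired source passes through unchanged while the noise is minimized.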

  • Reference Microphone Selection for MWF-based Noise Reduction Using Distributed Microphone Arrays
    Speech Communication; 10. ITG Symposium; Proceedings of, 2012
    Co-Authors: Toby Christian Lawin-ore, Simon Doclo
    Abstract:

Using an acoustic sensor network consisting of spatially distributed microphones, significant noise reduction can be achieved with the centralized multi-channel Wiener filter (MWF), which aims to estimate the desired speech component in one of the microphones, referred to as the reference microphone. However, since the distributed microphones are typically placed at different locations, the selection of the reference microphone has a significant impact on the performance of the MWF, largely depending on the position of the desired source with respect to the microphones. In this paper, different optimal and suboptimal reference selection procedures are presented, both broadband and frequency-dependent. Experimental results show that the proposed procedures yield better performance than an arbitrarily selected reference microphone.
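
One simple broadband (and generally suboptimal) selection rule consistent with this setting is to pick the channel with the best input speech-to-noise power ratio; the paper also considers frequency-dependent and performance-based criteria. The helper below is our illustration of the idea, not the paper's procedure:

```python
import numpy as np

def select_reference(speech_power, noise_power):
    """Pick a reference microphone by a simple broadband criterion: the
    channel with the highest input speech-to-noise power ratio. speech_power
    and noise_power are per-microphone broadband power estimates."""
    snr = np.asarray(speech_power, dtype=float) / np.asarray(noise_power, dtype=float)
    return int(np.argmax(snr))
```

Since the MWF estimates the speech component in the reference channel, a reference close to the desired source (high input SNR) typically gives the filter a better target than an arbitrary, distant microphone.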

Yuhei Oikawa - One of the best experts on this subject based on the ideXlab platform.

  • Physical-model based efficient data representation for many-channel Microphone array
    2016 IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), 2016
    Co-Authors: Yoshiyuki Koyano, Kenji Yatabe, Y. Ikeda, Yuhei Oikawa
    Abstract:

Recent development of microphone arrays consisting of several tens or hundreds of microphones enables the acquisition of rich spatial information about a sound field. Although such information can potentially improve the performance of any array signal processing technique, the amount of data grows with the number of microphones; for instance, a 1024-channel MEMS microphone array, as in Fig. 1, generates more than 10 GB of data per minute. In this paper, a method for constructing an orthogonal basis for the efficient representation of sound data obtained by a microphone array is proposed. The proposed method can obtain a basis for arrays with any configuration, including rectangular, spherical, and random microphone arrays. It can also be utilized for designing a microphone array, because it offers a quantitative measure for comparing several array configurations.
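
One way to realize such a physical-model basis (a hedged sketch; the paper's construction may differ) is to sample plane waves at the microphone positions and orthonormalize the resulting dictionary with an SVD. Array snapshots can then be stored as a small number of coefficients instead of one sample per microphone, for any array geometry:

```python
import numpy as np

def sound_field_basis(mic_positions, wavenumbers, n_directions=64, rank=None):
    """Orthonormal basis for array snapshots built from a plane-wave
    dictionary sampled at the microphone positions.

    mic_positions: (n_mics, 2) coordinates in meters (2-D for simplicity).
    wavenumbers: iterable of k = 2*pi*f/c values covering the band.
    Returns Q with orthonormal columns spanning the dictionary's range; the
    numerical rank gives a quantitative measure for comparing geometries.
    """
    pos = np.asarray(mic_positions, dtype=float)
    angles = np.linspace(0.0, 2.0 * np.pi, n_directions, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # unit vectors
    cols = [np.exp(1j * k * pos @ dirs.T) for k in wavenumbers]
    dictionary = np.concatenate(cols, axis=1)   # (n_mics, n_k * n_directions)
    u, s, _ = np.linalg.svd(dictionary, full_matrices=False)
    if rank is None:
        rank = int(np.sum(s > 1e-10 * s[0]))    # numerical rank
    return u[:, :rank]

def compress(snapshot, Q):
    """Represent a snapshot with Q.shape[1] coefficients instead of n_mics."""
    return Q.conj().T @ snapshot
```

Any sound field composed of the modeled plane waves is reconstructed exactly from its coefficients (`Q @ compress(x, Q)`), and the basis size, not the microphone count, sets the storage cost.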