Voice Disorder

The Experts below are selected from a list of 8307 Experts worldwide ranked by the ideXlab platform

Ghulam Muhammad - One of the best experts on this subject based on the ideXlab platform.

  • Edge Computing with Cloud for Voice Disorder Assessment and Treatment
    IEEE Communications Magazine, 2018
    Co-Authors: Ghulam Muhammad, Mohammed F. Alhamid, Mansour Alsulaiman, Brij B. Gupta
    Abstract:

    The advancement of next-generation network technologies provides a huge improvement in healthcare facilities. Technologies such as 5G, edge computing, cloud computing, and the Internet of Things realize smart healthcare that a client can access anytime, anywhere, and in real time. Edge computing offers useful computing resources at the edge of the network to maintain low-latency, real-time computing. In this article, we propose a smart healthcare framework using edge computing. Within the framework, we develop a Voice Disorder assessment and treatment system using a deep learning approach. A client provides his or her Voice sample, captured by smart sensors, and the sample goes to an edge computing node for initial processing. The edge node then sends the data to a core cloud for further processing. The assessment and management are controlled by a service provider through a cloud manager. Once the automatic assessment is done, the decision is sent to specialists, who prescribe appropriate treatment to the client. The proposed system achieves 98.5 percent accuracy and 99.3 percent sensitivity using the Saarbrucken Voice Disorder database.
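
    The abstract above describes an edge-plus-cloud split: light preprocessing at the edge node, deep-learning-based assessment in the core cloud. Below is a minimal edge-side sketch in Python, assuming a hypothetical cloud endpoint (`https://cloud.example.org/assess`), a JSON payload, and a log-mel front end; the paper's actual deep model and transport protocol are not specified here.

    ```python
    import json

    import librosa
    import numpy as np
    import requests

    def edge_preprocess(wav_path, sr=16000, n_mels=64):
        """Initial processing at the edge: load the captured voice sample and
        compute a compact log-mel spectrogram before anything leaves the node."""
        y, sr = librosa.load(wav_path, sr=sr)
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
        return librosa.power_to_db(mel).astype(np.float32)

    def send_to_cloud(features, url="https://cloud.example.org/assess"):  # hypothetical endpoint
        """Forward the edge features to the core cloud, where the deep model and the
        cloud manager handle assessment; the response format is an assumption."""
        payload = json.dumps({"shape": list(features.shape), "data": features.tolist()})
        resp = requests.post(url, data=payload,
                             headers={"Content-Type": "application/json"}, timeout=10)
        return resp.json()  # e.g. {"decision": "pathological", "confidence": 0.97}

    # decision = send_to_cloud(edge_preprocess("client_sample.wav"))
    ```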

  • Development of the Arabic Voice Pathology Database and Its Evaluation by Using Speech Features and Machine Learning Algorithms
    Journal of Healthcare Engineering, 2017
    Co-Authors: Tamer A Mesallam, Mansour Alsulaiman, Khalid H Malki, Mohamed Farahat, Ahmed Alnasheri, Ghulam Muhammad
    Abstract:

    A Voice Disorder database is an essential element for research on automatic Voice Disorder detection and classification. Ethnicity affects the Voice characteristics of a person, so it is necessary to develop a database by collecting Voice samples of the targeted ethnic group. This will enhance the chances of arriving at a global solution for the accurate and reliable diagnosis of Voice Disorders by understanding the characteristics of a local group. Motivated by this idea, an Arabic Voice pathology database (AVPD) is designed and developed in this study by recording three vowels, running speech, and isolated words. For each recorded sample, the perceptual severity is also provided, which is a unique aspect of the AVPD. During the development of the AVPD, the shortcomings of different Voice Disorder databases were identified so that they could be avoided in the AVPD. In addition, the AVPD is evaluated by using six different types of speech features and four types of machine learning algorithms. The results of detection and classification of Voice Disorders obtained with the sustained vowel and the running speech are also compared with the results of an English-language Disorder database, the Massachusetts Eye and Ear Infirmary (MEEI) database.
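
    The evaluation protocol described above (several speech feature types crossed with several machine learning algorithms) can be sketched as a small grid. The feature extractors, classifiers, and cross-validation setup below are illustrative assumptions, not the exact six features and four algorithms used in the AVPD study.

    ```python
    import numpy as np
    import librosa
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier

    def mfcc_feat(path):
        y, sr = librosa.load(path, sr=16000)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

    def contrast_feat(path):
        y, sr = librosa.load(path, sr=16000)
        return librosa.feature.spectral_contrast(y=y, sr=sr).mean(axis=1)

    features = {"mfcc": mfcc_feat, "spectral_contrast": contrast_feat}      # illustrative feature types
    classifiers = {"svm": SVC(), "rf": RandomForestClassifier(),
                   "nb": GaussianNB(), "knn": KNeighborsClassifier()}       # illustrative algorithms

    def evaluate(paths, labels):
        """paths: list of wav files; labels: 0 = normal, 1 = disordered."""
        for fname, fx in features.items():
            X = np.vstack([fx(p) for p in paths])
            for cname, clf in classifiers.items():
                acc = cross_val_score(clf, X, labels, cv=5).mean()
                print(f"{fname:17s} + {cname:4s}: {acc:.3f}")
    ```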

  • An Automatic Health Monitoring System for Patients Suffering From Voice Complications in Smart Cities
    IEEE Access, 2017
    Co-Authors: Ghulam Muhammad, Mohammed F. Alhamid
    Abstract:

    Current evolutions in the Internet of Things and cloud computing make it feasible to build smart cities and homes. Smart cities provide smart technologies to residents for an improved and healthier life, where smart healthcare systems cannot be ignored given the rapidly growing elderly population around the world. Smart healthcare systems can be cost-effective and helpful in the optimal use of healthcare resources. The Voice is a primary means of communication, and any complication in the production of Voice affects the personal as well as professional life of a person. Early screening of the Voice through an automatic Voice Disorder detection system may save a person's life. In this paper, an automatic Voice Disorder detection system to monitor residents of all age groups and professional backgrounds is implemented. The proposed system detects the Voice Disorder by determining the source signal from the speech through linear prediction analysis. The analysis calculates the features from normal and Disordered subjects. Based on these features, the spectrum is computed, which provides the distribution of energy in normal and Voice Disordered subjects to differentiate between them. It is found that the lower frequencies from 1 to 1562 Hz contribute significantly to the detection of Voice Disorders. The system is developed by using both a sustained vowel and running speech so that it can be deployed in the real world. The obtained accuracy for the detection of Voice Disorder with the sustained vowel is 99.94% ± 0.1, and that for running speech is 99.75% ± 0.8.
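
    As a rough illustration of the linear-prediction idea in this abstract, the sketch below estimates the source signal as the LPC residual and measures the fraction of its energy in the 1–1562 Hz band; the LPC order and framing choices are assumptions, not taken from the paper.

    ```python
    import numpy as np
    import librosa
    from scipy.signal import lfilter

    def low_band_energy_ratio(wav_path, sr=16000, lpc_order=12, f_lo=1.0, f_hi=1562.0):
        """Fraction of the LPC-residual (source) energy in the low band f_lo..f_hi Hz."""
        y, _ = librosa.load(wav_path, sr=sr)
        a = librosa.lpc(y, order=lpc_order)        # LPC polynomial A(z)
        residual = lfilter(a, [1.0], y)            # inverse filtering -> source-signal estimate
        spectrum = np.abs(np.fft.rfft(residual)) ** 2
        freqs = np.fft.rfftfreq(len(residual), d=1.0 / sr)
        band = (freqs >= f_lo) & (freqs <= f_hi)
        return spectrum[band].sum() / spectrum.sum()

    # A shift of energy in this low band between normal and disordered voices is
    # the discriminative cue the abstract reports.
    ```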

  • Vocal fold Disorder detection based on continuous speech by using MFCC and GMM
    2013 7th IEEE GCC Conference and Exhibition (GCC), 2013
    Co-Authors: Mansour Alsulaiman, Ghulam Muhammad, Irraivan Elamvazuthi, Tamer A Mesallam
    Abstract:

    Vocal fold Voice Disorder detection with a sustained vowel has been well investigated by the research community in recent years. Detection of a Voice Disorder with a sustained vowel is a comparatively easier task than detection with continuous speech: the speech signal remains stationary in the case of a sustained vowel, but it varies over time in continuous speech. For this reason, Voice Disorder detection using continuous speech is challenging and demands more investigation. Moreover, detection with continuous speech is more realistic, because people use it in their daily conversation, whereas sustained vowels are not used in everyday talk. An accurate Voice assessment can provide unique and complementary information for the diagnosis and can be used in the treatment plan. In this paper, vocal fold Disorders (cyst, polyp, nodules, paralysis, and sulcus) are detected using continuous speech. Mel-frequency cepstral coefficients (MFCC) are used with a Gaussian mixture model (GMM) to build an automatic detection system capable of differentiating normal and pathological Voices. The detection rate of the developed detection system with continuous speech is 91.66%.
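
    The MFCC-plus-GMM pipeline named in the abstract can be sketched with standard libraries: one GMM per class trained on MFCC frames of continuous speech, and a test utterance assigned to the class with the higher average log-likelihood. Mixture sizes, MFCC settings, and the file interface are assumptions.

    ```python
    import numpy as np
    import librosa
    from sklearn.mixture import GaussianMixture

    def mfcc_frames(path, sr=16000, n_mfcc=13):
        """Frame-level MFCC matrix of shape (frames, coefficients)."""
        y, sr = librosa.load(path, sr=sr)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

    def train_gmm(paths, n_components=16):
        """Train one GMM on the pooled MFCC frames of one class (normal or pathological)."""
        X = np.vstack([mfcc_frames(p) for p in paths])
        return GaussianMixture(n_components=n_components, covariance_type="diag").fit(X)

    def classify(path, gmm_normal, gmm_pathological):
        """score() returns the mean per-frame log-likelihood; pick the higher-scoring model."""
        X = mfcc_frames(path)
        return ("pathological"
                if gmm_pathological.score(X) > gmm_normal.score(X)
                else "normal")
    ```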

  • Multidirectional Regression (MDR)-Based Features for Automatic Voice Disorder Detection
    Journal of Voice, 2012
    Co-Authors: Ghulam Muhammad, Awais Mahmood, Tamer A Mesallam, Khalid H Malki, Mohamed Farahat, Mansour Alsulaiman
    Abstract:

    Summary. Background and Objective: Objective assessment of Voice pathology is receiving growing interest. Automatic speech/speaker recognition (ASR) systems are commonly deployed in Voice pathology detection. The aim of this work was to develop a novel feature extraction method for ASR that incorporates distributions of Voiced and unVoiced parts, and Voice onset and offset characteristics, in a time-frequency domain to detect Voice pathology. Materials and Methods: The speech samples of 70 dysphonic patients with six different types of Voice Disorders and 50 normal subjects were analyzed. The Arabic spoken digits (1–10) were taken as input. The proposed feature extraction method was embedded into the ASR system with a Gaussian mixture model (GMM) classifier to detect Voice Disorders. Results: An accuracy of 97.48% was obtained in the text-independent (all digits' training) case, and over 99% accuracy was obtained in the text-dependent (separate digit's training) case. The proposed method outperformed the conventional Mel-frequency cepstral coefficient (MFCC) features. Conclusion: The results of this study revealed that incorporating Voice onset and offset information leads to efficient automatic Voice Disorder detection.
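
    The abstract does not define the multidirectional regression (MDR) features in detail, so the sketch below is only one plausible reading: least-squares slopes of a log-mel spectrogram computed along the time, frequency, and diagonal directions within small blocks, which could then feed the GMM classifier mentioned above. It should not be taken as the authors' exact formulation.

    ```python
    import numpy as np
    import librosa

    def directional_slopes(block):
        """Mean least-squares slope of the block along four directions:
        time (rows), frequency (columns), and the two diagonals."""
        def mean_slope(lines):
            slopes = [np.polyfit(np.arange(len(v)), v, 1)[0] for v in lines if len(v) > 1]
            return float(np.mean(slopes)) if slopes else 0.0
        rows  = list(block)
        cols  = list(block.T)
        diag1 = [np.diagonal(block, k) for k in range(-block.shape[0] + 2, block.shape[1] - 1)]
        diag2 = [np.diagonal(np.fliplr(block), k) for k in range(-block.shape[0] + 2, block.shape[1] - 1)]
        return [mean_slope(rows), mean_slope(cols), mean_slope(diag1), mean_slope(diag2)]

    def mdr_like_features(path, sr=16000, block=8):
        """Slope features for every non-overlapping block of the log-mel spectrogram."""
        y, sr = librosa.load(path, sr=sr)
        S = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr, n_mels=40))
        feats = []
        for i in range(0, S.shape[0] - block + 1, block):
            for j in range(0, S.shape[1] - block + 1, block):
                feats.append(directional_slopes(S[i:i + block, j:j + block]))
        return np.asarray(feats)   # these block features could be modeled with a GMM
    ```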

Awais Mahmood - One of the best experts on this subject based on the ideXlab platform.

  • Multidirectional Regression (MDR)-Based Features for Automatic Voice Disorder Detection
    Journal of Voice, 2012
    Co-Authors: Ghulam Muhammad, Awais Mahmood, Tamer A Mesallam, Khalid H Malki, Mohamed Farahat, Mansour Alsulaiman
    Abstract:

    Summary. Background and Objective: Objective assessment of Voice pathology is receiving growing interest. Automatic speech/speaker recognition (ASR) systems are commonly deployed in Voice pathology detection. The aim of this work was to develop a novel feature extraction method for ASR that incorporates distributions of Voiced and unVoiced parts, and Voice onset and offset characteristics, in a time-frequency domain to detect Voice pathology. Materials and Methods: The speech samples of 70 dysphonic patients with six different types of Voice Disorders and 50 normal subjects were analyzed. The Arabic spoken digits (1–10) were taken as input. The proposed feature extraction method was embedded into the ASR system with a Gaussian mixture model (GMM) classifier to detect Voice Disorders. Results: An accuracy of 97.48% was obtained in the text-independent (all digits' training) case, and over 99% accuracy was obtained in the text-dependent (separate digit's training) case. The proposed method outperformed the conventional Mel-frequency cepstral coefficient (MFCC) features. Conclusion: The results of this study revealed that incorporating Voice onset and offset information leads to efficient automatic Voice Disorder detection.

  • Automatic Voice Disorder classification using vowel formants
    2011 IEEE International Conference on Multimedia and Expo, 2011
    Co-Authors: Ghulam Muhammad, Mansour Alsulaiman, Awais Mahmood
    Abstract:

    In this paper, we propose an automatic Voice Disorder classification system using the first two formants of vowels. Five types of Voice Disorder, namely cyst, GERD, paralysis, polyp, and sulcus, are used in the experiments. Spoken Arabic digits from people with Voice Disorders are recorded as input. The first and second formants are extracted from the vowels [Fatha] and [Kasra], which are present in the Arabic digits. These four features are then used to classify the Voice Disorder using two types of classification methods: vector quantization (VQ) and neural networks. In the experiments, the neural network performs better than VQ. For female and male speakers, the classification rates are 67.86% and 52.5%, respectively, using neural networks. The best classification rate, 78.72%, is obtained for the female sulcus Disorder.
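
    A minimal sketch of the formant-based front end described above: F1 and F2 are estimated from the LPC roots of a vowel segment, and the resulting four-dimensional feature vector (two formants per vowel) is passed to a small neural network. The LPC order, sampling rate, and network size are assumptions, and the vowel segmentation step is left out.

    ```python
    import numpy as np
    import librosa
    from sklearn.neural_network import MLPClassifier

    def first_two_formants(vowel, sr=16000, order=12):
        """Estimate F1 and F2 of a vowel segment from the angles of the LPC roots."""
        a = librosa.lpc(vowel, order=order)
        roots = np.roots(a)
        roots = roots[np.imag(roots) > 0]                  # one root per conjugate pair
        freqs = np.sort(np.angle(roots) * sr / (2 * np.pi))
        freqs = freqs[freqs > 90]                          # discard near-DC roots
        return freqs[0], freqs[1]

    def formant_features(fatha_segment, kasra_segment, sr=16000):
        """Four features per utterance: F1 and F2 of the vowels [Fatha] and [Kasra]."""
        return np.hstack([first_two_formants(fatha_segment, sr),
                          first_two_formants(kasra_segment, sr)])

    def train_formant_classifier(X, y):
        """X: rows of 4-D formant features; y: disorder labels (cyst, GERD, ...)."""
        return MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
    ```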

  • ICME - Automatic Voice Disorder classification using vowel formants
    2011 IEEE International Conference on Multimedia and Expo, 2011
    Co-Authors: Ghulam Muhammad, Mansour Alsulaiman, Awais Mahmood
    Abstract:

    In this paper, we propose an automatic Voice Disorder classification system using the first two formants of vowels. Five types of Voice Disorder, namely cyst, GERD, paralysis, polyp, and sulcus, are used in the experiments. Spoken Arabic digits from people with Voice Disorders are recorded as input. The first and second formants are extracted from the vowels [Fatha] and [Kasra], which are present in the Arabic digits. These four features are then used to classify the Voice Disorder using two types of classification methods: vector quantization (VQ) and neural networks. In the experiments, the neural network performs better than VQ. For female and male speakers, the classification rates are 67.86% and 52.5%, respectively, using neural networks. The best classification rate, 78.72%, is obtained for the female sulcus Disorder.

Mansour Alsulaiman - One of the best experts on this subject based on the ideXlab platform.

  • Edge Computing with Cloud for Voice Disorder Assessment and Treatment
    IEEE Communications Magazine, 2018
    Co-Authors: Ghulam Muhammad, Mohammed F. Alhamid, Mansour Alsulaiman, Brij B. Gupta
    Abstract:

    The advancement of next-generation network technologies provides a huge improvement in healthcare facilities. Technologies such as 5G, edge computing, cloud computing, and the Internet of Things realize smart healthcare that a client can access anytime, anywhere, and in real time. Edge computing offers useful computing resources at the edge of the network to maintain low-latency, real-time computing. In this article, we propose a smart healthcare framework using edge computing. Within the framework, we develop a Voice Disorder assessment and treatment system using a deep learning approach. A client provides his or her Voice sample, captured by smart sensors, and the sample goes to an edge computing node for initial processing. The edge node then sends the data to a core cloud for further processing. The assessment and management are controlled by a service provider through a cloud manager. Once the automatic assessment is done, the decision is sent to specialists, who prescribe appropriate treatment to the client. The proposed system achieves 98.5 percent accuracy and 99.3 percent sensitivity using the Saarbrucken Voice Disorder database.

  • Development of the Arabic Voice Pathology Database and Its Evaluation by Using Speech Features and Machine Learning Algorithms
    Journal of Healthcare Engineering, 2017
    Co-Authors: Tamer A Mesallam, Mansour Alsulaiman, Khalid H Malki, Mohamed Farahat, Ahmed Alnasheri, Ghulam Muhammad
    Abstract:

    A Voice Disorder database is an essential element for research on automatic Voice Disorder detection and classification. Ethnicity affects the Voice characteristics of a person, so it is necessary to develop a database by collecting Voice samples of the targeted ethnic group. This will enhance the chances of arriving at a global solution for the accurate and reliable diagnosis of Voice Disorders by understanding the characteristics of a local group. Motivated by this idea, an Arabic Voice pathology database (AVPD) is designed and developed in this study by recording three vowels, running speech, and isolated words. For each recorded sample, the perceptual severity is also provided, which is a unique aspect of the AVPD. During the development of the AVPD, the shortcomings of different Voice Disorder databases were identified so that they could be avoided in the AVPD. In addition, the AVPD is evaluated by using six different types of speech features and four types of machine learning algorithms. The results of detection and classification of Voice Disorders obtained with the sustained vowel and the running speech are also compared with the results of an English-language Disorder database, the Massachusetts Eye and Ear Infirmary (MEEI) database.

  • Vocal fold Disorder detection based on continuous speech by using MFCC and GMM
    2013 7th IEEE GCC Conference and Exhibition (GCC), 2013
    Co-Authors: Mansour Alsulaiman, Ghulam Muhammad, Irraivan Elamvazuthi, Tamer A Mesallam
    Abstract:

    Vocal fold Voice Disorder detection with a sustained vowel has been well investigated by the research community in recent years. Detection of a Voice Disorder with a sustained vowel is a comparatively easier task than detection with continuous speech: the speech signal remains stationary in the case of a sustained vowel, but it varies over time in continuous speech. For this reason, Voice Disorder detection using continuous speech is challenging and demands more investigation. Moreover, detection with continuous speech is more realistic, because people use it in their daily conversation, whereas sustained vowels are not used in everyday talk. An accurate Voice assessment can provide unique and complementary information for the diagnosis and can be used in the treatment plan. In this paper, vocal fold Disorders (cyst, polyp, nodules, paralysis, and sulcus) are detected using continuous speech. Mel-frequency cepstral coefficients (MFCC) are used with a Gaussian mixture model (GMM) to build an automatic detection system capable of differentiating normal and pathological Voices. The detection rate of the developed detection system with continuous speech is 91.66%.

  • Multidirectional Regression (MDR)-Based Features for Automatic Voice Disorder Detection
    Journal of Voice, 2012
    Co-Authors: Ghulam Muhammad, Awais Mahmood, Tamer A Mesallam, Khalid H Malki, Mohamed Farahat, Mansour Alsulaiman
    Abstract:

    Summary. Background and Objective: Objective assessment of Voice pathology is receiving growing interest. Automatic speech/speaker recognition (ASR) systems are commonly deployed in Voice pathology detection. The aim of this work was to develop a novel feature extraction method for ASR that incorporates distributions of Voiced and unVoiced parts, and Voice onset and offset characteristics, in a time-frequency domain to detect Voice pathology. Materials and Methods: The speech samples of 70 dysphonic patients with six different types of Voice Disorders and 50 normal subjects were analyzed. The Arabic spoken digits (1–10) were taken as input. The proposed feature extraction method was embedded into the ASR system with a Gaussian mixture model (GMM) classifier to detect Voice Disorders. Results: An accuracy of 97.48% was obtained in the text-independent (all digits' training) case, and over 99% accuracy was obtained in the text-dependent (separate digit's training) case. The proposed method outperformed the conventional Mel-frequency cepstral coefficient (MFCC) features. Conclusion: The results of this study revealed that incorporating Voice onset and offset information leads to efficient automatic Voice Disorder detection.

  • Automatic Voice Disorder classification using vowel formants
    2011 IEEE International Conference on Multimedia and Expo, 2011
    Co-Authors: Ghulam Muhammad, Mansour Alsulaiman, Awais Mahmood
    Abstract:

    In this paper, we propose an automatic Voice Disorder classification system using the first two formants of vowels. Five types of Voice Disorder, namely cyst, GERD, paralysis, polyp, and sulcus, are used in the experiments. Spoken Arabic digits from people with Voice Disorders are recorded as input. The first and second formants are extracted from the vowels [Fatha] and [Kasra], which are present in the Arabic digits. These four features are then used to classify the Voice Disorder using two types of classification methods: vector quantization (VQ) and neural networks. In the experiments, the neural network performs better than VQ. For female and male speakers, the classification rates are 67.86% and 52.5%, respectively, using neural networks. The best classification rate, 78.72%, is obtained for the female sulcus Disorder.

Giovanna Sannino - One of the best experts on this subject based on the ideXlab platform.

  • Leveraging Artificial Intelligence to Improve Voice Disorder Identification Through the Use of a Reliable Mobile App
    IEEE Access, 2019
    Co-Authors: Laura Verde, Giuseppe De Pietro, Mubarak Alrashoud, Ahmed Ghoneim, Khaled N. Al-mutib, Giovanna Sannino
    Abstract:

    The evolution of the Internet of Things, cloud computing, and wireless communication has contributed to an advance in interconnectivity, efficiency, and data accessibility in smart cities, improving environmental sustainability, quality of life and well-being, knowledge, and intellectual capital. In this scenario, satisfying security and privacy requirements to preserve data integrity, confidentiality, and authentication is of fundamental importance. This is particularly essential in the healthcare sector, where health-related data are considered sensitive information able to reveal confidential details about the subject. In this regard, to limit the possibility of security attacks or privacy violations, we present a reliable mobile Voice Disorder detection system capable of distinguishing between healthy and pathological Voices by using a machine learning algorithm. The latter is fully embedded in the mobile application, so it is able to classify the Voice without the need to transmit user data to, or store user data on, any server. A Boosted Trees algorithm was used as the classifier, suitably trained and validated on a dataset composed of 2003 Voices. The most frequently considered acoustic parameters constituted the inputs of the classifier, estimated and analyzed in real time by the mobile application.
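
    The key design point in this abstract is that the classifier runs entirely on the device. Below is a hedged sketch of that idea, using scikit-learn's gradient boosting as a stand-in for the app's Boosted Trees model; the acoustic parameter list and hyperparameters are illustrative assumptions, not the app's actual configuration.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    # Illustrative parameter vector; the app's real feature set is not given in the abstract.
    ACOUSTIC_PARAMS = ["f0_mean", "jitter_local", "shimmer_local", "hnr"]

    def train_on_device_model(X, y):
        """X: rows of acoustic parameters in the order of ACOUSTIC_PARAMS;
        y: 0 = healthy, 1 = pathological."""
        return GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X, y)

    def classify_locally(model, params):
        """Runs entirely on the device: no network call, no server-side storage."""
        pred = model.predict(np.asarray(params, dtype=float).reshape(1, -1))[0]
        return "pathological" if pred == 1 else "healthy"
    ```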

  • Voice Disorder Detection via an m-Health System: Design and Results of a Clinical Study to Evaluate Vox4Health
    BioMed Research International, 2018
    Co-Authors: U Cesari, Giuseppe De Pietro, Giovanna Sannino, Elio Marciano, Ciro Niri, Laura Verde
    Abstract:

    Objectives: The current study presents a clinical evaluation of Vox4Health, an m-health system able to estimate the possible presence of a Voice Disorder by calculating and analyzing the main acoustic measures required for acoustic analysis, namely the Fundamental Frequency, jitter, shimmer, and Harmonics-to-Noise Ratio. Acoustic analysis is an objective, effective, and noninvasive tool used in clinical practice to perform a quantitative evaluation of Voice quality. Materials and Methods: A clinical study was carried out in collaboration with medical staff of the University of Naples Federico II. 208 volunteers were recruited (mean age, 44.2 ± 13.9 years): 58 healthy subjects (mean age, 36.7 ± 13.3 years) and 150 pathological ones (mean age, 47 ± 13.1 years). The evaluation of Vox4Health was made in terms of classification performance, i.e., sensitivity, specificity, and accuracy, by using a rule-based algorithm that considers the most characteristic acoustic parameters to classify whether the Voice is healthy or pathological. The performance was compared with that achieved by using Praat, one of the tools most commonly used in clinical practice. Results: Using the rule-based algorithm, the best accuracy in the detection of Voice Disorders, 72.6%, was obtained by using the jitter or shimmer value. Moreover, the best sensitivity is about 96%, and it was always obtained by using jitter. Finally, the best specificity was achieved by using the Fundamental Frequency, and it is equal to 56.9%. Additionally, in order to improve the classification accuracy of the next version of the Vox4Health app, an evaluation using machine learning techniques was conducted. We performed some preliminary tests adopting different machine learning techniques able to classify the Voice as healthy or pathological. The best accuracy (77.4%) was obtained by the Logistic Model Tree algorithm, while the best sensitivity (99.3%) was achieved using the Support Vector Machine. Finally, Instance-based Learning achieved the best specificity (36.2%). Conclusions: Considering the achieved accuracy, Vox4Health has been considered by the medical experts as a “good screening tool” for the detection of Voice Disorders in its current version. However, this accuracy improves when machine learning classifiers are considered rather than the rule-based algorithm.
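
    The rule-based screening described above compares a handful of acoustic measures against thresholds. A minimal sketch follows; the threshold values are hypothetical placeholders for illustration only, not the ones implemented in Vox4Health.

    ```python
    def rule_based_screen(jitter_pct, shimmer_pct, f0_hz, hnr_db,
                          jitter_max=1.0, shimmer_max=3.8,       # hypothetical thresholds
                          f0_range=(80.0, 260.0), hnr_min=20.0):  # hypothetical thresholds
        """Return 'pathological' if any single rule fires, else 'healthy'."""
        if jitter_pct > jitter_max:
            return "pathological"
        if shimmer_pct > shimmer_max:
            return "pathological"
        if not (f0_range[0] <= f0_hz <= f0_range[1]):
            return "pathological"
        if hnr_db < hnr_min:
            return "pathological"
        return "healthy"

    # print(rule_based_screen(jitter_pct=1.6, shimmer_pct=2.9, f0_hz=190.0, hnr_db=17.5))
    ```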

  • Voice Disorder Identification by Using Machine Learning Techniques
    IEEE Access, 2018
    Co-Authors: Laura Verde, Giuseppe De Pietro, Giovanna Sannino
    Abstract:

    Nowadays, the use of mobile devices in the healthcare sector is increasing significantly. Mobile technologies offer not only forms of communication for multimedia content (e.g., clinical audio-visual notes and medical records) but also promising solutions for people who desire the detection, monitoring, and treatment of their health conditions anywhere and at any time. Mobile health systems can contribute to making patient care faster, better, and cheaper. Several pathological conditions can benefit from the use of mobile technologies. In this paper we focus on dysphonia, an alteration of Voice quality that affects about one person in three at least once in his or her lifetime. Voice Disorders are becoming increasingly widespread, although they are often underestimated. Mobile health systems can provide easy and fast support for Voice pathology detection. Identifying an algorithm that discriminates between pathological and healthy Voices with greater accuracy is necessary to realize a valid and precise mobile health system. The key contribution of this paper is to investigate and compare the performance of several machine learning techniques useful for Voice pathology detection. All analyses are performed on a dataset of Voices selected from the Saarbruecken Voice database. The results obtained are evaluated in terms of accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve. They show that the best accuracy in Voice disease detection is achieved by either the support vector machine or the decision tree algorithm, depending on the features selected by using appropriate feature selection methods.
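
    The comparison protocol in this abstract (several classifiers behind a common feature-selection step, scored on accuracy, sensitivity, specificity, and ROC area) can be sketched as follows. The particular classifiers, the number of selected features, and the cross-validation setup are assumptions chosen to mirror the described methodology, not the paper's exact configuration.

    ```python
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_validate
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import make_scorer, recall_score

    scoring = {
        "accuracy": "accuracy",
        "sensitivity": make_scorer(recall_score, pos_label=1),   # recall on pathological voices
        "specificity": make_scorer(recall_score, pos_label=0),   # recall on healthy voices
        "roc_auc": "roc_auc",
    }

    def compare_classifiers(X, y, k=10):
        """X: acoustic feature matrix; y: 0 = healthy, 1 = pathological."""
        models = {
            "svm": SVC(probability=True),
            "decision_tree": DecisionTreeClassifier(),
            "logistic": LogisticRegression(max_iter=1000),
        }
        for name, clf in models.items():
            pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=k), clf)
            scores = cross_validate(pipe, X, y, cv=5, scoring=scoring)
            print(name, {m: round(scores["test_" + m].mean(), 3) for m in scoring})
    ```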

Laura Verde - One of the best experts on this subject based on the ideXlab platform.

  • Leveraging Artificial Intelligence to Improve Voice Disorder Identification Through the Use of a Reliable Mobile App
    IEEE Access, 2019
    Co-Authors: Laura Verde, Giuseppe De Pietro, Mubarak Alrashoud, Ahmed Ghoneim, Khaled N. Al-mutib, Giovanna Sannino
    Abstract:

    The evolution of the Internet of Things, cloud computing, and wireless communication has contributed to an advance in interconnectivity, efficiency, and data accessibility in smart cities, improving environmental sustainability, quality of life and well-being, knowledge, and intellectual capital. In this scenario, satisfying security and privacy requirements to preserve data integrity, confidentiality, and authentication is of fundamental importance. This is particularly essential in the healthcare sector, where health-related data are considered sensitive information able to reveal confidential details about the subject. In this regard, to limit the possibility of security attacks or privacy violations, we present a reliable mobile Voice Disorder detection system capable of distinguishing between healthy and pathological Voices by using a machine learning algorithm. The latter is fully embedded in the mobile application, so it is able to classify the Voice without the need to transmit user data to, or store user data on, any server. A Boosted Trees algorithm was used as the classifier, suitably trained and validated on a dataset composed of 2003 Voices. The most frequently considered acoustic parameters constituted the inputs of the classifier, estimated and analyzed in real time by the mobile application.

  • Voice Disorder Detection via an m-Health System: Design and Results of a Clinical Study to Evaluate Vox4Health
    BioMed Research International, 2018
    Co-Authors: U Cesari, Giuseppe De Pietro, Giovanna Sannino, Elio Marciano, Ciro Niri, Laura Verde
    Abstract:

    Objectives: The current study presents a clinical evaluation of Vox4Health, an m-health system able to estimate the possible presence of a Voice Disorder by calculating and analyzing the main acoustic measures required for acoustic analysis, namely the Fundamental Frequency, jitter, shimmer, and Harmonics-to-Noise Ratio. Acoustic analysis is an objective, effective, and noninvasive tool used in clinical practice to perform a quantitative evaluation of Voice quality. Materials and Methods: A clinical study was carried out in collaboration with medical staff of the University of Naples Federico II. 208 volunteers were recruited (mean age, 44.2 ± 13.9 years): 58 healthy subjects (mean age, 36.7 ± 13.3 years) and 150 pathological ones (mean age, 47 ± 13.1 years). The evaluation of Vox4Health was made in terms of classification performance, i.e., sensitivity, specificity, and accuracy, by using a rule-based algorithm that considers the most characteristic acoustic parameters to classify whether the Voice is healthy or pathological. The performance was compared with that achieved by using Praat, one of the tools most commonly used in clinical practice. Results: Using the rule-based algorithm, the best accuracy in the detection of Voice Disorders, 72.6%, was obtained by using the jitter or shimmer value. Moreover, the best sensitivity is about 96%, and it was always obtained by using jitter. Finally, the best specificity was achieved by using the Fundamental Frequency, and it is equal to 56.9%. Additionally, in order to improve the classification accuracy of the next version of the Vox4Health app, an evaluation using machine learning techniques was conducted. We performed some preliminary tests adopting different machine learning techniques able to classify the Voice as healthy or pathological. The best accuracy (77.4%) was obtained by the Logistic Model Tree algorithm, while the best sensitivity (99.3%) was achieved using the Support Vector Machine. Finally, Instance-based Learning achieved the best specificity (36.2%). Conclusions: Considering the achieved accuracy, Vox4Health has been considered by the medical experts as a “good screening tool” for the detection of Voice Disorders in its current version. However, this accuracy improves when machine learning classifiers are considered rather than the rule-based algorithm.

  • Voice Disorder Identification by Using Machine Learning Techniques
    IEEE Access, 2018
    Co-Authors: Laura Verde, Giuseppe De Pietro, Giovanna Sannino
    Abstract:

    Nowadays, the use of mobile devices in the healthcare sector is increasing significantly. Mobile technologies offer not only forms of communication for multimedia content (e.g., clinical audio-visual notes and medical records) but also promising solutions for people who desire the detection, monitoring, and treatment of their health conditions anywhere and at any time. Mobile health systems can contribute to making patient care faster, better, and cheaper. Several pathological conditions can benefit from the use of mobile technologies. In this paper we focus on dysphonia, an alteration of Voice quality that affects about one person in three at least once in his or her lifetime. Voice Disorders are becoming increasingly widespread, although they are often underestimated. Mobile health systems can provide easy and fast support for Voice pathology detection. Identifying an algorithm that discriminates between pathological and healthy Voices with greater accuracy is necessary to realize a valid and precise mobile health system. The key contribution of this paper is to investigate and compare the performance of several machine learning techniques useful for Voice pathology detection. All analyses are performed on a dataset of Voices selected from the Saarbruecken Voice database. The results obtained are evaluated in terms of accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve. They show that the best accuracy in Voice disease detection is achieved by either the support vector machine or the decision tree algorithm, depending on the features selected by using appropriate feature selection methods.