Pay attention to the speech: COVID-19 diagnosis using machine learning and crowdsourced respiratory and speech recordings

https://doi.org/10.1016/j.aej.2021.08.070
Open access under a Creative Commons license.

Abstract

Since the outbreak of COVID-19, many efforts have been made to use respiratory sounds and coughs collected by smartphones to train machine learning models that distinguish COVID-19 sounds from healthy ones. Embedding such models into mobile applications or Internet of Things devices could make effective COVID-19 pre-screening tools affordable to anyone, anywhere. Most previous researchers trained their classifiers on respiratory sounds such as breathing or coughs, and they achieved promising results. We claim that using particular voice patterns in addition to other respiratory sounds can achieve better performance. In this study, we used the Coswara dataset, in which each user recorded nine different types of sounds, such as cough, breathing, and speech, labeled with COVID-19 status. A combination of models trained on different sounds can diagnose COVID-19 more accurately than a single model trained on cough or breathing alone. Our results show that simple binary classifiers can achieve an AUC of 96.4% and an accuracy of 96% by averaging the predictions of multiple models trained and evaluated separately on different sound types. Finally, this study aims to draw attention to the importance of the human voice, alongside other respiratory sounds, for sound-based COVID-19 diagnosis.
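The ensemble step described above, averaging the predicted probabilities of per-sound-type binary classifiers, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the probability values and the three sound types shown are hypothetical placeholders for the outputs of separately trained models.

```python
import numpy as np

# Hypothetical COVID-19 probabilities from three separately trained binary
# classifiers (one per sound type) for four recordings of the same users.
probs = {
    "cough":     np.array([0.9, 0.2, 0.7, 0.1]),
    "breathing": np.array([0.8, 0.3, 0.6, 0.2]),
    "speech":    np.array([0.7, 0.1, 0.8, 0.3]),
}

# Average the per-model predictions to get the ensemble probability.
avg = np.mean(list(probs.values()), axis=0)

# Threshold the averaged probability for a binary diagnosis (1 = COVID-19).
labels = (avg >= 0.5).astype(int)
```

Averaging independently trained models in this way tends to smooth out the errors of any single classifier, which is the intuition behind the reported gain over a model trained on cough or breathing alone.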

Keywords

COVID-19
Machine learning
Cough sounds
Speech
Respiratory sounds

Peer review under responsibility of Faculty of Engineering, Alexandria University.