Date of this Version

5-2-2022

Abstract

Non-speech human sounds, such as coughing and wheezing, are common symptoms of many respiratory diseases, including chronic obstructive pulmonary disease (COPD), asthma, and COVID-19. Objective reporting of these symptoms from smartphone microphones can therefore help physicians monitor and treat patients with different respiratory diseases. However, audio recordings from smartphones often contain various types of background noise and other undesired sounds, such as laughing, crying, and snoring, so it is crucial to first process the raw audio clips to extract the desired events, such as coughing and wheezing. In our work, we use the Audacity desktop application to process raw audio clips obtained from various publicly available datasets, including COUGHVID and the Environmental Sound Classification (ESC-50) dataset. During processing, we use both visual and auditory inspection to extract the desired events. Next, we apply different data visualization techniques from the Audacity toolbox to analyze the processed audio signals and identify differences among various non-speech human sounds. These findings will help us develop machine learning models and artificially intelligent systems that can be deployed on smartphones to detect objective symptoms, such as coughs and wheezes, in real time, helping physicians and healthcare providers assess patient conditions remotely and promptly.
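
The processing described above is carried out interactively in the Audacity desktop application; no code accompanies the abstract. As an illustration only, not the authors' method, the following minimal Python sketch shows how a comparable spectrogram comparison of two extracted events could be scripted. It assumes the librosa and matplotlib libraries, and the file names "cough.wav" and "wheeze.wav" are hypothetical placeholders for clips segmented and exported from Audacity.

import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

def plot_spectrogram(path, ax, title):
    # Load the clip at its native sampling rate.
    y, sr = librosa.load(path, sr=None)
    # Short-time Fourier transform, converted to decibels: roughly the
    # same view that Audacity's spectrogram display provides.
    s_db = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)
    librosa.display.specshow(s_db, sr=sr, x_axis="time", y_axis="hz", ax=ax)
    ax.set_title(title)

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
plot_spectrogram("cough.wav", axes[0], "Cough")    # hypothetical file name
plot_spectrogram("wheeze.wav", axes[1], "Wheeze")  # hypothetical file name
plt.tight_layout()
plt.show()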
