Date of this Version

2022

Abstract

Current knowledge-based wearable authentication systems require users to physically interact with a device to initiate and validate their presence, imposing a burden on the user. However, with recent advancements in sensor technology in consumer smart wearables (e.g., Fitbit and Apple Watch devices), we were able to apply machine learning techniques to vectors of statistical features extracted from the continuous data streams of these IoT devices to implicitly validate a user's activities and their spatiotemporal context. To improve the performance of our models, we collected additional soft biometric data (i.e., respiratory sounds) and demonstrated the feasibility of extracting exhalation instances and their respective Mel Frequency Cepstral Coefficients from audio recordings; this new combination of features yielded a notable uplift in activity and epoch (i.e., day 9am-6pm, evening 6pm-12am, night 12am-9am) prediction performance. This approach can ultimately be used to distinguish a user from a potential imposter without requiring any interaction or expensive/intrusive technology, thus alleviating the burden on end users.
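As a minimal sketch of the statistical-feature approach described above: the windowing scheme and the particular features (mean, standard deviation, min, max, range) below are illustrative assumptions, not the authors' exact feature set, and the synthetic heart-rate-like stream stands in for a real wearable sensor feed.

```python
import numpy as np

def extract_features(signal, window_size=50):
    """Split a 1-D sensor stream into fixed-size windows and compute a
    vector of simple statistical features per window. The feature set
    here (mean, std, min, max, range) is a hypothetical example."""
    n_windows = len(signal) // window_size
    features = []
    for i in range(n_windows):
        w = signal[i * window_size:(i + 1) * window_size]
        features.append([w.mean(), w.std(), w.min(), w.max(),
                         w.max() - w.min()])
    return np.array(features)

# Synthetic stand-in for a continuous wearable data stream
rng = np.random.default_rng(0)
stream = 70 + 5 * np.sin(np.linspace(0, 10, 500)) + rng.normal(0, 1, 500)

X = extract_features(stream)
print(X.shape)  # (10, 5): 10 windows, 5 features each
```

The resulting per-window feature vectors would then be fed to a classifier to predict the activity and epoch labels mentioned in the abstract.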
