E-Scooter Rider Detection System in Driving Environments
Abstract
E-scooters have become ubiquitous in major cities around the world. Their numbers continue to grow, increasing their interactions with motor vehicles on the road. The typical behavior of an e-scooter rider differs substantially from that of other vulnerable road users. This situation creates new challenges for vehicle active safety systems and automated driving functionalities, which require the detection of e-scooter riders as a first step. No open-source image dataset or computer vision model currently exists for detecting e-scooter riders. This work presents a novel vision-based system that differentiates e-scooter riders from regular pedestrians, together with a benchmark dataset of e-scooter riders in natural environments. An efficient pipeline built on two existing state-of-the-art convolutional neural networks (CNNs), You Only Look Once (YOLOv3) and MobileNetV2, detects these vulnerable road users. Fine-tuned and trained on the dataset, MobileNetV2 accurately distinguishes e-scooter riders from pedestrians in an input image. The full pipeline achieves a precision of 0.95 and a recall of around 0.75 on the test set. Moreover, the trained MobileNetV2 classifier, applied on top of YOLOv3 detections, reaches over 91% classification accuracy for e-scooter riders, with precision and recall both above 0.9.
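The abstract describes a two-stage pipeline: a YOLOv3 detector localizes people in the scene, and a fine-tuned MobileNetV2 classifies each cropped detection as an e-scooter rider or a pedestrian. The following is a minimal sketch of that structure, not the thesis implementation: the detect_persons() helper, the confidence handling, and the class-index convention (1 = e-scooter rider) are illustrative assumptions, and a real YOLOv3 model would need to be plugged in.

```python
# Illustrative sketch of a two-stage rider-vs-pedestrian pipeline.
# detect_persons() is a hypothetical placeholder for a YOLOv3 person detector;
# the two-class head and index convention are assumptions for this example.

import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image


def build_classifier() -> nn.Module:
    """MobileNetV2 with its final layer replaced by a two-class head for fine-tuning."""
    net = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
    net.classifier[1] = nn.Linear(net.last_channel, 2)  # pedestrian vs. e-scooter rider
    return net


# Standard ImageNet preprocessing for MobileNetV2 inputs.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])


def detect_persons(image: Image.Image) -> list[tuple[int, int, int, int]]:
    """Placeholder for the YOLOv3 'person' detector: returns (x1, y1, x2, y2) boxes."""
    raise NotImplementedError("Plug in a YOLOv3 (or similar) person detector here.")


def classify_riders(image: Image.Image, classifier: nn.Module) -> list[str]:
    """Crop each detected person and label the crop with the fine-tuned classifier."""
    labels = []
    classifier.eval()
    with torch.no_grad():
        for (x1, y1, x2, y2) in detect_persons(image):
            crop = preprocess(image.crop((x1, y1, x2, y2))).unsqueeze(0)
            pred = classifier(crop).argmax(dim=1).item()
            labels.append("e-scooter rider" if pred == 1 else "pedestrian")
    return labels
```

Splitting the task this way keeps the detector generic (any person detector works) while the lightweight MobileNetV2 head carries the rider/pedestrian distinction, which matches the pipeline structure the abstract reports.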
Degree
M.Sc.
Subject Area
Artificial intelligence|Logic|Transportation