Inferring Takeover and Trust in SAE Level 2 Automated Vehicles
With the rise of automation in today's society, a growing amount of research is focused on improving human-automation interaction. This includes a particular emphasis on how humans trust automation, as human trust is a driving factor of human reliance on automation. Specifically, trust needs to be calibrated for successful interaction between humans and automation. To avoid trust miscalibration (i.e., over- or under-trust), there is a need to design human-aware systems that can predict human trust and adapt their behavior accordingly. However, current computational trust models often overlook aspects of trust that have been highlighted by qualitative modeling efforts. Specifically, it is not clear how trust develops over longer timescales (greater than minutes) or across a gap in interaction. In this thesis, a computational trust model aimed at capturing changes in trust behavior over multiple timescales and interaction gaps is developed. A Nonlinear Autoregressive with Exogenous Inputs (NARX) model is chosen to predict human trust levels by leveraging behavioral, psychophysiological, and environmental data. Trust is studied in an SAE Level 2 context, using a medium-fidelity driving simulator. A unique experiment is designed such that trust dynamics can be studied across two distinct interactions, separated by a period of one week with no interaction. The data collected from this experiment are evaluated to determine which features of the data set best predict trust. These features are then used to train multiple trust models, which are analyzed and compared. While model analysis reveals that trust dynamics differ between interactions, it also indicates that the differences may be captured by a single model.
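As a rough illustration of the NARX structure the abstract describes, the sketch below regresses a trust signal on its own past values plus lagged exogenous inputs. It is a minimal linear sketch, not the thesis's actual model: the feature names, lag count, and toy data are all assumptions, and a true NARX model would typically use a nonlinear function approximator (e.g., a neural network) in place of the least-squares fit shown here.

```python
import numpy as np

def make_narx_features(y, u, n_lags=2):
    """Build a lagged design matrix for a NARX-style predictor:
    trust y(t) is regressed on past trust values y(t-1..t-n_lags)
    and past exogenous inputs u(t-1..t-n_lags), standing in for
    behavioral / psychophysiological / environmental measurements."""
    rows, targets = [], []
    for t in range(n_lags, len(y)):
        past_y = y[t - n_lags:t]            # autoregressive terms
        past_u = u[t - n_lags:t].ravel()    # exogenous-input terms
        rows.append(np.concatenate([past_y, past_u]))
        targets.append(y[t])
    return np.array(rows), np.array(targets)

# Toy data (hypothetical): trust evolves slowly and is nudged by
# three exogenous input channels.
rng = np.random.default_rng(0)
u = rng.normal(size=(100, 3))
y = np.zeros(100)
for t in range(1, 100):
    y[t] = 0.8 * y[t - 1] + 0.1 * u[t - 1].sum() + 0.01 * rng.normal()

X, target = make_narx_features(y, u, n_lags=2)
coef, *_ = np.linalg.lstsq(X, target, rcond=None)  # linear fit for the sketch
rmse = np.sqrt(np.mean((X @ coef - target) ** 2))
print("in-sample RMSE:", rmse)
```

Because the toy data are generated by exactly this lagged-linear structure, the fit recovers the dynamics almost perfectly; on real trust data the same lagged-feature construction would feed a nonlinear regressor instead.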
Jain, Purdue University.
Computer Engineering|Mechanical Engineering