Abstract

To attain improved human-machine collaboration, autonomous systems must infer human trust and workload and respond accordingly. In turn, autonomous systems require models that capture the dynamics of both human trust and workload. In a companion paper, we developed a trust-workload partially observable Markov decision process (POMDP) model framework that captures changes in human trust and workload for contexts involving interaction between a human and an intelligent decision-aid system. In this paper, we define intuitive reward functions and show that these can be readily transformed for integration with the proposed POMDP model. We synthesize a near-optimal control policy using transparency as the feedback variable based on solutions for two cases: 1) increasing human trust and reducing workload, and 2) improving overall performance along with the aforementioned objectives for trust and workload. We implement these solutions in a reconnaissance mission study in which human subjects are aided by a virtual robotic assistant in completing a series of missions. We show that it is not always beneficial to aim to improve trust; instead, when designing intelligent decision-aid systems that influence trust-workload behavior, the control objective should be to optimize a context-specific performance objective.
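The policy-synthesis step summarized above can be illustrated with a small sketch. The code below is not the authors' model: it assumes a hypothetical two-state trust POMDP (Low/High trust) with two transparency levels as actions, uses made-up transition, observation, and reward numbers, and applies the standard QMDP approximation as a simpler stand-in for the near-optimal solution method described in the paper.

```python
# Minimal sketch of transparency-based policy synthesis for a POMDP.
# All model numbers are illustrative placeholders, not estimated parameters.
import numpy as np

S, A, O = 2, 2, 2   # states (Low/High trust), actions (transparency levels), observations
gamma = 0.95        # discount factor (assumed)

# T[a, s, s']: transition probabilities under each transparency level (hypothetical)
T = np.array([
    [[0.8, 0.2], [0.3, 0.7]],   # low transparency
    [[0.6, 0.4], [0.1, 0.9]],   # high transparency
])
# Z[a, s', o]: observation probabilities (hypothetical)
Z = np.array([
    [[0.7, 0.3], [0.3, 0.7]],
    [[0.8, 0.2], [0.2, 0.8]],
])
# R[a, s]: reward favoring high trust, with a small cost for high transparency
R = np.array([
    [0.0, 1.0],
    [-0.1, 0.9],
])

# 1) Value iteration on the fully observable underlying MDP.
Q = np.zeros((A, S))
for _ in range(500):
    V = Q.max(axis=0)        # state values under the greedy policy
    Q = R + gamma * T @ V    # one-step Bellman backup

# 2) QMDP approximation: act greedily on belief-weighted Q-values.
def policy(belief):
    return int(np.argmax(Q @ belief))

# 3) Bayesian belief update after taking action a and observing o.
def update(belief, a, o):
    b = Z[a, :, o] * (T[a].T @ belief)
    return b / b.sum()

b = np.array([0.5, 0.5])     # uniform prior over trust states
a = policy(b)
b = update(b, a, o=1)
print(f"chosen transparency level: {a}, updated belief: {b}")
```

QMDP ignores the value of information gathering, so it is only a rough proxy for a true POMDP solver; it is used here because it keeps the belief-to-action mapping explicit in a few lines.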

Comments

This is the publisher's version of Akash, Kumar, Tahira Reid, and Neera Jain (2019). Improving Human-Machine Collaboration Through Transparency-based Feedback – Part II: Control Design and Synthesis. IFAC-PapersOnLine, 51, 322–328. doi:10.1016/j.ifacol.2019.01.026.

Keywords

Trust in automation, human-machine interface, intelligent machines, Markov decision processes, stochastic modeling, parameter estimation, dynamic behavior

Date of this Version

2-8-2019

DOI

10.1016/j.ifacol.2019.01.026
