In this paper, we establish a partially observable Markov decision process (POMDP) model framework that captures dynamic changes in human trust and workload in contexts involving interactions between humans and intelligent decision-aid systems. We use a reconnaissance mission study to elicit dynamic changes in human trust and workload with respect to the system’s reliability and user interface transparency, as well as the presence or absence of danger. We use human subject data to estimate the transition and observation probabilities of the POMDP model and analyze the trust-workload behavior of humans. Our results indicate that higher transparency is more likely to increase human trust when existing trust is low, but is also more likely to decrease trust when it is already high. Furthermore, we show that high transparency is always likely to increase the human’s workload. In our companion paper, we use this estimated model to develop an optimal control policy that varies system transparency to shape human trust-workload behavior and improve human-machine collaboration.
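The core computation in a POMDP of this kind is the Bayesian belief update, which combines the estimated transition and observation probabilities to track the hidden state (here, trust or workload) from observed responses. The sketch below illustrates that update; the two-state trust model, the probability values, and the function name are illustrative assumptions, not the paper’s estimated parameters.

```python
# Minimal illustrative sketch of a POMDP belief update, assuming a
# hypothetical two-state trust model (0 = low trust, 1 = high trust).
# The transition matrix T and observation matrix O below are placeholder
# values, not the probabilities estimated from the human subject data.

def update_belief(belief, T, O, obs):
    """Bayesian belief update: b'(s') ∝ O[s'][obs] * sum_s T[s][s'] * b[s]."""
    n = len(belief)
    # Prediction step: propagate the current belief through the transition model.
    predicted = [sum(belief[s] * T[s][sp] for s in range(n)) for sp in range(n)]
    # Correction step: weight by the observation likelihood, then normalize.
    unnorm = [O[sp][obs] * predicted[sp] for sp in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# T[s][s']: transition probabilities under one fixed transparency action.
T = [[0.7, 0.3],
     [0.2, 0.8]]
# O[s'][o]: observation likelihoods; obs 0 = "distrust response", 1 = "trust response".
O = [[0.8, 0.2],
     [0.3, 0.7]]

belief = [0.5, 0.5]            # start maximally uncertain about the trust state
belief = update_belief(belief, T, O, obs=1)
```

After observing a trust-consistent response (`obs=1`), the updated belief shifts toward the high-trust state. In the paper’s setting, one such matrix pair would be estimated per combination of system reliability, transparency level, and danger condition.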


This is the publisher’s version of: Akash, Kumar, Polson, Katelyn, Reid, Tahira, & Jain, Neera (2019). Improving Human-Machine Collaboration Through Transparency-based Feedback – Part I: Human Trust and Workload Model. IFAC-PapersOnLine, 51, 315-321. doi:10.1016/j.ifacol.2019.01.028.


Trust in automation, human-machine interface, intelligent machines, Markov decision processes, stochastic modeling, parameter estimation, dynamic behavior
