Automated Modeling of Human-in-the-Loop Systems

Noah Marquand, Purdue University

Abstract

Safety in human-in-the-loop systems, systems whose behavior changes with human input, is difficult to achieve, and this difficulty can cost lives. As desired system capability grows, so does the requisite complexity of the system. This complexity can prevent designers from accounting for every use case and can lead to unsafe behavior being unintentionally designed in. Furthermore, complexity of operation and control can confuse operators during use or leave them insufficiently trained in the first place. All of these cases can result in unsafe operation. One method of improving safety is the use of formal models during the design process. These formal models can be analyzed mathematically to detect dangerous conditions, but they can be difficult to produce without substantial time, money, and expertise. This document details the study of potential methods for constructing formal models autonomously from recorded observations of system use, minimizing the need for system expertise and saving time, money, and personnel in this safety-critical process. I first discuss how different system characteristics affect system modeling, isolating the specific traits that most clearly affect the modeling process. Then, I develop a technique for modeling a simple, digital, menu-based system from a record of user inputs. This technique estimates which inputs are available to the user at each point, distinguishes states by comparing these input availabilities, and then compares paths between states to check for shared behaviors. I then expand the general procedure to capture the behavior of a flight simulator, a system that more closely resembles real-world safety-critical systems and can therefore approximate a real use case of the method outlined. Here I use machine learning tools for statistical analysis, comparing patterns in system and user behavior. Last, I discuss general conclusions on how the modeling approaches outlined in this document can be improved and expanded upon. For simple systems, I find that inputs alone can produce state machines, but without corresponding system information these models are not detailed enough to determine the relative safety of different use cases. Through machine learning, I find that records of complex system use can be decomposed into sets of nominal and anomalous states, but determining the causal link between user inputs and transitions between these conditions is not simple and requires further research.
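The abstract describes distinguishing states by comparing which inputs are available to the user at each point in a recorded trace. As a rough illustration of that idea only, and not the thesis's actual implementation, the following Python sketch assumes a hypothetical log format in which each recorded step lists the inputs currently available and the input actually taken, and it groups steps with matching availability signatures into candidate states.

```python
# Illustrative sketch (hypothetical log format, not the thesis implementation):
# infer a state machine from a usage log by treating each distinct set of
# available inputs as a candidate state and recording observed transitions.

from collections import defaultdict


def infer_state_machine(log):
    """Group log steps by input-availability signature and record transitions.

    log: sequence of dicts such as {"available": {"up", "down"}, "input": "down"}
    Returns (states, transitions), where states maps each availability
    signature to a state id and transitions maps (state id, input) to the
    set of successor state ids observed in the log.
    """
    states = {}                      # availability signature -> state id
    transitions = defaultdict(set)   # (state id, input) -> successor ids

    def state_id(signature):
        # Assign ids in order of first appearance.
        if signature not in states:
            states[signature] = len(states)
        return states[signature]

    # Pair each step with the next one to observe a transition.
    for step, nxt in zip(log, log[1:]):
        src = state_id(frozenset(step["available"]))
        dst = state_id(frozenset(nxt["available"]))
        transitions[(src, step["input"])].add(dst)

    return states, transitions


if __name__ == "__main__":
    # Toy menu trace: two screens distinguished only by which inputs they accept.
    trace = [
        {"available": {"up", "down", "select"}, "input": "select"},
        {"available": {"back", "ok"}, "input": "back"},
        {"available": {"up", "down", "select"}, "input": "down"},
        {"available": {"up", "down", "select"}, "input": "select"},
        {"available": {"back", "ok"}, "input": "ok"},
    ]
    states, transitions = infer_state_machine(trace)
    print(states)
    print(dict(transitions))
```

States whose outgoing behavior turns out to be identical could then be compared and merged, in the spirit of the path comparison between states described above.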

Degree

M.Sc.

Advisors

Marais, Purdue University.

Subject Area

Design|Agronomy|Artificial intelligence

