DOI

10.1038/s41598-024-65080-7

Date of this Version

6-21-2024

Keywords

Human-AI alignment, Large language models, Explainable AI, Eye tracking, Cognitive engineering, Human-computer interaction

Abstract

To understand the alignment between the reasoning of humans and artificial intelligence (AI) models, this empirical study compared human text classification performance and explainability with those of a traditional machine learning (ML) model and a large language model (LLM). A domain-specific, noisy textual dataset of 204 injury narratives was classified into 6 cause-of-injury codes. The narratives varied in complexity and ease of categorization based on the distinctiveness of the cause-of-injury code. The user study involved 51 participants whose eye-tracking data were recorded while they performed the text classification task. While the ML model was trained on 120,000 pre-labelled injury narratives, the LLM and the human participants received no specialized training. The explainability of the different approaches was compared based on the top words each used to make classification decisions. These words were identified using eye tracking for humans, the explainable AI approach LIME for the ML model, and prompts for the LLM. The classification performance of the ML model was relatively better than that of the zero-shot LLM and the non-expert humans, overall, and particularly for narratives with high complexity and difficult categorization. The top-3 predictive words used by the ML model and the LLM agreed with those of humans to a greater extent than the later predictive words did.

Comments

This is the publisher PDF of Divya Venkatesh, J., Jaiswal, A. & Nanda, G. Comparing human text classification performance and explainability with large language and machine learning models using eye-tracking. Sci Rep 14, 14295 (2024). This article is distributed under a CC-BY license, and is available at DOI: 10.1038/s41598-024-65080-7.

Included in

Engineering Commons
