Commonsense Knowledge Representation and Reasoning in Statistical Script Learning

I-Ta Lee, Purdue University

Abstract

A recent surge of research on commonsense knowledge has given the AI community new opportunities and challenges. Many studies focus on constructing commonsense knowledge representations from natural language data, but how to learn such representations from large-scale text remains an open question. This thesis addresses the problem through statistical script learning, which learns event representations from stereotypical event relationships using weak supervision. These event representations serve as an abundant source of commonsense knowledge that can be applied in downstream language tasks.

We propose three script learning models that generalize previous work with new insights. A feature-enriched model characterizes fine-grained, entity-based event properties to capture specific event semantics. A multi-relational model generalizes traditional script learning, which relies on a single type of event relationship (co-occurrence), to typed event relationships, going beyond simple event similarity. A narrative graph model grounds each event in its surrounding situation, maintaining global consistency of event states. In addition, pretrained language models such as BERT are used to further improve event semantics.

None of the three models relies on annotated datasets, as the cost of creating such datasets at scale is prohibitive. Instead, based on weak supervision, we extract events from large collections of text. Although noisy, the learned event representations carry rich commonsense information and improve performance on downstream language tasks.

We evaluate the models with both intrinsic and extrinsic evaluations. In the intrinsic evaluations, although the three models are assessed from different angles, the shared core task is Multiple Choice Narrative Cloze (MCNC) [1], [2], which measures a model's ability to predict what happens next in a given situation, choosing from five candidate events; a minimal scoring sketch of this setup follows the abstract. This task enables fair comparison between script learning models for commonsense inference. The three models were proposed in three consecutive years, from 2018 to 2020, each outperforming the previous year's model as well as competing baselines. Our best model outperforms EventComp [2], a widely recognized baseline, by a large margin on MCNC: an absolute accuracy improvement of 9.73% (53.86% → 63.59%).

In the extrinsic evaluations, we apply our models to implicit discourse sense classification (IDSC), a challenging task in which, given two annotated argument spans, the model must predict the implicit discourse sense holding between them, requiring a deep commonsense understanding of how the arguments relate. In additional work, we address psychological commonsense reasoning, a group of tasks whose solution requires reasoning about and understanding human mental states such as motivation, emotion, and desire. Our best model, an enhancement of the narrative graph model, combines the advantages of the three approaches above, addressing entity-based features, typed event relationships, and grounded context in a single model. It successfully captures the contexts in which events appear and the interactions between characters' mental states, outperforming previous work.

The main contributions of this thesis are as follows:

• We identify the importance of entity-based features for representing commonsense knowledge with script learning.
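To make the MCNC setup concrete, the following is a minimal sketch of how event embeddings from a script model could be scored on this task. It is illustrative only: `embed` is a hypothetical stand-in for any trained event encoder (not the thesis's specific models), and cosine similarity against the mean context vector is one simple plausibility heuristic.

```python
import numpy as np

def mcnc_predict(context_events, candidates, embed):
    """Choose which candidate event most plausibly comes next.

    context_events: list of events observed so far in the narrative
    candidates: list of five candidate next events
    embed: hypothetical trained encoder mapping an event to a vector
    """
    # Aggregate the narrative context into a single vector.
    ctx = np.mean([embed(e) for e in context_events], axis=0)
    # Score each candidate by cosine similarity with the context.
    cand_vecs = [embed(c) for c in candidates]
    scores = [
        v @ ctx / (np.linalg.norm(v) * np.linalg.norm(ctx) + 1e-9)
        for v in cand_vecs
    ]
    return int(np.argmax(scores))

def mcnc_accuracy(examples, embed):
    """Accuracy over a list of (context_events, candidates, gold_index) triples."""
    hits = sum(mcnc_predict(ctx, cands, embed) == gold
               for ctx, cands, gold in examples)
    return hits / len(examples)
```

A stronger approach would replace this unsupervised cosine score with a learned event-compatibility function, which is the general idea behind models such as EventComp.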

Degree

Ph.D.

Advisors

Dan Goldwasser, Purdue University.

Subject Area

Artificial intelligence
