Learning models and formulas of a temporal event logic
Abstract
We study novel learning and inference algorithms for temporal, relational data and their application to trainable video interpretation. With these algorithms we extend an existing visual-event recognition system, Leonard (Siskind, 2001), in two directions. First, we develop, analyze, and evaluate a supervised learning algorithm for automatically acquiring high-level visual event definitions from low-level force-dynamic interpretations of video, relieving the user of the need to hand-code definitions. We introduce a simple temporal event-description logic called AMA and give algorithms and complexity bounds for the AMA subsumption and generalization problems. Building on these algorithms, we develop a learning method and apply it to the task of learning relational event definitions from video. Experiments show that the learned definitions are competitive with hand-coded ones. Second, we study the problem of relational sequential inference, with application to inferring force-dynamic models from video data for use in event learning and recognition. We introduce two frameworks for this problem that take different approaches to leveraging "nearly sound" logical constraints on a process. We study learning and inference in both frameworks, and our empirical results compare favorably with those of pre-existing hand-coded model reconstructors.
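For orientation, here is a minimal sketch of the AMA syntax, assuming the "And-Meets-And" reading of the acronym used in the companion publications: a state is a conjunction of atomic propositions, an MA timeline is a sequence of states joined end-to-end by Allen's meets relation, and an AMA formula is a conjunction of MA timelines.

\[
\underbrace{s_1 \,;\, s_2 \,;\, \cdots \,;\, s_m}_{\text{MA timeline: states joined by meets}}
\qquad\qquad
\underbrace{\Phi_1 \wedge \Phi_2 \wedge \cdots \wedge \Phi_k}_{\text{AMA formula: conjunction of MA timelines } \Phi_i}
\]

Here each state \(s_i = p_1 \wedge \cdots \wedge p_{n_i}\) is a conjunction of atomic propositions, and the semicolon is illustrative notation for the meets relation rather than the dissertation's exact concrete syntax. Under this reading, subsumption has its usual semantic sense (one formula subsumes another when every model of the second satisfies the first), and generalization asks for a least general formula subsuming a given set of formulas.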
Degree
Ph.D.
Advisors
Robert Givan, Purdue University.
Subject Area
Computer science