Computational models of three-dimensional shape perception

Yunfeng Li, Purdue University

Abstract

In this study, two computational models were formulated to simulate human monocular and binocular 3D shape perception. In the monocular model, simplicity constraints (symmetry, planarity, maximum compactness, and minimum surface area) were used to recover a 3D shape from a single image. In the binocular model, the ordinal depth of points in a 3D shape, provided by stereoacuity, was combined with the simplicity constraints to recover the shape. In two psychophysical experiments, human monocular and binocular 3D shape recovery was measured. Comparison of the subjects’ performance with that of the models showed close agreement. Specifically, monocular performance of both the subjects and the model was close to veridical for slants of the symmetry plane in the range between 30 and 60 deg. When slants were close to 0 deg or 90 deg (degenerate views), monocular performance deteriorated, but the type and magnitude of errors were very similar in the subjects and the model. Binocular performance, on the other hand, was close to veridical for almost the entire range of slants of the symmetry plane. This is the first empirical study demonstrating veridical 3D shape perception, and the first computational model that performs as well as, or even better than, the subjects.
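To give a flavor of how a simplicity constraint can select one 3D interpretation from the infinitely many that project to the same image, the sketch below applies a maximum-compactness prior (V²/S³, scale-invariant and maximized by a sphere) to a toy one-parameter family of interpretations: a box whose frontal face w × h is fixed by the image while its depth d is unknown. This is only an illustrative stand-in, not the dissertation's actual recovery algorithm, and the function names (`compactness`, `recover_depth`) are hypothetical; the full model combines compactness with symmetry, planarity, and minimum-surface-area constraints.

```python
import numpy as np

def compactness(w, h, d):
    # Compactness V^2 / S^3 for a w x h x d box.
    # The ratio is dimensionless, so it compares shapes, not sizes.
    V = w * h * d
    S = 2 * (w * h + w * d + h * d)
    return V**2 / S**3

def recover_depth(w, h, depths=np.linspace(0.05, 5.0, 2000)):
    # Hypothetical helper: among all depths consistent with the image
    # (the one-parameter family of interpretations), pick the one that
    # maximizes compactness.
    scores = [compactness(w, h, d) for d in depths]
    return float(depths[int(np.argmax(scores))])
```

For a unit square image face (w = h = 1), the maximum-compactness interpretation is the cube (d ≈ 1), which one can confirm analytically by setting the derivative of ln(V²/S³) with respect to d to zero.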

Degree

Ph.D.

Advisors

Pizlo, Purdue University.

Subject Area

Psychology|Cognitive psychology

