The Impact of 2-D and 3-D Grouping Cues on Depth From Binocular Disparity
Abstract
Stereopsis is a powerful source of information about the relative depth of objects in the world. Humans can see depth from binocular disparity in isolation, without any other depth cues. However, many stimulus properties can dramatically influence the depth we perceive. For example, an abundance of research shows that the configuration of a stimulus can affect the percept of depth, in some cases diminishing the amount of depth experienced. Much of this previous research has focused on discrimination thresholds; in one example, stereoacuity for a pair of vertical lines was markedly reduced when the lines were connected to form a rectangle apparently slanted in depth (e.g., McKee, 1983). The contribution of Gestalt figural grouping to this phenomenon has not been studied.
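For context, the quantitative link between disparity and depth referred to above is commonly described by the standard small-angle geometry of stereopsis; the relation below is a textbook approximation, not taken from this dissertation, and its symbols (interocular separation $I$, viewing distance $d$, depth interval $\Delta d$, relative disparity $\delta$ in radians) are introduced here only for illustration:
\[
  \delta \;\approx\; \frac{I\,\Delta d}{d^{2}}
  \qquad\Longleftrightarrow\qquad
  \Delta d \;\approx\; \frac{\delta\, d^{2}}{I}.
\]
On this approximation, a fixed disparity corresponds to a larger depth interval at greater viewing distances, which is why suprathreshold depth-magnitude estimates, rather than thresholds alone, are informative about how disparity is interpreted.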
This dissertation addresses the role that perceptual grouping plays in the recovery of suprathreshold depth from disparity. First, I measured the impact of perceptual closure on depth magnitude. Observers estimated the separation in depth of a pair of vertical lines as the amount of perceptual closure was varied. In a series of experiments, I characterized the 2-D and 3-D properties that contribute to 3-D closure and to estimates of apparent depth. Estimates of perceived depth were highly correlated with the strength of subjective closure. Furthermore, I highlighted the perceptual consequences, both costs and benefits, of a new disparity-based grouping cue that interacts with perceived closure, which I call good stereoscopic continuation. This cue was shown to promote detection in a visual search task but to reduce perceived depth relative to isolated features.
Taken together, the results reported here show that specific 2-D and 3-D grouping constraints are required to promote recovery of a 3-D object. When these constraints are met, quantitative depth is reduced, but the object is detected rapidly in a visual search task. I propose that these phenomena result from object-based disparity smoothing operations that enhance object cohesion.