VSS 2016 Abstracts


Firestone, C., & Scholl, B. J. (2016). Seeing stability: Intuitive physics automatically guides selective attention. Poster presented at the annual meeting of the Vision Sciences Society, 5/15/16, St. Pete Beach, FL.  
We can quickly and effortlessly evaluate whether a tower of blocks will collapse or a stack of dishes will come crashing down. What is the nature of this ability? Although 'intuitive physics' is traditionally associated with higher-level reasoning, here we explore the possibility that such sophisticated physical intuitions are underwritten by more basic processes -- and specifically whether visual attention and memory are automatically drawn toward physically relevant features. In a modified change-detection task, an image of a physically stable block tower was briefly displayed, after which it disappeared and was replaced either by the same configuration or by one in which a single block was slightly displaced. On some trials, this change upset the tower's balance, rendering it unstable; on other trials, the change was identical in magnitude but did not alter the tower's stability. Detection was reliably better for changes to blocks that altered overall stability, compared to either (1) equivalent changes to the same blocks that did not influence stability, or (2) equivalent changes to different blocks. Critically, this pattern held even though stability was entirely incidental to the task. Follow-up studies demonstrated that this stability-based advantage in change detection persisted even when attending to physical stability could confer no strategic advantage, and even for observers who never consciously noticed any variation in the towers' (in)stability. Further work isolated perceived stability, per se: when the towers' ground-truth stability (according to physics) was contrasted with subjectively perceived stability (as rated by independent subjects), change detection was better predicted by how stable the towers *looked* than by how stable they actually were. Collectively, this work shows how basic processes of attention and memory are sensitive to a scene's underlying physics, and how selective attention is automatically drawn to those objects and features that are especially physically relevant.
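The contrast above between ground-truth and perceived stability presupposes an objective criterion for when a tower actually falls. For a simple vertical stack, one such criterion is a center-of-mass test at each interface, illustrated in the minimal Python sketch below. This is a simplified, hypothetical reconstruction (illustrative names and block representation), not the stimulus code used in the study:

    # Simplified ground-truth stability test for a vertical stack of blocks.
    # At every interface, the combined center of mass of all blocks above
    # must lie over the supporting block's top surface.
    def stack_is_stable(blocks):
        """blocks: list of (x_center, width, mass), ordered bottom to top;
        assumes each block rests directly on the one beneath it."""
        for i in range(len(blocks) - 1):
            above = blocks[i + 1:]
            total_mass = sum(m for _, _, m in above)
            com_x = sum(x * m for x, _, m in above) / total_mass
            support_x, support_w, _ = blocks[i]
            # Unstable if the center of mass overhangs the support's edge.
            if abs(com_x - support_x) > support_w / 2:
                return False
        return True

    # A small displacement of the top block can flip ground-truth stability:
    print(stack_is_stable([(0.0, 1.0, 1.0), (0.4, 1.0, 1.0)]))  # True
    print(stack_is_stable([(0.0, 1.0, 1.0), (0.6, 1.0, 1.0)]))  # False

On this criterion, a displacement matters only when it pushes a center of mass past a support edge -- exactly the stability-relevant vs. stability-irrelevant contrast the experiment exploits.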
 
Kominsky, J., & Scholl, B. J. (2016). Retinotopic adaptation reveals multiple distinct categories of causal perception. Poster presented at the annual meeting of the Vision Sciences Society, 5/14/16, St. Pete Beach, FL.  
We can perceive not only low-level features of events such as color and motion, but also seemingly higher-level properties such as causality. Perhaps the best example of causal perception is the 'launching effect': one object (A) moves toward a stationary second object (B) until they are adjacent, at which point A stops and B starts moving in the same direction. Beyond the kinematics of these motions themselves, and regardless of any higher-level beliefs, this display induces a vivid impression of causality, wherein A is seen to cause B's motion. Do such percepts reflect a unitary category of visual processing, or might there be multiple distinct forms of causal perception? On one hand, the launching effect is often simply equated with causal perception more broadly. On the other hand, researchers have sometimes described other phenomena such as 'braking' (in which B moves much slower than A) or 'triggering' (in which B moves much faster than A). We used psychophysical methods to determine whether these labels really carve visual processing at its joints, and how they relate to each other. Previous research demonstrated a form of retinotopically specific adaptation to causality: exposure to causal launching makes subsequent ambiguous events in that same location more likely to be seen as non-causal 'passing'. We replicated this effect, and then went on to show that exposure to launching also yields retinotopically specific adaptation for subsequent ambiguous braking displays, but not for subsequent ambiguous triggering displays. Furthermore, exposure to triggering not only yielded retinotopically specific adaptation for subsequent ambiguous triggering displays, but also for subsequent ambiguous launching displays. Collectively, these results reveal that there is more to causal perception than just the launching effect: visual processing distinguishes some (but not all) types of causal interactions.
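The distinctions among launching, braking, and triggering are purely kinematic: what differs is B's post-contact speed relative to A's approach speed. A minimal sketch of that parameterization, with illustrative names and values rather than the actual display code:

    # One-dimensional positions of A and B in a launching-style event.
    # speed_ratio < 1 yields 'braking', == 1 'launching', > 1 'triggering'.
    def object_positions(t, contact_t, speed_a, speed_ratio):
        b_start = speed_a * contact_t          # B sits where A arrives at contact
        if t < contact_t:
            return speed_a * t, b_start        # A approaches; B is stationary
        # After contact, A stops and B departs at the scaled speed.
        return speed_a * contact_t, b_start + speed_a * speed_ratio * (t - contact_t)

    for label, ratio in [("braking", 0.25), ("launching", 1.0), ("triggering", 3.0)]:
        a, b = object_positions(t=2.0, contact_t=1.0, speed_a=1.0, speed_ratio=ratio)
        print(f"{label:>10}: A at {a:.2f}, B at {b:.2f}")

An ambiguous test event can then be constructed by degrading the cues that disambiguate these categories (e.g. by letting the two objects fully overlap at contact, so the display can be seen as either causal launching or non-causal passing), which is what makes the adaptation measure possible.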
 
Uddenberg, S., Newman, G., & Scholl, B. J. (2016). Perceptual averaging of scientific data: Implications of ensemble representations for the perception of patterns in graphs. Poster presented at the annual meeting of the Vision Sciences Society, 5/17/16, St. Pete Beach, FL.  
One of the most prominent trends in recent visual cognition research has been the study of ensemble representations, as in the phenomenon of perceptual averaging: people are impressively accurate and efficient at extracting average properties of visual stimuli, such as the average size of an array of objects, or the average emotion of a collection of faces. Here we explored the nature and implications of perceptual averaging in the context of a particular sort of ubiquitous visual stimulus: graphs of numerical data. The most common way to graph numerical data involves presenting average values explicitly, as the heights of bars in bar graphs. But the use of bar graphs also leads to biased perception and reasoning, as observers implicitly behave as if data are more likely to be contained within the bars themselves, even when the bars depict averages (as in the so-called 'within-the-bar bias', perhaps due to object-based attention). Here we tested observers' ability to perceive and remember average values via perceptual averaging when they viewed entire distributions of values. Observers had to extract and report (via mouse clicks) the average values of two distributions, depicted either as bar graphs or as 'beeswarm plots' (a kind of one-dimensional scatterplot, in which each datapoint is depicted by a non-overlapping dot -- with no explicit representation of the average value). Observers were surprisingly accurate at extracting average values from beeswarm plots. Indeed, observers were just as accurate at reporting averages from visible beeswarm plots as they were when simply recalling the heights of bars from bar graphs. Even average values reported from memory, after the beeswarms had disappeared, were highly accurate (though not as accurate as when the beeswarms were visible). These results collectively demonstrate that perceptual averaging operates efficiently when viewing scientific data, and could be exploited for information visualization.
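For concreteness, the defining constraint of a beeswarm is that every datapoint remains visible as its own dot, with colliding dots nudged sideways and no explicit mark for the mean. The sketch below shows one simple greedy layout satisfying that constraint; it is an illustrative reconstruction, not the plotting code used in the study:

    from itertools import count

    # Greedy 1-D beeswarm layout: each value keeps its vertical position
    # and receives the smallest horizontal offset that avoids overlap.
    def beeswarm_offsets(values, dot_size):
        placed = []                        # (value, offset) pairs laid out so far
        for v in sorted(values):
            for k in count():
                # Candidate offsets: 0, +d, -d, +2d, -2d, ...
                off = ((k + 1) // 2) * dot_size * (1 if k % 2 else -1)
                if all((v - pv) ** 2 + (off - po) ** 2 >= dot_size ** 2
                       for pv, po in placed):
                    placed.append((v, off))
                    break
        return placed

    for value, offset in beeswarm_offsets([4.8, 5.0, 5.1, 5.1, 5.2, 6.9], dot_size=0.3):
        print(f"value={value:.1f}  x-offset={offset:+.2f}")

Reporting the average from such a plot then requires perceptually integrating over the dots' vertical positions, since nothing in the display depicts the mean directly.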
 
van Buren, B., Gao, T., & Scholl, B. J. (2016). What are the underlying units of perceived animacy? Chasing detection is intrinsically object-based. Talk given at the annual meeting of the Vision Sciences Society, 5/15/16, St. Pete Beach, FL.
One of the most foundational questions that can be asked about any visual process is the nature of the underlying 'units' over which it operates -- e.g. features, objects, or spatial regions. Here we address this question -- for the first time, to our knowledge -- in the context of the perception of animacy. Visual processing recovers not only low-level features such as color and orientation, but also seemingly higher-level properties such as animacy and intentionality. Even simple geometric shapes may appear to be animate (e.g. chasing one another) when they move in certain ways -- and this appears to reflect irresistible visual processing, occurring regardless of one's beliefs. What are the underlying units of such processing? Do such percepts arise whenever any visual feature moves appropriately, or do they require that the relevant features first be individuated as discrete objects? Observers viewed displays containing several moving discs. Most discs moved randomly, but on chasing-present trials, one (the 'wolf') chased another (the 'sheep') by continually updating its heading in the direction of the sheep. On chasing-absent trials, the wolf instead chased the sheep's mirror-image (thus controlling for correlated motion). Observers' task on each trial was simply to detect the presence of chasing. Critically, however, two pairs of discs were always connected by thin lines. On Unconnected trials, both lines connected pairs of distractors; but on Connected trials, one line connected the wolf to a distractor, and the other connected the sheep to a different distractor. Signal detection analyses revealed that chasing detection was severely impaired on Connected trials: observers could readily detect an object chasing another object, but not a line-end chasing another line-end -- even when both were easily discriminable. We conclude that the underlying units of perceived animacy are discrete visual objects.
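As a concrete rendering of the pursuit algorithm described above, here is a minimal per-frame heading update for the wolf, including the mirror-image control used on chasing-absent trials. Function and parameter names are hypothetical, not the authors' display code:

    import math

    # Advance the wolf one frame toward its target (positions are (x, y)).
    # On chasing-absent trials the wolf pursues the sheep's mirror image,
    # preserving the correlated motion without any actual pursuit of the sheep.
    def wolf_step(wolf, sheep, speed, display_width, chasing_present):
        tx, ty = sheep
        if not chasing_present:
            tx = display_width - tx    # reflect the target about the vertical midline
        heading = math.atan2(ty - wolf[1], tx - wolf[0])
        return (wolf[0] + speed * math.cos(heading),
                wolf[1] + speed * math.sin(heading))

    # One frame: wolf at the origin, sheep up and to the right.
    print(wolf_step((0.0, 0.0), (3.0, 4.0), speed=1.0,
                    display_width=10.0, chasing_present=True))   # ~(0.6, 0.8)

Note that the Connected manipulation leaves this motion algorithm untouched; only the objecthood of the wolf and sheep changes, which is what licenses the conclusion about object-based units.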