VSS 2009 Abstracts

Albrecht, A. R., & Scholl, B. J. (2009). Perceptually averaging in a continuous visual world: Extracting statistical summary representations over time. Talk given at the annual meeting of the Vision Sciences Society, 5/11/09, Naples, FL.  
We typically think of perception in terms of processing individual features, objects, and scenes, but a great deal of information is also distributed over time and space. Recent work has emphasized how the mind extracts such information, as in the surprisingly efficient ability to perceive and report the average size of a set of objects. The extraction of such statistical summary representations (SSRs) is fast and accurate, but it remains unclear what types of populations these statistics can be computed over. Previous studies have always used discrete input -- either spatial arrays of shapes, or temporal sequences of shapes presented one at a time. Real-world visual environments, in contrast, are intrinsically continuous and dynamic. To better understand how SSRs may operate in naturalistic environments, we investigated if and how the visual system averages continuous visual input. When faced with a single disc that continuously expanded and contracted over time -- oscillating among nine 'anchor' sizes -- observers were equally accurate at reporting the average disc size as when the nine anchors were presented in a single spatial array. We further demonstrated that the averaging process samples continuously (and not just over the 'anchor' sizes, for example) by manipulating the durations over which the objects expanded and contracted. When a disc expanded, for example, it could spend more time during either its initial expansion (when it was smaller) or its subsequent expansion (when it was larger) -- and this manipulation greatly influenced the reported average sizes, even though the discs always oscillated between the same anchor sizes. These studies, along with additional manipulations, show that SSRs are continuously updated over time, and that the resulting averages are as accurate as with spatial arrays. As such, these results illustrate how SSRs may be well adapted to dynamically changing real-world environments.
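The duration manipulation described above amounts to weighting each instantaneous size by the time spent at it. A minimal sketch of such duration-weighted averaging, using hypothetical sizes and durations (not the study's actual stimulus parameters):

```python
# Hypothetical illustration: if the perceived average of a continuously
# changing disc tracks a duration-weighted mean of its instantaneous sizes,
# then two discs oscillating through the SAME anchor sizes can yield
# different averages when they dwell at small vs. large sizes.

def duration_weighted_mean(samples):
    """samples: list of (size, duration) pairs along the motion path."""
    total_time = sum(d for _, d in samples)
    return sum(size * d for size, d in samples) / total_time

# An expansion passing through anchors 2, 5, and 8, spending extra time
# either while small or while large (durations are made up):
slow_while_small = [(2, 3.0), (5, 1.0), (8, 1.0)]
slow_while_large = [(2, 1.0), (5, 1.0), (8, 3.0)]

anchor_mean = (2 + 5 + 8) / 3                       # anchors alone: 5.0
small_biased = duration_weighted_mean(slow_while_small)   # pulled below 5
large_biased = duration_weighted_mean(slow_while_large)   # pulled above 5
```

An averaging process that sampled only the anchor sizes would report 5.0 in both cases; continuous sampling predicts the divergence the study observed.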
Betzler, R. J., Turk-Browne, N. B., Christiansen, M. H., & Scholl, B. J. (2009). Statistical learning in everyday perception: The case of variable segment lengths. Poster presented at the annual meeting of the Vision Sciences Society, 5/10/09, Naples, FL.  
Statistical learning is a mechanism of perception that helps to parse continuous input into discrete segments on the basis of distributed temporal and spatial regularities. Nearly all previous studies of temporal statistical learning have used fixed segment lengths, e.g. composing a continuous stream of input out of three-item 'triplets'. In natural environments, however, the units of perceptual experience -- e.g. events and words -- rarely come in fixed lengths. As such, we explored the operation of statistical learning in temporal streams containing segments of variable lengths. Observers passively viewed a continuous stream of novel shapes appearing one at a time. These streams had no overt segmentation, but they were constructed from segments of either a fixed length (all triplets) or variable lengths (combining one-, two-, and three-shape subsequences) -- controlling for both overall stream duration and segment frequency. Statistical learning was then assessed with a two-alternative forced-choice familiarity test pitting subsequences from the stream against recombinations of those same shapes into novel subsequences. The resulting learning was equally robust for observers who had viewed fixed-length segments and observers who had viewed variable-length segments, demonstrating that statistical learning does not assume that "one-size-fits-all". This sort of variability is not only characteristic of the visual environment, but is also a basic property of language. Accordingly, we also replicated these studies (with similar results) in auditory statistical learning of pseudowords. In other studies, we have explored the ability of both visual and auditory statistical learning to cope with other forms of variability, such as when individual elements are 'recycled' into multiple subsequences in the same stream -- since the same objects are present in multiple events and the same syllables are used in multiple words. These results contribute to a growing body of research demonstrating the usefulness of statistical learning for everyday perception.
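The variable-length stream construction can be sketched as follows, using hypothetical shape labels and a made-up segment inventory (the actual stimuli were novel shapes, and the study also equated overall duration with the fixed-length condition):

```python
import random

# Hypothetical sketch: build a continuous, unsegmented stream from
# variable-length subsequences (one-, two-, and three-shape 'segments').
# The only cue to segment boundaries is statistical: transitions within
# a segment are perfectly predictable, transitions across boundaries
# are not.

segments = {
    "A": ["s1"],               # one-shape segment
    "B": ["s2", "s3"],         # two-shape segment
    "C": ["s4", "s5", "s6"],   # three-shape segment
}

def build_stream(segments, n_repeats, seed=0):
    """Concatenate each segment n_repeats times in shuffled order,
    equating segment frequency across segment lengths."""
    rng = random.Random(seed)
    order = list(segments) * n_repeats
    rng.shuffle(order)
    stream = []
    for name in order:
        stream.extend(segments[name])
    return stream

stream = build_stream(segments, n_repeats=4)
```

Because "s2" occurs only inside segment B, it is always followed by "s3"; a learner sensitive to such transition statistics can recover the segments despite their variable lengths.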
Ellner, S., Flombaum, J. I., & Scholl, B. J. (2009). Extrapolation vs. individuation in multiple object tracking. Poster presented at the annual meeting of the Vision Sciences Society, 5/12/09, Naples, FL.  
A central task of perception is not only to segment the visual environment into discrete objects, but also to keep track of objects as persisting individuals over time and motion. Object persistence can be studied using multiple object tracking (MOT), in which observers track several featurally identical targets that move haphazardly and unpredictably among identical distractors. How is MOT possible? One intuitive idea is that this ability is mediated in part by a form of automatic trajectory extrapolation. Some previous studies attempted to support this view by demonstrating that subjects are better able to recover targets following a gap -- the momentary disappearance of the entire display -- when the gap was preceded by a coherent motion trajectory rather than a static array of objects. Such demonstrations are susceptible to a simpler interpretation, though: perhaps the pre-gap motion simply serves to better individuate the objects, rather than supporting trajectory extrapolation. To address this, we studied MOT using four conditions, each involving gaps after which the objects appeared in the same locations across conditions. In the Move condition, objects moved continuously before the gap, such that the final locations were at the 'correct' extrapolated locations. In the Static condition, objects remained stationary before the gap. In the Vibrate condition, objects oscillated in place before the gap. And in the Orthogonal condition, objects moved continuously before the gap at a 90-degree angle from their post-gap positions. Compared to the Static baseline, performance was equal in the Vibrate condition, much better in the Move condition, and significantly worse in the Orthogonal condition. This provides decisive evidence that extrapolation occurs during MOT, and perhaps cannot even be ignored -- since unreliable trajectories yielded worse performance than no trajectories at all. These and other conditions begin to elucidate the underlying processes that effectively 'implement' MOT.
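Trajectory extrapolation of the kind tested here can be sketched as simple linear prediction, with hypothetical positions and velocities (not the study's actual display geometry):

```python
# Hypothetical sketch: predict where a target will reappear after a
# blank gap, assuming it continues along its pre-gap trajectory.

def extrapolate(position, velocity, gap_duration):
    """Linear extrapolation of a target's position across a gap."""
    x, y = position
    vx, vy = velocity
    return (x + vx * gap_duration, y + vy * gap_duration)

# Move condition: the post-gap location matches the extrapolated one,
# so the prediction helps recover the target.
pred = extrapolate((10.0, 10.0), (2.0, 0.0), gap_duration=0.5)

# Orthogonal condition: the object reappears 90 degrees off the
# extrapolated path, so an automatic prediction of this kind would
# actively mislead tracking -- consistent with the observed cost.
```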
Gao, T., McCarthy, G., & Scholl, B. J. (2009). 'Directionality' as an especially powerful cue to perceived animacy: Evidence from 'wolfpack' manipulations. Talk given at the annual meeting of the Vision Sciences Society, 5/10/09, Naples, FL.  
The currency of visual experience consists not only of features such as color and shape, but also higher-level properties such as animacy. We explored one cue that appears to automatically trigger the perception of animacy in an especially powerful manner: directionality, wherein an object (1) appears to have a particular orientation based on its shape (as a wolf's head tells you which way it is facing), and (2) varies this heading systematically with respect to the environment (as a wolf consistently faces its prey during a hunt). Previous studies of perceived animacy have relied on problematic perceptual reports, but we used several new performance measures to demonstrate the power of directionality in some surprising new ways. In all experiments, subjects viewed oriented 'darts' that appeared to face in particular directions as they moved. First, in the "Don't-Get-Caught!" task, subjects controlled the trajectory of a 'sheep' on the screen to avoid getting caught by a 'wolf' dart that pursued them. Subjects escaped much more readily when the darts (including the wolf) consistently faced 'ahead' as they moved. Second, in the "Search-For-Chasing" task, subjects had to detect the presence of a chase between two darts. Performance suffered dramatically when all darts (including the wolf) acted as a 'wolfpack' -- consistently all pointing toward the same irrelevant object. The shapes' coordinated orientations masked actual chasing, while simultaneously making each object seem to 'stalk' the irrelevant object. Third, in the "Leave-Me-Alone!" task, subjects had to avoid touching darts that moved on random trajectories, and tended to steer clear of display regions where the darts were oriented as a 'wolfpack' facing the subject's shape -- demonstrating that subjects find such behavior to be aversive. These results demonstrate a new cue to perceived animacy, and show how it can be measured with rigor using new paradigms.
McCarthy, G., Gao, T., & Scholl, B. J. (2009). Processing animacy in the posterior superior temporal sulcus. Poster presented at the annual meeting of the Vision Sciences Society, 5/13/09, Naples, FL.  
Heider and Simmel (1944) showed that the motion of geometric shapes can be perceived as animate and goal-directed behavior. Neuroimaging studies have shown that viewing such displays evokes strong activation in temporoparietal cortex, including areas in and near the posterior superior temporal sulcus (pSTS). These brain regions are sensitive to socially relevant information, and have been implicated in the perception of biological motion and in 'theory of mind' processing. Further investigation of the function(s) of pSTS, however, has been limited by the complex constructions of previous animate displays, which make it difficult to determine which low-level visual cues trigger the perception of animacy. Also, these displays elicit uncontrolled shifts of attention, making it hard to distinguish the cues influencing perceived animacy from spatial attentional shifting. In the current fMRI study, both of these issues were addressed. Subjects viewed a display containing four moving darts (or arrowheads). Subjects were required to track all four darts continuously and to covertly count how many dot probes briefly flashed upon them. On different trials, the perceived animacy of the darts was manipulated by varying whether the darts moved along their long axis (facing ahead) or orthogonal to their long axis (sideways). We also manipulated whether one dart (the 'wolf') chased another dart (the 'sheep'). Prior behavioral results have shown that both the 'facing ahead' and 'chasing' cues trigger the perception of animacy; however, here both of these animacy manipulations were irrelevant to the dot-probe detection task. Behavioral results revealed no difference in probe detection between conditions, indicating that attention was well controlled. Activation of the pSTS was greater for animate than inanimate displays -- suggesting that animacy detection was automatically triggered by these low-level cues.
New, J. J., & Scholl, B. J. (2009). The functional nature of motion-induced blindness: Further explorations of the 'perceptual scotoma' hypothesis. Talk given at the annual meeting of the Vision Sciences Society, 5/9/09, Naples, FL.  
Perhaps the most striking phenomenon of visual awareness to be discovered in the last decade is that of motion-induced blindness (MIB). In MIB, fully visible and attended objects may repeatedly fluctuate into and out of conscious awareness when superimposed onto certain global moving patterns. While frequently considered as a limitation or failure of visual perception, we have proposed that MIB may actually reflect a specific functional heuristic in visual processing for identifying and compensating for some visual impairments. In particular, when a small object is invariant despite changes that are occurring in the surrounding visual field, the visual system may interpret that stimulus as akin to a scotoma, and may thus expunge it from awareness. Here we further explore this 'perceptual scotoma' hypothesis (New & Scholl, 2008, Psychological Science), reporting several new features of MIB, and responding to some apparent challenges. In particular, we explore the role of moving targets in MIB. Though scotomas can be stationary, some ('motile scotomas', or 'floaters' consisting of material within the eye) may frequently move. The character of such movements, however, yielded the unique prediction that moving targets in MIB displays may be more likely to perceptually disappear when they are floating downward vs. rising upward through the same positions -- a prediction that was robustly confirmed. In additional experiments, we explored the effects of targets in MIB that moved with vs. against smooth horizontal eye movements. Targets moving with fixation (as would a scotoma) disappeared much more readily. Because this effect occurred when both types of moving targets were present in the display at the same time, such effects cannot be explained by appeal to microsaccades or attentional effects. These and other new effects each support the idea that MIB reflects an adaptive visual function.