VSS 2012 Abstracts


Albrecht, A., & Scholl, B. J. (2012). Perceptual size averaging: It's not just for circles anymore. Poster presented at the annual meeting of the Vision Sciences Society, 5/14/12, Naples, FL.  
Much recent research has explored the phenomenon of perceptual averaging: beyond constructing representations of individual objects, the visual system also computes statistical summaries of scenes (perhaps as a way of coping with capacity limitations). For example, observers are able to efficiently and accurately report the average size of an array of discs. To date, however, the displays used in such studies have been homogeneous in several ways that are not reflective of real-world scenes. For example, such experiments have always used displays containing identical shapes, varying only in size. To explore perceptual averaging for heterogeneous displays, observers viewed 1-second arrays of either pacmen or wedges (equated for area), where the angular extent of the wedges (or of the 'missing' wedges from pacmen) varied within each array. Observers reported the average area of such arrays using an adjustable test shape that matched the array shapes (e.g. a constant-radius wedge whose angular sweep could be adjusted, or a constant-angle wedge whose radius could be adjusted). Observers were no less accurate at averaging in this situation than when they reported the average area of an array of discs using a test disc. However, there was a marked cost of adding a different kind of heterogeneity at test: for example, when reporting average area using a test disc, observers were more accurate for arrays filled with discs than for arrays filled with either pacmen or wedges. Thus perceptual averaging is relatively unaffected by some types of heterogeneity (e.g. within-array shape differences), but can be frustrated by others (e.g. heterogeneity across initial arrays vs. test shapes). These and other manipulations suggest that perceptual averaging is well adapted to at least some of the kinds of heterogeneity that are characteristic of real-world visual experience.
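As a point of reference for this adjustment procedure (standard circle geometry, not something spelled out in the abstract): a wedge, i.e. a circular sector, of radius r and angular sweep θ (in radians) has area

\[ A = \tfrac{1}{2}\,\theta\,r^{2}, \]

so a test shape can match a given area A either by adjusting its sweep at a fixed radius (θ = 2A/r²) or by adjusting its radius at a fixed sweep (r = √(2A/θ)).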
 
De Freitas, J., Liverence, B., & Scholl, B. J. (2012). Attentional rhythm: A temporal analogue of object-based attention. Poster presented at the annual meeting of the Vision Sciences Society, 5/12/12, Naples, FL.  
A critical step in understanding any perceptual process is determining the underlying units over which it operates. For example, decades of research have demonstrated that the underlying units of visual attention are often visual objects. This conclusion has been supported by the demonstration of a 'same-object advantage': for example, nonpredictive cues lead to faster target responses when cue and target both occur on the same object than when they occur on distinct objects, equating for spatial distance. Such effects have been well characterized in the context of spatial attention, but to our knowledge no previous studies have investigated the possibility of analogous effects for temporal attention. Here we explore whether a particular class of temporally-extended auditory "objects" -- rhythmic phrases -- might similarly serve as units of temporal attention. Participants listened to a repeating sequence of rhythmic phrases (3-4 seconds each) composed of a low-pitched tone while trying to quickly detect sporadic higher-pitched target tones, each of which was preceded by a fully-predictive cue tone. While equating for the brute duration of the cue-target interval, some cue-target pairs occurred within the same rhythmic phrase, while others spanned a boundary between phrases. We observed a significant "same-phrase" advantage: participants responded more quickly to Within-Phrase than Between-Phrase targets. These results reveal a new phenomenon of temporal attention, as well as a new parallel between visual and auditory processing. In particular, they suggest a more general interpretation of typical object-based effects in visual attention: just as the structure of a scene will constrain the allocation of attention in space, so too might the structure of a sequence constrain the allocation of attention in time. Thus, rather than being driven by particular visual cues per se, "object-based attention" may reflect a more general influence of perceived structure of any kind on attention.
 
Firestone, C., & Scholl, B. J. (2012). "Please tap the shape, anywhere you like": The psychological reality of shape skeletons. Poster presented at the annual meeting of the Vision Sciences Society, 5/15/12, Naples, FL.  
An intriguing hypothesis in research on shape perception is that shapes are represented in terms of their inferred interior structure, rather than their visible borders. Such 'skeletal' or 'medial axis' shape representations are thought to afford computational efficiency and flexibility. However, an old (and in our view tragically unheralded) report from the 1970s supported the psychological reality of such representations, using a remarkably direct method: subjects were presented with a 2D shape drawn on paper, and they simply used a pencil to make a single dot within the shape. When many subjects' dots were aggregated, the resulting plots bore a striking resemblance to a traditional shape skeleton. Here we revive this paradigm for the digital era, replicating previous effects and extending them in several new ways. Using a tablet computer with a touch-sensitive screen, hundreds of observers were shown geometric shapes (including some that changed size and shape dynamically during viewing) and were simply asked to "touch the screen, inside the shape, anywhere you like". We discovered just when the aggregated touches did and did not reveal shape skeletons -- including tests of several types of stimuli that to our knowledge have not been considered in the shape-representation literature. We also employed this method to test predictions made by specific computational accounts. For example, one prominent account based on Bayesian estimation holds that subtle perturbations in a border should not affect the computed shape skeleton. The psychological effects of such perturbations, however, were quantitatively large and visually striking: whereas the aggregated touches in a normal rectangle lined up with the shape's conventional medial axis, the addition of even a very small notch near one corner dramatically altered this pattern. We discuss these and many other results in (re)introducing this surprisingly direct window onto otherwise-hidden visual processes.
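For readers unfamiliar with the medial axis transformation mentioned above, here is a minimal sketch of how one might compute it (our illustration, not the authors' method; it assumes Python with NumPy and scikit-image, whose medial_axis function implements the standard transform). Comparing the notched and un-notched rectangles gives a feel for the kind of skeleton perturbation at issue:

    import numpy as np
    from skimage.morphology import medial_axis

    def rectangle_mask(h=60, w=120, notch=False):
        # Binary image of a filled rectangle on a blank background,
        # optionally with a small square notch cut near one corner
        # (a stand-in for the abstract's 'very small notch').
        img = np.zeros((h + 20, w + 20), dtype=bool)
        img[10:10 + h, 10:10 + w] = True
        if notch:
            img[10:16, 10:16] = False
        return img

    for notch in (False, True):
        skeleton = medial_axis(rectangle_mask(notch=notch))
        print(f"notch={notch}: {skeleton.sum()} skeleton pixels")

Plotting the two skeletons (e.g. with matplotlib's imshow) should make the difference vivid: the plain rectangle yields the familiar symmetric axis, while the notch sprouts additional branches near the perturbed corner.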
 
Kominsky, J., & Scholl, B. J. (2012). The window of 'postdiction' in visual perception is flexible: Evidence from causal perception. Poster presented at the annual meeting of the Vision Sciences Society, 5/12/12, Naples, FL.  
One of the most counterintuitive effects in perception is postdiction -- the ability of the visual system to integrate information from shortly after an event occurs into the perception of that very event. Postdictive effects have been found in many contexts, but their underlying nature remains somewhat mysterious. Here we report several studies of causal perception that explore whether the temporal extent of the postdiction 'window' is fixed, or whether it can flexibly adapt to different contexts. Previous work has explored a display in which one disc (A) moves to fully overlap a second disc (B), at which point A stops and B starts moving. This 'full-overlap' display will often be seen in terms of non-causal 'passing', wherein A is seen to pass over a stationary B (even when their colors differ). However, when a third 'context' object (C) begins moving in the same direction as B at roughly the same time, the full-overlap event will often be perceived as causal 'launching', wherein A collides with B, causing its motion. These percepts can be influenced postdictively, when C begins moving slightly after the A/B overlap. By varying the objects' speeds, we were able to determine whether the duration of the postdiction window is constant, or whether it varies as a function of an event's pace. The results clearly demonstrated that the postdiction window is flexible, and in a surprising way. The existence of a postdictive effect was a function not just of the temporal offset (between when B and C started moving) but also of a spatial offset -- how far B had moved before C started moving -- largely independent of the timing. This suggests that the visual system adapts to the speed of the stimuli and adjusts the extent of the postdiction 'window' accordingly.
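One compact way to state this finding (our gloss, not a formula from the poster): if the window is effectively defined over distance rather than time, then for objects moving at speed v the tolerable delay before C's motion arrives too late is roughly t_max ≈ s_max / v, where s_max is the critical spatial offset. Faster events would thus yield shorter temporal windows, consistent with the observed speed-dependence.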
 
Liverence, B., & Scholl, B. J. (2012). Attentional selection increases the refresh rate of perception: Evidence from multiple-object tracking. Talk given at the annual meeting of the Vision Sciences Society, 5/13/12, Naples, FL.  
Selective attention enhances accuracy and speeds responses in many perceptual tasks. In some cases, these enhancements may be probabilistic, e.g. reflecting an increased likelihood that an attentional spotlight will sample from selected (rather than background) objects at any given moment. Might attentional selection also lead to qualitatively different sampling? Here we explore the possibility that selection can alter the functional 'refresh rate' of perception. While fixating, observers tracked 2 out of 5 objects in a simplified multiple-object tracking task. As they moved, the objects also rapidly changed colors (4-12 times per second, varied from trial to trial), and participants simultaneously monitored for probe events in which any 2 objects' changing colors momentarily became synchronized. Target-target probe detection was dramatically enhanced relative to target-distractor or distractor-distractor probe detection at every rate tested, with target-target detection on 12 Hz trials roughly equivalent to target-distractor (or distractor-distractor) detection on 4 Hz trials. (And this effect replicated for multiple visual features, including rapid shape changes.) Critically, baseline performance (as revealed by a control experiment with probe detection but no tracking) was just as low as in the target-distractor and distractor-distractor conditions, indicating that the selection effect reflects target enhancement rather than distractor impairment. Nontemporal accounts could not explain these data. For example, a deflationary interpretation wherein task demands simply kept observers from processing distractors would predict worse performance for distractor-distractor (and target-distractor) probes relative to baseline probes -- but this was not observed. Data from additional control experiments also ruled out interpretations on which the sampling of selected objects was no more frequent, but was instead more synchronous or involved higher-resolution samples. These data collectively suggest that attentional selection leads to more frequent sampling of selected objects -- an increase in the functional refresh rate of perception.
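To get a feel for why this equivalence suggests a change in sampling rate (a back-of-the-envelope illustration of ours, not an analysis from the talk): suppose attention samples an object f_s times per second and a synchrony probe lasts about one color cycle, i.e. 1/f_c seconds at change rate f_c. Under random sampling, the chance that a sample lands within the probe is then roughly

\[ p \approx \min\!\left(1, \frac{f_s}{f_c}\right), \]

so matching detection for targets at f_c = 12 Hz against distractors at f_c = 4 Hz would require the target sampling rate to be about three times the distractor sampling rate.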
 
Strickland, B., & Scholl, B. J. (2012). "Event type" representations in vision are triggered rapidly and automatically: A case study of containment vs. occlusion. Talk given at the annual meeting of the Vision Sciences Society, 5/15/12, Naples, FL.  
Recent infant cognition research suggests that the mind reflexively categorizes dynamic visual input into representations of "event types" (such as occlusion or containment), which then prioritize attention to relevant visual features -- e.g. prioritizing attention to the dimension (height vs. width) that predicts whether a rectangular object will fit inside another in the context of containment, but not occlusion, even when those events are highly visually similar. We recently discovered that this form of "core knowledge" continues to operate in adults' visual processing: using a form of change detection, we showed that the category of an event dramatically influences the ability to detect changes to certain features. In the current study we explored just how event-type representations may be quickly and flexibly triggered by specific visual cues. Subjects viewed dynamic 2D displays depicting repeating events wherein 5 rectangles oscillated horizontally, moving either behind or into 5 horizontally-oriented and haphazardly placed containers. Occasionally, a rectangle changed its height or width while out of sight, and observers pressed a key when they detected such changes. Detection was better for height changes than for width changes in containment events, but not in occlusion events (since height predicts fit in horizontal containment events). This was true not only when each individual rectangle consistently underwent occlusion or containment, but also when each rectangle randomly underwent occlusion or containment on each oscillation. We also independently varied containment vs. occlusion for the disappearance and reappearance of the rectangles, and discovered that enhanced change detection for the "fit"-relevant dimension occurred only when containment cues were present for both the disappearance and the reappearance. Collectively, these and other results indicate that event-type representations are formed and discarded during online visual processing in response to cues that may change from moment to moment.
 
Suben, A., & Scholl, B. J. (2012). Recently disoccluded objects are preferentially attended during multiple-object tracking. Poster presented at the annual meeting of the Vision Sciences Society, 5/13/12, Naples, FL.  
The human visual system allows multiple featurally-identical objects to be simultaneously tracked, and the presence of periodic occlusion does little or nothing to impair this ability. However, studies of the resources that underlie this ability demonstrate that periods of momentary occlusion nevertheless demand the allocation of extra bursts of attention -- the so-called "attentional high-beams" effect. Here we explored how and when these resources are allocated. Across several experiments, observers tracked multiple featurally-identical objects as they moved about displays containing static occluders. At the same time, observers also had to detect small probes that appeared sporadically on the occluders, or on targets while they were in one of six states: unoccluded, about to be occluded, partially occluded, fully occluded, partially unoccluded, or just recently unoccluded. Probe detection rates for these categories were taken as indices of the distribution of attention. (Distractors were probed just as often, so that probes did not predict target identity.) We replicated the high-beams effect: probe detection rates were higher for occluded targets than for visible targets. For partially occluded targets, however, we observed an asymmetry: objects in the process of becoming disoccluded were still attentionally prioritized, but objects in the process of becoming occluded were not. This same qualitative pattern occurred for fully visible targets that were very close to occluders, with a benefit for targets that had recently been occluded, but no benefit for targets that were about to become occluded. Thus, the high-beams effect occurs not only for occluded (and thus invisible) targets but also for fully visible (but just-recently occluded) targets. This surprising result also emphasizes that the high-beams effect truly reflects a functional difference, rather than a visual difference. This effect of dynamic attention also appears to be subject to a form of inertia, but is not driven predictively.