Yale Perception & Cognition Lab

VSS '13 Abstracts
 
 
Albrecht, A., Scholl, B. J., & McCarthy, G. (2013). Is perceptual averaging an ability or a reflex? Electrophysiological evidence for automatic averaging. Poster presented at the annual meeting of the Vision Sciences Society, 5/14/13, Naples, FL.  
To cope with the vast amount of incoming visual information, we not only select regions for further processing (constructing a high-resolution representation of a small amount of input, via attention), but also summarize the visual world along multiple dimensions (constructing a low-resolution representation of a large amount of input). In studies of perceptual averaging, for example, observers are able to quickly and accurately extract the mean size of an array of shapes, or the mean orientation of an array of lines. Despite many impressive demonstrations of perceptual averaging in recent years, it is still unclear what kind of phenomenon this is. In particular, is averaging an ability (something that we can intentionally engage when asked to do so by task instructions) or is it an incidental visual process (something that occurs even without a conscious attempt to do so)? We explored perceptual averaging of orientation without an explicit averaging task, by measuring repetition suppression with EEG. Observers viewed static line segments of varying orientations arrayed in an annulus around fixation. Their only task was to respond to rare displays during which a line segment jiggled momentarily (10% of trials, later excluded from the EEG analyses). On Match displays, the mean orientation was identical to that of the previous display (though the individual line segments always had distinct orientations), whereas on No Match displays, the mean orientation differed. Across several conditions, we observed significant suppression in the EEG waveform to Match displays, compared to No Match displays. This difference occurred within approximately 250 ms of the display onset, and was primarily apparent at relatively posterior electrode sites. These and other results suggest that perceptual averaging is an incidental visual process that the mind engages in even when not explicitly tasked to do so.
 
Chen, H., & Scholl, B. J. (2013). Congruence with items held in visual working memory boosts invisible stimuli into awareness: Evidence from motion-induced blindness. Poster presented at the annual meeting of the Vision Sciences Society, 5/13/13, Naples, FL.  
Attention and awareness are intimately related -- with the former sometimes serving as a gateway to the latter, as in phenomena such as inattentional blindness. Other recent work suggests that attention and visual working memory (VWM) are also intimately related. For example, objects that are congruent with items held in VWM tend to attract attention. Combining these insights leads to the intriguing possibility that VWM congruence could make otherwise-invisible stimuli visible. We tested this by exploiting Motion-Induced Blindness (MIB), wherein salient targets fluctuate into and out of conscious awareness in the presence of a superimposed global motion pattern. A single number was presented at the beginning and end of each trial, and observers had a simple VWM task: Was the final number the same as the initial one? During retention, observers viewed an MIB display with two targets (in the two upper display quadrants), each the number '8' drawn as a rectangle with a single bisecting line. Observers held down independent keys to indicate when the two targets disappeared. While both were invisible, line segments on each target gradually faded out to yield two other numbers (e.g. "5" and "2"), one of them congruent with the VWM number. After the two targets reappeared, observers indicated which of them (left, right, or both) had become visible first. Whether the VWM number was a numerical digit or a written word, the VWM-congruent target was more likely to reappear first -- an effect not due to endogenous attention (since observers didn't know which invisible target would be congruent until it reappeared) or to a response bias (since it did not occur in catch trials that forced simultaneous reappearance). This discovery shows how VWM congruence can boost invisible stimuli into awareness, and suggests new types of interactions between VWM, attention, and awareness.
 
Chen, Y. -C., & Scholl, B. J. (2013). Seeing and liking: Biased perception of ambiguous figures based on aesthetic preferences for how objects should face within a frame. Poster presented at the annual meeting of the Vision Sciences Society, 5/10/13, Naples, FL.  
Aesthetic preferences are ubiquitous in visual experience. Indeed, it seems nearly impossible in many circumstances to perceive a scene without also liking or disliking it. While aesthetic factors are occasionally studied in vision science, they are often treated as something that occurs only after the rest of visual processing is complete. In contrast, the present study explores whether aesthetic preferences influence other types of visual processing -- focusing on the disambiguation of bistable figures. We used bistable images whose competing interpretations differed not only in their semantic content (e.g. duck vs. rabbit) but also in the direction they appeared to be facing (to the left vs. to the right). Observers viewed one such figure at a time, placed within a visible frame -- near the left edge, near the right edge, or in the center -- and they pressed a key to indicate which interpretation they saw throughout each 15-second trial. Previous work with unambiguous images identified an "inward bias": when an object is near the border of a frame, we like the image more if the object is facing inward (toward the center) vs. outward. When observers in the present project viewed a bistable figure, its position within the frame influenced what they saw at the beginning of each trial. For example, seeing a rightward-facing figure (whether duck or rabbit) was most likely when the figure was near the left border, and least likely when near the right border. The same pattern held for the total duration of each percept throughout a trial. In sum, observers tended to see whichever interpretation would cause the figure to be facing inward -- i.e. whichever they would like more. We discuss the roles of attention and familiarity in such effects, and conclude that aesthetic factors play an active role in visual processing.
 
De Freitas, J., Liverence, B. M., & Scholl, B. J. (2013). Visual and auditory object-based attention driven by rhythmic structure over time. Poster presented at the annual meeting of the Vision Sciences Society, 5/11/13, Naples, FL.  
Objects often serve as fundamental units of visual attention. Perhaps the most well-known demonstration of object-based attention is the 'same-object advantage': when attention is directed to one part of an object, it is easier to shift to another part of the same object than to an equidistant location on a different object. Does this effect apply only to spatial shifts of attention, or can same-object advantages also occur based on purely temporal structure? We explored this question using rhythmic stimuli, composed of repeating "phrases" (of several seconds each), and presented either auditorily or visually. Auditory stimuli consisted of sequences of tones (of a single frequency), temporally arranged to yield regular (and independently normed) rhythms. Visual stimuli consisted of the same rhythms "tapped out" by a moving bar on a computer screen. Subjects detected infrequent high-pitch probe tones in the auditory experiment, or high-luminance probe flashes in the visual experiment. Probes were preceded by temporally-predictive cue tones (or flashes), so that each cue-probe pair either occurred within a single phrase repetition (Within-Phrase) or spanned a phrase boundary (Between-Phrase), with the brute cue-target duration equated. In both modalities, subjects detected Within-Phrase probes faster than Between-Phrase probes -- and further control studies confirmed that these effects weren't driven by the absolute probe positions within each phrase. Thus same-object advantages are driven by temporal as well as spatial structure, and in multiple modalities. In this sense, "object-based visuospatial attention" may not require objects, and may not be fundamentally visual or spatial. Rather, it may reflect a broader phenomenon in which attention is constrained by many kinds of perceptual structure (in space or time, in vision or audition).
 
Firestone, C., & Scholl, B. J. (2013). 'Top-down' effects where none should be found: The El Greco fallacy in perception research. Talk given at the annual meeting of the Vision Sciences Society, 5/13/13, Naples, FL.  
A tidal wave of recent research purports to have discovered that higher-level states such as moods, action-capabilities, and categorical knowledge can literally and directly affect what we see. Are these truly effects on perception, or might some instead reflect influences on judgment, memory, or response bias? Here, we exploit an infamous art-historical reasoning error (the so-called "El Greco fallacy") to demonstrate in five experiments that multiple alleged top-down effects (ranging from effects of morality on lightness perception to effects of action capabilities on spatial perception) cannot truly be effects on perception. We do so by actively replicating these very effects, but in previously untested circumstances where their motivating theories demand their absence. We first replicated a finding that holding a wide rod across one's body decreases width estimates of a potentially passable aperture, as measured by subjects' adjustments of a tape. However, we also observed the same narrowing effect when the 'matching' instrument was itself an aperture (instead of a tape). If rod-holding truly makes apertures look narrower, then this second experiment should have 'failed', because both apertures should have looked narrower, with the distortions cancelling out. A second series of experiments replicated a finding that recalling unethical deeds makes stimuli look darker, as measured by ratings on a 7-point number scale. However, we also observed the same darkening effect when the scale itself consisted of 7 grayscale patches (which should themselves have looked darker, too!). In each of these cases, the alleged effect is real but cannot be perceptual -- since if it were, then the instrument used to measure it would have been similarly distorted, and the distortions would have cancelled out. We suggest that this new research strategy is widely applicable, and has broad implications for debates over the (dis)continuity of perception and cognition.
 
Liverence, B. M., & Scholl, B. J. (2013). Object persistence enhances spatial navigation in visual menus: A case study in smartphone vision science. Poster presented at the annual meeting of the Vision Sciences Society, 5/13/13, Naples, FL.  
Disruptions of spatiotemporal continuity in dynamic events impair many types of online visual cognition (e.g. change detection, multiple object tracking), suggesting that persisting objects serve as underlying units of attention and working memory. Can persistence also constrain visual learning over time, and could this make a difference in real-world contexts? We explored the role of persistence in spatial navigation through virtual scenes, using a novel "menu navigation" paradigm inspired by smartphone interfaces. Observers viewed "icons" (real-world object pictures) spatially organized into virtual grids that remained stable across all trials in a block. Only a subset of icons was visible at a time, viewed through a virtual window. On each trial, subjects navigated through the grid (via keypresses) to find 4 randomly-chosen target icons, in order, as quickly as possible. On Persist blocks, icons slid smoothly from page to page (mimicking many smartphone animations). On Fade blocks, icons were replaced by a fading animation with no motion (equating brute duration). Across many variations, subjects were significantly faster on Persist blocks -- often by several seconds/trial. This difference occurred for both simple displays (e.g. grids of 4x4 pages with 1 visible icon/page) and for displays that more closely mirrored smartphone interfaces (e.g. grids of 3x3 pages with 4 visible icons/page). It also occurred even towards the end of each block (after performance had reached asymptote), suggesting a reliance on robust implicit spatial representations (during Persist blocks) that could not be matched by explicit memorization (during Fade blocks). Further conditions verified that this result reflected object persistence per se, rather than attentional capture, distraction, or momentary disappearance due to the fading.
These results suggest that object persistence can control spatial learning over time as well as online perception, and they show how vision science can provide a foundation for understanding computer interface design.
 
Ward, E., & Scholl, B. J. (2013). Making the switch: Transient unconscious cues can disambiguate bistable images. Talk given at the annual meeting of the Vision Sciences Society, 5/14/13, Naples, FL.  
What we see is a function not only of incoming stimulation, but of unconscious inferences in visual processing. Perhaps the most powerful demonstrations of this are bistable images, wherein the same stimulus alternates between two very different percepts, corresponding to two competing stable states of an underlying dynamic system. What causes the percepts to switch? Previous research has implicated voluntary effort (e.g. mediated by attention) and stochastic processing. Here we explore a third possibility, wherein percepts may switch as a result of data-driven manipulations, even when those manipulations are brief and observers are unaware of them. This is difficult to study with most bistable images, since the percepts are so volatile and the switching so frequent. A notable exception is the Spinning Dancer animation: a spinning woman is depicted in silhouette, so both her orientation in depth and direction of rotation are ambiguous. Still, many observers see her rotating in the same direction for long periods of time, interrupted only rarely by involuntary switches. We introduced disambiguating information into this display, in the form of explicit contours on the silhouette that indicated occlusion (e.g. which leg is behind the other). These contours were subtle and presented quickly enough that most observers failed to notice them throughout the entire experiment. Nevertheless, their impact on switching was strong and systematic: the cue typically led to a perceptual switch shortly thereafter, especially for contours that conflicted with the observer's current percept. Yet to the observers, the switches seemed stochastic. 
These results show not only how transient disruptions can shock a stochastic system into a new stable state, but also how the visual inferences that determine perception extract the content of incoming visual information to constrain conscious percepts -- even when neither the content nor the brute existence of that information ever reaches awareness.