VSS 2010 Abstracts
Albrecht, A. R., & Scholl, B. J. (2010). The nature of perceptual averaging: Automaticity, selectivity, and simultaneity. Poster presented at the annual meeting of the Vision Sciences Society, 5/9/10, Naples, FL.
Perception represents not only discrete features and objects, but also information distributed in time and space. One intriguing example is perceptual averaging: we are surprisingly efficient at perceiving and reporting the average size of objects in spatial arrays or temporal sequences. Extracting such statistical summary representations (SSRs) is fast and accurate, but several fundamental questions remain about their underlying nature. We explored three such questions, investigating SSRs of size for static object arrays, and for a single continuously growing/shrinking object (as introduced in Albrecht & Scholl, in press, Psychological Science). Question 1: Are SSRs computed automatically, or only intentionally? When viewing a set of discs, observers completed three trials of a 'decoy' task, pressing a key when they detected a sudden luminance change. Observers also reported the discs' average size on the final trial, but could receive these instructions either before the final display onset, or after its offset. Performance for the second ('incidental averaging') group was no worse than for the first ('intentional averaging') group -- suggesting that some SSRs can be computed automatically. Question 2: Can SSRs be computed selectively from temporal subsets? Observers viewed a continuously growing/shrinking disc that changed color briefly during each trial. Observers were asked to average either the entire sequence, or only the differently-colored subset -- via instructions presented either before the display onset, or after its offset. Performance was as accurate with subsets as with the whole -- suggesting that SSRs can be temporally selective. Question 3: Can we simultaneously extract multiple SSRs from temporally overlapping sequences? In the same experiments, there was a small but reliable cost to receiving the instructions after the display offset -- suggesting that the visual system cannot automatically compute multiple temporally-overlapping averages. Collectively, these and other results clarify both the flexibility and intrinsic limitations of perceptual averaging.
Gao, T., & Scholl, B. J. (2010). Chasing vs. stalking: Interrupting the perception of animacy. Talk given at the annual meeting of the Vision Sciences Society, 5/11/10, Naples, FL.
Visual experience involves not only physical features such as color and shape, but also higher-level properties such as animacy and goal-directed behavior. Perceiving animacy is an inherently dynamic experience, in part because agents' goals and mental states may be constantly in flux -- unlike many of their physical properties. How does the visual system maintain and update representations of agents' goal-directed behavior over time and motion? The present study explored this question in the context of a particularly salient form of perceived animacy: chasing, in which one shape (the 'wolf') pursues another shape (the 'sheep'). The participants themselves controlled the movements of the sheep, and the perception of chasing was assessed in terms of their ability to avoid being caught by the wolf -- which looked identical to many moving distractors, and so could be identified only by its motion. In these experiments the wolf's pursuit was periodically interrupted by short intervals in which it did not chase the sheep. When the wolf moved randomly during these interruptions, the detection of chasing was greatly impaired. This could be for two reasons: decreased evidence in favor of chasing, or increased evidence against chasing. These interpretations were tested by having the wolf simply remain static (or jiggle in place) during the interruptions (among distractors that behaved similarly). In these cases chasing detection was unimpaired, supporting the 'evidence against chasing' model. Moreover, random-motion interruptions only impaired chasing detection when they were grouped into fewer temporally extended chunks rather than being dispersed into a greater number of shorter intervals. These results reveal (1) how perceived animacy is determined by the character and temporal grouping (rather than just the brute amount) of 'pursuit' over time; and (2) how these temporal dynamics can lead the visual system to either construct or actively reject interpretations of chasing.
Liverence, B. M., & Scholl, B. J. (2010). Do we experience events in terms of time, or time in terms of events? Talk given at the annual meeting of the Vision Sciences Society, 5/10/10, Naples, FL.
In visual images, we perceive both space (as a continuous visual medium) and objects (that inhabit space). Similarly, in dynamic visual experience, we perceive both continuous time and discrete events. What is the relationship between these units of experience? The most intuitive answer is similar to the spatial case: time is perceived as an underlying medium, which is later segmented into discrete event representations. Here we explore the opposite possibility -- that events are perceptually primitive, and that our subjective experience of temporal durations is constructed out of events. In particular, we explore one direct implication of this possibility: if we perceive time in terms of events, then temporal judgments should be influenced by how an object's motion is segmented into discrete perceptual events, independent of other factors. We observed such effects with several types of event segmentation. For example, the subjective duration of an object's motion along a visible path is longer when the trajectory is smooth than when the same trajectory is split into shorter independent pieces, played back in a shuffled order (a path shuffling manipulation). Path shuffling apparently disrupts object continuity -- resulting in new event representations, and flushing detailed memories of the previous segments. In contrast, segmentation cues that preserve event continuity (e.g. a continuous path but with segments separated by sharp turns) shorten subjective durations relative to the same stimuli without any segmentation (e.g. when the segments are bound into a single smoothly-curving path, in trajectory inflection manipulations). In all cases, event segmentation was manipulated independently of psychophysical factors previously implicated in time perception, including overall stimulus energy, attention, and predictability. These and other results suggest a new way to think about the fundamental relationship between time and events, and imply that time may be less primitive in the mind than it seems to be.
Strickland, B., & Scholl, B. J. (2010). Representations of "event types" in visual cognition: The case of containment vs. occlusion. Poster presented at the annual meeting of the Vision Sciences Society, 5/12/10, Naples, FL.
The visual system segments dynamic visual input into discrete event representations, but these are typically considered to be token representations, wherein particular events are picked out by universal segmentation routines. In contrast, recent infant cognition research by Renée Baillargeon and others suggests that our core knowledge of the world involves "event type" representations: during perception, the mind automatically categorizes dynamic events into types such as occlusion, containment, and support. This categorization then automatically guides attention to different properties of events, depending on their type. For example, an object's width is particularly relevant to containment events (wherein one object is lowered inside another), because that variable specifies whether the event is possible (i.e. whether it will 'fit'). However, this is not true for the variable of height. This framework has been supported by looking-time experiments from Baillargeon's group: when viewing containment events, infants encode objects' widths at a younger age than they encode their heights -- but no such difference in age is observed for similar occlusion events. Here we tested the possibility that this type of 'core knowledge' can also be observed in mid-level object-based visual cognition in adults. Participants viewed dynamic 2D displays that each included several repeating events wherein rectangles either moved into or behind containers. Occasionally, the moving rectangles would change either their height or width while out of sight, and observers pressed a key when they detected such changes. Change detection performance mirrored the developmental results: detection was significantly better for width changes than for height changes in containment events, but no such difference was found for occlusion events. This was true even though many observers did not report noticing the subtle difference between occlusion and containment. These results suggest that event-type representations are a part of the underlying currency of adult visual cognition.