VSS 2020 Abstracts
Hu, Y., Ongchoco, J. D. K., & Scholl, B. J. (2020). From causal perception to event segmentation: Using spatial memory to reveal how many visual events are involved in causal launching. Poster presented at the annual meeting of the Vision Sciences Society, 5/20/20, Online.
The currency of visual experience is frequently not static scenes, but dynamic events. And perhaps the most central topic in the study of event perception is *event segmentation* -- how the visual system carves a continuous stream of input into discrete temporal units. A different tradition has tended to focus on particular types of events, the most famous example of which may be *causal launching*: a disc (A) moves until it reaches another stationary disc (B), at which point A stops and B starts moving in the same direction. Since these two well-studied topics (event segmentation and causal perception) have never been integrated, we asked a simple question: how many events are there in causal launching? Just one (the launch as a whole)? Or two (A's motion and B's motion)? We explored this using spatial memory, predicting that memory for intermediate moments within a single event representation should be worse than memory for moments at event boundaries. Observers watched asynchronous animations in which each of six discs started and stopped moving at different times, and (in different experiments) simply indicated each disc's initial or final position. The discs came in pairs, and in some cases A launched B. To ensure that the results reflect perceived causality, other trials involved the same component motions but with spatiotemporal gaps between them (which eliminate perceived launching). The critical locations were the two intermediate ones (A's final position and B's initial position), and spatial memory was indeed worse for launching displays (perhaps because these locations occurred in the middle of a single ongoing event) compared to displays with spatiotemporal gaps (perhaps because these same locations now occurred at the perceived event boundary between A's motion and B's motion). This suggests that causal perception leads the two distinct motions to be represented as a single visual event.
Kwak, J., Uddenberg, S., & Scholl, B. J. (2020). Will it fall?: Exploring the properties that mediate perceived physical instability. Poster presented at the annual meeting of the Vision Sciences Society, 5/20/20, Online.
We often think of perception in terms of relatively low-level properties (such as color and shape), but we can also perceive seemingly higher-level properties, such as physical stability -- as when we can see at a glance whether a tower of blocks will fall or not. Prior work has demonstrated that physical stability is extracted quickly and automatically -- both by deep networks and during human visual processing -- but it remains unclear just which properties are used to compute such percepts. In the current study, observers viewed pseudorandom 3D computer-generated images of block towers, such that the ground truth of each tower's stability could be simulated in a physics engine, and compared with observers' percepts of whether each tower would fall. Critically, towers were carefully constructed so that percepts of (in)stability could not be based on especially trivial properties such as global asymmetry, or the shape of a tower's boundary envelope. Our analyses demonstrate that observers are sensitive not only to whether a tower will fall, but also to continuous degrees of instability. In particular, the most powerful factor driving observers' percepts of instability was the summed distances that the blocks moved between the initial and post-fall tableaus, independent of the towers' initial heights (even though of course observers never actually saw the towers falling) -- a factor that wasn't as salient in past models. Variance in the blocks' initial horizontal positions was also a powerful predictor of perceived (in)stability, independent of global symmetry. By combining psychophysics with physics-based simulation and computational modeling, these and other results help to reveal just how we can perceive physical (in)stability at a glance -- a capacity that may be of great adaptive value, given the importance in vision of predicting how our local environments may be about to change.
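For concreteness, here is a minimal sketch (in Python with NumPy; the function name and array layout are hypothetical, and the actual stimuli, simulations, and models are not reproduced here) of how the two predictors described above -- summed block displacement and initial horizontal-position variance -- might be computed from simulated tower states:

```python
import numpy as np

def instability_predictors(initial_xyz, post_fall_xyz):
    """Two illustrative predictors of perceived tower instability.

    initial_xyz, post_fall_xyz: (n_blocks, 3) arrays of block centers
    before and after a physics-engine simulation settles.
    """
    # Predictor 1: the summed distance the blocks move between the
    # initial and post-fall tableaus (larger = more unstable).
    summed_displacement = np.linalg.norm(post_fall_xyz - initial_xyz, axis=1).sum()

    # Predictor 2: variance of the blocks' initial horizontal positions,
    # taken here (by assumption) over the two ground-plane axes, x and z.
    horizontal_variance = initial_xyz[:, [0, 2]].var(axis=0).sum()

    return summed_displacement, horizontal_variance
```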
Ongchoco, J. D. K., & Scholl, B. J. (2020). The hierarchy of experience: Visual memory is differentially disrupted by local vs. global event boundaries. Poster presented at the annual meeting of the Vision Sciences Society, 5/20/20, Online.
Though static scenes so often dominate our experimental displays, our visual experience is inherently populated by dynamic visual events: out there in the world, things *happen*. And perhaps the two most salient themes in the study of event perception are *memory flushing* at event boundaries, and the *hierarchical* nature of our dynamic experience. Visual working memory appears to be effectively flushed at event boundaries (just as one might empty a cache in a computer program), perhaps because this is when the statistics of our local environments tend to change most dramatically -- and holding on to now-obsolete information may be maladaptive for guiding behavior in new contexts. The series of events we experience arrives not as a linear sequence, though, but as a structured *hierarchy*, with global events built up from more local events. (A morning might involve showering, then breakfast -- but breakfast might involve pouring coffee, then burning toast, etc.) Curiously, these two central themes of event perception have never been connected, so here we explore for the first time how they interact. Observers viewed faces, one at a time. Certain features (such as size or spatial location) changed relatively frequently (inducing 'local' boundaries), while others changed less frequently (inducing 'global' boundaries). Critically, hierarchical position was dissociated from absolute frequency (such that a given frequency might be 'local' in one condition, but 'global' in another). On each trial, observers simply reported which of two faces had appeared first -- where the pair could span a local boundary, a global boundary, or no boundary. Across a wide variety of experiments, memory was disrupted only by the most global boundaries that were present, regardless of their frequency. Thus, whether a particular event boundary will flush visual memory depends on how it is situated in the hierarchy of our experience.
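As a purely illustrative sketch (hypothetical helper and parameters, not the authors' stimulus code), the dissociation between hierarchical position and absolute frequency might look like this: a feature changing every 4 faces supplies the global boundaries when paired with a feature changing every 2 faces, but the local boundaries when paired with one changing every 8:

```python
def boundary_positions(n_faces, period):
    """Indices at which a feature changes, given its change period (hypothetical helper)."""
    return [i for i in range(1, n_faces) if i % period == 0]

n = 24
# Condition A: changes every 4 faces are the *global* boundaries.
condition_a = {"local": boundary_positions(n, 2), "global": boundary_positions(n, 4)}
# Condition B: the same every-4-faces changes are now the *local* boundaries.
condition_b = {"local": boundary_positions(n, 4), "global": boundary_positions(n, 8)}
# The every-4-faces changes occur at the same absolute frequency in both
# conditions, but occupy opposite positions in the event hierarchy.
```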