VSS 2023 Abstracts

Dhar, P., Ongchoco, J. D. K., Wong, K., & Scholl, B. J. (2023). Somehow, everything has changed: Event boundaries defined only by unnoticed changes in implicit visuospatial statistics drive active forgetting in visual working memory. Poster presented at the annual meeting of the Vision Sciences Society, 5/20/23, St. Pete Beach, FL.  
Visual memories can fade not only due to interference and decay, but also due to 'active forgetting'. Perhaps the most salient example of this involves visual event segmentation: both recognition and recall decline when observers experience event boundaries (e.g. when a visual feature suddenly changes, or when they see themselves pass through a doorway while walking down a long hallway). Such effects are often assumed to be adaptive: event boundaries are taken as cues that the statistics of the world are likely to have changed, rendering pre-boundary memories obsolete. In previous work, however, the event boundaries have always been explicit, with pre- and post-boundary stimuli having similar or identical visual statistics. Here we reversed this pattern: is active forgetting triggered even by completely unnoticed changes in implicit visual statistics, without any overt segmentation cues? Subjects viewed a list of pseudowords for 5 seconds, and later their recognition memory was tested. Critically, they viewed a sequence of images between study and test that either did or did not contain an event boundary defined purely by changes in implicit statistics. Inspired by studies of visual statistical learning, images consisted of differently colored dots positioned within a 3x3 grid. Images contained spatial regularities in the dots’ relative positions despite randomized absolute positioning (e.g. such that a red dot was always directly above a blue dot). For some subjects, these spatial statistics remained constant; for others, they changed midway through the sequence. Even when subjects were unaware of the implicit statistical patterns, those patterns still influenced resulting memory performance -- with impaired recognition (as measured by d’) for subjects who viewed sequences with a change in statistics. Thus, active forgetting due to event segmentation does not depend on observers consciously noticing event boundaries, but rather reflects the underlying architecture of visual working memory.
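Recognition here was measured with d’, the standard signal-detection sensitivity index. As a minimal sketch (the hit and false-alarm rates below are purely illustrative, not data from the study), d’ is the inverse-normal transform of the hit rate minus that of the false-alarm rate:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Illustrative rates only (not from the study); higher d' = better recognition
d_prime(0.80, 0.20)  # ~1.68
```

A lower d’ for the changed-statistics group would thus indicate impaired discrimination of studied from unstudied pseudowords, independent of response bias.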
Erdogan, M., Troje, N., & Scholl, B. J. (2023). Percepts of biological motion disappear in slow-moving displays: Evidence for domain-specific agent perception. Poster presented at the annual meeting of the Vision Sciences Society, 5/24/23, St. Pete Beach, FL.  
The most important stimuli we perceive may be other agents, given their direct effects on our fitness. Accordingly, perception may be specialized for processing agents, and one of the best-studied examples may be biological motion: displays of surprisingly few moving dots ('point-light walkers'; PLWs) nevertheless give rise to rich percepts of locomoting agents -- even when static frames from such displays appear as meaningless jumbles of dots. Does this reflect a distinct, domain-specific form of perception, or is it (merely) a complex instance of general motion/shape perception? Inspired by the fact that humans tend to move at certain minimum speeds, we explored this by simply asking how slow PLWs can be before percepts of biological motion are impaired or destroyed. Are those limits similar to lower-level motion perception thresholds? Or might biological motion have its own domain-specific lower "speed limit"? Observers viewed PLWs moving in place, embedded in noise (with extra irrelevant moving dots). Animations moved at typical speeds, or at speeds that were considerably slower but still very far above motion perception thresholds. And for slow displays, we always tested both duration-matched versions (with less overall motion) and trajectory-matched versions (that simply lasted longer). Across several experiments, observers tried to discriminate various properties -- such as the direction of locomotion, the walker's apparent gender, or even whether a PLW was present or absent in the first place. We always obtained the same results (which were also apparent as powerful phenomenological demonstrations): discrimination of each of these aspects of biological motion was greatly impaired (often to chance level) in the slower displays, despite the fact that the motion was still readily visible. This demonstrates the utility of exploring 'slow visual cognition', and supports the characterization of biological motion perception as a domain-specific form of visual processing.
Ji, H., & Scholl, B. J. (2023). 'Visual verbs': Dynamic event types (such as twisting vs. rotating) are extracted quickly and spontaneously during visual perception. Poster presented at the annual meeting of the Vision Sciences Society, 5/22/23, St. Pete Beach, FL.  
The underlying units of visual representation often transcend lower-level properties, for example when we see objects in terms of a small number of generic stimulus types (e.g. animals, plants, faces, etc.). There has been much less attention, however, to the possibility that we also represent dynamic information in terms of a small number of primitive event types -- such as twisting, rotating, bouncing, rolling, etc. (In models that posit a "language of vision", these would be the foundational visual verbs.) We explored the possibility that such 'event type' representations are formed quickly and spontaneously during visual perception -- even when they are entirely task-irrelevant. We did so by exploiting the phenomenon of categorical perception -- wherein the differences between two stimuli are more readily noticed when they are represented in terms of different underlying categories. Observers simply viewed pairs of images or animations (presented very briefly, one at a time), and reported for each pair whether they were the same or different in any way. Cross-Type changes involved switches in the underlying event type (e.g. a towel being twisted in someone's hands, replaced by a towel being rotated in someone's hands), while Within-Type changes maintained the same event type (e.g. a towel being more or less twisted in someone's hands). Critically, this distinction was always task-irrelevant, and Within-Type changes were always objectively greater in magnitude than were Cross-Type changes. Nevertheless, Cross-Type changes were much more readily noticed. And additional controls confirmed that such effects could not be explained by appeal to lower-level stimulus differences (such as the different hand positions involved in twisting vs. rotating). This spontaneous perception of a potentially continuous range of stimuli in terms of a smaller set of primitive "visual verbs" might promote both generalization and prediction about how events are likely to unfold.
Ongchoco, J. D. K., Wong, K., & Scholl, B. J. (2023). The "unfinishedness" of dynamic events is spontaneously extracted in visual processing: A new 'Visual Zeigarnik Effect'. Talk presented at the annual meeting of the Vision Sciences Society, 5/23/23, St. Pete Beach, FL.  
The events that occupy our thoughts in an especially persistent way are often those that are unfinished -- half-written papers, unfolded laundry, and items not yet crossed off from to-do lists. And this factor has also been emphasized in work within higher-level cognition, as in the "Zeigarnik effect": when people carry out various tasks, but some are never finished due to extrinsic interruptions, memory tends to be better for those tasks that were unfinished. But just how foundational is this sort of "unfinishedness" in mental life? Might such unfinishedness be spontaneously extracted and prioritized even in lower-level visual processing? To explore this, we had observers watch animations in which a dot moved through a maze, starting at one disc (the 'startpoint') and moving toward another disc (the 'endpoint'). We tested the fidelity of visual memory by having probes (colored squares) appear briefly along the dot's path; after the dot finished moving, observers simply had to indicate where the probes had appeared. On 'Completed' trials, the motion ended when the dot reached the endpoint, but on 'Unfinished' trials, the motion ended shortly before the dot reached the endpoint. Although this manipulation was entirely task-irrelevant, it nevertheless had a powerful influence on visual memory: observers placed probes much closer to their correct locations on Unfinished trials. This same pattern held across several different experiments, even while carefully controlling for various lower-level properties of the displays (such as the speed and duration of the dot's motion). And the effect also generalized across different types of displays (e.g. also replicating when the moving dot left a visible trace). This new type of Visual Zeigarnik Effect suggests that the unfinishedness of events is not just a matter of higher-level thought and motivation, but can also be extracted as a part of visual perception itself.
Shah, A., Wong, K., Yildirim, I., & Scholl, B. J. (2023). Perceiving precarity (beyond instability) in block towers. Poster presented at the annual meeting of the Vision Sciences Society, 5/23/23, St. Pete Beach, FL.  
Intuitive physics has traditionally been associated with higher-level cognition, but recent work has also focused on the exciting possibility that properties such as physical stability may be rapidly and spontaneously extracted as a part of seeing itself -- as when you look at a tower of blocks, and can appreciate at a glance that it is about to topple. Much of this work has contrasted towers that appear stable vs. unstable, in terms of whether they would fall as a result of external physical forces (such as gravity) alone. But the 'perception of physics' in block towers seems richer than a binary stable/unstable state. Even when a tower is (and appears to be) stable, for example, we might still readily perceive how precarious it is -- in terms of how much force would be required in order to knock it over. Here we explored perceived 'precariousness' using change detection. Observers viewed pairs of block-tower images (one at a time, separated by a mask), and simply reported whether the second image was different from the first. The towers were always stable, but could be differentially precarious. On More-Precarious trials, a single block was shifted slightly so that the tower became less resistant to falling (as quantified by physics-based simulations with variable amounts of spatial jitter). On corresponding Less-Precarious trials, that same block was shifted slightly so that the tower became more resistant to falling. We expected greater attention to (and memory for) changes that introduced a greater likelihood of collapse. But we obtained exactly the opposite pattern: observers were far better at detecting changes on Less-Precarious trials, compared to More-Precarious trials. We explore the possibility that this surprising result may be explained by the 'perception of history', in terms of appreciating how such towers were constructed in the first place.
Wong, K., & Scholl, B. J. (2023). What memories are formed by dynamic 'visual routines'? Poster presented at the annual meeting of the Vision Sciences Society, 5/22/23, St. Pete Beach, FL.  
You can readily see at a glance how two objects spatially relate to each other. But seeing how 20 objects all relate seems impossible, due to computational explosion (with 190 pairs). Such situations require visual routines: dynamic visual procedures that efficiently compute various properties 'on demand' -- e.g. whether two points lie on the same winding path, in a busy scene containing many points and paths ('path tracing'). Some surprisingly foundational questions about visual routines remain unexplored, including: what (if anything) remains in visual memory after the execution of a visual routine? Does path tracing result in a memory of the traced path itself? Or just of whether there was a path? Or nothing at all, after the moment has passed? We explored this for spontaneous path tracing in 2D mazes. Observers saw a maze in which two probes appeared in positions connected by a path. They were then shown two mazes, and had to select which was the initially presented maze. Across experiments, the incorrect maze could be (1) a Path-Obstruction maze, where a new contour blocked the initial inter-probe path; (2) an Irrelevant-Obstruction maze, where a new contour was introduced elsewhere; or (3) an Alternative-Path maze, where the same new Path-Obstruction contour was accompanied by the removal of an existing contour, providing an alternative inter-probe path. Performance on Path-Obstruction trials was much better than on Irrelevant-Obstruction trials (always controlling for lower-level contour properties across trial types). But Alternative-Path trials entirely eliminated this advantage. This suggests that a visual memory is formed by spontaneous path tracing, but that its content is not the path itself, but only whether a path existed. If visual routines exist to answer on-demand questions during perception, then the resulting memories may consist only of the answers themselves, and not the processing that generated them.
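The combinatorial explosion mentioned above follows from simple counting: among n objects there are n-choose-2 distinct pairwise relations. A quick check of the figure cited (190 pairs among 20 objects):

```python
from math import comb

def n_pairs(n):
    """Number of distinct unordered pairs among n objects: n*(n-1)/2."""
    return comb(n, 2)

n_pairs(2)   # 1 pair: trivially seen at a glance
n_pairs(20)  # 190 pairs, as noted in the abstract
```

The quadratic growth of this count is exactly why computing all pairwise relations up front is implausible, and why on-demand visual routines are an attractive alternative.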