Colombatto, C., & Scholl, B. J. (2022). Attending to attention: Reverse correlation reveals how we perceive attentiveness in other people's faces. Poster presented at the annual meeting of the Vision Sciences Society, 5/17/22, St. Pete Beach, FL.

When looking at faces, we readily extract a variety of social properties -- from stable traits (such as trustworthiness or extraversion) to more transient states (such as surprise or anger). But one of the most important properties we can perceive from others' faces is their attentional state -- since the likelihood of someone in our local environment affecting our fitness is enhanced when they are attentive. But just how can we tell whether another person is attentive (vs. distracted)? Some cues (such as direct gaze) may be relatively straightforward, but others may be more subtle and non-intuitive. We explored this using reverse correlation, a data-driven approach that can reveal the nature of internal representations without prior assumptions. Observers viewed pairs of faces created by adding randomly generated noise (across many spatial frequencies) to a constant base face, and had to select which appeared to be most attentive. Analyses of automatically extracted facial landmarks from the resulting 'classification images' revealed the determinants of perceived attentiveness. Some cues were straightforward: attentive faces had more direct eye gaze, and larger pupils. But other equally robust cues seemed less intuitive: for example, attentive faces also had smaller mouths -- perhaps because attention is correlated with more global facial expressions. These results are consistent with the view that eyes and faces are prioritized during perception because of their unique informativeness about others' mental states. And these powerful and consistent effects of facial cues on impressions of attentiveness highlight the importance of attention not just as a perceptual process, but as an object of perception itself.

Erdogan, M., & Scholl, B. J. (2022). How slow can you go?: Domain-specific psychophysical limits on the perception of animacy in slow-moving displays. Poster presented at the annual meeting of the Vision Sciences Society, 5/14/22, St. Pete Beach, FL.

Perception is necessarily constrained by underlying psychophysical thresholds. This is most obvious when considering the perception of low-level visual features, such as motion: certain rates of change (as in the moving second hand on a clock) result in vivid impressions of motion, while others (such as the moving minute hand) do not. What we see often transcends such basic properties, however: when viewing the movements of objects, we may also have vivid impressions of seemingly high-level properties such as causality or animacy. It has been argued that such properties are also extracted during relatively early and automatic visual processing itself, but are they also constrained in the same ways? In particular, does the perception of such properties simply inherit the psychophysical thresholds of lower-level motion perception? Or might they operate according to their own domain-specific limits? We explored this in the context of several seemingly high-level visual properties, one of which is perhaps the most direct example of perceiving 'minds from motion': the perception of *chasing*. Observers viewed displays with many moving discs, of which one (the 'wolf') was pursuing another (the 'sheep') with some degree of directedness (or 'chasing subtlety').
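As a minimal sketch of how such a display element could be generated (in Python), the snippet below treats 'chasing subtlety' as the maximum angular deviation of the wolf's heading from direct pursuit of the sheep on each frame. The function name, parameter values, and update rule are illustrative assumptions, not details taken from the actual displays.

```python
import math
import random

def update_wolf(wolf_xy, sheep_xy, subtlety_deg, speed):
    """Advance the 'wolf' one frame. Its heading is drawn uniformly from
    a window of +/- subtlety_deg around the direct heading toward the
    'sheep': 0 degrees is perfect ('heat-seeking') pursuit, and wider
    windows make the chasing progressively less directed."""
    wx, wy = wolf_xy
    sx, sy = sheep_xy
    direct = math.atan2(sy - wy, sx - wx)  # heading straight at the sheep
    heading = direct + math.radians(random.uniform(-subtlety_deg, subtlety_deg))
    return (wx + speed * math.cos(heading), wy + speed * math.sin(heading))

# Illustrative use: 60 frames of pursuit with a 30-degree subtlety window.
wolf, sheep = (0.0, 0.0), (100.0, 50.0)
for _ in range(60):
    wolf = update_wolf(wolf, sheep, subtlety_deg=30.0, speed=1.0)
```

In the actual displays the sheep and the distractor discs also move, and (as described next) overall speed is the key manipulated variable; the sketch only shows one way 'directedness' might be parameterized.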
Previous work revealed that such chasing is often detected highly efficiently, but how does this ability scale with speed? We discovered that the perception of chasing is degraded to a surprising degree when the objects move slowly -- at speeds that are nevertheless still very far above familiar motion thresholds. This was true both when we equated overall display durations across speeds (with shorter trajectories for slower objects), and when we equated overall trajectories across speeds (with longer animations for slower objects). These results (along with other manipulations of perceived causality and intentionality) show how higher-level properties are perceived according to their own domain-specific psychophysical limits -- and they emphasize the utility of exploring 'slow visual cognition'.

Ongchoco, J. D. K., Walter-Terrill, R., & Scholl, B. J. (2022). Visual event boundaries promote cognitive reflection over gut intuitions. Poster presented at the annual meeting of the Vision Sciences Society, 5/14/22, St. Pete Beach, FL.

There are often two ways to make decisions: you can go with your (fast, automatic) gut intuition, or you can engage in (slow, effortful) deliberation. This contrast is especially salient in tests of "cognitive reflection" -- in which these two routes typically lead to different answers. ("If you're running a race and you pass the person in second place, what place are you in?" The immediate intuitive answer is "first place", but the reflective correct answer is "second place".) What determines whether people will engage in cognitive reflection? It doesn't intuitively seem like the answers to this question would have anything to do with vision science -- since, after all, seeing seems to be among the least "reflective" processes in our minds, being neither slow nor deliberative. Here we show how a simple visual manipulation can nevertheless have a surprisingly powerful effect on cognitive reflection. Subjects viewed an immersive 3D virtual animation in which they walked down a long room. During their walk, some subjects saw a visual event boundary (by passing through a doorway), while others did not -- equating paths, speeds, distances, and overall room layouts. At the end of their walk, subjects then answered a question from a "cognitive reflection test". The results were clear and striking: across multiple questions and experiments (and direct replications), the visual event boundaries led to far more reflective (and thus correct) responses. These results suggest a novel connection between perception and thought: the prevalence of intuition vs. reflection may wax and wane over time, with reflection prioritized immediately after event boundaries -- when intuitions based on previous events may have just become obsolete. Such shifts in higher-level thought may be directly driven by subtle image cues which lead to temporal segmentation and new event representations in visual processing.

Walter-Terrill, R., & Scholl, B. J. (2022). Postdiction enhances temporal experience. Poster presented at the annual meeting of the Vision Sciences Society, 5/15/22, St. Pete Beach, FL.

Perception often seems instantaneous, but the underlying reality is richer and stranger. Consider a post-cueing paradigm: multiple stimuli are presented briefly (as S1, at time t), and then after they disappear one of them is highlighted (at time t+1) by a cue (S2). Intuitively, it would seem impossible for S2 to influence the subjective perception of S1 -- since by t+1 it should be "too late".
Yet this is precisely what can happen: S2 may enhance the perception of S1 (as in retrospective triggering of awareness), or may degrade it (as in object substitution masking) -- a phenomenon known as *postdiction*. In past studies, the S2 cue has nearly always altered the perception of a *static* property of S1 (e.g. its shape or orientation). Here, in contrast, we ask whether postdiction is sophisticated enough to enhance the perception of temporal order itself. On each trial, observers saw two outlined circles. Four unique colors then appeared briefly -- two at a time, one in each circle. Afterward, an arrow appeared to highlight one of the (again-empty) circles as the 'target', and observers simply reported which two colors had appeared (and in what order) inside the target circle. Critically, a task-irrelevant post-cue also appeared on some trials: after the colors had disappeared (but before the target was identified), a randomly selected circle flashed momentarily. This gave rise to a robust postdictive performance enhancement: observers were more accurate at identifying each target color -- and the order in which they appeared -- when the target circle happened to flash (compared to when the non-target circle flashed, or when there was no flash). Thus postdiction can not only alter our perception of an object's static features, but can also enhance our temporal experience of the world.

Wong, K., & Scholl, B. J. (2022). Spatial affordances can automatically trigger dynamic visual routines: Spontaneous path tracing in task-irrelevant mazes. Talk presented at the annual meeting of the Vision Sciences Society, 5/14/22, St. Pete Beach, FL.

Visual processing usually seems both incidental and instantaneous. But imagine viewing a jumble of shoelaces, and wondering whether two particular tips are part of the same lace. You can answer this by looking, but doing so may require something dynamic happening in vision (as the lace is effectively 'traced'). Such tasks are thought to involve 'visual routines': dynamic visual procedures that efficiently compute various properties on demand, such as whether two points lie on the same curve. Past work has suggested that visual routines are invoked by observers' particular (conscious, voluntary) goals, but here we explore the possibility that some visual routines may also be automatically triggered by certain stimuli themselves. In short, we suggest that certain stimuli effectively *afford* the operation of particular visual routines (as in Gibsonian affordances). We explored this using stimuli that are familiar in everyday experience, yet relatively novel in human vision science: mazes. You might often solve mazes by drawing paths with a pencil -- but even without a pencil, you might find yourself tracing along various paths *mentally*. Observers had to compare the visual properties of two probes that were presented along the paths of a maze. Critically, the maze itself was entirely task-irrelevant, but we predicted that simply *seeing* the visual structure of a maze in the first place would afford automatic mental path tracing. Observers were indeed slower to compare probes that were further from each other along the paths, even when controlling for lower-level visual properties (such as the probes' brute linear separation, i.e. ignoring the maze 'walls').
This novel combination of two prominent themes from our field -- affordances and visual routines -- suggests that at least some visual routines may operate in an automatic (fast, incidental, and stimulus-driven) fashion, as a part of basic visual processing itself.
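As a closing aside on the control described just above (comparing distance along the maze's paths with brute linear separation), here is a minimal sketch in Python. It assumes a toy grid maze; the layout, function names, and probe positions are illustrative, not materials from the study.

```python
import math
from collections import deque

def path_distance(open_cells, start, goal):
    """Shortest distance between two cells of a grid maze, measured along
    its corridors, via breadth-first search over the open (non-wall) cells."""
    frontier = deque([(start, 0)])
    visited = {start}
    while frontier:
        (x, y), d = frontier.popleft()
        if (x, y) == goal:
            return d
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in open_cells and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, d + 1))
    return None  # the goal is unreachable from the start

def linear_separation(a, b):
    """Brute straight-line distance, ignoring the maze 'walls'."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# A small U-shaped corridor: the two probes are close in straight-line
# terms but far apart along the maze's paths.
open_cells = {(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)}
probe_a, probe_b = (0, 0), (0, 2)
print(path_distance(open_cells, probe_a, probe_b))   # 6 (steps along the corridor)
print(linear_separation(probe_a, probe_b))           # 2.0 (ignoring the walls)
```

The point of the toy layout is simply that two probes can be nearby in straight-line terms yet far apart along the maze's corridors -- exactly the dissociation that the response-time analysis relies on.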