Chen, Y.-C., Raila, H., & Scholl, B. J. (2017). Sad minds seeking happy stimuli: Trait happiness predicts how quickly happy faces reach visual awareness. Poster presented at the annual meeting of the Vision Sciences Society, 5/23/17, St. Pete Beach, FL.

The light entering the eyes conveys far more information than can possibly be promoted into visual awareness in any given moment, and so vision is inherently selective. We have learned a great deal about this selection in recent years, especially about the factors that influence whether (and how quickly) we are likely to consciously perceive various sorts of stimuli. At the same time, however, we know much less about how patterns of unconscious, automatic selection may differ across people. Here we explored such individual differences in the context of an especially salient aspect of our lives: trait happiness. People differ widely in how happy they generally are (beyond their temporary state moods), and we asked whether this factor might interact with how quickly happy vs. unhappy information reaches visual awareness. We showed people happy, sad, fearful, and neutral faces that were rendered invisible using continuous flash suppression (CFS), and then measured how quickly such faces broke through CFS into awareness. Several subsequent measures of trait happiness and life satisfaction (but not measures of state mood) were reliably correlated with performance: the less happy observers were (controlling for state mood), the faster they became aware of happy faces (using neutral faces as a baseline). Critically, this pattern occurred only for happy faces, and not for sad or fearful faces -- and a monocular control experiment ruled out response-based explanations that did not involve visual awareness per se. People who are less happy may thus automatically and unconsciously prioritize happy stimuli, perhaps because of the ability of those stimuli to modulate their emotional experience. In this way, people who are differentially happy may literally experience different visual worlds even when in the same environment -- such that the study of perception may contribute to affective science.
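The key analysis here is a partial correlation: a per-observer index of how quickly happy faces break suppression (relative to a neutral baseline), related to trait happiness while controlling for state mood. Below is a minimal sketch of that logic, using simulated data and hypothetical variable names (the abstract does not include the authors' actual measures or code):

```python
# Minimal sketch, not the authors' analysis: regress state mood out of both
# variables, then correlate the residuals (one standard way to compute a
# partial correlation).
import numpy as np
from scipy import stats

def partial_corr(x, y, covariate):
    """Pearson correlation of x and y after regressing out a covariate."""
    def residualize(v, c):
        slope, intercept = np.polyfit(c, v, 1)
        return v - (slope * c + intercept)
    return stats.pearsonr(residualize(x, covariate), residualize(y, covariate))

rng = np.random.default_rng(0)
n = 40                                   # hypothetical number of observers
trait_happiness = rng.normal(0, 1, n)    # e.g. a trait-happiness questionnaire score
state_mood = rng.normal(0, 1, n)         # momentary mood rating
rt_happy = rng.normal(3.0, 0.5, n)       # mean CFS breakthrough time (s), happy faces
rt_neutral = rng.normal(3.2, 0.5, n)     # mean CFS breakthrough time (s), neutral faces

# A more negative advantage = happy faces reach awareness faster than baseline
happy_advantage = rt_happy - rt_neutral
r, p = partial_corr(happy_advantage, trait_happiness, state_mood)
print(f"partial r = {r:.2f}, p = {p:.3f}")
```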
Colombatto, C., van Buren, B., & Scholl, B. J. (2017). 'Mind contact': Might eye-gaze effects actually reflect more general phenomena of perceived attention and intention? Poster presented at the annual meeting of the Vision Sciences Society, 5/20/17, St. Pete Beach, FL.

Eye gaze is an especially powerful social signal, and direct eye contact has profound effects on us, influencing multiple aspects of attention and memory. Existing work has typically assumed that such phenomena are specific to eye gaze -- but might such effects instead reflect more general phenomena of perceived attention and intention (which are, after all, what we so often signify with our eyes)? If so, then such effects might replicate with distinctly non-eyelike stimuli -- such as simple geometric shapes that are seen to be pointing in various directions. Here we report a series of experiments of this sort, each testing whether a previously discovered 'eye gaze' effect generalizes to other stimuli. For example, inspired by work showing that faces with direct gaze break into awareness faster, we used continuous flash suppression (CFS) to render invisible a group of geometric 'cone' shapes that pointed toward or away from the observer, and we measured the time such stimuli took to break through interocular suppression. Just as with gaze, cones directed at the observer broke into awareness faster than did 'averted' cones that were otherwise equated -- and a monocular control experiment ruled out response-based explanations that did not involve visual awareness per se. In another example, we were inspired by the 'stare-in-the-crowd' effect, wherein faces with direct eye gaze are detected faster than faces with averted gaze. We asked whether this same effect occurs when it is cones rather than eyes that are 'staring', and indeed it does: cones directed at the observer were detected more readily (in fields of averted cones) than were cones directed away from the observer (in fields of direct cones). These results collectively suggest that previously observed 'eye contact' effects may be better characterized as 'mind contact' effects.

Firestone, C., & Scholl, B. J. (2017). Seeing physics in the blink of an eye. Talk given at the annual meeting of the Vision Sciences Society, 5/20/17, St. Pete Beach, FL.

People readily understand visible objects and events in terms of invisible physical forces, such as gravity, friction, inertia, and momentum. For example, we can appreciate that certain objects will balance, slide, fall, bend, or break. This ability has historically been associated with sophisticated higher-level reasoning, but here we explore the intriguing possibility that such physical properties (e.g. whether a tower of blocks will topple) are extracted during rapid, automatic visual processing. We did so by exploring both the time course of such processing and its consequences for visual awareness. Subjects saw hundreds of block towers for variable masked durations and rated each tower's stability; later, they rated the same towers again, without time pressure. We correlated these limited-time and unlimited-time impressions of stability to determine when such correlations peak -- asking, in other words, how long it takes to form a 'complete' physical intuition. Remarkably, stability impressions after even very short exposures (e.g. 100ms) correlated just as highly with unlimited-time judgments as did impressions formed after exposures an order of magnitude longer (e.g. 1000ms). Moreover, these immediate physical impressions were *accurate*, agreeing with physical simulations -- and doing so equally well at 100ms as with unlimited time. Next, we exploited inattentional blindness to ask whether stability is processed not only quickly, but also spontaneously and in ways that promote visual awareness. While subjects attended to a central stimulus, an unexpected image flashed in the periphery. Subjects more frequently noticed this image if it was an unstable tower (vs. a stable tower), even though the two towers were just the same image presented upright or inverted. Thus, physical scene understanding is fast, automatic, and attention-grabbing: such impressions are fully extracted in (an exposure faster than) the blink of an eye, and a scene's stability is automatically prioritized in determining the contents of visual awareness.
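The time-course analysis above has a simple correlational core: for each masked exposure duration, correlate per-tower stability ratings made under time pressure with unlimited-time ratings of the same towers, and ask when that correlation plateaus. Here is a minimal sketch of that logic with simulated data (the durations, rating scale, and noise model are illustrative assumptions, not the authors'):

```python
# Minimal sketch, not the authors' analysis: per-duration correlation between
# limited-time and unlimited-time stability ratings of the same towers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_towers = 200
durations_ms = [33, 50, 100, 500, 1000]

unlimited = rng.uniform(1, 7, n_towers)  # unlimited-time stability ratings (1-7)
for dur in durations_ms:
    # Toy assumption mirroring the reported finding: rating noise does not
    # shrink with longer exposures, so correlations are flat across durations.
    limited = unlimited + rng.normal(0, 0.5, n_towers)
    r, _ = stats.pearsonr(limited, unlimited)
    print(f"{dur:>5} ms: r = {r:.2f}")
```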
Lowet, A., Firestone, C., & Scholl, B. J. (2017). Seeing structure: Perceived similarity is driven by shape skeletons. Poster presented at the annual meeting of the Vision Sciences Society, 5/24/17, St. Pete Beach, FL.

An intrinsic part of seeing objects is seeing how similar or different they are relative to one another. This experience requires that objects be mentally represented in a common format over which such comparisons can be carried out. What is that representational format? Objects could be compared in terms of their superficial features (e.g. degree of pixel-by-pixel overlap), but a more intriguing possibility is that objects are compared according to a deeper structure. Here we explore this possibility, asking whether visual similarity is computed using an object's shape skeleton (in particular its medial axis) -- a geometric transformation that extracts an object's inferred underlying structure. Such representations have proven useful in computer vision research, but it remains unclear how much they actually matter for human visual performance. The present experiments investigated this question. Two spindly shapes appeared side by side, and observers simply indicated whether the shapes were the same or different. Crucially, the two shapes could vary either in their underlying skeletal structure (rather than in superficial features such as size, orientation, or internal angular separation), or instead in large surface-level ways (without changing the overall skeletal organization). Discrimination was better for skeletally dissimilar shapes: observers could tell shapes apart more accurately when they had different skeletons, compared to when they had objectively larger differences in superficial features but retained the same skeletal structure. Conversely, observers had difficulty appreciating even surprisingly large differences when those differences did not reorganize the underlying skeletons. Additional experiments generalized this pattern to realistic 3D volumes whose skeletons were much less readily inferable from the shapes' visible contours: skeletal changes were still easier to detect than all other kinds of changes. These results show how shape skeletons may underlie the perception of similarity -- and, more generally, how they have important consequences for downstream visual processing.
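For readers unfamiliar with the representation at issue: a shape's medial axis can be computed directly from a binary silhouette. The sketch below uses scikit-image's medial_axis on a toy shape to illustrate the kind of skeletal description the experiments probe (the shape itself is an illustrative assumption, not one of the authors' stimuli):

```python
# Minimal sketch: extract a medial-axis skeleton from a binary silhouette.
import numpy as np
from skimage.morphology import medial_axis

# Hypothetical binary silhouette: a plus-shaped 'spindly' object
shape = np.zeros((64, 64), dtype=bool)
shape[28:36, 8:56] = True   # horizontal limb
shape[8:56, 28:36] = True   # vertical limb

# medial_axis returns the skeleton plus the distance transform; comparing two
# shapes' skeletons (rather than their raw pixels) is one way to index the
# kind of structural similarity these experiments probe.
skeleton, distance = medial_axis(shape, return_distance=True)
print(f"object pixels: {shape.sum()}, skeleton pixels: {skeleton.sum()}")
```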
Uddenberg, S., & Scholl, B. J. (2017). Angrier = Blacker?: The influence of emotional expression on the representation of race in faces, measured with serial reproduction. Talk given at the annual meeting of the Vision Sciences Society, 5/23/17, St. Pete Beach, FL.

In principle, race and emotional expression are orthogonal dimensions of face perception. But psychologically, they are intertwined -- as when racially ambiguous faces are judged to be angrier when categorized as Black than when categorized as White. Does this reflect superficial judgmental biases, or deeper aspects of how faces are perceived and represented? We explored this using the method of serial reproduction, in which visual memory for a briefly presented face is passed through 'chains' of many different observers. Here, a single face was presented, with its race selected from a smooth, luminance-controlled continuum between White and Black. Each observer then completed a single trial, in which they reproduced that face's racial identity by morphing a test face along the racial continuum. Critically, both the initially presented face and the test face could (independently) have an Angry or Neutral expression, which the participant could not change. Within each chain of observers, these expressions were held constant, while the race of the face initially seen by each observer was determined by the previous observer's response. The chains reliably converged on a region well within White space, even when they started out near (or at) the Black extreme -- as observers' representations were pulled toward a 'default attractor' in the White region of the face space. Strikingly, however, there was a single situation in which this pattern reliably reversed: when observers were shown an Angry face and tested on a Neutral face, chains converged instead on a region well within Black space. This is exactly the pattern predicted if Angry faces are misremembered as Blacker than equivalent Neutral faces (since the effect cancels out when both faces are Angry). These results illustrate how irrelevant, stereotype-consistent information can influence face representations in a deep way, which may have important real-world implications.

van Buren, B., & Scholl, B. J. (2017). Who's chasing whom?: Changing background motion reverses impressions of chasing in perceived animacy. Talk given at the annual meeting of the Vision Sciences Society, 5/20/17, St. Pete Beach, FL.

Visual processing recovers not only seemingly low-level features such as color and orientation, but also seemingly higher-level properties such as animacy and intentionality. Even abstract geometric shapes are automatically seen as alive and goal-directed if they move in certain ways. What cues trigger perceived animacy? Researchers have traditionally focused on the local motions of objects, but what may really matter is how objects move with respect to the surrounding scene. Here we demonstrate how movements that signal animacy in one context may be perceived radically differently in the context of another scene. Observers viewed animations containing a stationary central disc and a peripheral disc, which moved around it haphazardly. A background texture (a map of Tokyo) moved behind the discs. For half of the observers, the background moved generally along the vector from the peripheral disc to the central disc (as if the discs were moving together over the background, with the central disc always behind the peripheral disc); for the other half, the background moved generally along the vector from the central disc to the peripheral disc. Observers in the first condition overwhelmingly perceived the central disc as chasing the peripheral disc, while observers in the second condition experienced the reverse. A second study explored objective detection: observers discriminated displays in which a central 'wolf' disc chased a peripheral 'sheep' disc from inanimate control displays in which the wolf instead chased the sheep's (invisible) mirror image. Although chasing was always signaled by the wolf and sheep's close proximity, detection was accurate when the background moved along the vector from the sheep to the wolf, but poor when the background moved in an uncorrelated manner (controlling for low-level motion). These dramatic context effects indicate that the spatiotemporal patterns signaling animacy are detected with reference to a scene-centered coordinate system.
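The proposed scene-centered coordinate system amounts to a simple transformation: subtract the background's motion from each object's retinal motion before evaluating animacy cues. A minimal sketch with made-up velocity vectors (not the actual display parameters):

```python
# Minimal sketch, assuming hypothetical velocities: the same retinal motions
# imply different scene-relative motions once background motion is removed.
import numpy as np

peripheral_v = np.array([1.0, 0.0])   # retinal velocity of the peripheral disc
central_v = np.array([0.0, 0.0])      # the central disc is retinally stationary
background_v = np.array([-1.0, 0.0])  # background drifts leftward

# Scene-relative velocities: subtract the background's motion
peripheral_scene = peripheral_v - background_v  # [2, 0]: moving right over the map
central_scene = central_v - background_v        # [1, 0]: also moving right, trailing it

print("peripheral (scene frame):", peripheral_scene)
print("central (scene frame):", central_scene)  # central trails peripheral: pursuit
```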
Yousif, S., & Scholl, B. J. (2017). The one-is-more illusion: Sets of discrete objects appear less extended than equivalent continuous entities in both space and time. Poster presented at the annual meeting of the Vision Sciences Society, 5/24/17, St. Pete Beach, FL.

Our visual experience is populated by both discrete objects (e.g. people and posters) and continuous entities (e.g. long walls on which posters may be affixed). We tend to distinguish such stimuli in categorization and language, but might we actually see such stimuli differently? Here we report a vivid illusion wherein discrete 'objecthood' changes what we see in an unexpected way -- the *one-is-more illusion*. Observers viewed pairs of images presented simultaneously, and simply made a forced-choice judgment about which image looked longer (i.e. more spatially extended). One image was always a single continuous object (e.g. a long rectangle), and the other was a collection of discrete objects (e.g. two shorter rectangles separated by a gap). Across several types of images, observers perceived the continuous objects as longer than equated discrete objects, and this illusion was both large and exceptionally reliable (binomial test, p<.000001). In fact, observers often perceived the continuous objects as longer even when the discrete objects were in fact longer. Critically, the illusion persisted even when the images were equated for properties such as the number of intervening contours (e.g. when contrasting two rectangles vs. a single rectangle interrupted by a visible occluder). Moreover, the illusion extends beyond space, and also operates in time: when comparing two sequentially presented auditory stimuli, continuous tones were perceived as lasting longer than equated sets of discrete tones. Whereas previous work has emphasized the importance of objecthood for processes such as attention and visual working memory, those effects often emerge only in the statistical wash. In contrast, the one-is-more illusion provides a striking demonstration of how the segmentation of a display into discrete objects can change the perception of other visual properties, in a way that you can readily see (and hear!) with your own eyes (and ears!).
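The reliability claim above rests on a binomial test against the 50% chance rate of a two-alternative forced choice. A minimal sketch with hypothetical counts (not the authors' data):

```python
# Minimal sketch: test whether 'continuous looks longer' responses exceed the
# 50% rate expected by chance in a two-alternative forced choice.
from scipy.stats import binomtest

n_trials = 100           # hypothetical number of forced-choice trials
continuous_chosen = 82   # trials on which the continuous image looked longer

result = binomtest(continuous_chosen, n_trials, p=0.5, alternative="greater")
print(f"choice rate = {continuous_chosen / n_trials:.2f}, p = {result.pvalue:.2g}")
```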