The Relationship between Object Files and Conscious Perception
 
 
Here are some demonstrations of the various phenomena and manipulations discussed in the following paper:
Mitroff, S. R., Scholl, B. J., & Wynn, K. (2005). The Relationship Between Object Files and Conscious Perception. Cognition, 96(1), 67-92.
These demonstrations are provided as QuickTime movies, which can be downloaded or viewed directly in most web browsers. The movies are a bit large and choppy, but they should be sufficient to illustrate the basic manipulations. If you have any trouble viewing the movies, downloading them and then playing them from your local hard drive may help. Because these movies are highly compressed versions of the original stimuli, constructed for demonstration purposes, they may not preserve the precise spatial and temporal characteristics of the originals.  
 
Many aspects of visual perception appear to operate on the basis of representations which precede identification and recognition, but in which discrete objects are segmented from the background and tracked over time (unlike early sensory representations). It has become increasingly common to discuss such phenomena in terms of 'object files' (OFs) -- hypothesized mid-level representations which mediate our conscious perception of persisting objects -- e.g. telling us 'which went where'. Despite the appeal of the OF framework, no previous research has directly explored whether OFs do indeed correspond to conscious percepts. In this work we present one case wherein conscious percepts of 'which went where' in dynamic ambiguous displays diverge from the analogous correspondence computed by the OF system. Observers viewed a 'bouncing/streaming' display in which two identical objects moved such that they could have either bounced off or streamed past each other. We measured two dependent variables: (1) an explicit report of perceived bouncing or streaming; and (2) an implicit 'object-specific preview benefit' (OSPB), wherein a 'preview' of information on a specific object speeds the recognition of that information at a later point when it appears again on the same object (compared to when it reappears on a different object), beyond display-wide priming. When the displays were manipulated such that observers had a strong bias to perceive streaming (on over 95% of the trials), there was nevertheless a strong OSPB in the opposite direction -- such that the object files appeared to have 'bounced' even though the percept 'streamed'. Given that OSPBs have been taken as a hallmark of the operation of object files, these experiments suggest that in at least some specialized (and perhaps ecologically invalid) cases, conscious percepts of 'which went where' in dynamic ambiguous displays can diverge from the mapping computed by the object-file system.  
 
Demonstration #1: Perceived Streaming (164 KB)  
On each trial, two discs initially appear above the center of the display, one to the left and one to the right. A single letter then briefly appears in each of the discs, after which the discs move downward on diagonal paths so that they fully overlap at the center of the display. Without any pause, the discs then continue downward on diagonal paths, stopping below the center with one to the left and one to the right -- such that it is ambiguous whether they streamed past or bounced off each other. Subsequently, a single letter appears in one of the two discs, and observers respond as quickly as possible whether this target letter matches either of the two initial preview letters. In this first experiment, after each object-reviewing response, observers also reported via a keypress whether they consciously perceived that trial as bouncing or streaming. We induced a strong bias to consciously perceive streaming by using smooth, constant, and reasonably fast motion. The critical question in this study is then whether the object files also stream, along with observers' conscious percepts. Surprisingly, despite the overwhelming bias to consciously perceive streaming, robust preview benefits were seen for bouncing.  
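
To make the object-reviewing measure concrete, here is a minimal sketch of how an object-specific preview benefit could be computed from response times on match trials (Python; the function and variable names are hypothetical, and this is not the authors' analysis code). Note that in the ambiguous displays, 'same object' must be defined relative to one of the two possible correspondences (e.g. the bouncing mapping).

    from statistics import mean

    def ospb(same_object_rts, different_object_rts):
        """Object-specific preview benefit, in the same units as the RTs.

        same_object_rts: RTs (ms) on trials where the target letter matched
            the preview letter that had appeared on the same object.
        different_object_rts: RTs (ms) on trials where the target matched
            the preview letter from the other object.

        Both trial types contain a match somewhere in the display, so any
        display-wide priming is common to both; the difference isolates
        the object-specific component.
        """
        return mean(different_object_rts) - mean(same_object_rts)

    # Hypothetical example: a positive value means responses were faster when
    # the letter reappeared on the object that originally carried it.
    print(ospb([650, 640, 660], [690, 685, 695]))  # 40.0 ms benefit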
 
Demonstration #2: Unambiguous Translation (296 KB)  
Perhaps the results of Experiment 1 do not in fact reflect an object-specific preview benefit, but rather a side-specific preview benefit: observers are responding faster not when the preview and target letters occur on the same object per se, but when they occur on the same side of the display. Thus target letters on the right side engender faster responses when they match right-side preview letters, and likewise target letters on the left side engender faster responses when they match left-side preview letters. Given the counterintuitive nature of the results of Experiment 1, we sought to test this hypothesis directly. In this experiment the two discs still moved along the corners of a square, but their paths did not intersect: each disc translated horizontally -- one from the upper-right position to the upper-left, and the other from the lower-left to the lower-right. Observers responded faster when the target letter matched the preview letter presented in the same object, relative to when it matched the preview letter presented in the other object. Because side-of-screen and objecthood were in perfect opposition in this experiment, this reflects an object-specific preview benefit which cannot be explained as a side-specific effect -- suggesting that the results of Experiment 1 do in fact reflect object-specific processing rather than side-specific processing.  
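
As a minimal illustration of why side and objecthood are fully opposed here, consider hypothetical corner coordinates for this display (arbitrary units, not the actual stimulus parameters): with purely horizontal paths, each disc always ends on the opposite side from where its preview letter appeared, so a same-object match is necessarily an opposite-side match.

    # Hypothetical corner coordinates (arbitrary units, not the actual display).
    start = {"disc_A": (+1, +1),   # upper-right
             "disc_B": (-1, -1)}   # lower-left
    end   = {"disc_A": (-1, +1),   # upper-left
             "disc_B": (+1, -1)}   # lower-right

    for disc in start:
        start_side = "right" if start[disc][0] > 0 else "left"
        end_side   = "right" if end[disc][0] > 0 else "left"
        print(disc, start_side, "->", end_side)   # each disc switches sides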
 
Demonstration #3: Rotated Perceived Streaming (288 KB)  
Though the previous experiment ruled out a simple alternate explanation based on 'side-specific' preview benefits, it also differed from Experiment 1 in another possibly important way: the motion paths in Experiment 2 were completely unambiguous, whereas those in Experiment 1 were ambiguous and gave rise to a correspondence problem. Thus it remains possible in principle that a side-specific bias could operate only under conditions of ambiguity. To test this 'ambiguity-induced side-specific processing' alternate explanation, we simply rotated the displays of Experiment 1 counterclockwise by 90 degrees. The results were identical to those of Experiment 1: observers perceived streaming, but the object-specific preview benefits were consistent with bouncing, suggesting that the results of Experiment 1 were not due to a simple left/right priming bias.  
 
Demonstration #4: Extended Unambiguous Translation (168 KB)  
In the remaining two experiments we addressed a second alternate explanation, based on distance. Because the objects in our bouncing/streaming displays always moved on diagonal paths to and from the corners of a square, the distance between the initial and final positions was always smaller for 'bouncing' paths than for 'streaming' paths. So perhaps the results of Experiment 1 do not reflect an object-specific preview benefit, but rather a distance-driven preview benefit: observers are responding faster not when the preview and target letters occur on the same object per se, but simply when they are presented closer to each other. We tested this possibility by replicating Experiment 2 (with unambiguous horizontal translation) in a horizontally extended display -- such that the distance traveled by each object along its horizontal path was greater than the vertical distance between the two paths. Observers responded faster when the target letter matched the preview letter presented in the same object, relative to when it matched the preview letter presented in the other object. Because objecthood was in direct opposition to proximity in this experiment, the response time differences must reflect an object-specific preview benefit rather than a simple distance-based effect.  
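
To see why proximity and objecthood trade off in this extended display, here is a small sketch with hypothetical coordinates (arbitrary units, not the actual stimulus parameters): the same-object preview location ends up much farther from the target than the other object's preview location.

    from math import dist  # Python 3.8+

    # Hypothetical geometry: long horizontal paths, small vertical gap.
    preview_A   = (+4.0, +1.0)   # upper-right: disc A's preview letter
    preview_B   = (-4.0, -1.0)   # lower-left:  disc B's preview letter
    target_on_A = (-4.0, +1.0)   # upper-left:  disc A's final position

    print(dist(preview_A, target_on_A))   # 8.0 -- same object, but far away
    print(dist(preview_B, target_on_A))   # 2.0 -- other object, but nearby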
 
Demonstration #5: Asymmetric Perceived Streaming (220 KB)  
Though the previous experiment ruled out an alternate explanation based on proximity, it also differed in another possibly important way: as in Experiment 2, the motion paths were completely unambiguous, whereas those in Experiments 1 and 3 were ambiguous and gave rise to correspondence problems. To test this 'ambiguity-induced distance-driven bias' alternate explanation, we modified the bouncing/streaming display from Experiment 1 such that the two streaming paths were of different lengths. Which diagonal had the shorter path was randomly varied across trials, such that the distance between the initial and final positions of the shorter streaming path was smaller than the distance between the initial and final positions of either bouncing path -- and all of the analyses in this experiment compared the bouncing paths to only the shorter streaming path (ignoring the longer streaming path). Thus, in contrast to the earlier experiments, the preview and target positions were now farther apart under the bouncing interpretation than under the streaming interpretation. Observers responded faster on the Incongruent trials than on the shorter-distance Congruent trials, resulting in a significant object-specific preview benefit consistent with a bouncing interpretation. Because these Congruent trials involved a shorter distance than the Incongruent trials, persisting objecthood and proximity were in direct opposition, and the results could not have been driven by proximity. Rather, these results must reflect object-specific processing, and as such they provide further evidence for the divergence between object files and conscious perception.
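
A similar sketch with hypothetical coordinates (arbitrary units, not the actual stimulus parameters) shows how the asymmetric display pits proximity against the object-file mapping: the shorter streaming path pairs the preview and target positions that are closest together, while both bouncing pairings span a greater distance.

    from math import dist  # Python 3.8+

    # Hypothetical start/end positions: the final pair is shifted to one side,
    # so one streaming diagonal is shorter than either bouncing path.
    start_left, start_right = (-2.0, 2.0), (2.0, 2.0)
    end_left, end_right     = (-5.0, -2.0), (-1.0, -2.0)

    short_streaming = dist(start_left, end_right)    # ~4.12 (streaming mapping)
    bounce_left     = dist(start_left, end_left)     #  5.0  (bouncing mapping)
    bounce_right    = dist(start_right, end_right)   #  5.0  (bouncing mapping)
    print(short_streaming, bounce_left, bounce_right)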