CB Database: A change blindness database for objects in natural indoor scenes

DOI 10.3758/s13428-015-0640-x

Preeti Sareen (1,2), Krista A. Ehinger (1), Jeremy M. Wolfe (1)
© Psychonomic Society, Inc. 2015

1 Visual Attention Lab, Harvard Medical School, Brigham & Women's Hospital, 64 Sidney St., Suite 170, Cambridge, MA 02139, USA
2 Cellular and Molecular Physiology, Yale University School of Medicine, New Haven, CT 06511, USA
Correspondence: Preeti Sareen, preeti.sareen@yale.edu; Jeremy M. Wolfe, jwolfe@partners.org

Abstract: Change blindness has been a topic of interest in the cognitive sciences for decades. Change detection experiments are frequently used to study research topics such as attention and perception. However, creating change detection stimuli is tedious, and there is no open repository of such stimuli using natural scenes. We introduce the Change Blindness (CB) Database, with object changes in 130 colored images of natural indoor scenes. The size and eccentricity are provided for all the changes, as well as reaction time data from a baseline experiment. In addition, we have two specialized satellite databases that are subsets of the 130 images. In one set, changes are seen in rooms or in mirrors in those rooms (Mirror Change Database). In the other, changes occur in a room or out a window (Window Change Database). Both sets have controlled backgrounds, change sizes, and eccentricities. The CB Database is intended to provide researchers with a stimulus set of natural scenes with defined stimulus parameters that can be used for a wide range of experiments. The CB Database can be found at http://search.bwh.harvard.edu/new/CBDatabase.html.

Keywords: Change blindness database . Change detection database . Natural scenes . Indoor scenes . Mirror . Window

Introduction

People often fail to notice major changes in a scene (Rensink, O'Regan, & Clark, 1997; Simons & Levin, 1997).
This phenomenon of "change blindness" is commonly used in research on scene perception, attention, and search, and has helped inform theories of visual processing and awareness (e.g., Hollingworth, Williams, & Henderson, 2001; Mitroff, Simons, & Franconeri, 2002; Rensink, 2004; Simons & Levin, 1998; Simons & Rensink, 2005). However, creating natural scene stimuli for such experiments is time-consuming, and no public database of such stimuli has been available. Here, we share such a database, the Change Blindness (CB) Database, of natural scenes in which random objects appear and disappear. We provide size and eccentricity properties for these changes. The main database is accompanied by specialized satellite databases of object changes seen in mirrors and through windows. More images will be added to the database from time to time, as they become available.

Contents of the database

The CB Database contains 130 colored natural scene images obtained from the SUN Database (Xiao, Hays, Ehinger, Oliva, & Torralba, 2010) and other World Wide Web sources. The changes were created using Adobe Photoshop, and all images are 1024 × 768 pixels in JPEG format. All images are of indoor scenes except two that have an indoor setting located outdoors. Each change consists of the removal of an object from the original scene. Change locations are pseudo-randomly distributed over the images in order to remove object location as an a priori cue to change location. All images are also offered left-right reversed, providing a well-distributed set of change locations from which to pick and choose (Fig. 1). Each change is annotated with its size and eccentricity, provided alongside the image database in spreadsheets. We also provide an annotated mask for each change, for locating changes during an experiment. The mask is a binary image in which the critical object pixels are black and the background scene pixels are white. These masks can be incorporated into experiment code to check whether a mouse click, eye movement, etc. falls on the changing object.

The main database is accompanied by two specialized sets of images, both constructed from subsets of the main database: the Window Change Set (12 scenes) and the Mirror Change Set (24 scenes). The Window Change Set consists of 12 scenes with two versions each. Both versions of each scene have the same disappearing object: in one version, the changing object is inside the room (Fig. 2, left image), and in the other, it is outside, seen through a window (Fig. 2, right image). Size, eccentricity, and reaction time data for the changes in the main database and the Window Change Database are shown below and provided as spreadsheets with the database.

In the Mirror Change Set, two scene conditions (Mirror and Disjoint) are crossed with two object change conditions (reflected and non-reflected) to give four variations on each of the 24 scenes. In the Mirror Condition, the critical object is visible both in the room and as a reflection in a mirror.
For the room change, the object in the room disappears (but its reflection remains); for the mirror change, the reflection of the object disappears (but the object in the room remains). In the Disjoint Condition, the critical object is either visible in the room or visible as a reflection in the mirror, but is not simultaneously visible in both. In order to have controlled backgrounds and contexts for the four types of changes, the same scenes with the same critical objects are used across all four scene and change conditions. The details of size, eccentricity, and reaction time data are provided as spreadsheets with the database. They are also detailed in Sareen, Ehinger, and Wolfe (2014), in which we found that changes to objects in the mirror are harder to find than equivalent object changes in the room.

Fig. 1 Exemplars of left-right reversed scenes from the Change Blindness Database. Each scene is horizontally flipped to get a left and a right version. The same critical object (in this case a speaker) disappears in both versions. The critical object is circled in green for illustration purposes only.
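The binary-mask hit test described above can be sketched in a few lines. This is a minimal illustration, not code shipped with the database: the function name is ours, and the list-of-lists mask stands in for the actual mask image files (in which object pixels are black and background pixels are white).

```python
def click_on_change(mask, x, y):
    """Return True if a click at pixel (x, y) falls on the changed object.

    `mask` is a 2-D sequence of pixel values (rows first) in which object
    pixels are 0 (black) and background pixels are 1 (white), mirroring the
    binary mask images provided with the database.
    """
    if not (0 <= y < len(mask) and 0 <= x < len(mask[0])):
        return False          # click landed outside the image bounds
    return mask[y][x] == 0    # black pixel -> on the changed object

# Toy 4x4 mask: the changed object occupies the 2x2 block at rows 1-2, cols 1-2.
mask = [
    [1, 1, 1, 1],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
]
print(click_on_change(mask, 2, 1))  # True: click lands on the object
print(click_on_change(mask, 0, 0))  # False: background
```

In a real experiment one would load the mask image for the current scene and pass the recorded click (or fixation) coordinates to a test like this.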

Fig. 2 Window Change Set exemplar. In the Window Change Set, the same object (a toy car) disappears either in the room (left) or outside the window (right). The critical object is circled in green for illustration only.

Change detection experiment with the main Change Blindness (CB) Database

In order to document the properties of CB with these stimuli, we ran a basic CB experiment with the 130 images from the main CB database.

Method

Participants: Thirteen observers (seven females; mean age = 29.9 years, range = 18-52 years) participated in the experiment. All observers passed the Ishihara test for color blindness (Ishihara, 1980) and reported normal or corrected-to-normal vision. All procedures were approved by the Partners Healthcare Corporation Institutional Review Board; accordingly, all observers gave informed consent and were compensated for their time.

Stimuli and procedure: The stimuli were presented on a 24-in. LCD monitor (resolution = 1,920 × 1,200) and subtended 30.8° × 23.4° of visual angle at a viewing distance of approximately 50 cm. The experiment involved detecting changes in scenes. 130 pairs of images were shown, with changes at random locations and of random objects. Each scene could be left-right reversed. The scenes were randomly distributed between participants such that half the scenes had changes on the left and the other half had changes on the right; however, each participant saw only one version of a given scene, either left or right. A flicker paradigm adapted from Rensink et al. (1997) was used. On each trial, an image alternated with a modified version in which one object had been removed from the scene, with a gray blank field appearing between the two images. Each image was displayed for 240 ms with a 240-ms blank field in between. The initial view of the scene (with or without the changed object) was chosen at random. Observers were asked to press a button as soon as they detected a change and then click at the location of the change on a static (non-changing) view of the image; this view was always the version with the critical object present. A trial ended either when an observer pressed a button to confirm the presence of a change or after 60 s (time-out), whichever happened first. All trials contained a change. Reaction times were recorded at each key press and averaged over correct responses only. All responses with reaction times below 200 ms were filtered out.

Results and discussion

Figure 3 shows the distribution of the size and location of the changes. Sizes were measured in terms of pixel area. Eccentricities of the targets were measured from the center

Fig. 3 Size and location of changes in the Change Blindness Database, depicted for the left and right versions of all the scenes. Each bubble is centered where the center of a changed object would be located on an image, and the size of the bubble is proportional to the area of that object.

of the image. Figure 4 shows the distribution of reaction times with respect to change size (Fig. 4a) and eccentricity (Fig. 4b) for the same changes on the left and right sides of horizontally flipped images. Trials that timed out were not included. The same two images were shown to each participant for practice at the beginning of the experiment and are not included in the analysis.

A multiple regression analysis was conducted to evaluate whether reaction time was predicted by change size and eccentricity. Neither of the two predictors, size (β = .00001, t(253) = .16, p = .87, 95% CI [−0.0002, 0.0001]) or eccentricity (β = −.006, t(253) = −1.02, p = .31, 95% CI [−0.012, 0.004]), explained a significant portion of the variance in reaction time (R² = .004, adjusted R² = −.003, F(2, 253) = .56, p = .57). There was no significant difference in reaction time for changes in the left versus the right hemifield, t(12) = .19, p = .9, d = .04, 95% CI [−1.63, 1.37]. Reaction times for the same change in corresponding horizontally flipped scenes were positively correlated, r(126) = .58, p < .0001 (Fig. 5): if the reaction time for the left version of a scene was low, it was also low for the right version. This is one indication that some changes were reliably easier to find than others. Detailed reaction time data for each image can be found in the database and can be used to assemble sets of easier and harder examples, as needed.

Fig. 4 Reaction time versus size (a) and eccentricity (b) distributions of changes in the Change Blindness Database. Reaction-time (RT) data for the same changes in the left and right scene versions show a well-balanced stimulus set, with little variation in reaction time based on scene sidedness (RT left vs. RT right), size (a), or eccentricity (b) of the change.

Fig. 5 Correlation of reaction time (RT) for left and right changes. RTs for the same change in the left and right scene versions are positively correlated (r = .58), providing an estimate of the difficulty of detecting a given change relative to other changes.

It is interesting that there is no apparent effect of the distance of the change from the center of the image. O'Regan et al. (2000) found that observers were more likely to report a change when the eyes were fixated near the point of change. As they pointed out, this is reasonable on acuity considerations alone. Note, however, that this was a measure of where the eyes happened to be when the change was found, not a measure of how long it took to stumble on the change. Our data suggest that, for our stimuli at least, the position of the change in the image has no strong effect on overall RT. Utochkin (2011) found a different effect of the position of a change. In his experiments, changes could be located at the most interesting spot in the image (determined separately), near that spot, or far from it. Unsurprisingly, changes at the natural focus of attention were found first but, interestingly, the Near changes were harder to find than the Far changes. Utochkin proposed that there is a dead zone around the focus of attention. Taking these findings together with ours, the ability to detect a change in a scene is clearly (if somewhat complexly) related to the position of the change relative to the

deployment of the observer's eyes and attention. However, it may not depend on the position of the change in the image. Of course, ours is a negative finding that could be overturned by a more powerful study (perhaps by a lab making use of our stimuli!), but there is no obvious trend in our data.

Change detection experiment with the Window Change Set

In an earlier study, we showed that changes to stimuli seen in a mirror were found less effectively than changes placed elsewhere in a scene (Sareen, Ehinger, & Wolfe, 2014). We created a set of scenes that would allow us to ask the same question about changes that occur outside a window. In this case, there does not appear to be a penalty.

Method

Participants: Twenty-four observers (12 females; mean age = 28.5 years, range = 19-51 years) participated in the experiment. All observers passed the Ishihara test for color blindness (Ishihara, 1980) and reported normal or corrected-to-normal vision. All procedures were approved by the Partners Healthcare Corporation Institutional Review Board; accordingly, all observers gave informed consent and were compensated for their time.

Stimuli and procedure: The experiment was carried out at two testing sites where the stimulus size varied slightly (Site 1: 24-in. LCD monitor; resolution = 1,920 × 1,200; 30.8° × 23.4° of visual angle at a viewing distance of approximately 50 cm; Site 2: 19-in. monitor; resolution = 1,440 × 900; 32.5° × 24.6° of visual angle at approximately 50 cm). There is no evidence that this difference had any effect on the results. Moreover, all conditions were presented to each observer, so any variation across sites should be constant across conditions and would not affect the final outcome. The change detection task was the same as described above. 109 pairs of images were shown. Twelve of them had changes in the windows, while 50 had windows with a view to the outside but the change occurred elsewhere.
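The angular sizes reported above follow from the display geometry. The sketch below, which is our illustration rather than part of the database, recovers the physical size of the image on screen from its stated subtense (Site 1 numbers from the main experiment: a 1024-px-wide image subtending 30.8° at ~50 cm) and converts a change's pixel offset from image center into eccentricity in degrees, under a flat-screen approximation; all constant and function names are ours.

```python
import math

# Display geometry stated in the text (main experiment, Site 1):
# the 1024 x 768 px images subtended 30.8 x 23.4 deg at ~50 cm viewing distance.
VIEW_DIST_CM = 50.0
IMAGE_W_PX, IMAGE_W_DEG = 1024, 30.8

# Physical width of the image on screen, recovered from its angular subtense.
image_w_cm = 2 * VIEW_DIST_CM * math.tan(math.radians(IMAGE_W_DEG / 2))
CM_PER_PX = image_w_cm / IMAGE_W_PX

def eccentricity_deg(dx_px, dy_px):
    """Visual angle (deg) between fixation at the image center and a point
    offset by (dx_px, dy_px) pixels, assuming a flat screen viewed head-on."""
    r_cm = math.hypot(dx_px, dy_px) * CM_PER_PX
    return math.degrees(math.atan2(r_cm, VIEW_DIST_CM))

# A change centered 512 px right of center sits at the horizontal image edge:
print(round(eccentricity_deg(512, 0), 1))  # 15.4 (half of the 30.8 deg width)
```

The same conversion with Site 2's numbers yields that site's slightly larger angular subtense, which is one way to verify the spreadsheet eccentricity values against one's own setup.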
The remaining images were fillers to prevent observers from noticing our particular interest in windows. For the images with changes in the windows, there were corresponding versions of the same scenes in which the same object changed inside the room instead of outside the window, in order to keep the background and context controlled for changes in and out of the window. However, each observer saw only the change in the room or the change out the window for a given scene, while seeing both kinds of changes an equal number of times. None of the observers noticed any pattern in the location of the changes when asked at the end of the experiment.

Fig. 6 Average reaction time (RT) for Window Change Set change detection. Changes to the same object inside a room (Room) and out a window (Window) are detected at the same rate. Means are plotted with SD as error bars. Individual participant data points are also shown.

Fig. 7 Distribution of size and location of the changed objects in the Window Change Set. Objects both in the room and out the window are illustrated. Each bubble represents a manipulated object, with the size of the bubble proportional to the area of the object and its location corresponding to the location of the object on the image.

Results

One observer was excluded due to missing data, as they could not find a single change when it happened out the window. More trials timed out when the change of the same object occurred outside the window (12.3%) than inside the room (9.6%), but this difference was not significant, t(18) = .53, p = .6, d = .2, 95% CI [−7.83, 13.09]. Timed-out trials were not included in later analyses. We did not find a significant difference in reaction times for detecting changes to the same object inside versus outside seen through a window, t(22) = 1.73, p = .1, d = .5, 95% CI [−0.44, 4.90] (Fig. 6). There were no significant differences in change size, t(11) = 1.62, p = .1, d = .3, 95% CI [−117.9, 770.9], or eccentricity, t(11) = .39, p = .7, d = .1, 95% CI [−85.1, 122.1], between the two conditions. Figure 7 shows the distribution of size and location of changes. Multiple regression analysis showed that none of the three predictors, condition (β = −.294, t(20) = −.09, p = .9, 95% CI [−6.838, 6.251]), change size (β = −.002, t(20) = −1.32, p = .2, 95% CI [−0.004, 0.001]), or eccentricity (β = −.011, t(20) = −.89, p = .4, 95% CI [−0.035, 0.014]), explained a significant amount of variance in reaction time (R² = .134, adjusted R² = .004, F(3, 20) = 1.03, p = .4).

As noted, in our earlier study we found that changes are detected more slowly when they occur in mirrors. We did not find this to be true for changes seen through windows, even though in both cases the changes occur outside the room and at a greater depth than changes inside the room.

Summary

We present a change blindness database with objects disappearing in natural scenes. The size, eccentricity, location, and reaction time information for these changes is also provided. The database is accompanied by two satellite databases, with images that have controlled background and context for the same object changes occurring in mirrors or windows versus indoors. Many questions about change blindness remain worth addressing. Our hope is that we have lowered the barrier to conducting such experiments by providing sets of images well suited to them.

Acknowledgments This work was supported by German Research Foundation (DFG) Grant No. SA2483/1-1 to P.S.
and by Office of Naval Research (ONR) Grant No. N000141010278, National Eye Institute (NEI) Grant No. EY017001, and a National Science Foundation (NSF) Science of Learning Center CELEST Grant to J.M.W.

References

Hollingworth, A., Williams, C. C., & Henderson, J. M. (2001). To see and remember: Visually specific information is retained in memory from previously attended objects in natural scenes. Psychonomic Bulletin & Review, 8(4), 761-768.
Ishihara, I. (1980). Ishihara's tests for color-blindness: Concise edition. Tokyo: Kanehara & Co., Ltd.
Mitroff, S. R., Simons, D. J., & Franconeri, S. L. (2002). The siren song of implicit change detection. Journal of Experimental Psychology: Human Perception and Performance, 28(4), 798-815.
O'Regan, J. K., Deubel, H., Clark, J. J., & Rensink, R. A. (2000). Picture changes during blinks: Looking without seeing and seeing without looking. Visual Cognition, 7(1-3), 191-211.
Rensink, R. A. (2004). Visual searching without seeing. Psychological Science, 15(1), 27-32.
Rensink, R. A., O'Regan, J. K., & Clark, J. J. (1997). To see or not to see: The need for attention to perceive changes in scenes. Psychological Science, 8(5), 368-373.
Sareen, P., Ehinger, K. A., & Wolfe, J. M. (2014). Through the looking glass: Objects in mirrors are less real. Psychonomic Bulletin & Review. doi:10.3758/s13423-014-0761-8
Simons, D. J., & Levin, D. T. (1997). Change blindness. Trends in Cognitive Sciences, 1(7), 261-267.
Simons, D. J., & Levin, D. T. (1998). Failure to detect changes to people during a real-world interaction. Psychonomic Bulletin & Review, 5(4), 644-649.
Simons, D. J., & Rensink, R. A. (2005). Change blindness: Past, present, and future. Trends in Cognitive Sciences, 9(1), 16-20.
Utochkin, I. S. (2011). Hide-and-seek around the centre of interest: The dead zone of attention revealed by change blindness. Visual Cognition, 19(8), 1063-1088.
Xiao, J., Hays, J., Ehinger, K. A., Oliva, A., & Torralba, A. (2010). SUN Database: Large-scale scene recognition from Abbey to Zoo. In Proc. 23rd IEEE Conference on Computer Vision and Pattern Recognition (CVPR). doi:10.1109/CVPR.2010.5539970