Eye-Tracking Data

The eye-tracking data is freely available for non-commercial scientific purposes. If you publish work based on this data, please refer to the following paper:

  • Kootstra, G., de Boer, B., and Schomaker, L.R.B. (2011). Predicting Eye Fixations on Complex Visual Stimuli using Local Symmetry. Cognitive Computation, 3(1):223-240. doi: 10.1007/s12559-010-9089-5

Experimental Setup

We recorded human fixation data in an eye-tracking experiment using the EyeLink head-mounted eye-tracking system (SR Research). Fixation locations were extracted using the accompanying software. The images were displayed full-screen at a resolution of 1024 by 768 pixels on an 18-inch CRT monitor of 36 by 27 cm, viewed at a distance of 70 cm. The eye tracker was calibrated using the EyeLink software. The calibration was verified prior to each session, and the tracker was recalibrated if needed.

The participants were asked to free-view the images. We did not give them a task, since we are interested in the bottom-up components of visual attention; a task would exert a strong top-down influence on the eye movements.

The experiment was carried out with 31 students of the University of Groningen. The participants ranged from 17 to 32 years of age (15 female, 16 male), all with normal or corrected-to-normal vision. The experiment was split into sessions of approximately 5 minutes. Between sessions, the experimenter had a short, relaxed conversation with the participants to keep them motivated and focused for the next session. Before each new session, the calibration of the eye tracker was verified. After each presented image, drift was measured and, if needed, corrected using the EyeLink software.

The images

The file eyeTrackImages.tgz is a gzipped tarball with all the images used in the experiment. The images have a resolution of 1024 x 768 pixels and are stored in PNG format. They are divided into five categories:
Animals:
12 images containing animals
Automan:
12 images containing street scenes
Buildings:
16 images of buildings
Flowers:
20 images containing natural symmetrical forms, mainly flowers and plants. Note that images 07 and 08 are, by mistake, identical.
Nature:
41 images containing natural scenes
The images are a selection from the McGill Calibrated Colour Image Database. Please note that the authors of the database ask you to refer to the following paper:
  • Olmos, A., and Kingdom, F. A. A. (2004). A biologically inspired algorithm for the recovery of shading and reflectance images. Perception, 33, 1463-1473.

Human Fixation-Distance Maps

eyeTrackFDMaps.tgz is a gzipped tarball with all the human fixation-distance maps. It contains one MATLAB file for every image. Each file holds a MATLAB struct with the following fields:
fdMap
  • subject_01, ..., subject_31 (the individual fixation-distance maps)
  • all (the combined map over all subjects)
All maps are matrices of 256 x 192, i.e., downscaled by a factor of 4 with respect to the displayed images. The individual maps are calculated using the inverse distance transform. The distance transform gives, for every pixel in the map, the distance to the nearest fixation. The fixation-distance maps are created by inverting the distance transform and normalizing the map so that all elements sum to 1.0. The maps have their highest values at the points of human fixation, with linearly decreasing values farther from the fixation points.

The combined fixation-distance maps are calculated by summing the individual maps and normalizing the result. They show the consensus among the participants.
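The construction described above can be sketched in a few lines. This is an illustrative reimplementation, not the original code: the function names are hypothetical, and it assumes fixation coordinates have already been scaled down to the 256 x 192 map resolution.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def fixation_distance_map(fix_x, fix_y, shape=(192, 256)):
    """Sketch of an individual fixation-distance map (height 192, width 256)."""
    fixated = np.ones(shape, dtype=bool)
    for x, y in zip(fix_x, fix_y):
        fixated[int(round(y)), int(round(x))] = False
    # distance_transform_edt gives, per pixel, the Euclidean distance
    # to the nearest zero element (here: the nearest fixation)
    dist = distance_transform_edt(fixated)
    inv = dist.max() - dist      # highest at fixations, linear falloff
    return inv / inv.sum()       # normalize so all elements sum to 1.0

def combined_map(individual_maps):
    """Sum the individual maps and renormalize (the consensus map)."""
    total = np.sum(individual_maps, axis=0)
    return total / total.sum()
```

The inversion is done by subtracting from the maximum distance rather than taking a reciprocal, which matches the linearly decreasing falloff described above.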

NB: the fixation-distance maps are calculated differently from the fixation-density maps described in (Kootstra et al., 2008).

The eye-tracking data

The file eyeTrackData.mat contains a MATLAB struct with all the eye-tracking data. The struct has the following fields and subfields:
  • image categories (animals, automan, etc)
    • images (animals_00, animals_02, etc)
      • subjects (subject_01,...,subject_31)
        • correct (whether the trial is valid; NB: some trials are not!)
        • fixX (array with the x-position of every fixation)
        • fixY (array with the y-position of every fixation)
        • fixT (array with the onset time of every fixation in ms)
        • fixD (array with the duration of every fixation in ms)
        • category (image category)
        • image (information about the displayed image)
          • category
          • image (filename of the image)
        • nrDriftCorrect (number of drift corrections prior to image, normally 1)
        • driftCorrect (contains info about the drift correction(s))
        • blinkT (onset time of the eye blinks)
        • blinkD (duration of eye blinks)
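The nesting above can also be read from Python via scipy. The snippet below is only a sketch of the access pattern: it builds a tiny synthetic stand-in file with the field names from the listing (the exact nesting and types in the real eyeTrackData.mat may differ slightly).

```python
import numpy as np
from scipy.io import savemat, loadmat

# Synthetic stand-in mimicking the described struct layout
trial = {'correct': 1,
         'fixX': np.array([512.0, 300.0]),
         'fixY': np.array([384.0, 200.0]),
         'fixT': np.array([0.0, 250.0]),
         'fixD': np.array([230.0, 180.0])}
savemat('demo.mat',
        {'eyeTrackData': {'animals': {'animals_00': {'subject_01': trial}}}})

# struct_as_record=False gives attribute-style access to struct fields
data = loadmat('demo.mat', squeeze_me=True, struct_as_record=False)
t = data['eyeTrackData'].animals.animals_00.subject_01
if t.correct:  # skip trials flagged as incorrect
    fixations = list(zip(t.fixX, t.fixY, t.fixT, t.fixD))
```

Checking the `correct` flag before using a trial is important, since, as noted above, not all trials are valid.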

Download