The Opengazer project
is supported by Samsung and the Gatsby Charitable Foundation, and by the European Commission in the context of the AEGIS project (open Accessibility Everywhere: Groundwork, Infrastructure, Standards)
Opengazer: open-source gaze tracker for ordinary webcams

Opengazer is an open-source application that uses an ordinary webcam to estimate the direction of your gaze. This information can then be passed to other applications. For example, used in conjunction with Dasher, Opengazer allows you to write with your eyes. Opengazer aims to be a low-cost software alternative to commercial hardware-based eye trackers. The first version of Opengazer was developed by Piotr Zieliński, supported by Samsung and the Gatsby Charitable Foundation. More detail about this version can be found [here]. Research on Opengazer has been revived by Emli-Mari Nel, and is now supported by the European Commission in the context of the AEGIS project and the Gatsby Charitable Foundation.
Current work
Head tracking

The previous version of Opengazer is very sensitive to head-motion variations. To rectify this problem we are currently focusing on head-tracking algorithms that correct for head-pose variations before inferring the gaze positions. All the software is written in C++ and Python.

An example video of one of our head-tracking algorithms can be downloaded [here]. On Windows the video can be viewed with the VLC player. On Linux it is best displayed using the MPlayer movie player.

The first version of our head-tracking algorithm is an elementary one, based on the Viola-Jones face detector: it locates the largest face in the video stream (captured from a file or camera) as fast as possible, on a frame-by-frame basis. The xy-coordinates from tracking can already be used to type with Dasher, either in 1D mode (e.g., tracking just the y-coordinate) or in 2D mode. Although much better results can be expected after the release of our head-pose software, this software is already useful for fast face localisation. Our algorithm applies a simple autoregressive lowpass filter to the xy-coordinates and scale of the detections returned by the Viola-Jones face detector, and also restricts the region of interest from frame to frame (a minimal sketch appears at the end of this section). The detection parameters have been tuned for our specific application, i.e., a single user working at a desktop PC or laptop. The algorithm works best on 320x240 images, at a frame rate of 30 fps, under reasonable lighting conditions.

Downloads

[Head tracker version 0.0] This is the first version and includes installation instructions, and documentation on upcoming releases and current research.

Notes to developers/collaborators

We welcome collaboration with other open-source developers. If eager developers flood us with patches we will consider hosting a repository. To send patches, follow these instructions in addition to the installation instructions provided in the download.
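To make the filtering concrete, here is a minimal Python/OpenCV sketch of the frame-by-frame tracker described above. It is an illustration only, not Opengazer's actual code: the smoothing weight ALPHA, the ROI margin, and the detector parameters are assumed values, not the ones we tuned.

    # Sketch of the tracker described above: Viola-Jones detection of the
    # largest face, a first-order autoregressive lowpass on position and
    # scale, and a search region restricted around the previous detection.
    import cv2

    ALPHA = 0.6          # lowpass weight on the previous state (assumed)
    ROI_MARGIN = 40      # pixels of slack around the last detection (assumed)

    # Cascade path assumes the opencv-python package layout.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)   # or a video file path
    state = None                # smoothed (x, y, w, h)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Restrict the search to a window around the previous detection.
        x0, y0 = 0, 0
        if state is not None:
            x, y, w, h = (int(v) for v in state)
            x0, y0 = max(0, x - ROI_MARGIN), max(0, y - ROI_MARGIN)
            gray = gray[y0:y + h + ROI_MARGIN, x0:x + w + ROI_MARGIN]

        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
        if len(faces) == 0:
            state = None        # lost the face: search the whole frame next
            continue

        # Keep the largest face; map it back to full-frame coordinates.
        fx, fy, fw, fh = max(faces, key=lambda f: f[2] * f[3])
        meas = (fx + x0, fy + y0, fw, fh)

        # Autoregressive lowpass on the xy-coordinates and scale.
        state = meas if state is None else tuple(
            ALPHA * s + (1 - ALPHA) * m for s, m in zip(state, meas))
        print("face centre: (%.0f, %.0f)" % (state[0] + state[2] / 2,
                                             state[1] + state[3] / 2))

The smoothed face centre is what would be forwarded to Dasher as an xy- (or, in 1D mode, just a y-) coordinate.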
Gesture switch

A subproject of Opengazer involves the automatic detection of facial gestures to drive a switch-based program. This program has a short learning phase (under 30 seconds) for each gesture, after which the gesture is detected automatically. Many patients (e.g., patients with cerebral palsy) have involuntary head motions that can introduce false positives during detection. We therefore also train a background model to deal with involuntary motions (a toy sketch of this train-then-detect pattern appears at the end of this section). All the software is written in C++ and Python and will be available for download soon.

An example video of our gesture-switch algorithm can be downloaded [here]. On Windows the video can be viewed with the VLC player. On Linux it is best displayed using the MPlayer movie player. Note that this video has sound. Three gestures have been trained to generate three possible switch events: a left smile, a right smile, and an upwards eyebrow movement. The background model, in this case, detects blinks, sudden changes in lighting, and large head motions. The first official release will be at the end of June 2012.
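As a toy illustration of the train-then-detect pattern (not our actual detector, which is not yet released), the sketch below fits a simple diagonal-Gaussian model per gesture plus one for the background, and fires a switch event only when a gesture explains the current frame substantially better than the background model does. The per-frame feature extraction (e.g., motion features around the mouth and eyebrows) is assumed and not shown.

    # Toy switch classifier with a background model. The margin value and
    # the Gaussian class models are illustrative assumptions.
    import numpy as np

    class SwitchDetector:
        def __init__(self, margin=5.0):
            self.models = {}      # name -> (mean, var)
            self.margin = margin  # assumed log-likelihood margin

        def train(self, name, samples):
            """samples: (n_frames, n_features) array from the learning phase."""
            x = np.asarray(samples, dtype=float)
            self.models[name] = (x.mean(axis=0), x.var(axis=0) + 1e-6)

        def _loglik(self, name, f):
            # Diagonal-Gaussian log-likelihood, constant term dropped.
            mean, var = self.models[name]
            return -0.5 * np.sum((f - mean) ** 2 / var + np.log(var))

        def classify(self, f):
            """Return a gesture name, or None if the background wins."""
            bg = self._loglik("background", f)
            best = max((n for n in self.models if n != "background"),
                       key=lambda n: self._loglik(n, f))
            return best if self._loglik(best, f) > bg + self.margin else None

In use, one would record a short clip per gesture ("left_smile", "right_smile", "eyebrows_up") plus a background clip containing blinks, lighting changes, and involuntary head motions, call train() on each, and then call classify() on every incoming frame.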
Previous work
Overview

The first version of Opengazer has the following workflow:

1. A set of point trackers is selected (or loaded) on the user's face.
2. These points are tracked from frame to frame, which allows Opengazer to extract the image of the eye and compute the head orientation.
3. During calibration, the user looks at a series of points displayed on the screen.
4. Opengazer then estimates the gaze position from the current eye image.
Video

This short video shows Opengazer in action, using a £50 Logitech QuickCam Pro 4000 at 640x480 resolution. The distance from the camera/screen to the user was about 50 cm. The user first selects "load points", which loads and matches a previously selected set of point trackers on the face; this allows Opengazer to extract the image of the eye and compute the head orientation. Then the "calibrate" routine displays a series of red points on the screen, at which the user is asked to look. As the calibration progresses, the current gaze estimate, represented by a small blue circle, becomes better and better. Finally, the user selects "test", which displays a series of green points to test and show the accuracy of the gaze tracking.
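One common way to implement such a calibration (the sketch below is illustrative, and not necessarily how Opengazer's own estimator works) is to record an eye-appearance feature for each calibration target and then interpolate between the recorded targets at run time, for example with Gaussian-kernel weights:

    # Illustrative calibrate/estimate loop: store one eye feature per red
    # calibration target, then map a new eye image to a screen position by
    # similarity-weighted interpolation. The bandwidth is an assumed value.
    import numpy as np

    class GazeCalibration:
        def __init__(self, bandwidth=1.0):
            self.feats, self.targets = [], []   # eye features, screen points
            self.bandwidth = bandwidth          # assumed kernel width

        def add_point(self, eye_feature, screen_xy):
            """Called once per calibration target while the user fixates it."""
            self.feats.append(np.asarray(eye_feature, float))
            self.targets.append(np.asarray(screen_xy, float))

        def estimate(self, eye_feature):
            """Gaussian-kernel interpolation between calibration targets."""
            f = np.asarray(eye_feature, float)
            d2 = np.array([np.sum((f - g) ** 2) for g in self.feats])
            w = np.exp(-d2 / (2 * self.bandwidth ** 2))
            w /= w.sum()
            return np.sum(w[:, None] * np.array(self.targets), axis=0)

The more calibration points are added, the better the interpolated estimate, which matches the behaviour seen in the video as the blue circle homes in on the red targets.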
Download

The first prerelease of Opengazer is available to download. Note that this version is DEPRECATED, and the next version will not be released before December 2012.

Download opengazer 0.1.2 for Linux (source). You will need to compile opengazer yourself; consult the detailed instructions for external dependencies and usage. Opengazer is now stored in the subversion repository accessible from the sourceforge opengazer page. To check out the latest revision, consult the opengazer instructions as well as the sourceforge subversion manual. Opengazer is licensed under GPL version 2.

System requirements

To run opengazer comfortably with other applications, you will need a Linux system with
Recommended webcams

The webcam used for Opengazer development is the Logitech QuickCam Pro 4000, but the software should work with any webcam. You just need to ensure that your webcam is capable of 640x480 or higher resolution, and that it works with v4l.
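A quick way to sanity-check the resolution (assuming OpenCV is installed and the camera is video device 0; on Linux, OpenCV captures through v4l) is:

    # Check that the webcam can deliver 640x480 frames.
    import cv2

    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
    ok, frame = cap.read()
    print("capture ok:", ok,
          "frame size:", None if not ok else frame.shape[1::-1])
    cap.release()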
More info

Macs

Alexander Kellett has the following experiences using Opengazer on a Mac:

Opengazer 0.1 successfully follows my gaze in this poorly lit room, via an iSight camera plugged into an iBook. It works just fine using everything from current darwinports/macports, together with the latest release of vxl, which unfortunately required "make -k install" to work on the Mac. One thing that took a while to figure out was a few linking problems with the OS X port of vxl, which forced me to add -lnetlib to the LINKER line. Finally, I had to remove the gtk-config call from the Makefile as this doesn't seem to be present on my system, but even without this it linked just fine. I've attached the Makefile in case the above bits are unclear. [see the note below] It took a good 20 minutes of fiddling with the distance of the cam, and learning to keep my head still, until eventually I realized that it would crash on save unless I'd placed enough good points. However, while it is noisy (poor lighting here), it gets the position of my eye on the screen with an x/y resolution of, I'd say, 16x16, which seems easily enough for Dasher.

Note: Alexander's Makefile was for opengazer 0.1; I've changed it a bit to be compatible with opengazer 0.1.1, but haven't tested it.

Connecting opengazer and Dasher

Use the Edit|Preferences|Control|Input Device setting in Dasher. Set it to "Socket Input" and, in "Options", set x/yminimum to 0 and x/ymaximum to the width/height of your screen. Maximize the Dasher window. Note that it is very, very difficult to keep your head still while concentrating on something else such as inputting text in Dasher. Using a head-rest might be a good idea.
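For developers feeding their own coordinates to Dasher, a hedged sketch of the sending side follows. The UDP port (20320) and the newline-terminated "x <value>"/"y <value>" line format are, to the best of my knowledge, Dasher's defaults, but both the port and the coordinate labels are configurable, so check the Socket Input options in your build before relying on them.

    # Sketch of sending screen coordinates to Dasher's "Socket Input".
    import socket

    DASHER_ADDR = ("127.0.0.1", 20320)   # assumed default port
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send_gaze(x, y):
        """Send one coordinate pair to Dasher, one labelled value per line."""
        sock.sendto(("x %d\ny %d\n" % (x, y)).encode("ascii"), DASHER_ADDR)

    send_gaze(512, 384)  # e.g. the centre of a 1024x768 screen

The values sent should lie between the x/yminimum and x/ymaximum configured in Dasher's Options, i.e. raw screen coordinates if you followed the settings above.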
History

Opengazer grew out of the Machine Intelligence Laboratory in the Cambridge University Engineering Department. VIM ("visual inference machine") was developed by Ollie Williams and Roberto Cipolla.