2009 03 25
How Core Image Color Tracking works
To overlay an image on top of a blob matching a given color, we need a location : the center of the blob.
If we have a set of 3D points, e.g. all the vertices of a model, finding the center is easy :
- add all coordinates (x, y, z) into one point
- divide by point count
- there's the center !
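For instance, the three points (0, 0, 0), (4, 0, 0) and (2, 6, 0) add up to (6, 6, 0), and dividing by 3 gives a center of (2, 2, 0).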
To work from an image, we could go about it the same way : go over all pixels, find the ones matching the color, average their positions. We can't do that directly in Core Image, as it is limited to kernel functions : no explicit loops over the image, no accumulator shared between pixels, only a kernel taking inputs (images, colors, numbers) and outputting an image, one pixel at a time.
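For context, a Core Image kernel looks roughly like this (a made-up example, not part of CIColorTracking) : each invocation computes one output pixel from the inputs it is given, and nothing more.

```
// Runs once per destination pixel; no loop over the image, no state kept between pixels.
kernel vec4 tintExample(sampler image, __color tint)
{
    vec4 p = sample(image, samplerCoord(image));   // the corresponding input pixel
    return p * tint;                               // the output depends only on the inputs
}
```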
The trick that CIColorTracking uses to average the matching points is to go over ALL pixels of the image by repeatedly halving it until it is 1x1 (the repeated halving is driven by a custom filter written in ObjC). Along the way it adds pixel coordinates and divides by area :
- compute a mask from the target color : white marks pixels matching the target color, black everything else
- convert it to a coordinate mask : Red and Green store the pixel's position (in the 0..1 range), Blue stores coverage (copied from the original mask)
- average that mask down to a 1x1 image
- divide the average position by the average coverage ; that's the location (kernel sketches below)
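Here is roughly what those steps look like in the Core Image kernel language. This is a sketch reconstructed from the description above, not CIColorTracking's actual source ; the kernel names, the tolerance parameter and the division epsilon are all assumptions.

```
// Step 1: white where the pixel is close to the target color, black elsewhere.
// 'threshold' is an assumed tolerance parameter.
kernel vec4 colorMask(sampler image, __color target, float threshold)
{
    vec4 p = unpremultiply(sample(image, samplerCoord(image)));
    float match = 1.0 - step(threshold, distance(p.rgb, target.rgb));
    return vec4(match, match, match, 1.0);
}

// Step 2: Red and Green hold the pixel's normalized position, Blue the coverage.
// 'size' is the image size in pixels; multiplying by coverage zeroes out non-matching pixels.
kernel vec4 coordinateMask(sampler mask, vec2 size)
{
    float coverage = sample(mask, samplerCoord(mask)).r;
    vec2 position = destCoord() / size;            // 0..1 position of this pixel
    return vec4(position * coverage, coverage, 1.0);
}

// Step 3: average a 2x2 block into one pixel. Applying this repeatedly
// (the loop lives in the ObjC filter) shrinks the image down to 1x1.
kernel vec4 halve(sampler image)
{
    vec2 d = destCoord() * 2.0;                    // center of the matching 2x2 source block
    vec4 a = sample(image, samplerTransform(image, d + vec2(-0.5, -0.5)));
    vec4 b = sample(image, samplerTransform(image, d + vec2( 0.5, -0.5)));
    vec4 c = sample(image, samplerTransform(image, d + vec2(-0.5,  0.5)));
    vec4 e = sample(image, samplerTransform(image, d + vec2( 0.5,  0.5)));
    return (a + b + c + e) * 0.25;
}

// Step 4: divide the averaged position by the averaged coverage to recover
// the blob's center, still in the 0..1 range.
kernel vec4 divideByCoverage(sampler averaged)
{
    vec4 p = sample(averaged, samplerCoord(averaged));
    vec2 location = p.rg / max(p.b, 0.0001);       // guard against an empty mask
    return vec4(location, p.b, 1.0);
}
```

The 1x1 result can then be read back (for example by rendering it into a one-pixel bitmap), and its Red/Green values, multiplied by the image size, give the blob's center in pixels.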
SAMPLE QTZ : Color Tracking.qtz (you'll need to compile CIColorTracking first).
I guess this method will be obsolete in Snow Leopard, where OpenCL will hopefully simplify this down to one function.