Weekly Spring-Time Linkfest

Spring is here (unless you live below the equator, and somehow don’t fall off the face of the earth), and it brings some great links (and allergens):

  • Librarian’s dream app – researchers from Miami University created a mobile augmented reality application to help keep books in order on library shelves.
  • Beats me why they need the money – I always assumed they were making millions – but Total Immersion gets $5.5M in funding led by Intel Capital (which, interestingly, also funded Layar).
  • Quimo from the University of South Australia is like Play-Doh for augmented reality. This “deformable material” supports “freeform modeling in spatial AR environments” by embedding almost invisible AR markers.
  • “The Witness” is a German half-movie, half alternate reality game that uses AR (or pseudo-AR) to move the plot forward (via @).
  • Comedian Ricky Gervais dismisses augmented reality as “a load of bollocks” (via @Layar).

This week’s featured video comes to us from Microsoft, a company that develops stunning technologies just to see them later made into products and sold by the likes of Apple. Here they develop a “Photosynth Lite”, enabling users to create 3D models by taking a few pictures with their cellphones. I wonder where this technology can be applied:

You can read more about this on Technology Review.
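
Microsoft hasn’t said exactly how the demo works, but systems that turn a handful of photos into a 3D model are typically built on multi-view reconstruction. Here is a minimal two-view sketch of that core idea in Python with OpenCV – the filenames and the guessed camera intrinsics are my own placeholders, not anything from the demo:

```python
# Minimal two-view reconstruction sketch (not Microsoft's actual pipeline).
# Assumes two overlapping cellphone photos, "shot1.jpg" and "shot2.jpg".
import cv2
import numpy as np

img1 = cv2.imread("shot1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("shot2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and match local features between the two views.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Guessed intrinsics: focal length ~ image width, principal point at center.
h, w = img1.shape
K = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)

# Estimate the relative camera motion, keep only RANSAC inliers.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
inliers = mask.ravel() > 0
pts1, pts2 = pts1[inliers], pts2[inliers]

# Triangulate a sparse 3D point cloud from the two camera poses.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T  # homogeneous -> 3D points
print(f"Recovered {len(cloud)} sparse 3D points")
```

A real system would chain many such view pairs together and refine everything with bundle adjustment; this only recovers a sparse point cloud from a single pair.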

Have a sunny week!

X-Ray Vision via Augmented Reality

The Wearable Computer Lab at the University of South Australia has recently uploaded to YouTube three demos showing some of its researchers’ work. Thomas covered one of those, AR Weather, but fortunately enough, he left me with the more interesting work (imho).
The next clip shows part of Benjamin Avery’s PhD thesis, exploring the use of a head-mounted display to view the scenery behind buildings (as long as they are brick-walled buildings). If I understood correctly (and I couldn’t find the relevant paper online to check), the overlaid image is a three-dimensional rendition of the hidden scene, reconstructed from images taken by a previously positioned camera.

The interesting thing here is that a simple visual cue, such as the edges of the occluding objects, can have such a dramatic effect on the perception of the augmented scene. It makes one wonder what else can be done to improve augmented reality beyond better image recognition and brute processing power. Is it possible that intentionally degrading the augmented image (for example, making it flicker or tinting it) would make for a better user experience? After all, users are used to seeing AR in movies, where it looks considerably low-tech (think Terminator vision) compared with what we are trying to build today.
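
To get a feel for how cheap that edge cue is to produce, here is a toy sketch – my own illustration of the idea, not the lab’s code, with hypothetical image files standing in for the live view and the hidden-scene render:

```python
# Toy illustration of the edge cue (my guess at the idea, not the lab's code).
# "frame.jpg" stands in for the live view of the occluding wall, and
# "hidden.jpg" for the rendered scene behind it; both are hypothetical files.
import cv2

frame = cv2.imread("frame.jpg")    # live view: the brick wall
hidden = cv2.imread("hidden.jpg")  # render of the scene behind the wall
hidden = cv2.resize(hidden, (frame.shape[1], frame.shape[0]))

# Naive "x-ray": blend the hidden scene into the live view.
xray = cv2.addWeighted(frame, 0.3, hidden, 0.7, 0)

# The cue: extract the occluder's edges and paint them back on top,
# so the wall still reads as being in front of the hidden scene.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)
xray[edges > 0] = (255, 255, 255)  # draw edges as white lines

cv2.imwrite("xray_with_edge_cue.jpg", xray)
```

A full system would of course need head tracking and a proper 3D render of the occluded geometry; the point is only that the edge overlay, the part that changes the perception so dramatically, is almost free.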

Anyway, here you can find Avery himself, presenting his work and giving some more details about it (sorry, I couldn’t embed it here, even after several attempts).