Andrew Naylor

Google Glass

Today Google announced their augmented reality project Google Glass. Essentially a wearable heads-up display which, according to their concept video, will provide a seamless interface for Google services to improve your day-to-day life.

There were rumblings of such a project back in February, supposedly coming out of “Google[x]”, their far-future-looking lab, and it seems that everything mentioned then has been confirmed, though estimates of late 2012 availability are apparently ambitious.

I think the idea is neat and I look forward to seeing what they manage to produce. However, as many people have pointed out, concept videos rarely depict reality, and this one really does seem too good to be true. Even setting aside the system’s interpretation of the protagonist’s “Hrmm” and grunts as commands, many of the context-driven actions, such as informing him that the subway was suspended and re-routing him to the bookstore, seem ambitious. Then again, I’m cynical and always seem to underestimate technological capability, so I expect to be at least a little surprised.

On the other hand, I am hesitant to say that I would want to buy one. As far as Google’s services go, I find myself increasingly questioning their practices. I saw a brilliant quote on Twitter earlier:

Why would the largest advertising company in the world want to place a screen between my eyeballs and reality?

If Apple were to release something similar, I’m sure the same walled-garden arguments that people use to compare iOS and Android would apply. People (read: nerds) will want their AR displays to be open so they can hack on them. When Steve Jobs first unveiled the iPhone SDK, he said that Apple were being very restrictive about what was possible with it because the phone needed to be reliable. People don’t want malicious or badly written software to crash their phone.

I think a wearable HUD takes this to a different level. It’s not a life-critical system, granted, but if I’m suddenly bombarded by some form of audio/visual distraction directly in my field of view as I’m crossing a street, that could put my life in danger. Perhaps this is a hyperbolic example, but on a device which, as I gather, is intended to be active the majority of the time - as opposed to a phone, which spends most of its time in a pocket - such possibilities are more likely to present themselves.

I’m excited about the technological potential, particularly how they are able to project an in-focus image onto the user’s retina. I have some ideas about how they might do this, but I’ll be interested to see what they’ve done. I’m looking forward to trying the system out; one of my colleagues has already expressed interest in getting one to experiment with in the lab at work.

However, if I were to buy one for my own use, I’d rather pay appropriately for the experience from a company like Apple or Microsoft than have it funded by advertising. Plus, do I really want to look like something out of Minority Report? Perhaps. I’ll just have to wait and see.