The Next Web last week felt like a Google Glass meetup. All the tech hippies were there showcasing the latest toy from Google.
Let’s face it – this toy rocks. Augmented reality has been in our minds for decades, and while heads-up displays have been around for a while, no one has ever managed to produce a device that got as close to mass market as Google Glass. But they are not there yet. Set aside the price point, which I’m sure will drop significantly since the bill of materials is USD 150 tops (I haven’t checked the price of microdisplays in a while).
Yet as usual the challenge is not about the device; it’s about the services it will enable. I took a long, close look at the Glass SDK and was pretty puzzled by how closed the Glass ecosystem is. Basically, the interactions are limited to inserting “timeline” events that will display on Glass and that can be bound to the user’s location updates (every 10 mins or so). This sounds awesome but also very limited. I would like to be able to enrich Glass’s dictionary of voice commands to script stuff I often do online. I would like to be able to use the camera to read 1D or 2D codes and display ad-hoc information, or even better, scan faces and listen to voice fingerprints to pull up the vCard and notes I have on someone… but none of that can happen with such limited access to the system.
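To give a sense of how narrow that surface is, here’s a minimal sketch of roughly what the Mirror API does let you do – push a card onto the wearer’s timeline. The endpoint and field names follow the Mirror API documentation; `ACCESS_TOKEN` and the actual HTTP call are placeholders, not a working integration.

```python
import json

# Mirror API timeline endpoint (per Google's Mirror API docs).
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def build_timeline_item(text, speakable=True):
    """Build the JSON body for a simple text card on the wearer's timeline."""
    item = {"text": text}
    if speakable:
        # speakableText plus the built-in READ_ALOUD menu item lets
        # Glass read the card to the wearer.
        item["speakableText"] = text
        item["menuItems"] = [{"action": "READ_ALOUD"}]
    return item

body = json.dumps(build_timeline_item("Hello from the Mirror API"))
# Sending it would look something like (ACCESS_TOKEN is a placeholder
# for an OAuth 2.0 token):
#   requests.post(MIRROR_TIMELINE_URL, data=body,
#                 headers={"Authorization": "Bearer " + ACCESS_TOKEN,
#                          "Content-Type": "application/json"})
```

That’s essentially the whole vocabulary: cards in, a handful of built-in menu actions, and location pings – nothing that touches the camera, the microphone, or the voice-command grammar.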
At this point I’m glad I didn’t throw away USD 1,500 on Glass – bragging rights aside, I feel it’s plain useless – but I really do hope Google will provide the tools to take the device to the next stage, so that it will be ready to bring value to the masses.