It’s been almost a month since Google announced Project Glass, the augmented reality eyewear coming out of the company’s X Lab, and the updates keep coming.
Google executive Sebastian Thrun wore a prototype during an interview with Charlie Rose that went online on Thursday, April 26. While most of the buzz surrounding the new product centers on its augmented reality aspect (superimposing digital images over your physical world), Thrun told Rose that this feature isn’t the best use of the product. Instead, he demonstrated that Project Glass is best at doing exactly what a smartphone does – but hands-free.
Google Glasses Photo Taken of Charlie Rose
“The thing we like is picture taking,” Thrun told Rose. He then took a picture of Rose by pressing a button – the first-ever published photo taken by the glasses – and then nodded his head to share the photo with his followers on Google Plus. “I nod and the picture is now visible to [my friends].”
Experiments are also being conducted with calendar notifications, making phone calls with the device, having e-mails spoken to the user, and sharing live-streams of what a user is looking at. As Thrun added, “The hope is to get things out of the way. This is a display that’s with you all the time.”
What do we think? Do our hands need to be liberated from our technology?
Google’s research is moving fast – faster than I initially anticipated – but Thrun’s attitude toward augmented reality is interesting, given how central that feature was to the product’s “One Day…” concept video. Granted, as I said before, the video was entitled “One Day…” for a reason, but Thrun’s interview confirms that the product is a matter of “when,” not “if.” While Google has contradicted an earlier New York Times report that the glasses would be available to consumers by the end of the year, the company is clearly continuing to build public momentum for the product’s launch – whenever that is.
First, the concept video; then Google co-founder Sergey Brin showed up at a charity event in San Francisco with a prototype on; and, now, we have video of the Glasses in action. What else does Google have up its sleeve?
However, the interview also brought some major issues to light. First, many elements from the concept video have not yet been shown in action – most importantly voice commands, which drove all the actions in the video. And what did everyone think of the image quality compared to that of the iPhone 4S?
What Google needs to focus on now is not just showing us that Glass works – we know it does – but illustrating why Glass is better than the products we already have, and why Glass is the next gadget we need rather than the next gadget we want. Why not show us how Project Glass could benefit fields such as medicine and science? Or how Glass will change the way we approach photography and livestreamed video, beyond just updates for our Google+ friends? The product has serious potential, but is being hands-free enough?