As I mentioned in my previous post, I really like the direction that Google and Motorola have taken with the Moto X. I think the new Touchless Control feature, which allows users to control certain things on their phones with just their voices, is very exciting. Since then, on September 10th, Apple announced the new iPhone 5S and it, too, includes a co-processing chip (for motion rather than voice). I'm thrilled with these developments. Between the Moto X's always-on listening and the iPhone 5S's motion detection, we are starting to see smartphones emerge that look for cues rather than specific input. To be fair, at the moment, these capabilities are still more marketing gimmick than killer feature. However, I believe that, with some contextual awareness, these features will become revolutionary in the way we work with our gadgets.
Let me explain.
At Google IO 2013 (video here - starts at the 2 hour mark), Google showed the beginnings of contextual search using terms like "here" and "it". Being able to ask things like "How far is it from here?" makes interacting with my devices much more natural. I'm hoping to see that same kind of contextual awareness appear in the Touchless Control feature of the Moto X (and hopefully future Nexus phones). It would be great if Touchless Control could be integrated with Android intents so that we could say things to our phones like "Bookmark this", "Send this picture to my mom", "Keep this", "Call her", "Navigate there", etc.
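To make the idea concrete, here's a rough sketch in plain Java of what that mapping might look like. To be clear, this is not a real Touchless Control or Android API - the context map and phrase matching are entirely invented for illustration. Only the intent action strings (ACTION_SEND, ACTION_CALL, and so on) are real Android constants; everything else is me imagining how spoken commands with words like "this" and "her" could be resolved against device context and dispatched as intents:

```java
import java.util.Map;

// Hypothetical sketch only: imagines how contextual voice phrases might be
// resolved into Android intent actions. The context map and phrase matching
// are invented for illustration; only the action strings are real constants.
public class VoiceIntentSketch {

    // Assumed context the phone would track: the item on screen,
    // the last contact mentioned, etc. (placeholder values).
    static final Map<String, String> CONTEXT = Map.of(
            "this", "https://example.com/current-page",
            "her", "Mom"
    );

    // Map a spoken phrase to a standard Android intent action string.
    static String resolve(String phrase) {
        if (phrase.startsWith("bookmark")) return "android.intent.action.INSERT";
        if (phrase.startsWith("send"))     return "android.intent.action.SEND";
        if (phrase.startsWith("call"))     return "android.intent.action.CALL";
        if (phrase.startsWith("navigate")) return "android.intent.action.VIEW";
        return "android.intent.action.SEARCH"; // fall back to a search
    }

    public static void main(String[] args) {
        // "her" gets filled in from context, then the intent action fires.
        System.out.println(resolve("call her") + " -> " + CONTEXT.get("her"));
    }
}
```

The nice part of this design, if something like it ever shipped, is that app developers wouldn't need to know anything about voice - they'd just keep registering intent filters as they already do, and the voice layer would handle resolving "this" and "her" before dispatching.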
If the demonstration from Google IO is any indication, the capability for this kind of interaction is almost here. If so, it will usher in a much-improved way of interacting with our phones and apps. That's why I'm really excited.