Google's engineers are known for tackling difficult problems, grand challenges like driverless cars. But the grandest challenge for Google has always been search. Fifteen years into its evolution as a company, Google's brain in the cloud is becoming smarter and much more conversational.
The best evidence is Google Glass, for which speech input is the primary interface. "Voice recognition is good enough that you can talk to these things," Google Chairman Eric Schmidt said about the Glass user experience. He believes that making conversational queries with Glass and other devices will turn Google's search engine into more of a helper, like a personal digital assistant. For example, based on knowledge of your travel schedule, Google's brain could whisper in your ear that you need to leave early for the airport because of traffic congestion or long security lines.
Ideally, the interaction with Glass would be more of a back-and-forth conversation than a single response. The search engine can remember the context, so follow-up questions can be less specific.
As it has done with smartphones on Android, Google will work to develop its own devices and work with partners to expand the market for voice apps. "We will have lots of different kinds of devices. We are working with partners to do so," Schmidt said during an interview at the AllThingsD mobile conference in New York.
Topeka Capital Markets analyst Victor Anthony is betting that voice search will boost Google's stock price. In a recent report, Anthony initiated coverage of Google with a Buy rating and a $950 price target. "With Google's efforts in search and display well understood by investors, but perhaps under-appreciated, we focus on what we view as the next leg of long-term growth for Google - voice search," he wrote in his note. "We believe voice search will aggressively expand Google's influence beyond the confines of the desktop, and into virtually every function of human lives. Outside of voice, we see Google continuing to take share of advertising budgets globally, driven by search, display, and mobile."
Anthony said that Google is well positioned to capitalize on the shift to a natural language interface because of its infrastructure and services, as well as its global and mobile investments. "As humans become less operators who punch keys, speech input, voice recognition, and near real-time translation will become more important. Google is in a pole position because it has spent the better part of the last decade refining speech, and it has upper management support," he said.
It also helps that Google's Android mobile platform and search engine are dominant. Schmidt said nearly 2 billion Android phones will be in use within a year or two, up from more than 750 million currently. "Android is by far the primary vehicle by which people are going to see smartphones. Our goal is to reach everybody," Schmidt said.
Apple has also recognized the shift to voice applications, with its Siri service. But Anthony said that Siri is more of a "tack-on solution" compared to Google's Voice Search. "Google spent a lot of effort converting voice in every single language, and provides support for developers, so they have an advantage over Apple," he said.
Despite Google's progress in delivering a global personal digital assistant you can talk to, it's still the early days, beyond embryonic but far from mature. Google's Knowledge Graph, a semantic network of more than 570 million objects and 18 billion entity relationship connections, is a key element in the company's attempt to make voice search more conversational. Google co-founder and CEO Larry Page discussed Knowledge Graph during the fourth-quarter 2012 earnings call. "The perfect search engine would understand exactly what you mean and give you exactly what you want. Our Knowledge Graph brings that much closer," he said.
Page later elaborated on the state of Knowledge Graph: "I think we are still in the early stages of it. It's still 1 percent or something of where we should be, so I am really excited about what we can do in the future."
In an interview with Slate's Farhad Manjoo, Google's search ranking chief Amit Singhal offered a more optimistic view of the company's quest to emulate the Star Trek computer, a high-functioning artificial intelligence that "understands" you:
"Now we're trying to get it to a point where it passes the 'toothbrush test' of you using it twice a day." Singhal predicted that will happen in three years' time--by then, he says, Google's Star Trek machine will be so good that you'll ask it a question and expect a correct answer at least twice a day. "And in five years you won't believe you ever lived without it. You'll look back at today's search engine and you'll say, 'Is that really how we searched?'" Singhal says.
With the coming proliferation of glasses, watches, bands and other wearable devices, more people will need to converse with their machines. ABI Research predicts that the wearable device market will grow to 485 million annual shipments in five years.
It may be that two correct answers a day, three years from now, won't be enough to make voice search pervasive. But five years may be enough for human-to-computer voice interaction to become genuinely conversational, and for Knowledge Graph to improve well beyond Page's current assessment.