Voice AI technology is rapidly evolving, promising to transform enterprise operations from customer service to internal communications.
In the last few weeks, OpenAI has launched new tools to simplify the creation of AI voice assistants and expanded its Advanced Voice Mode to more paying customers. Microsoft has updated its Copilot AI with enhanced voice capabilities and reasoning features, while Meta has introduced voice AI to its messaging apps.
According to IBM Distinguished Engineer Chris Hay, these advances “could change how businesses talk to customers.”
AI speech for customer service
Hay envisions a dramatic shift in how businesses of all sizes engage with their customers and manage operations. He says the democratization of AI-powered communication tools could create unprecedented opportunities for small businesses to compete with larger enterprises.
“We’re entering the era of AI contact centers,” says Hay. “Every mom-and-pop shop can have the same level of customer service as an enterprise. That’s incredible.”
Hay says the key is the development of real-time APIs that allow for extremely low-latency communication between humans and AI. This enables the kind of back-and-forth exchanges that people expect in everyday conversation.
“To have a natural language speech conversation, the latency of the models needs to be around 200 milliseconds,” Hay notes. “I don’t want to wait three seconds… I need to get a response quickly.”
New voice AI technology is becoming accessible to developers through APIs offered by companies like OpenAI. “There’s a production-at-scale developer API where anybody can just call the API and build that functionality for themselves, with very limited model knowledge and development knowledge,” Hay says.
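For developers curious what that looks like in practice, the sketch below shows roughly how a realtime speech exchange over such an API might be wired up. It targets OpenAI's beta Realtime API over a WebSocket; the endpoint, headers, and event names follow the documentation at the API's launch and may have changed since, so treat the specifics as assumptions rather than a definitive integration.

```python
# Minimal sketch of a speech-to-speech exchange over a realtime API.
# Assumes OpenAI's beta Realtime API; endpoint, headers, and event
# names follow the launch docs and may have changed.
import json
import os

from websocket import create_connection  # pip install websocket-client

url = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
ws = create_connection(
    url,
    header=[
        f"Authorization: Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta: realtime=v1",
    ],
)

# Ask the model for a spoken response. A real contact-center app would
# also stream caller microphone audio to the server as input events.
ws.send(json.dumps({
    "type": "response.create",
    "response": {
        "modalities": ["audio", "text"],
        "instructions": "Greet the caller and ask how you can help.",
    },
}))

# Read server events until the response finishes; audio arrives as
# base64-encoded chunks inside response.audio.delta events.
while True:
    event = json.loads(ws.recv())
    if event["type"] == "response.audio.delta":
        pass  # decode and play event["delta"] (base64-encoded audio)
    elif event["type"] == "response.done":
        break

ws.close()
```

The point of the WebSocket design is exactly the latency budget Hay describes: audio streams in both directions over one persistent connection, rather than waiting for a full request-response round trip per utterance.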
The implications could be far-reaching. Hay predicts a “massive wave of audio virtual assistants” emerging in the coming months and years as businesses of all sizes adopt the technology. This could lead to more personalized customer service, the emergence of new AI communication industries and a shift in jobs toward AI management.
For consumers, the experience may soon be indistinguishable from speaking with a human agent. Hay points to recent demonstrations of AI-generated podcasts through Google’s NotebookLM as evidence of how far the technology has come.
“If nobody had told me that was AI, I honestly would not have believed it,” he says of one such demo. “The voices are emotional. Now you’re conversing with the AI in real-time, and that will get better.”
AI voices get personal, literally
The major tech companies are racing to enhance their AI assistants’ personalities and capabilities. Meta’s approach involves introducing celebrity voices for its AI assistant across its messaging platforms. Users can choose AI-generated voices based on stars like Awkwafina and Judi Dench.
However, along with the promise come potential risks. Hay acknowledges that the technology could be a boon for scammers and fraudsters if it falls into the wrong hands.
“You are going to see a new generation of scammers within the next six months who have got authentic-sounding voices that sound like those podcast hosts you heard, with inflection and emotion in their voice,” he warns. “Models that are there to get money out of people, essentially.” This could render traditional red flags, such as unusual accents or robotic-sounding voices, obsolete. “That’s going to be hidden away,” Hay says.
He likens the situation to a plot point in the Harry Potter novels, where characters must ask personal questions to verify someone’s identity. In the real world, people may need to adopt similar tactics.
“How am I going to know that I’m talking to my bank?” Hay muses. “How am I going to know that I’m speaking to my daughter, who’s asking for money? Humans are going to have to get used to being able to ask those questions.”
Despite these concerns, Hay remains optimistic about the technology’s potential. He points out that voice AI could significantly improve accessibility, allowing people to interact with businesses and government services in their native language.
“Think of things like benefit applications, right? And you get all these confusing documents. Think of the ability to be able to call up [your benefits provider] and it’s in your native language, and then being able to translate things—really complex documents—into a simpler language that you’re more likely to understand.”
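As a rough illustration of that use case, the hypothetical sketch below asks a chat model to restate a benefits letter in plain language in the reader's own language. It uses OpenAI's Chat Completions API; the model name, file name, and prompt are placeholder choices for illustration, not a recommended configuration.

```python
# Illustrative sketch: restating a complex benefits letter in plain
# language via a chat model. Model name and prompt are placeholders.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical input document (e.g., a benefits decision letter).
letter = open("benefits_letter.txt").read()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Rewrite the user's document in plain Spanish "
                    "at roughly a sixth-grade reading level."},
        {"role": "user", "content": letter},
    ],
)
print(response.choices[0].message.content)
```

Pairing this kind of simplification with the realtime voice layer sketched earlier is what would let someone call a service line and hear the same plain-language explanation spoken aloud.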
AI voice technology continues to evolve, and Hay believes we’re only scratching the surface of potential applications. He envisions a future where AI assistants are seamlessly integrated into wearable devices like the Orion augmented reality glasses that Meta recently unveiled.
“When that real-time API is in my glasses, I can speak to that real-time as I’m on the move,” Hay says. “Combined with AR, that will be game-changing.” Though he acknowledges the ethical challenges, including a recent incident in which smart glasses were able to instantly discover people’s identities, Hay remains bullish on the technology’s prospects.
“The ethics will need to be worked out, and ethics are critical,” he concedes. “But I’m optimistic.”