The much-needed reinvention of the voice assistant is almost here
Apple and Amazon are revamping their voice assistants with generative AI to improve context and conversational capabilities.
Apple and Amazon are overhauling their voice assistants, Siri and Alexa, with generative AI to give them better context awareness and conversational ability. At WWDC 2024, Apple introduced a new version of Siri with improved language understanding and the ability to handle more complex, context-aware commands. Amazon, meanwhile, is working on a next-generation Alexa powered by a new large language model (LLM) but is running into challenges merging the assistant's existing functionality with its new capabilities.
Apple's upcoming Siri improvements focus on more natural interactions, such as understanding the context of follow-up questions and letting users change a command mid-sentence. These updates aim to address the fundamental flaws of current voice assistants, which often demand precisely worded commands and fail to grasp user context. However, the fully revamped Siri will launch in beta this fall and won't initially be available on Apple's HomePod or Apple TV, signaling a slow, careful rollout.
Amazon's vision for Alexa includes understanding conversational phrasing, handling multistep commands, and offering improved smart home control. Although Amazon demonstrated the advanced Alexa impressively last fall, it is not yet ready for wide release, as the company works to integrate the new capabilities with the old. Both companies' cautious approaches underscore how hard it is to build intelligent voice assistants that can safely act on our devices and in our homes, and they highlight both the significant potential and the challenges of this developing technology.