The race to define the future of wearable technology is heating up, with smart glasses emerging as the next major frontier. While Meta's Ray-Ban collaboration has already made waves, technology giants such as Apple, Samsung, and Google are rapidly developing projects of their own. The latest development comes from Google, which recently gave the public its most tangible look yet at its Android XR smart glasses during a live demonstration at the TED2025 conference.
Until now, Google's Android XR glasses had appeared only in carefully staged teaser videos and limited hands-on previews shared with select publications. Those early glimpses hinted at the potential of integrating artificial intelligence into everyday eyewear but left lingering questions about real-world performance. That changed when Shahram Izadi, Google's Android XR lead, took the TED stage, joined by Nishtha Bhatia, to demonstrate the prototype glasses in action.
The live demo showcased a range of features that set these glasses apart from previous smart-glasses attempts. At first glance, the device looks like an ordinary pair of glasses. However, it is packed with advanced technology, including a miniaturized camera, microphones, speakers, and a high-resolution color display embedded directly into the lens.
The glasses are designed to be lightweight and discreet, with support for prescription lenses. They can also connect to a smartphone to take advantage of its processing power and access a wider range of applications.
Izadi opened the demo by using the glasses to display his speaker notes on stage, illustrating a practical, everyday use case. The real highlight, however, was the integration of Google's Gemini AI assistant. In a series of live interactions, Bhatia demonstrated how Gemini could generate a haiku on demand, recall the title of a book glimpsed moments earlier, and locate a misplaced hotel key card, all through voice commands and real-time visual processing.
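Google has not detailed how the glasses wire camera and microphone input into Gemini, but the publicly available Gemini SDK for Android illustrates the basic pattern behind such interactions: pair a camera frame with a transcribed voice prompt in a single multimodal request. The sketch below is illustrative only; the model name and API-key handling are placeholders, not anything Google has confirmed for the glasses.

```kotlin
// Illustrative sketch only: Google has not published the glasses' Gemini
// pipeline. This shows the same pattern (a camera frame paired with a
// spoken question) using the public Gemini SDK for Android.
import android.graphics.Bitmap
import com.google.ai.client.generativeai.GenerativeModel
import com.google.ai.client.generativeai.type.content

suspend fun askAboutScene(frame: Bitmap, spokenQuestion: String): String {
    val model = GenerativeModel(
        modelName = "gemini-1.5-flash", // placeholder model name
        apiKey = "YOUR_API_KEY"         // placeholder credential
    )
    // Send the current camera frame plus the transcribed voice command
    // as one multimodal request.
    val response = model.generateContent(
        content {
            image(frame)
            text(spokenQuestion)
        }
    )
    return response.text.orEmpty()
}
```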
But the glasses' capabilities extend far beyond these parlor tricks. The demo also included on-the-fly translation: a sign was translated from English into Farsi, then switched seamlessly into Hindi when Bhatia addressed Gemini in that language, with no manual setting change.
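That automatic language switching can be approximated with off-the-shelf components. The sketch below is not Google's implementation; it uses ML Kit's public language-identification and on-device translation APIs to pick the target language from whatever language the user last spoke, and assumes the sign's text and the user's utterance have already been extracted elsewhere.

```kotlin
// Illustrative only: on-the-fly translation with automatic target-language
// switching, approximated with ML Kit's public APIs. Not Google's glasses
// code; camera OCR and speech transcription are assumed to happen elsewhere.
import com.google.android.gms.tasks.Tasks
import com.google.mlkit.nl.languageid.LanguageIdentification
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

// Uses blocking Tasks.await: run on a background thread, never the UI thread.
fun translateSign(signText: String, lastUtterance: String): String {
    // Detect which language the user last spoke (e.g. "hi" for Hindi).
    val spokenTag = Tasks.await(
        LanguageIdentification.getClient().identifyLanguage(lastUtterance)
    )
    val target = TranslateLanguage.fromLanguageTag(spokenTag)
        ?: TranslateLanguage.PERSIAN // fall back to the demo's Farsi case

    val translator = Translation.getClient(
        TranslatorOptions.Builder()
            .setSourceLanguage(TranslateLanguage.ENGLISH) // the sign's language
            .setTargetLanguage(target)
            .build()
    )
    Tasks.await(translator.downloadModelIfNeeded()) // fetch model on first use
    return Tasks.await(translator.translate(signText))
}
```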
Other demonstrated features included visual explanations of diagrams, contextual object recognition, such as identifying a music album and offering to play a song, and heads-up navigation with a 3D map overlay projected directly into the wearer's field of view.
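Contextual recognition of this kind is typically built on an on-device vision model. As a rough, hypothetical stand-in for whatever Google runs on the glasses, ML Kit's public image-labeling API shows the shape of such a pipeline: feed in a camera frame, get back labels with confidence scores that downstream logic (such as an offer to play a recognized album) can act on.

```kotlin
// Hypothetical stand-in for the glasses' recognition pipeline, using
// ML Kit's public on-device image labeler.
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

fun recognizeFrame(frame: Bitmap, onLabel: (String, Float) -> Unit) {
    val image = InputImage.fromBitmap(frame, /* rotationDegrees = */ 0)
    ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)
        .process(image)
        .addOnSuccessListener { labels ->
            // Downstream logic decides what to do with each label,
            // e.g. offering playback when an album cover is recognized.
            labels.forEach { onLabel(it.text, it.confidence) }
        }
}
```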
Unveiled last December, the Android XR platform, developed in collaboration with Samsung and Qualcomm, is designed as an open, unified operating system for extended-reality devices. It brings familiar Google apps into immersive environments: YouTube and Google TV on large virtual screens, Google Photos in 3D, immersive Google Maps, and Chrome with multiple floating windows. Users can interact with apps via hand gestures, voice commands, and visual cues. The platform is also compatible with existing Android apps, ensuring a robust ecosystem from the start.
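Google's developer preview of the Jetpack XR SDK gives a sense of what that compatibility means in practice: ordinary Compose UI can be hosted in a floating spatial panel with a few lines of scaffolding. The sketch below assumes the preview APIs (Subspace, SpatialPanel, SubspaceModifier) keep their announced shape, which may change before a stable release.

```kotlin
// Sketch based on the Jetpack XR Compose developer preview; these APIs
// are pre-release and their names or packages may still change.
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.unit.dp
import androidx.xr.compose.spatial.Subspace
import androidx.xr.compose.subspace.SpatialPanel
import androidx.xr.compose.subspace.layout.SubspaceModifier
import androidx.xr.compose.subspace.layout.height
import androidx.xr.compose.subspace.layout.width

@Composable
fun SpatialHome() {
    Subspace {
        // An existing Compose screen, floated as a panel in the user's space.
        SpatialPanel(SubspaceModifier.width(1280.dp).height(800.dp)) {
            Text("Existing Android UI, unchanged")
        }
    }
}
```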
Meanwhile, Samsung is preparing to launch its own smart glasses, codenamed Haean, later this year. The Haean glasses are reportedly designed for comfort and subtlety, resembling regular sunglasses while integrating gesture-based controls via cameras and sensors.
While the final specifications are still being finalized, the glasses are expected to include integrated cameras, a lightweight frame, and possibly Qualcomm's Snapdragon XR2 Plus chip. Additional features under consideration include video recording, music playback, and voice calls.