Nvidia's CEO on How Its New AI Models Work on Future Smart Glasses

Advancements in artificial intelligence are significantly enhancing tech devices—ranging from smartphones and robots to self-driving cars—by enabling them to better comprehend their surroundings. This trend was prominently showcased throughout 2024 and gained further momentum at CES 2025. At the event, Nvidia introduced a cutting-edge AI model designed to interpret the physical environment, which won a CES award, along with a suite of large language models aimed at powering next-generation AI agents.

Nvidia’s CEO, Jensen Huang, is positioning these foundation models as well suited to robotics and autonomous vehicles. Smart eyewear is another category poised to benefit from improved environmental understanding. Devices like Meta’s AI-integrated Ray-Bans are rapidly emerging as sought-after gadgets, with Meta reporting over one million units shipped by November, according to Counterpoint Research.

Such smart glasses present an ideal platform for AI assistants—digital helpers capable of using cameras and processing both speech and visual data to assist users with various tasks beyond merely responding to queries.

While Huang did not confirm the imminent release of Nvidia-powered smart glasses, he elaborated on how the company’s latest model could support future smart eyewear if adopted by partners. “Integrating AI with wearable technology and virtual presence tools like smart glasses is incredibly exciting,” Huang remarked during a CES press Q&A when asked about the compatibility of their models with smart glasses. He highlighted cloud processing as a viable option, allowing Nvidia’s Cosmos model to handle complex queries remotely instead of on the device. This approach is commonly used in smartphones to reduce the processing burden of intensive AI tasks. However, if a manufacturer desires smart glasses that utilize Nvidia’s AI directly on the device, Huang explained that Cosmos could compress its capabilities into a more specialized, task-specific model.

Nvidia’s Cosmos model is being promoted as a platform for collecting real-world data to train AI systems for applications such as robotics and autonomous driving, analogous to how large language models are trained on extensive text data to generate responses.

“The pivotal moment for robotics, similar to ChatGPT’s impact on language, is approaching,” Huang stated in a press release.

In addition, Nvidia unveiled a new family of AI models built on Meta’s Llama technology, called Llama Nemotron, which is designed to speed the development of AI agents. These models raise interesting possibilities for integration into smart glasses as well.

Speculation about Nvidia’s potential entry into the smart glasses market was further fueled by a recent patent filing, although the company has not officially announced any forthcoming products in this domain. Concurrently, tech giants like Google, Samsung, and Qualcomm revealed last month their plans to develop a new mixed-reality platform for smart glasses and headsets, branded as Android XR, suggesting that smart glasses are set to gain more traction soon.

CES 2025 also featured a variety of new smart glasses models, including the RayNeo X3 Pro and Halliday smart glasses. The International Data Corporation projected in September that smart glasses shipments would increase by 73.1% in 2024, indicating robust growth in this sector. Nvidia’s latest initiatives are another development to monitor in this rapidly evolving landscape.

For additional coverage from CES, explore our reviews of top TVs, leading laptops, and the most innovative concepts unveiled at the event.
