In this era of rapid technological advancement, you will soon no longer need to constantly look down at your phone. A simple pair of lightweight glasses will allow you to access information in real time, translate, perceive context, think, and even predict your needs. This isn't science fiction; it's a real-world experience brought about by the fusion of AI and AR technologies.
The integration of AI and AR technologies provides consumers with a more natural and convenient interactive experience, leading to a smart hardware revolution. Meta's CTO, Andrew Bosworth, once told the media: "Always-on AI experiences will allow smart glasses to replace smartphones"[1], and this future is accelerating.

01. AI Large Models: Technology for All and Personalized AI
The rapid development of AI technology is profoundly changing our world. The emergence of large-scale reasoning models has endowed AI with the ability to "think like humans": it can not only understand the surface meaning of text but also grasp context and logic, generating responses much closer to human thinking. This breakthrough has significantly raised the baseline for AI applications and shown enormous potential in people's daily work.
The emergence of DeepSeek is particularly noteworthy. By optimizing its algorithms and architecture, it significantly improves the pre-training efficiency and inference speed of large inference models, while drastically reducing training time and inference costs. This enables AI applications to be deployed more widely on various devices, including resource-constrained mobile devices, truly democratizing AI technology.

Some capabilities of large AI models
With the continuous enhancement of AI capabilities and the increasing abundance of tools, "how to use AI more efficiently and conveniently" has become a crucial issue that urgently needs to be addressed. However, everyone's needs are unique, such as different use cases for text, images, videos, and code generation. Multimodal large models that integrate different types of data can provide personalized services for each individual, meeting the diverse application scenario requirements.
Therefore, AI applications are evolving from being "smarter" to being "more understanding of you." By continuously analyzing behavioral data, AI systems can constantly optimize their performance and services, truly achieving "personalized AI," which will drive AI to penetrate into a wider range of application areas and deeper application scenarios.
02. The 24/7 AI Assistant: From "Omnipotent" to "Anytime, Anywhere"
In AI applications, "input" and "output" are equally important. "Input" is what you tell the AI to analyze and execute, while "output" is the result the AI gives you, which can be text, audio, images, or video. Currently, the device people use most frequently for AI is still the PC; however, this is not the most "natural" way to interact with AI.
The "24/7 AI assistant" allows users to access information and interact anytime, anywhere, without relying on a phone or PC, a convenience and naturalness that traditional devices cannot match. Today, ChatGPT's voice mode has become a popular and frequently used input method. In other words, the flexibility and convenience of AI applications should no longer be limited by the form factor of the device.
Based on this concept, pin-style and handheld AI terminals such as the AI Pin and Rabbit R1 emerged, all aiming to make AI more flexible to use. However, these dedicated AI devices did not receive positive market feedback. Beyond the user-experience compromises imposed by their physical form, a crucial shortcoming was ease of interaction.
As AI becomes "omnipotent," an even more ideal way to interact with it is to issue commands and receive results "anytime, anywhere." For example, in the kitchen, when your hands are full, you can look up a recipe or set a virtual timer at any moment. This means a 24/7 AI assistant needs an "Always On" device that can sense your needs.

Payment Scenarios - Rokid Glasses
From a convenience perspective, while TWS earbuds are currently widely used and perform well, smart glasses clearly have an advantage in terms of comfort during extended wear and functional integration, making them an ideal platform for AI technology. AR glasses not only have "Always On" capabilities but also provide a "visual display output window" for AI, enabling the integration of multimodal information such as vision and hearing, thereby providing a more natural interactive experience.
The cameras, microphones, and other sensors built into smart glasses give AI a data dimension far beyond that of smartphones, making the platform far better suited to multimodal large models. As Meta CEO Mark Zuckerberg put it, "(Smart glasses) are the ideal AI terminal, and their unique positioning lets your AI see what you see and hear what you hear." [2]
03. AR+AI: Reshaping the Boundaries of Human-Machine Collaboration
Beyond that, AI makes AR glasses feel even more like a "personal assistant," even a "second brain." It can provide personalized content and services based on your preferences and past behavior, for example recommending reading lists or movies that match your reading habits or viewing history. The Verge wrote in its hands-on article on Android XR and a smart glasses prototype: "In this hour, I felt like Tony Stark (Iron Man), and Gemini was my JARVIS."[3]
When AI is combined with the visual display and perception capabilities of AR glasses, it can also predict your needs in advance, bringing a more transformative experience. For example, when traveling, it will remind you of your hotel room number; when you look at a restaurant's foreign language menu, it will suggest whether you need a translation; when you are assembling furniture, it will highlight the tools used in each step and tell you the next steps.
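The proactive behavior described above can be pictured as a simple dispatch loop: the glasses' perception stack emits context events, and each event is mapped to a suggested action. The sketch below is entirely hypothetical (the event kinds, rule table, and `suggest` function are invented for illustration); a real product would use multimodal models rather than a hand-written rule table.

```python
from dataclasses import dataclass


@dataclass
class ContextEvent:
    """A context event emitted by the (hypothetical) perception stack."""
    kind: str    # e.g. "foreign_menu", "hotel_checkin", "furniture_step"
    detail: str  # event-specific payload, e.g. a room number


# Illustrative rule table: context kind -> suggestion template.
SUGGESTIONS = {
    "foreign_menu": "Translate this menu to your language?",
    "hotel_checkin": "Your room number is {detail}.",
    "furniture_step": "Next step: {detail}.",
}


def suggest(event: ContextEvent):
    """Return a proactive suggestion for a context event, or None if
    the event kind has no associated suggestion."""
    template = SUGGESTIONS.get(event.kind)
    return template.format(detail=event.detail) if template else None
```

For instance, `suggest(ContextEvent("hotel_checkin", "1204"))` yields the room-number reminder, while an unrecognized event kind yields `None` and the assistant stays silent, which is the key design point: a proactive assistant should only interject when it has something relevant to offer.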
However, the key challenge in making AR glasses the best platform for AI is to achieve a lightweight, ergonomic design that allows for all-day wear.
Achieving "Always On" functionality is no easy feat for a wearable device, especially "glasses." Every inch of space within the glasses is extremely valuable, and the optical display solution has the most significant impact on the form factor of AR glasses. Many AR glasses based on LCoS, DLP, or BirdBath optical solutions are not only bulky, hindering miniaturization and weight reduction, but also fail to meet the aesthetic requirements of everyday AR wearability.

Even Realities G1 [4]
MicroLED microdisplay solutions play a crucial role in delivering an "all-day AI assistant." With their small size, high brightness, and low power consumption, they have become the ideal optical solution for lightweight AR glasses. The smallest MicroLED light engine currently measures just 0.15 cubic centimeters and weighs as little as 0.3 grams, making it easy for device manufacturers to build AR glasses that fit everyday wearing habits. For example, AR products built on JBD MicroLED microdisplays, such as the Vuzix Z100 and OPPO Air Glass 2, not only look like ordinary glasses but also weigh as little as 30 grams, an excellent foundation for all-day wear.
Through groundbreaking innovations in ultra-lightweight design, MicroLED display technology is driving AR glasses into the mainstream. Meanwhile, the rapid iteration of AI technology, its 24/7 connectivity, and its natural and fluid interactive experience are propelling smart glasses into all aspects of people's lives at an unprecedented pace.
AR+AI not only reshapes the way we connect with the world and opens up a new generation of human-computer interaction experience, but will also evolve into the "second brain" of mankind, bringing more convenience and innovation to people and leading us towards a more intelligent and better future.