AR/VR/MR 2025 AI Glasses Panel (Also Apple AR Glasses Cancelled? & DeepSeek)

Introduction

January has always been a crazy month for me. I met with and/or saw presentations from over 29 companies at CES and 59 companies at AR/VR/MR, and took about 2,000 pictures between the two conferences. As discussed in SPIE AR/VR/MR 2025 Next Week (with comments on CES, Display Week, & AWE), CES is a logistics nightmare (and thus I see less than half as much per day), whereas SPIE AR/VR/MR concentrates everything on one floor, making it more efficient to see more companies in fewer days. CES tends to show more finished products or prototype headsets (usually shown privately), while AR/VR/MR concentrates on the optics and display components.

AR/VR/MR is by far the more fun, technically interesting, and collegial conference to attend. I have gone to AR/VR/MR every year since 2019 (it started in 2018). While AR/VR/MR includes “VR,” Optical See-Through (OST – AR and Optical MR) clearly dominates the conference and is a focus of this blog. Other than a few optics and test-equipment vendors, there is very little Video See-Through (VST – VR with camera passthrough) at AR/VR/MR.

I can’t possibly cover in any detail the nearly 90 companies I saw this month. I will have to pick and choose based on what I see as important and the trends I notice.

Panel Discussion of AI Glasses

In addition to seeing many companies, I enjoyed being part of the AI Glasses Panel discussion. Edgar Auslander of Meta organized and moderated the panel, with the other panel members including Barry Silverstein of Meta, Paul Travers of Vuzix, and Bernard Kress of Google. The video of the panel should be available on the SPIE Publication Website (behind the SPIE paywall) in about a month.

The panel lasted about 50 minutes, and we were only able to touch on a few key subjects. Almost all AR glasses claim to support some form of AI interaction. Edgar, Barry, and Bernard were wearing Meta Ray-Ban Wayfarer glasses (audio and cameras, but no display), and Paul was wearing Vuzix glasses (including a display).

Many companies have gone or are about to go to market with AR glasses combining displays and audio. Most of these AR glasses use JBD’s green (only) MicroLED. Almost all are touted as having AI, although in most cases, this means they connect to a cell phone that can access ChatGPT or similar AI in the cloud. A few claim to do some or all of the “AI” locally on the phone. The common phrase is “Like Meta Ray-Ban Wayfarers, but with a display.”

What was Discussed

Below is an outline of some of the points discussed and my recollections and thoughts about them. As I don’t have the video to review, these are my interpretations of what was said and likely include my thoughts on the subjects that were not stated. I have also expanded on some of the points below.

  • The case for a display with AI Glasses – Audio is more fleeting and easier to miss. Text can reinforce what is heard, can be easier to comprehend, and works in situations where audio can’t. There is also content that can only be expressed visually.
  • Use cases for industrial applications are becoming clearer. If the glasses can improve a worker’s effectiveness by even a small amount, they will quickly justify their expense. AI and other software combined with camera inputs will help catch mistakes.
    • Towards the end of the panel, Paul said that he had discreetly received about 10 messages with his AR glasses.
  • Use case for consumers – The compelling use case for consumers is hazier. The ones being shown include translation, which, while useful, doesn’t make a compelling case for a broad consumer market.
    • Barry pointed out that glasses frames are as much a fashion statement as a functional item. He also pointed out that this would lead to massive manufacturing and retail complexity in dealing with many SKUs.
    • Paul suggested a future where the glasses seamlessly do what you want them to do. The glasses would learn what you want them to do and do it without you having to ask.
  • Input (voice, control ring, visual gesture, EMG) – There is a general recognition that input can be problematic. Inherently, most AI/AR applications want the hands to be free, and the environment in terms of sound and lighting will be highly variable. Voice input can jump directly to what you want to be done but is not discreet. Control devices tie up the user’s hands and are one more thing to keep charged and not lose. Gestures (camera, EMG wristbands, or other) also tie up a user’s hands, may not be reliably detected, and can cause unintended input. Camera-based gestures require cameras that can see the user’s hands.
  • Camera and privacy – There was general agreement that camera input was nearly essential for the usefulness of AI glasses. I commented that the Google Glass “Glass-holes” problem was more of a scapegoat than a major reason it failed. And while people have had more than another decade to become desensitized to cameras being everywhere, perhaps a bigger concern for users will be that everything they do will be tracked and seen by the “AI.” The (perhaps scary) conclusion seems to be that, just like with the internet, people will likely give up privacy for the advantages of having AI everywhere.
  • AI Locally or in the Cloud – Most AI glasses today perform most or all of the AI in the cloud, which results in lag and, because connections are not always great, unreliability. The question then becomes how much “AI” can be done locally.
  • Monochrome or color – We didn’t discuss this much during the panel. I expect that green can work for many industrial, medical, and military applications where the user is “paid to use” the device. Consumers will likely have more of a problem with green only.
    • Side Note: If green-only is acceptable, I wonder why we have not seen green-only LCOS projectors. These could be almost as small as a green-only MicroLED projector, as bright or brighter, but much less expensive and about the same power even with low average pixel value (APV) content. LCOS would also have much better inherent uniformity and could support higher resolutions more economically. Field sequential color (FSC) breakup would not be a problem, and slower-switching liquid crystals could be used that can have extremely high contrast. I suspect that monochrome LCOS is overlooked because companies using LCOS expect color.
  • Supercomputer in your pocket – I made the point that the glass size and weight form factor are going to require most of the computer power to come from a supercomputer in your pocket, most likely a smartphone. Note in the next section about Bloomberg’s report that Apple may have stopped their AR glasses program in part because an iPhone does not have the processing power for acceptable performance, so maybe even a smartphone may not be up to the task.
  • Weight below 50 grams for AI eyeglasses – There seemed to be broad agreement that somewhere around 50 grams is about as heavy as AI/AR headsets in an eyeglasses form factor should be. This weight limit puts constraints on how much can be included.
  • FOV 30 degrees is the “sound barrier” for AR in an eyeglasses form factor – On the panel, I said “the speed of light,” but the sound barrier may be the better analogy. The point is that while waveguides or other optics might support greater than 30 degrees, by the time you support the data bandwidth, processing, and resulting power requirements, the headset will exceed the practical size and weight of a true glasses-like form factor. As a wider FOV is supported, other features creep into the design, further spiraling the size and weight, or as I wrote in my 2019 article, Starts with Ray-Ban®, Ends Up Like Hololens.
  • Do they have to be a glasses form factor? – This is sort of a corollary to the 30-degree FOV limit. A point I made is that, at least for non-consumer companies, they should consider a non-eyeglasses form factor. If the feature set is necessary for the application, then forcing a bulky, poorly weighted device into something that looks vaguely like eyeglasses is counterproductive. It would be better to holistically design the headset with a form factor that better distributes the weight.
  • Outdoor use – There seemed to be general agreement that for AR/AI to become ubiquitous, it needs to be usable indoors and outside in daylight. The dynamic range outside is huge, as you can see a mix of full sun and shade at the same time. In this environment, one would want more than 2,000 nits to the eye in addition to any dimming circuitry. If there are only, say, a few hundred nits from the display to the eye, then the outside world will have to be darkened too much.
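To give a rough feel for the FOV “sound barrier” and outdoor-brightness points above, here is a back-of-envelope sketch. The pixels-per-degree, frame rate, and luminance numbers are my illustrative assumptions, not figures from the panel:

```python
# Back-of-envelope numbers for two of the panel points.
# All constants below are illustrative assumptions.

PPD = 45   # assumed pixels per degree (roughly what crisp AR text wants)
FPS = 90   # assumed display frame rate, Hz
BPP = 24   # bits per pixel (RGB, 8 bits per channel, uncompressed)

def raw_bandwidth_gbps(fov_deg: float) -> float:
    """Uncompressed video bandwidth for a square FOV at fixed angular resolution."""
    pixels = (fov_deg * PPD) ** 2  # pixel count grows with the square of the FOV
    return pixels * BPP * FPS / 1e9

def see_through_contrast(display_nits: float, ambient_nits: float) -> float:
    """Contrast of virtual content against the real-world background it overlays."""
    return (display_nits + ambient_nits) / ambient_nits

# Doubling the FOV from 30 to 60 degrees quadruples the pixel count,
# and with it the bandwidth, processing, and power.
print(raw_bandwidth_gbps(30), raw_bandwidth_gbps(60))

# 2,000 nits to the eye over a ~1,000-nit shaded outdoor scene gives
# usable 3:1 contrast; a few hundred nits over a ~10,000-nit sunlit
# scene is nearly invisible without heavy dimming.
print(see_through_contrast(2000, 1000))   # shade, bright display
print(see_through_contrast(200, 10000))   # full sun, dim display
```

Even with aggressive compression, the quadratic growth of pixel count with FOV is what drives the processing and power spiral described above.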

Bloomberg: Apple Scraps Work on Mac-Connected Augmented Reality Glasses

While Meta showed up in force at AR/VR/MR, with many people giving presentations, on panels, and even in a small booth, the Apple people were fewer in number and did not give any presentations.

No sooner had I gotten home from AR/VR/MR than, on January 31, Bloomberg’s Mark Gurman reported that Apple had scrapped their long-rumored AR glasses program. Quoting from the Bloomberg article:

The decision to wind down work on the N107 product followed an attempt to revamp the design, according to the people. The company had initially wanted the glasses to pair with an iPhone, but it ran into problems over how much processing power the handset could provide. It also affected the iPhone’s battery life. So the company shifted to an approach that required linking up with a Mac computer, which has faster processors and bigger batteries.

But the Mac-connected product performed poorly during reviews with executives, and the desired features continued to change. Members of Apple’s Vision Products Group, which worked on the device, grew increasingly concerned that the project was on the rocks. Sure enough, the final word came this week that the effort was over.

I want to emphasize that this is a rumor of a cancellation of a project that was itself a rumor (i.e., a rumor on a rumor). But if true, it is interesting that Apple would think that a smartphone or even a Mac computer does not have enough processing power to work to Apple’s satisfaction. The processor in a high-end Apple smartphone has more processing power than can fit into an eyeglasses form factor.

China’s DeepSeek – AI With Less Processing Power?

While AR/VR/MR was underway, news broke about China’s DeepSeek AI software finding success while using much less processing power than US-based programs. Caution should be applied to reading too much into the reports from China, particularly regarding the costs and how the results were achieved. The YouTube video DeepSeek – How a Chinese AI Startup Shook Silicon Valley by Patrick Boyle goes into the cautions, pros, and cons of the recent news.

An interesting point Boyle makes is that if there really is a breakthrough in reducing the computing requirement for AI, it should be good news for the industry as it will lower the cost and power consumption of hardware.

Another point Boyle makes in the video is whether AI will be a proprietary or a commodity technology. I made a similar point about whether AI will become a proprietary “walled garden” in AR Roundtable Video Part 3: Meta’s Orion, Wristband, Apps, & Walled Garden at 2:13. If many companies get in with similar technology and they can’t wall off people from switching, then it becomes a commodity. It’s not always clear what will lock people into a given product line or how big a barrier to switching is required. Take the original IBM PC: one would have expected IBM to hold the lock-in, but it turned out to be Microsoft and, to a somewhat lesser extent, Intel. Google took a different path to dominating internet search.

Conclusion

SPIE’s AR/VR/MR remains my favorite conference. Bernard Kress and his team put on a very welcoming show.

It would have taken the AI panel hours to discuss all the challenges and recommendations for AI/AR glasses. We only had time to scratch the surface of some of the most obvious issues. In preparing for the panel, I jotted down a list of about 20 issues we could discuss, and during the show, I added to my list.

Karl Guttag

8 Comments

  1. The main stage talks were again quite variable. On the one hand there was interesting material from Avegant, Applied and Snap. On the other there was the usual content-free buzzword salad from Porotech. SPIE – you have to do better.

    • I would tend to agree on about half of the Main Stage presentations I saw. Note, I missed many of them in order to see companies at the Expo and some private meetings.

      I agree that Avegant was interesting, particularly the part about having disparity correction. I also thought Xreal’s presentation was good, but the “flat prism” optics they showed is not in the new Xreal One Pro. Meta was interesting in terms of understanding what they thought was important, but it covered old ground.

      I think part of the problem is that they only had 10 minutes on the main stage for the non-Plenary presentations. Many of the presentations turned into company overviews and “see our booth,” with no time to get into any detail. Avegant was one of the few presenters that didn’t have a booth or a private room. The 20-minute Technical Presentations of the “Technical Program” on Monday had a lot more detail, and they were primarily about technology rather than the companies. I particularly liked Meta’s presentation on Lissajous-scanning LBS, even though I didn’t believe any of it and they skipped most of the severe problems.

      I found Tuesday’s panel on visual human factors to be interesting and informative. I also thought our panel on AI & AR went well.

      What did you like at the conference?

  2. Dear Karl,

    What exactly do they mean by flat-prism optics? They are still advertising it on their website for the Xreal One Pro—if they haven’t actually used it, what was their presentation about? Was it focused on future innovations in this category, or is it actually a freeform-based optical design?

    Either way, I’m excited and looking forward to your insights on the topic. When can we expect your analysis?

    • I have a ton of things on my plate having met with or seen material from over 70 companies in January.

      At first look, I thought Xreal was using freeform optics, but based on their presentation and some further analysis, it is not a classic freeform. It looks like it works similarly to the Ant-Reality (also going by AntVR, and which was acquired by Google in 2024) design. See my 2022 AWE Video (https://www.youtube.com/watch?v=-_JQHzNo1HY&t=3156s), which shows Ant-Reality’s double-display version. Ant-Reality also had a single-display variation that works similarly but with only one display. By using a TIR bounce, they can use a polarizing beam splitter at a shallower angle than 45 degrees and thus a thinner beam splitter. Based on Xreal’s presentation at AR/VR/MR 2025, the curved “birdbath” mirror and all the optics are encased rather than being in free air.

  3. Any chance this panel discussion with Kress, yourself, Travers, and the Meta guys will be uploaded soon? Would love to listen to it.

  4. The points about input seem to only relate to Meta RayBans and the like? I consider those glasses the fidget spinners of the AI era, as the use cases hardly make up for wearing them long term. If we were talking about replacing the smartphone with glasses, I also see no great solution for input anytime soon, if ever.

    For consumers, I only see two big markets for glasses/HMD: VR is great for immersive gaming and related entertainment. Controllers are effective input devices there. Second and much bigger market is replacing the laptop, monitors, keyboard and mouse with Xreal glasses, Samsung Dex and a ring on each hand. It provides a huge monitor, is portable, provides the same level of productivity, and leverages the smartphone that is anyway at hand. (I personally invest in this scenario).

    Certainly many smaller, valid markets exist across B2B and B2C, each requiring tailored input solutions. However, if history serves as a guide, (input) technologies developed for dominant markets will likely be adapted for these niche applications.
