Physical Address
304 North Cardinal St.
Dorchester Center, MA 02124

January has always been a crazy month for me. I met with and/or saw presentations from over 29 companies at CES and 59 companies at AR/VR/MR, and took about 2,000 photos between the two conferences. As discussed in SPIE AR/VR/MR 2025 Next Week (with comments on CES, Display Week, & AWE), CES is a logistics nightmare (and thus, I see less than half as many companies per day), whereas SPIE AR/VR/MR concentrates everything on one floor, making it more efficient to see more companies in fewer days. CES tends to show more finished products or (usually shown privately) prototype headsets, while AR/VR/MR concentrates on the optics and display components.
AR/VR/MR is by far the more fun, technically interesting, and collegial conference to attend. I have gone to AR/VR/MR every year since 2019 (it started in 2018). While AR/VR/MR includes “VR,” Optical See-Through (OST – AR and Optical MR) clearly dominates the conference and is a focus of this blog. Other than a few optics and test-equipment vendors, there is very little Video See-Through (VST – VR with camera passthrough) at AR/VR/MR.
I can’t possibly cover in any detail the nearly 90 companies I saw this month. I will have to pick and choose based on what I see as important and the trends I see.
In addition to seeing many companies, I enjoyed being part of the AI Glasses Panel discussion. Edgar Auslander of Meta organized and moderated the panel, with the other panel members including Barry Silverstein of Meta, Paul Travers of Vuzix, and Bernard Kress of Google. The video of the panel should be available on the SPIE Publication Website (behind the SPIE paywall) in about a month.

The panel lasted about 50 minutes, and we were only able to touch upon a few key subjects. Almost all AR glasses claim to support some form of AI interaction. Edgar, Barry, and Bernard were wearing Meta Ray-Ban Wayfarer glasses (audio and cameras only, no display), and Paul was wearing Vuzix glasses (including a display).
Many companies have gone or are about to go to market with AR glasses that combine displays and audio. Most of these AR glasses use JBD’s green-only MicroLED. Almost all are touted as having AI, although in most cases, this means they connect to a cell phone that can access ChatGPT or a similar AI in the cloud. A few claim to do some or all of the “AI” locally on the phone. The common phrase is “Like Meta Ray-Ban Wayfarers, but with a display.”
Below is an outline of some of the points discussed and my recollections and thoughts about them. As I don’t have the video to review, these are my interpretations of what was said and likely include my thoughts on the subjects that were not stated. I have also expanded on some of the points below.
While Meta showed up in force at AR/VR/MR, with many people giving presentations, sitting on panels, and even staffing a small booth, the Apple people were fewer in number and did not give any presentations.
No sooner had I gotten home from AR/VR/MR than, on January 31, Bloomberg’s Mark Gurman reported that Apple had scrapped its long-rumored AR glasses program. Quoting from the Bloomberg article:
The decision to wind down work on the N107 product followed an attempt to revamp the design, according to the people. The company had initially wanted the glasses to pair with an iPhone, but it ran into problems over how much processing power the handset could provide. It also affected the iPhone’s battery life. So the company shifted to an approach that required linking up with a Mac computer, which has faster processors and bigger batteries.
But the Mac-connected product performed poorly during reviews with executives, and the desired features continued to change. Members of Apple’s Vision Products Group, which worked on the device, grew increasingly concerned that the project was on the rocks. Sure enough, the final word came this week that the effort was over.
I want to emphasize that this is a rumor of a cancellation of a project that was itself a rumor (i.e., a rumor on a rumor). But if true, it is interesting that Apple would think that a smartphone or even a Mac computer does not have enough processing power to work to Apple’s satisfaction. The processor in a high-end Apple smartphone has more processing power than can fit into an eyeglasses form factor.
While AR/VR/MR was underway, news broke about China’s DeepSeek AI software finding success while requiring much less processing power than US-based programs. Caution should be applied to reading too much into the report from China, particularly regarding the costs and how it was achieved. The YouTube video DeepSeek – How a Chinese AI Startup Shook Silicon Valley by Patrick Boyle goes into the cautions, pros, and cons of the recent news.
An interesting point Boyle makes is that if there really is a breakthrough in reducing the computing requirement for AI, it should be good news for the industry as it will lower the cost and power consumption of hardware.
Another point Boyle makes in the video is whether AI will be a proprietary or a commodity technology. I made a similar point about whether AI will become a proprietary “walled garden” in AR Roundtable Video Part 3: Meta’s Orion, Wristband, Apps, & Walled Garden at 2:13. If many companies get in with similar technology and they can’t wall off people from switching, then it becomes a commodity. It’s not always clear what will lock people into a given product line or how big a barrier to switching is required. Take the original IBM PC: One would have expected IBM to have the barrier, but it turned out to be Microsoft and, to a somewhat lesser extent, Intel. Google took a different path to dominating internet search.
SPIE’s AR/VR/MR remains my favorite conference. Bernard Kress and his team put on a very welcoming show.
The AI panel would have taken hours to discuss all the challenges and recommendations for AI/AR glasses. We only had time to scratch the surface of some of the most obvious issues. In preparing for the Panel, I jotted down a list of about 20 issues we could discuss, and during the show, I added to my list.
The main stage talks were again quite variable. On the one hand, there was interesting material from Avegant, Applied, and Snap. On the other, there was the usual content-free buzzword salad from Porotech. SPIE – you have to do better.
I would tend to agree on about half of the Main Stage presentations I saw. Note that I missed many of them in order to see companies at the Expo and attend some private meetings.
I agree that Avegant was interesting, particularly the part about having disparity correction. I also thought Xreal’s presentation was good, but the “flat prism” optics they showed is not in the new Xreal One Pro. Meta was interesting in terms of understanding what they thought was important, but it covered old ground.
I think part of the problem is that the non-Plenary presentations only had 10 minutes on the main stage. Many of the presentations turned into company overviews and “see our booth” pitches, with no time to get into any detail. Avegant was one of the few presenters that didn’t have a booth or a private room. The 20-minute presentations in Monday’s “Technical Program” had a lot more detail, and they were primarily about technology rather than the companies. I particularly liked Meta’s presentation on Lissajous Scanning LBS, even though I didn’t believe any of it and they skipped most of the severe problems.
I found Tuesday’s panel on visual human factors interesting and informative. I also thought our panel on AI & AR went well.
What did you like at the conference?
Dear Karl,
What exactly do they mean by flat-prism optics? They are still advertising it on their website for the Xreal One Pro—if they haven’t actually used it, what was their presentation about? Was it focused on future innovations in this category, or is it actually a freeform-based optical design?
Either way, I’m excited and looking forward to your insights on the topic. When can we expect your analysis?
I have a ton of things on my plate having met with or seen material from over 70 companies in January.
At first look, I thought Xreal was using freeform optics, but based on their presentation and some further analysis, it is not a classic freeform. It looks like it works similarly to the Ant-Reality design (Ant-Reality, also known as AntVR, was acquired by Google in 2024). See my 2022 AWE Video (https://www.youtube.com/watch?v=-_JQHzNo1HY&t=3156s), which shows Ant-Reality’s double-display version. Ant-Reality also had a single-display variation that works similarly but with only one display. By using a TIR bounce, they can use a polarizing beam splitter at a shallower angle than 45 degrees and thus a thinner beam splitter. Based on Xreal’s presentation at AR/VR/MR 2025, the curved “birdbath” mirror and all the optics are encased rather than being in free air.
You got your wish. I just posted an article on how the new Xreal Optics works.
Karl
Any chance this panel discussion with Kress, yourself, Travers, and the Meta guys will be uploaded soon? Would love to listen to it.
Last year, the presentations were published on SPIE’s conference website on March 12th and April 15th. Last year, they recorded both the Main Stage (which included panels) and the “technical” presentations on Monday. This year, I think they only recorded the main stage, and this may affect how long it takes before the videos get published. They will be behind SPIE’s paywall.
The likely link (there is nothing there now) will be: https://www.spiedigitallibrary.org/conference-presentations?conference=AR%2c_VR%2c_MR&startYear=2025&endYear=2025.
The points about input seem to only relate to Meta Ray-Bans and the like? I consider those glasses the fidget spinners of the AI era, as the use cases hardly make up for wearing them long term. If we were talking about replacing the smartphone with glasses, I also see no great solution for input anytime soon, if ever.
For consumers, I only see two big markets for glasses/HMDs: VR is great for immersive gaming and related entertainment, and controllers are effective input devices there. The second and much bigger market is replacing the laptop, monitors, keyboard, and mouse with Xreal glasses, Samsung DeX, and a ring on each hand. It provides a huge monitor, is portable, provides the same level of productivity, and leverages the smartphone that is at hand anyway. (I personally invest in this scenario.)
Certainly many smaller, valid markets exist across B2B and B2C, each requiring tailored input solutions. However, if history serves as a guide, (input) technologies developed for dominant markets will likely be adapted for these niche applications.