Meta Ray-Ban Display Part 1 (Lumus Waveguide, OmniVision LCOS, and Goertek Projection Engine)

Introduction – Busy Studying the Meta Ray-Ban Display and Helping with Teardowns

As I was about to be away for nearly a month in Europe, including presenting at the AR/VR/MicroLED Connect conference in Eindhoven, news broke about the Meta Ray-Ban Display Glasses. I immediately noticed that the glasses used a Lumus Geometric waveguide (see: Meta Ray-Ban AR Glasses Show Lumus Waveguide Structures in Leaked Video and the image below, left). However, an X-ray-like view of the glasses was subsequently published (below right), revealing that the optical engine didn’t resemble any previous Lumus optical engine. While in Eindhoven during the “networking time,” I heard that it used an OmniVision LCOS microdisplay, and that Goertek designed the optical engine and manufactured the glasses.

If you look at the Lumus engine (right), you will see a long “integrating rod” homogenizer that is not seen in the X-ray view of the MRBD (above right, pointed at by the red arrow).

There have been several teardowns of the Meta Glasses. I helped iFixit with their teardown, which was published on October 8th. iFixit worked with Lumafield to get an intact CT scan of the glasses. This turned out to be a very good thing because, given the way the optical engine was assembled, inspecting its components required disassembling it. iFixit has also provided me with the engine components for further analysis.

I’m also working with Radu Reit’s Display Training Center on his teardown of the Meta Display Glasses (he has one part published on YouTube as of this writing). Radu managed to remove the Lumus waveguide while keeping the rest of the engine intact and running, allowing us to compare the projector’s direct output with the image through the waveguide. He was also able to reassemble it with the waveguide.

The combination of the iFixit teardowns and parts, Radu’s partial teardown, and my unopened unit should provide a comprehensive picture of how the Meta Ray-Ban Display Glasses (hereafter, MRBD) display and optics work.

I would also like to thank David Bonelli of Pulsar Solutions for his assistance in analyzing the design. David thinks he could improve on the Goertek optical engine.

In Part 1 of this multi-part series, I will focus on the display and optical path, primarily based on iFixit’s Teardown and the components they lent me for further evaluation.

Partnership with Display Training Center Videos

Radu Reit, formerly with Apple, where he worked on the Apple Watch, iPhone, Vision Pro, and more, has founded The Display Training Center and its YouTube channel. Radu and I first met at the MicroLED Connect and AR/VR Connect conferences in Eindhoven in September and agreed to team up on a video and podcast series on display devices.

Our first joint video, featuring the Meta Ray-Ban Display, was published today. There is a free 30-minute cut-down version (https://lnkd.in/giZD8rm8), as well as the full two-hour “Director’s cut” (https://lnkd.in/gBDpTJ3W, behind a $20/month Patreon paywall), which provides more detail and covers additional issues. We plan to release about two videos per month.

Meta Spending ~$1.5B/Month & Everything is Off the Shelf

What I find fascinating is that, with all the money Meta is spending on R&D and all the papers they have published, when they actually had to make a product to sell, they went with LCOS rather than the MicroLEDs Meta has heavily invested in; a Lumus geometric waveguide, not silicon carbide or even glass diffractive waveguides; and optics designed by Goertek (a company very well known for optical design and assembly), not some in-house design.

I can understand the argument that MicroLEDs are not yet ready for prime time when it comes to full-color displays. Still, I would emphasize that there may be reasons LCOS will remain a better solution for some time, at least for full color (a discussion for another day). OmniVision’s LCOS appears to be a solid choice. I also understand why Meta chose Goertek for the glasses’ design, though I am not a big fan of Goertek’s optical engine (more on that below).

What really stands out is their use of Lumus’s geometric waveguide. There must have been a mountain of NIH (not-invented-here) resistance to overcome in passing over an in-house diffractive waveguide design and going with Lumus. I understand that it has technical advantages in terms of efficiency, significantly less eye glow, and better color uniformity, but Meta still has many diffractive waveguide designers. From an intellectual property perspective, many companies are developing LCOS and MicroLED microdisplays, and dozens of companies have diffractive waveguide technology, but only one has developed good geometric waveguides. To top it off, Lumus has achieved waveguides with 70+° FOVs in glass rather than very expensive silicon carbide.

Lumus Geometric (Reflective) Waveguide

While Meta has not said and Lumus has not (yet) admitted to it, the waveguide in the MRBD glasses is clearly a variation on Lumus’s Z-Lens Geometric waveguide (see: Meta Ray-Ban AR Glasses Show Lumus Waveguide Structures in Leaked Video).

Lumus waveguides are typically 3-7 times more efficient (for the same FOV/eyebox), have vastly better color uniformity, and a small fraction of the eye glow when compared to diffractive waveguides. The knock on Lumus has been manufacturability and cost, so the MRBD glasses will be a big test, given the volumes needed to drive process and yield improvements. Historically, Lumus waveguides have been used in higher-end military, medical, and industrial applications.

Rivet Industries is clearly using Lumus waveguides for such applications (see: Exclusive: Rivet Industries Using Lumus Waveguides for Military & Industrial AR). I also suspect that Anduril’s new Eagle Eye is using Lumus waveguides. Anduril took over Microsoft’s IVAS contract this year. In September, the Army awarded $159 million to Anduril and $195 million to Rivet for the development of AR glasses.

I can’t see any of the distinctive Lumus Z-Lens “slats,” like those in the Rivet pictures, either in the images on Anduril’s website or in the video of Palmer Luckey showing Anduril’s Eagle Eye glasses on Joe Rogan’s podcast (above left), because the website pictures and the (compressed) video are too low in resolution. The beauty of video is that you get to see optics/waveguides from many different angles, which reveal the telltale attributes. But there was no telltale diffractive waveguide eye glow, or what I call “diffractive waveguide passive glint,” when Palmer removed the darkening “shield” and gave them to Joe Rogan to try on. What I call “diffractive glint” is where you see colors reflecting from the exit grating due to external light (see the HoloLens 2, Magic Leap 1, and WaveOptics examples of diffractive waveguide glint below).

With all the studio lighting and the various angles of the glasses relative to the camera, if they had used diffractive waveguides, the glint would surely be visible coming off the glasses. By a process of elimination, either Palmer Luckey was showing dummy glasses, or they were using Lumus waveguides. Either way, they certainly were not using diffractive waveguides like those in Microsoft’s HoloLens IVAS glasses (plus they look nothing like what Microsoft developed for IVAS).

Front Projection (Eye Glow) “Controversy” with the MRBD

Both Radu Reit’s Meta Ray-Ban Display (MRBD) teardown video and Navaneeth Tejasvi M N’s LinkedIn post discussed finding “eye glow,” despite most influencers stating that it did not exist. While there is eye glow, I measured it at only about 1.5% of the brightness reaching the eye, and the light is directed down. Compared to diffractive waveguides, which typically have eye glow that is 50% to 100% of what the user sees, the MRBD eye glow is much less noticeable in typical use, where the glasses’ ambient light sensor adjusts the brightness, which is likely why most reviewers didn’t notice it. The eye glow can be visible when brightness is set very high in low ambient lighting, as occurs during more extensive evaluations.
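
To put those percentages in perspective, here is a worked comparison at the MRBD’s full 5,000-nit brightness (a quick sketch, taking the measured numbers above at face value):

```latex
L_{\text{glow, MRBD}} \approx 1.5\% \times 5{,}000\ \text{nits} = 75\ \text{nits}
\qquad\text{vs.}\qquad
L_{\text{glow, diffractive}} \approx (50\text{--}100\%) \times 5{,}000\ \text{nits} = 2{,}500\text{--}5{,}000\ \text{nits}
```

At typical indoor brightness settings, the MRBD’s glow is proportionally lower still, which is why it goes unnoticed.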

Shown below are some pictures I took indoors with the brightness set to maximum. You can see the eye box projected on my eye and see how much range there is to see the image. This is much brighter than one would want for the given ambient light; this is what 5,000 nits projected at your eye looks like indoors. You can then see the eye glow at maximum brightness when viewed from below. With normal brightness levels for a given ambient light, the eye glow is barely noticeable, even at an optimum angle. Also, notice that the image is broken up when viewed from this direction, making it impossible to recognize the content (which, in this case, was a test pattern with a photograph).

Image Quality, Brightness, and FOV

I will go into the image quality and specifications of the MRBD in more detail in future articles, as well as on the 2-hour Display Training Center video. I did extensive evaluations of the image quality, which will be the subject of an upcoming article. I also worked with Radu Reit, who removed the Lumus waveguide and projected the image directly from the glasses projector to help ascertain the source of various issues.

Overall, I would say the image quality is very good for waveguide-type glasses. The color uniformity is very good (compared to diffractive waveguides), though not perfect. My one complaint about the image quality is that it’s a bit “soft.” I think the softness is due to the glasses digitally resampling the image (for more on the problems with resampling, see: Simplified Scaling Example – Rendering a Pixel Size Dot) and to some softness in the optics.

Based on Radu’s testing without the waveguide, the image is soft when it leaves the projector, so the softness is not due to the waveguide (I plan to go through the details of this issue in a future article and discuss it briefly in the 2-hour Display Training Center video). The effective resolution of the glasses is closer to 400×400 pixels than the stated 600×600 pixels. With the glasses having about 40 pixels per degree, and when viewed against the real world (as opposed to the black background in my testing), this is not necessarily a significant problem.

Radu measured about 1 lumen being output from the projector at full brightness, and I was able to confirm Meta’s claim that the display outputs about 5,000 nits (cd/m²), so it looks like they are getting about 5,000 nits to the eye per projector lumen.

The field of view, at 20 degrees (diagonal, with a square aspect ratio), is considered small compared to many other AR/smart glasses. Interestingly, the glasses almost never use the full 20 degrees except in some very special cases. For most content, the display uses about 16 degrees or less of the field of view. Below are several examples where I changed the font size and toggled “bold” to see if it would use more of the FOV (it does not; it just wraps the text). The orange square in the pictures below indicates the full 20-degree FOV.
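
For those who want to check the numbers, here is the arithmetic (a quick sketch, taking the 600×600-pixel and 20° diagonal specifications at face value):

```latex
\text{FOV}_{\text{side}} = \frac{20^\circ}{\sqrt{2}} \approx 14.1^\circ,
\qquad
\text{PPD} = \frac{600\ \text{pixels}}{14.1^\circ} \approx 42\ \text{pixels/degree}
```

This is consistent with the roughly 40 pixels per degree cited earlier, and the ~464 pixels used for WhatsApp images (discussed below) works out to about (464/42) × √2 ≈ 15.5° diagonally, i.e., the “about 16 degrees” typically used.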

From the battery life, Radu calculated a power draw of approximately 0.38W for the glasses (including the LEDs and display) with the display and audio running. Perhaps surprisingly, running the camera to take photos more than doubles the power draw to 1W, and taking videos increases it to 1.7W, or about 1.3W more than the display plus audio alone.

Color Uniformity

The color uniformity of the MRBD glasses is significantly better than that of diffractive waveguides, although it is not perfect (see the right image). On the right is a white picture that has been expanded to fill the FOV (using a “zoom” mode, which loses resolution, though that does not matter for a solid white image).

No way to load images into the glasses except via WhatsApp

The only way I have found to load pictures into the MRBD is to use WhatsApp. Unfortunately, when opening a WhatsApp picture, only about 16 degrees or approximately 464 pixels are used, and even then, the image has been resampled, which reduces its sharpness (more on this in future articles). There is an option in the Meta AI phone application to “magnify images” with a triple-tap. Still, while this fills the FOV with the image, it also performs a software zoom, which compromises the resolution of the original image (this is how I obtained the white image covering the whole FOV above).

OmniVision’s LCOS

The information I obtained in Eindhoven about the source of the LCOS device proved to be true, with an OmniVision part number (OPO3010) stamped right on the device’s flex. It uses the same LCOS device as OmniVision’s catalog part, the OPO3011, but mounted on a different flex board to better fit into the MRBD. The underlying device is 648×648 pixels, while Meta says they have a 600×600-pixel display, so 48 pixels in each direction are not being used to display information. Unfortunately, the optical engine is too soft for me to confirm exactly how many pixels the glasses are using.

OmniVision is best known for its camera technology, which is used in numerous cell phones and other devices. They have leveraged some of that technology in their LCOS designs, integrating all LCOS control, frame buffer memory, and MIPI receiver onto their LCOS’s silicon backplane (see right). This integration not only reduces size and power consumption but also improves performance. It’s also worth noting that the MIPI receiver reduces the number of wires that must cross the hinge where the glasses fold.

Contrast

OmniVision claims their LCOS has a 1000:1 on-off contrast ratio, and I measured about 600:1 across the whole system, which includes contrast losses in the optics. This level of contrast is more than good enough for most applications when the display brightness is controlled by the ambient light sensor (ALS).
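
As a sanity check, contrast losses can be modeled to first order as additive black-level leakage (a simplified model that ignores angular and wavelength dependence):

```latex
\frac{1}{C_{\text{system}}} = \frac{1}{C_{\text{LCOS}}} + \frac{1}{C_{\text{optics}}}
\quad\Rightarrow\quad
\frac{1}{C_{\text{optics}}} = \frac{1}{600} - \frac{1}{1000}
\quad\Rightarrow\quad
C_{\text{optics}} \approx 1500{:}1
```

In other words, under this model, the optics on their own would contribute roughly a 1500:1 contrast ratio.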

LCOS has gotten a rather bad rap for the “picture frame effect,” much of it stemming from Google Glass in 2013, which appeared to have a contrast ratio of less than 100:1, comparatively poor even then. The human visual system has an instantaneous dynamic range of approximately 1,000:1 to 16,000:1, and a total dynamic range (from day to night) of over 1,000,000:1. So if you crank up the brightness in a dim-to-dark environment, you will see the gray “frame” of the display. Generally, when the ALS is working, the frame is barely noticeable, if at all.

Field Sequential Color Breakup

While OmniVision’s LCOS supports up to 120 frames (360 R-G-B color fields) per second, the MRBD runs at only 90 frames (270 color fields) per second. This results in slightly more noticeable field-sequential color (FSC) breakup, as noted in Radu’s video (a still at 8:36 below). The cause of FSC breakup is that the various color fields appear at different times; the longer the time between the first and last color fields, the less likely the human visual system is to align them properly, and the larger the breakup will appear. The extent to which a person notices FSC breakup varies dramatically across the population. I’m hoping that with OmniVision’s level of integration, they will be able to go to higher field-sequential rates and reduce FSC breakup.
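
A quick back-of-the-envelope calculation shows why the field rate matters (assuming three evenly spaced, non-repeated R-G-B fields per frame, consistent with the numbers above):

```latex
t_{\text{first}\to\text{last}} = \frac{2}{3 \times 90\ \text{Hz}} \approx 7.4\ \text{ms}
\qquad\text{vs.}\qquad
\frac{2}{3 \times 120\ \text{Hz}} \approx 5.6\ \text{ms}
```

During a 100°/second head or eye movement, 7.4 ms corresponds to about 0.74°, or roughly 30 pixels of red-to-blue separation at ~40 pixels per degree.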

No IMU-Based FSC Breakup Correction

There are two known ways to reduce FSC breakup: the first is to reduce the time between the first and last color fields (i.e., increase the field rate), and the second is to use motion feedback and image warping/reprojection to align the later color fields to the earlier ones, as Snap (among others) does in their Spectacles 5 (see: Cameras, 6DOF, and Reprojection to Reduce LCOS Field Sequential Color Breakup); a minimal sketch of this idea follows below. Integrating everything on the backplane, as OmniVision has done, supports increasing the color field rate without significantly increasing power, but does not help with motion warping (at least until warping processing can be supported on the backplane).
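
To illustrate the warping approach, here is a minimal sketch of per-color-field reprojection. This is my own illustration under simplifying assumptions (pure angular head motion measured by an IMU, small angles, integer-pixel shifts); it is not Snap’s or Meta’s actual implementation, and all names and values are hypothetical apart from the MRBD-class display numbers used above.

```python
# Minimal sketch of per-color-field reprojection to reduce field-sequential
# color breakup. Illustrative only; names and values are hypothetical.
import numpy as np

PPD = 40.0                    # pixels per degree (MRBD-class display)
FIELD_PERIOD_S = 1.0 / 270.0  # one color field at 90 frames/s x 3 fields

def reproject_fields(rgb, yaw_rate_dps, pitch_rate_dps):
    """Shift the later color fields to counter head motion during a frame.

    rgb:            HxWx3 frame rendered at the time of the first (red) field.
    yaw/pitch rate: head angular velocity from the IMU, in degrees/second.
    Returns the R, G, and B fields, with the green and blue fields shifted
    so they land on the same retinal position as the red field.
    """
    fields = []
    for i in range(3):                               # field order: R, G, B
        dt = i * FIELD_PERIOD_S                      # delay of field vs. red
        dx = int(round(-yaw_rate_dps * dt * PPD))    # horizontal shift (px)
        dy = int(round(-pitch_rate_dps * dt * PPD))  # vertical shift (px)
        # np.roll wraps around at the edges, which is fine for illustration;
        # a real system would crop or pad instead.
        fields.append(np.roll(rgb[..., i], shift=(dy, dx), axis=(0, 1)))
    return fields

# Example: during a 100 deg/s yaw pan, the blue field (two field periods,
# ~7.4 ms, after red) gets shifted by about 30 pixels.
frame = np.zeros((600, 600, 3), dtype=np.uint8)
r_field, g_field, b_field = reproject_fields(frame, 100.0, 0.0)
```

Note that this approach requires motion data and per-field processing between the display interface and the panel, which is why supporting it on the LCOS backplane itself would be a significant step.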

Goertek Projection Engine

Unfortunately, the way the MRBDs are assembled, it is not possible (or at least not easy) to get a good picture of the optical engine as a whole. Below are the various components arranged in a rough “exploded view” in the same order as they occur in the projector engine.

iFixit had Lumafield make a CT scan of the glasses before their teardown (below left). I have drawn in red the components that were not visible in the CT scan. For comparison, I have included a figure from a 2017 article by Polyfractal showing a “conventional LCOS engine” for a much brighter front projector (I have added larger labels).

The MRBD, not needing to be as bright, places the red and blue LEDs next to each other on the same ceramic substrate, thus requiring only a two-way combination with a single dichroic mirror, unlike the three individual LEDs and two dichroic mirrors used in the larger pico projector. The two-way combination of the glasses’ engine eliminated the need for a “correcting lens” because all three LEDs are at the same distance from the rest of the engine. Both designs use a microlens fly-eye homogenizer to mix the various colors and generate a square/rectangular illumination pattern.

There is one major structural difference between the two designs: the MRBD engine adds a quarter-wave plate (QWP) and a concave mirror at the bottom of the polarizing beam splitter (PBS). I have taken a close-up photo of the PBS with these optics attached (below left) and drawn arrows showing the direction of the light flow. The result of this structure is that the light exits the PBS at right angles to the LCOS device, whereas in a conventional LCOS engine (above right), it exits directly opposite the LCOS device. Knowing that Goertek likely designed the optics, I conducted a quick patent search and found a Goertek patent application that shows this same unusual optical configuration (parts 62, 61, and 6).
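
The polarization bookkeeping behind the QWP-plus-mirror arrangement is a standard trick (shown here in Jones-calculus form as a general illustration, not anything specific to Goertek’s design): a quarter-wave plate at 45°, traversed twice on the way down to the mirror and back, acts as a half-wave plate, swapping s- and p-polarization so the returning light takes the other branch of the PBS. Up to a global phase:

```latex
\mathbf{J}_{\lambda/4}(45^\circ)\,\mathbf{J}_{\lambda/4}(45^\circ)
= \mathbf{J}_{\lambda/2}(45^\circ)
= \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
\qquad
\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} 1 \\ 0 \end{pmatrix}_{\!s}
= \begin{pmatrix} 0 \\ 1 \end{pmatrix}_{\!p}
```

The concave mirror, being curved, also has optical power, presumably folding part of the projection optics into the bottom of the PBS.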

The different configuration was likely implemented to improve the fit of the optics when connected to waveguides in AR glasses. Optically, it appears worse because there is a direct path from the incoming bright light to the output, which relies on very good polarization control of the pre-polarizer and PBS to block it. In a more conventional configuration, there is no such path. The optics also send the image back into the PBS, which is another source of possible ghost images.

I’m unsure whether this unusual optical path is the cause of the projector’s softness, but something has to be sacrificed for the form-factor advantage. I’m not an optics designer (my degrees are in electrical engineering), but on the surface, I’m not a fan of this configuration, as there are multiple paths to image quality issues, including ghosting and contrast loss. Still, the engine’s on-off performance appears reasonably good, with a measured contrast ratio of 600:1, even if the image is a bit soft.

Fly-Eye Homogenizer

Fly-eye homogenizers have been used for decades in both LCOS and LCD projectors. A fly-eye homogenizer has microlens arrays on two sides. The left image is a series of close-up pictures of the glasses’ homogenizer showing the lenses on both sides.

For educational purposes, I have included pictures (right) of how the fly-eye affects an oblong spot of light from a cheap laser pointer (aimed above the fly-eye, as shown on the right). The fly-eye, with its two sets of micro-lenses, produces a uniform square of light for illuminating the LCOS device, and it does so even when the input light is non-uniform, thereby homogenizing and shaping the red, green, and blue light. The homogenizer has some impact on étendue (roughly, the product of the light’s area and its angular spread), but with two sets of lenses, that impact is much less than, say, a simple diffuser would have.
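
As a first-order illustration of why this works, below is a small 1D model (my own sketch, ignoring diffraction and lens aberrations): each lenslet pair relays its slice of the input beam onto the full illumination field, and the superposition of all the slices averages out the input’s non-uniformity.

```python
# First-order 1D model of a fly-eye homogenizer (illustrative sketch only).
# Each lenslet's sub-aperture is relayed onto the full illumination field;
# the output is the superposition (average) of all the relayed slices.
import numpy as np

x = np.linspace(-1.0, 1.0, 1200)
beam = np.exp(-4.0 * x**2)      # non-uniform (Gaussian-ish) input beam
n_lenslets = 12

# Split the beam into lenslet sub-apertures, stretch each slice to the
# full output field, and superimpose (average) them.
out_x = np.linspace(0.0, 1.0, 600)
slices = np.array_split(beam, n_lenslets)
field = np.mean(
    [np.interp(out_x, np.linspace(0.0, 1.0, len(s)), s) for s in slices],
    axis=0,
)

# Uniformity (std/mean): the homogenized field is far flatter than the input.
print(f"input  non-uniformity: {beam.std() / beam.mean():.3f}")
print(f"output non-uniformity: {field.std() / field.mean():.3f}")
```

In this simple model, increasing the number of lenslets flattens the output field further.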

The alternative to a fly-eye homogenizer is an integrating rod homogenizer. Integrating rods are more commonly used with DLP projectors (though some DLP projectors use fly-eye homogenizers). Lumus is the only company I know of that has used an integrating rod homogenizer with LCOS, which is why, when I saw the X-ray view in the Introduction above, I suspected that while Lumus designed the waveguide, they likely did not make the optical engine.

Conclusion

When push came to shove and they needed to get a set of glasses out today, Meta chose more or less off-the-shelf technology. The use of the Lumus waveguide stands out, as most have opted for diffractive waveguides to date. I assume it was to achieve the standout characteristics of the Lumus waveguides, which include improved efficiency, enabling higher brightness, and significantly reduced eye glow (to the point that most reviewers didn’t even notice it).

I’ve heard rumors that Meta is working on binocular display glasses using diffractive waveguides (in glass, as silicon carbide is still far from practical). I can’t see why Meta would go backwards in terms of efficiency (and thus brightness) and eye glow.

It appears that there is an issue in the optical engine that affects resolution. Based on both what I see of the design and the image results, I’m not a fan of Goertek’s LCOS optical engine in the MRBD, despite Goertek being one of the go-to companies for high-volume headsets.

OmniVision’s LCOS appears to be a solid choice, and I appreciate the level of integration of the driver and frame buffer within the LCOS device. I’m hoping they’ll increase frame rates in the future to reduce field-sequential color breakup.

Karl Guttag

11 Comments

  1. OV pixel pitch is 3.8um; any idea why an even smaller 2.6um or 3um pixel wouldn’t be beneficial? Also, isn’t Avegant’s LCOS solution even smaller?

    • OmniVision makes smaller 3.0-micron pixels in their slightly newer OP03050 1560×1200 device. I think it is simply that they already had the 648×648 design with the 3.8-micron pixel on the shelf.

      The downside of a smaller pixel could be a bit less efficiency and contrast.

      I think Avegant’s 20-degree (and even their 30-degree) engines are likely smaller. Their design is more “aggressive” than the Goertek engine. Size is not everything; there are other variables, such as image quality, volume manufacturing, and business relationships, that could have been factors.

    • I’m not exactly sure what “counts” as a “geometric waveguide.” They are in a different class from the Lumus geometric/reflective waveguides.

      • What do you mean by “a different class?”
        Don’t they have a reflective Waveguide as well?

      • Oorym is a bit different. It is not as thin and there are only a few total internal reflection (TIR) bounces, versus the thinner waveguides with many TIR bounces. They are not doing the 2-D pupil expansion.

  2. How important do you think having an integrated dimmer technology (e.g. electrochromic) is for maintaining contrast in brighter ambient lighting vs. cranking up the projector power? It should result in power savings but would there be other benefits for imaging quality?

    • The issue is when you go outdoors. Grass in sunlight is about 2.5K nits, and concrete can be as much as 10K nits. If you want something to be readable, you would like at least 2:1 contrast, which means being as bright as what you are looking at. To be able to tell colors reasonably well, you would like about 8:1 contrast, or about 7x brighter than the background. Automotive HUDs want about 17K nits to be readable.

      From a comfort point of view, dimming the real world seems like a better solution than simply cranking up the brightness. There are also, as you suggested, issues of power consumption and heat dissipation that have knock-on effects on size, weight, and battery life.

  3. Thank you for your in-depth LCoS analysis post.
    I understand that the Korean company Raontech recently announced the LCoSP13 as the world’s best specification and is trying to develop new AR glasses by supplying samples to global companies.
    If you can do a professional analysis of Raontech’s LCoS, we look forward to a new post.

    • In terms of display brightness, transparency, and lack of eye glow in a “glasses form factor,” they are probably the best, but it depends on what you want to do with them. They are NOT for watching a movie, for example; something like one of the birdbath designs (e.g., Xreal) is better. The biggest limitation is the sparsity of applications. There are only a very few apps, all from Meta. They are also only glasses plus AI and don’t have any SLAM capability.

      What is it you want to do with them?
