Magic Leap 2 at SPIE AR/VR/MR 2022

Magic Leap 2 Presentation Was the Hit of the Conference

Kevin Curtis (Curtis), Magic Leap’s VP of Optical Engineering, gave the presentation “Unveiling Magic Leap 2’s Advanced AR Platform and Revolutionary Optics” at the SPIE AR/VR/MR conference, and it created the most buzz of the whole conference. The presentation did a great job laying out the Magic Leap 2 (ML2) display and optics technology at a high level and some of the reasons behind the decisions. I don’t think they gave away any “secret sauce” that could not be easily figured out by tearing down a unit, but it was still a refreshing change.

There was some speculation (I have no sources) before and during the conference that Magic Leap is in the process of being sold (the name most floated is Google, as they are already an investor). Sometimes a very open presentation is given by a company trying to sell itself. It can also happen when a company has been sold on the idea of letting the world know what it has accomplished. Or it could be that Magic Leap is trying to win back confidence. One thing is for sure: Magic Leap has gone from being one of the biggest “startups” to a comparatively small player in AR compared to Meta, Apple, Google, Amazon, Microsoft, and Samsung.

I will be concentrating on the display and optics revealed in the presentation. This article is based on pictures I took (edited and enhanced) as a member of the press and my memory of the talk. The SPIE should be publishing the whole video within about a week of this writing, and I would highly recommend watching it.

Picked LCOS over MicroLEDs and Laser Beam Scanning

Curtis explained the decision process for the display technology. While many think MicroLEDs may be the future, they were clearly not ready for a production color headset. Curtis stated that while laser scanning was in Magic Leap’s “DNA” (see: Magic Leap Fiber Scanning Display (FSD) – “The Big Con” at the “Core”), despite spending millions, they could not meet their display requirements with laser scanning. See my articles on the many problems with Hololens 2’s laser beam scanning display.

Magic Leap’s 2013 Application

After evaluating all the alternatives, Magic Leap decided LCOS was the best display technology for their application. As I plan to cover in other articles about this conference, LCOS is the display of choice for most new waveguide-based designs that need either color or higher resolution. Both Avegant and Digilens are making new design pushes with very small and very bright LCOS designs paired with waveguides. The “word on the street” is that Snap has an LCOS-based design in the works.

High Resolution (1440 x 1760 Active) LCOS Display (Omnivision LCOS rumored)

Curtis claimed that there are advantages to having the display be taller than it is wide. As shown (right), the ML2 active display is 1440 pixels wide by 1760 pixels tall. They are holding 96 by 96 pixels in reserve for aligning the displays to the eyes.

The Magic Leap One (ML1) didn’t have these reserve pixels and instead required two models to cover different interpupillary distances. Some mistakenly thought the different ML1 models were based on head sizes. The presentation pointed out that the ML2 will only have one model for all users. These reserve pixels may also come in handy with other display-to-eye alignment issues as the eye moves (more on this later).
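As a toy illustration of how the reserved pixels allow one hardware SKU, the sketch below shifts the active window inside the panel by a per-user offset and clamps it to the reserve. This is my simplification, not Magic Leap’s actual alignment algorithm.

```python
# Toy illustration of using a reserved pixel border for per-user display-to-eye
# alignment. My simplification, not Magic Leap's actual algorithm; it assumes
# the full panel is 96 pixels larger than the active area in each axis.

ACTIVE_W, ACTIVE_H = 1440, 1760  # active pixels quoted in the presentation
RESERVE = 96                     # pixels held in reserve in each axis

def place_active_window(offset_x_px: int, offset_y_px: int):
    """Return the top-left corner of the active window on the full panel,
    clamping the requested per-user offset to the available reserve."""
    half = RESERVE // 2                      # assume the reserve splits evenly
    ox = max(-half, min(half, offset_x_px))  # clamp horizontal (IPD) offset
    oy = max(-half, min(half, offset_y_px))  # clamp vertical offset
    return (half + ox, half + oy)

print(place_active_window(30, -10))  # e.g., a wider-than-average IPD user
```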

While not mentioned by Magic Leap, the word at the conference is that ML is using an unannounced Omnivision LCOS device. It would not be a surprise, as the ML1 used an Omnivision device. The rumor mill seems to think that Snap may also be using Omnivision for their new headset, despite Snap’s recent acquisition of Compound Photonics and their current prototype using TI’s DLP. Most LCOS companies today are capable of making such a device with a 3.8-micron pixel pitch.

In the figure below, I have added estimates of the FOV along with the number of pixels in each direction. Both the FOV area and the number of pixels have doubled over the ML1, so the number of pixels per degree is about the same. I should note that while Hololens 2 falsely claims 47 pixels per degree of resolution, its effective (measurable) resolution is closer to 15 pixels per degree, or about half that of the ML2 in each direction. The HL2 is famous for its horrible image quality. I fully expect that the ML2 will blow away the HL2 in resolution and just about every other measurable aspect of image quality except for transparency to the real world (more on that later).
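For those who want to check the pixels-per-degree arithmetic, below is a quick sketch. The per-axis FOV numbers are my own rough placeholders (chosen to be consistent with a ~70° diagonal), not figures from Magic Leap’s slide.

```python
import math

# Pixels-per-degree sanity check. The FOV numbers are my rough placeholders,
# not official Magic Leap specs.
active_w, active_h = 1440, 1760      # ML2 active pixels from the presentation
fov_w, fov_h = 45.0, 55.0            # assumed horizontal/vertical FOV (degrees)

diag_px = math.hypot(active_w, active_h)   # ~2274 pixels on the diagonal
diag_deg = math.hypot(fov_w, fov_h)        # ~71 degrees on the diagonal
print(f"~{active_w / fov_w:.0f} ppd horizontal, "
      f"~{active_h / fov_h:.0f} ppd vertical, "
      f"~{diag_px / diag_deg:.0f} ppd diagonal")
# With these assumptions the ML2 lands near 32 ppd, roughly double the ~15 ppd
# effective resolution I measured on the Hololens 2.
```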

ML2’s Compact LCOS Optical Design

Lumus Maximus LCOS Engine

The presentation shows some other approaches to LCOS that they state (incorrectly) are limited in FOV. They are correct that most LCOS designs (left) use a color combining stage (X-cube or a series of dichroic mirrors) and a polarizing beam splitter. They also mentioned Himax’s Front-Lit. But I have seen several more innovative compact designs with large FOVs, including Lumus’s Maximus (right).

While it used a beamsplitter, the ML1 optical path (right) did not have an X-cube or other color combining optics. Instead, the ML1 had separate color paths to each of the waveguides. Because the ML1 supported dual focus planes, it required six LEDs and six waveguides (two sets of r, g, & b). Because the ML2 does not support dual focus planes, there is only one set of LEDs and waveguides.

Supporting only a single focus plane enables other simplifications. It should dramatically improve the image quality of the ML2 over the ML1.

The ML2 has a somewhat innovative approach to reducing the size of the overall optics using LCOS. They start with separate LED illumination for each of the separate color waveguides (this time only three), as with the ML1. Interestingly, they then send the illumination light through the waveguide and projection lenses to illuminate the LCOS device and avoid needing a beamsplitter.

The left-hand figure below shows the combined red, green, and blue light paths (I combined/overlaid three of the presentation slides). Magic Leap uses a “tricky” combination of circular polarizers to control the to-and-from light paths. This folded path with circular polarizers seems similar to the pancake optics used in a few of the newer, more compact VR headsets (and expected in the Meta Cambria). The design is fairly compact, as shown in the figure below-right from the presentation.
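The basic polarization trick behind such a folded path can be illustrated with a little Jones calculus: a double pass through a quarter-wave element (out to the reflective display and back) rotates linear polarization by 90 degrees, so a polarizing film can separate the outgoing illumination from the returning image light. This is a generic sketch of the principle, not Magic Leap’s actual design.

```python
import numpy as np

# Generic Jones-calculus sketch of a polarization-folded path (not ML2's actual
# design): a double pass through a quarter-wave plate at 45 degrees converts
# horizontal polarization to vertical, so a polarizer/beam-splitting film can
# separate the illumination path from the returning image path.

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

def qwp(theta):
    """Quarter-wave plate with its fast axis at angle theta."""
    return rot(-theta) @ np.diag([1, 1j]) @ rot(theta)

H = np.array([1, 0], dtype=complex)            # horizontally polarized light in
double_pass = qwp(np.pi / 4) @ qwp(np.pi / 4)  # reflection treated as identity here
print(np.round(double_pass @ H, 3))            # -> [0, 1]: vertical polarization out
```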

While eliminating the beam splitter removes some weight, it is not a large part of the overall weight. What is more important is that it reduces the size of the optics and lets the LCOS display be optically closer to the projection optics, which can help simplify said optics.

While many non-designers worry about battery life, a much bigger concern in a headset design is managing the heat from power consumption. ML2 claims to be >12x more efficient (#4 above-right) when factoring in FOV and eyebox, which is likely true. But then, the ML1 was pretty inefficient with its support of dual focus planes (there is no free lunch). The “single SKU” claim is due to the “reserved” pixels discussed previously.

2,000 Nits Peak Brightness, a Significant Accomplishment with a 70° FOV

Curtis said the ML1 was about 150 nits (which roughly agrees with my measurements), whereas the ML2 is supposed to go to 2,000 nits (see #5 above-right). Achieving 2,000 nits with a diffractive waveguide and a ~70° FOV is a significant accomplishment. For comparison, the HL2 claims 500 nits but only does so in the center of the HL2’s very non-uniform image, as I have measured. Lumus Maximus is expected to deliver >4,000 nits for a 50-degree FOV per Watt of LED power. ML has not yet specified the power consumption of the ML2 at 2,000 nits.

Having the LED light go through the projection optics to illuminate the LCOS and avoid needing a beamsplitter may seem strange, but ML is not the only company trying this approach. While at the conference, I saw Avegant’s new, very compact light engine. Avegant uses a waveguide structure to combine the r, g, & b LEDs and then, like the ML2, sends the light through the projection optics toward the LCOS (see right). Unlike the ML2, Avegant’s design is meant for one to three-layer waveguides that don’t have spatially separated input gratings. Avegant demonstrates their current prototype with a Dispelix single-layer waveguide (I plan to cover Avegant’s new designs in an upcoming article).

ML2’s Optical Stack

ML2’s optical stack (after the projector) is shown on the right. On the surface, it looks a lot like the stack of the Hololens 1, other than the dimmer layer. Nothing very surprising. The ML2 uses very high-index 2.0 glass (only recently available), which helps support a wider FOV than the Hololens 2 without resorting to the complex and image-compromising “butterfly” design of the Hololens 2.

  • Aside: The whole laser scanning display and butterfly waveguide of the Hololens 2 seem to be the very definition of “a research project that escaped the lab.” The ML2 also has some “researchers having fun” in it such as with the dimmer function (more on that in a bit).

The depolarizing film (far left) reduces the problems with viewing typically polarized LCD monitors. The “eyepiece” is the set of r, g, & b waveguides with protective covers and coatings.

22% Transmission, or Blocking 78% of the Real-World Light – Best Case

First, the elephant in the room is the 22% light transmission stated on Magic Leap’s slide. They block 78% of the real world’s light in the most transmissive state, effectively medium-dark sunglasses. And frankly, I think the 22% is likely a theoretical number that they probably can’t achieve.

A typical “high transmissivity” reflective polarizer will block about 60% of unpolarized light: 50% from the polarization itself and another ~10% in efficiency loss. The dimmer structure has another polarizer, which will lose ~10% more. The various films and structures of the dimmer should lose ~15% or perhaps more. The stack of three diffractive waveguides will typically lose 25-35%. Throw in another ~10% for all the other films, coatings, and lenses, and I get a best case of about 22%, and likely it is worse. I have gone into a lot more detail about the light-blocking problems with segmented dimming in Magic Leap 2 (Pt. 3): Soft Edge Occlusion, a Solution for Investors and Not Users.
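Here is the same stack-up as a quick back-of-the-envelope calculation, using my assumed loss numbers from the paragraph above (not measured values):

```python
# Back-of-the-envelope transmission budget for the ML2 optical stack, using my
# assumed loss figures from the text above (not measured values).
stack = [
    ("reflective polarizer (unpolarized light in)", 0.40),  # ~60% blocked
    ("second polarizer in the dimmer",              0.90),  # ~10% further loss
    ("other dimmer films/structures",               0.85),  # ~15% loss
    ("3-layer diffractive waveguide stack",         0.70),  # 25-35% loss, take ~30%
    ("remaining films, coatings, and lenses",       0.90),  # ~10% loss
]

transmission = 1.0
for name, t in stack:
    transmission *= t
    print(f"after {name:<45s} {transmission:5.1%}")
# Ends up around 19-21% depending on the waveguide assumption, in the same
# ballpark as (and slightly below) Magic Leap's 22% best-case number.
```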

As a practical matter, rooms are not lit five (5) times brighter than people want. Take the example Magic Leap’s CEO gave in her CNBC interview of an operating room being so bright: it is bright for a reason, and they will not want to make it five times brighter still.

Below are shown the ML1 with 85% blocking, Avegant’s prototype with highly transparent Dispelix waveguides (likely >85% transparent), a Lumus Maximus prototype with ~85% transparency, and Hololens 2 with 40% transparency. The ML2 is going to pass only about half as much real-world light as the Hololens 2 (22% vs. 40%) and roughly a quarter of what the Dispelix and Lumus designs pass.

Magic Leap didn’t say anything about “front projection,” the glowing-eyes effect seen with many AR glasses. Magic Leap One and Hololens 1 & 2 are notorious for large amounts of front projection (see the CNET ML1 picture in the upper-left corner above). So I am guessing that if Magic Leap isn’t mentioning it, it must not be good. My rule of thumb is that if a company does not talk about an obvious issue, the answer is likely bad. In contrast, with the Avegant/Dispelix glasses, I was hard-pressed to see any forward projection from any angle, and they are proud to say they are down to about 1% forward projection.

The “LED layer” is for the IR LEDs illuminating the eye. The ML2 requires inserts for vision correction, but these are different from those for the ML1 as they require cutouts (left-lower-corner) for the eye-tracking cameras. Quite a few people at the conference commented on the logistical nightmare that these inserts will cause users.

Diopter (focus adjusting) lens pair

As discussed in my prior article Magic Leap 2 for Enterprise, Really? Plus Another $500M, the ML1 built a diopter adjustment into the exit grating of the waveguide, as shown in the figure (right) from Bernard Kress’s (excellent) “Optical Architectures for Augmented-, Virtual-, and Mixed-Reality Headsets,” page 164. The book points out that ML1’s exit-grating diopter adjustment method tends to degrade image quality.

The ML2 uses a front and rear lens pair to adjust the focus point like Hololens 1 & 2 as I speculated in my prior article. The collimated light exiting most thin waveguides is focused near infinity. The lens nearest the eye moves the focus to a closer distance of about 1.5 to 2 meters. The lens on the world side of the waveguide compensates to keep the focus of the real world from changing. The lens method should help the ML2’s image quality over the ML1.
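The lens powers involved are small and easy to work out with standard thin-lens arithmetic; the sketch below is illustrative only, not ML’s actual prescription.

```python
# Thin-lens arithmetic for the focus-adjusting lens pair (illustrative only,
# not Magic Leap's actual prescription).

def eye_side_power(target_distance_m: float) -> float:
    """Lens power (diopters) that makes collimated light from the waveguide
    appear to come from the target distance."""
    return -1.0 / target_distance_m

for d in (1.5, 2.0):
    p_eye = eye_side_power(d)
    p_world = -p_eye  # the world-side lens cancels the eye-side power
    print(f"virtual image at {d} m: eye-side {p_eye:+.2f} D, world-side {p_world:+.2f} D")
# Roughly -0.67 D / +0.67 D at 1.5 m and -0.50 D / +0.50 D at 2 m, so to first
# order the focus of the real world is unchanged.
```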

Dynamic and Segmented (aka Soft Edge Occlusion) Dimming

In October 2021, I explained the inherent problems with Magic Leap’s segmented dimming, known in the industry as “soft edge occlusion.” I explained that the technique of using an LCD shutter to globally dim, or a pixelated array to dim areas, is well known and has not been used because it loses over 70% of the incoming real-world light, as discussed more briefly above.

Beyond the loss of light, segmented (soft edge) dimming is extremely imprecise/blurry; trying to dim a single pixel will also dim thousands of surrounding pixels. While segmented dimming might sound like the local dimming in LCD televisions, the segmented dimming approach can be orders of magnitude less precise. Additionally, the segmented dimming pixels will cause diffraction problems with the real world, as pointed out in Magic Leap’s patents (see the October article).
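A rough way to see just how blurry the dimming is: a dimmer cell sits only a couple of centimeters from the eye and is hugely out of focus, so its shadow is smeared over an angle of roughly (pupil diameter) / (distance to the dimmer). The numbers below are my assumptions, not Magic Leap’s.

```python
import math

# Rough penumbra estimate for an eye-side segmented dimmer. All numbers here
# are my assumptions, not Magic Leap's specs.
pupil_mm = 4.0         # assumed eye pupil diameter
dimmer_dist_mm = 20.0  # assumed distance from the pupil to the dimmer layer
ppd = 32.0             # assumed display pixels per degree (see earlier estimate)

blur_deg = math.degrees(pupil_mm / dimmer_dist_mm)  # angular width of the blur
blur_px = blur_deg * ppd
print(f"~{blur_deg:.0f} degrees of blur, ~{blur_px:.0f} display pixels across, "
      f"~{blur_px**2:,.0f} pixels in area")
# With these assumptions a single dimmer cell smears over ~11 degrees, i.e.,
# hundreds of display pixels across and well beyond the thousands of
# surrounding pixels mentioned above.
```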

Magic Leap had just one slide discussing the concept, without details and with just a few low-resolution pictures (below). Based on Magic Leap’s patents and my analysis, it likely will not work very well. Note that the dimming starts by blocking 80% of the real-world light.

Visual Comfort and VAC – Focus Planes Gone

Magic Leap Application Outlining VAC Issues

For nearly a decade, Magic Leap has been touting the issues with Vergence-Accommodation Conflict (VAC) and their solution of using multiple sets of waveguides with different focus planes. In his presentation, Curtis came right out and said that the ML2 had dropped the feature. In Magic Leap 2 for Enterprise, Really? Plus Another $500M in October 2021, I speculated that it had been dropped in favor of better image quality.

Curtis explained that there might be more important issues that affect visual comfort than VAC, as shown in the slide on the left. Curtis made the case that many of the problems are related to rendering issues rather than VAC and that many of these issues can be improved by more accurate eye tracking. Curtis explained that Magic Leap has improved eye-tracking and rendering.

A side effect of the much-improved eye tracking was the necessity of having cutouts so the cameras can have an unobstructed view of the user’s eyes (left).

The presentation also gave some interesting information on the need to correct binocular alignment due to even small mechanical movements. The ML2 includes sensors/cameras for detecting the bending of the headset so the binocular alignment can be corrected (below).
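To get a feel for why even tiny amounts of frame flex matter: at roughly 32 pixels per degree, a bend of just a few arc-minutes between the two displays is already a multi-pixel binocular error. The conversion below uses my numbers, not Magic Leap’s sensors or tolerances.

```python
# Toy conversion from a sensed frame deflection to a rendering correction.
# Illustrative numbers only, not Magic Leap's algorithm or tolerances.
PPD = 32.0  # assumed pixels per degree

def correction_px(deflection_arcmin: float) -> float:
    """Image shift (pixels) needed to cancel a given binocular misalignment."""
    return (deflection_arcmin / 60.0) * PPD

for arcmin in (2, 5, 10):
    print(f"{arcmin:>2} arcmin of bend -> shift the image by ~{correction_px(arcmin):.1f} px")
# Even a few arc-minutes of flex amounts to a 1-5 pixel binocular error, in the
# range generally considered noticeable/uncomfortable for stereo viewing.
```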

Image Quality – Hard to Tell

The presentation had one slide with “through the lens” pictures: three images much reduced to fit on a single 1080p slide, and I took the picture below of the projected slide with its attendant losses. All the images show severe vignetting (circular darkening at the edges). Still, it is impossible to know whether this was an intended effect, a problem with the camera not being inside the eyebox, or a problem with the ML2.

All three images in the slide have highly saturated colors, making it hard to tell whether the colors are accurate and whether they are shifting. I would prefer to see pictures of people and some pure white content. Interestingly, the text was solid green rather than white, which would hide any color shifting across the waveguide. These images only show that the ML2’s image quality is better than the horrible Hololens 2; it is impossible to compare them with displays and waveguides from other companies.

I do not doubt that the ML2 has vastly better image quality than the ML1 or Hololens 1 and 2, but that is a very low bar. We will have to wait until an objective analysis of the image quality can be made.

Conclusions

While it seems obvious that the ML2 will blow away the HL2 in terms of image quality and brightness, the HL2 sets a very low bar, and that is not important when the ML2 falls short on so many other aspects of the design, as I outlined in Magic Leap 2 for Enterprise, Really? Plus Another $500M. The key list of misses relative to the HL2 includes having a cord, not enough eye relief to support normal glasses, and the lack of a flip-up screen.

Blocking ~80% of the light is an unrecoverable mistake. As I have previously written, the ML2 looks like a product designed for the consumer market; when it was seen that it would be far too expensive for that market, it was recast as an “enterprise” device.

The more I think back on the presentation and other aspects not covered in this article, such as their ability to design and manufacture their own waveguides, the more I agree with the people who think the presentation was more of a “For Sale” sign.


18 Comments

  1. They “spent millions and could not solve the scan line problems.” Maybe they don’t have good enough engineers and should have hired Microvision for that?

    What is the reason for the “cord”? No included battery or computer, or just too high a power consumption compared to the Microsoft Hololens 2?

    • Microsoft hired a bunch of Microvision people for the Hololens 2, spent tens if not hundreds of millions, and the image looks like crap. LBS has been a big failure as a display device.

      The cord on the ML2 goes to the combined computer and battery pack like it did on the ML1. It was a design decision to make the headset lighter and look more like glasses. The ML2 is about double the display area, should have about 4 times the measurable resolution, and is more than 4x brighter than the Hololens 2.

  2. The whole industry is moving to dimmable or black color rendering of images using various methods. Magic Leap 2’s approach is just one such example of this trend. On that Zoom call you were on with Tilt 5 last fall, they showed their method using a black tablecloth as a background. In Miami, all the Lightform2 geeks are projecting images onto scrim fabrics in darkened rooms. It’s amazing how what was impossible just two years ago is now so commonplace that it’s not worth mentioning to newbies since they don’t know what came before (the marketing folks are still living in the past though so they need a refresher).

  3. Still not the hyped Fiber Scanning Display of course, since that’s impossible vaporware and the CEO ran away shamefully after lying to everyone and pocketing gullible investors’ cash.

    Magic Leap is irrelevant, I don’t know why they continue showing their lackluster tech.

  4. Nice observations Karl, just one thing – you can’t really compare luminance without including the eyebox (although of course nits are cd/m^2). As you know, the eyebox on HL2 is very big; the light is spread out over a larger area, so luminance drops off – hence the use of lasers to try and give more light (you might have already made a similar point in the past). A somewhat overlooked point here is *why*. Microsoft spoke to potential enterprise users (as did one or two other AR companies) and found that they want their workers to be able to wear their own glasses underneath the AR device – they absolutely do not want prescription inserts. ML2 does not offer this capability and is sticking with inserts, so it will be interesting to see how it will fare, especially given the lack of Azure and other enterprise features that come with HL2. With that in mind, integration with Google cloud might seem an apt fit, which would support the rumour mill. However, when you consider that up to 60% of a typical workforce wears glasses, this still seems like a gamble.

    • Yes, allowing daily glasses under the headset is a key advantage of HoloLens, and I agree the prescription lens insert could kill the use case of the ML2, even though I like their effort. Magic Leap just overlooked the fact that adding the insert is so cumbersome; no user likes extra steps during each use.

  5. Oops, sorry, I suddenly spotted that you do refer to the lack of eye relief for glasses… maybe I need a new prescription… (although it is worth pointing out that it was such a big requirement that it forced Microsoft down such an awkward design path / solution)

    • Many people/companies can design waveguides, and they all think they are the “best” or have some unique advantage. Magic Leap was developing waveguides before Dispelix was public. Magic Leap hired many people to design theirs and developed their own waveguide manufacturing capability. They saw it as a core strength. Also, understand that there are many different aspects to what makes a good waveguide.

      Unfortunately, there is insufficient information to objectively know if Dispelix’s waveguide is better than Magic Leap’s. I will say that I was impressed by the very low amount of “forward projection” (glowing eyes) with Dispelix’s waveguides in the Avegant glasses. Dispelix is also said to have 1,000 nits/lumen at 30 degrees, which also seems like a very good spec.

    • That is not necessarily true. At the AR/VR/MR conference in 2019, Dispelix said they could go to about 80-degree FOV with a 3-layer/color waveguide, which is wider than the ML2 with a 3-layer waveguide. Dispelix has single-layer waveguides that go up to 50-degrees.

      You also have to factor in the index of glass that is available. As the glassmakers push the index of the glass larger, it becomes possible to do wider FOVs.

  6. I would suggest inquiring – what they can offer NOW – not what is possible in theory or has been demonstrated in modelling software.

  7. I’d love to see you rank which waveguide companies in your opinion make the best waveguides. In my humble opinion it seems like Vuzix and Lumus are currently the top two in that regard, even though Lumus has been very quiet in the last 8 months or so.

    • Thanks for the video link.

      I think the bigger change was to the waveguide. Pictures of the latest military prototypes show that they changed the waveguides.

      They could probably easily stretch the width of the display by changing the static horizontal curved mirrors in the optics. https://kguttag.com/2020/07/17/hololens-2-display-evaluation-part-4-lbs-optics/.

      Rumors were that Hololens was looking at going back to LCOS for the non-military Hololens 3, but about the middle of last year, everything seemed to have stopped.
