Kevin Curtis (Curtis), Magic Leap's VP of Optical Engineering, gave the presentation "Unveiling Magic Leap 2's Advanced AR Platform and Revolutionary Optics" at the SPIE AR/VR/MR conference, and it created the most buzz of the whole conference. The presentation did a great job laying out the Magic Leap 2 (ML2) display and optics technology at a high level, along with some of the reasoning behind the decisions. I don't think they gave away any "secret sauce" that could not be easily figured out by tearing down a unit, but it was still a refreshing change.
There was some speculation (I have no sources) before and during the conference that Magic Leap is in the process of being sold (the name most floated is Google, as they are already an investor). Sometimes a very open presentation is given by a company trying to sell itself. Such presentations can also happen when a company simply wants to let the world know what it has accomplished. Or it could be that Magic Leap is trying to win back confidence. One thing is for sure: Magic Leap has gone from being one of the biggest "startups" to a comparatively small player in AR next to Meta, Apple, Google, Amazon, Microsoft, and Samsung.
I will be concentrating on the display and optics revealed in the presentation. This article is based on pictures I took (edited and enhanced) as a member of the press and on my memory of the talk. The SPIE should publish the whole video about a week from this writing, and I would highly recommend watching it.
Curtis explained the decision process for the display technology. While many think MicroLEDs may be the future, they were clearly not ready for a production color headset. Curtis stated that while laser scanning was in Magic Leap's "DNA" (see: Magic Leap Fiber Scanning Display (FSD) – "The Big Con" at the "Core"), despite spending millions, they could not meet their display requirements with laser scanning. See my articles on the many problems with the Hololens 2's laser beam scanning display.
After evaluating all the alternatives, Magic Leap decided LCOS was the best display technology for their application. As I plan to cover in other articles about this conference, LCOS is the display of choice for most new waveguide-based designs that need either color or higher resolution. Both Avegant and Digilens are making new design pushes with very small and very bright LCOS engines paired with waveguides. The "word on the street" is that Snap has an LCOS-based design in the works.
Curtis claimed there are advantages to having the display be taller than it is wide. As shown (right), the ML2 "active display" is 1440 pixels wide by 1760 pixels tall. They are holding 96 by 96 pixels in reserve for aligning the displays to the eyes.
Magic Leap One (ML1) didn't have these reserve pixels and instead required two models to cover different interpupillary distances. Some mistakenly thought the different ML1 models were based on head sizes. The presentation pointed out that the ML2 will have only one model for all users. These reserve pixels may also come in handy with other display-to-eye alignment issues as the eye moves (more on this later).
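The reserve-pixel arithmetic can be sketched as below. Note that treating the total panel as the active area plus the reserve, and the usable shift as half the reserve on each side, is my inference from the stated numbers, not a confirmed panel layout.

```python
# Sketch of the ML2 reserve-pixel arithmetic (my inference from the
# presentation's numbers; the actual panel layout is not confirmed).
ACTIVE_W, ACTIVE_H = 1440, 1760  # "active display" pixels per the presentation
RESERVE = 96                     # pixels held in reserve in each direction

total_w = ACTIVE_W + RESERVE     # assumed total panel width
total_h = ACTIVE_H + RESERVE     # assumed total panel height

# With the reserve split evenly around the active area, the image can be
# shifted up to RESERVE/2 pixels in any direction to align each display
# to the user's eye, which is how one model can cover all IPDs.
max_shift = RESERVE // 2

print(total_w, total_h, max_shift)
```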
While not mentioned by Magic Leap, the word at the conference is that ML is using an unannounced Omnivision LCOS device. It would not be a surprise, as the ML1 used an Omnivision device. The rumor mill seems to think that Snap may also be using Omnivision for their new headset, despite Snap's recent acquisition of Compound Photonics and their current prototype using TI's DLP. Most LCOS companies today are capable of making such a device with a 3.8-micron pixel pitch.
On the figure below, I have added estimates of the FOV along with the number of pixels in each direction. Both the FOV area and the number of pixels have doubled over the ML1, so the number of pixels per degree is about the same. I should note that while Hololens 2 falsely claims 47 pixels per degree of resolution, its effective (measurable) resolution is closer to 15 pixels per degree, or about half that of the ML2 in each direction. The HL2 is famous for its horrible image quality. I fully expect that the ML2 will blow away the HL2 in resolution and just about every other measurable aspect of image quality except for transparency to the real world (more on that later).
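The pixels-per-degree math works out roughly as follows. The FOV numbers here are my own estimates (approximately 45° horizontal by 55° vertical for the ML2), not Magic Leap's official specifications.

```python
# Rough pixels-per-degree check. The FOV values are my estimates,
# not official Magic Leap specifications.
def ppd(pixels: int, fov_deg: float) -> float:
    """Angular resolution in pixels per degree (flat approximation)."""
    return pixels / fov_deg

ml2_h = ppd(1440, 45)  # horizontal: ~32 ppd
ml2_v = ppd(1760, 55)  # vertical:   ~32 ppd

# Compare with the HL2's effective (measurable) ~15 ppd.
print(round(ml2_h), round(ml2_v))
```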
The presentation showed some other approaches to LCOS that they state (incorrectly) are limited in FOV. They are correct that most LCOS designs (left) use a color-combining stage (an X-cube or a series of dichroic mirrors) and a polarizing beam splitter. They also mentioned Himax's Front-Lit. But I have seen several more innovative compact designs with large FOVs, including Lumus's Maximus (right).
While it used a beamsplitter, the ML1 optical path (right) did not have an X-cube or other color combining optics. Instead, the ML1 had separate color paths to each of the waveguides. Because the ML1 supported dual focus planes, it required six LEDs and six waveguides (two sets of r, g, & b). Because the ML2 does not support dual focus planes, there is only one set of LEDs and waveguides.
Supporting only a single focus plane enables other simplifications. It should dramatically improve the image quality of the ML2 over the ML1.
The ML2 has a somewhat innovative approach to reducing the size of the overall optics using LCOS. They start with separate LED illumination for each of the separate color waveguides (this time only three), as with the ML1. Interestingly, they then send the illumination light through the waveguide and projection lenses to illuminate the LCOS device and avoid needing a beamsplitter.
The left-hand figure below shows the combined red, green, and blue light paths (I combined/overlaid three of the presentation slides). Magic Leap uses a "tricky" combination of circular polarizers to separate the illumination and projection light paths. This folded path with circular polarizers seems similar to the pancake optics used in a few of the newer, more compact VR headsets (and expected in the Meta Cambria). The design is fairly compact, as shown in the figure below-right from the presentation.
While eliminating the beam splitter removes some weight, it is not a large part of the overall weight. What is more important is that it reduces the size of the optics and lets the LCOS display be optically closer to the projection optics, which can help simplify said optics.
While many non-designers worry about battery life, a much bigger concern in a headset design is managing the heat from power consumption. ML2 claims to be >12x more efficient (#4 above-right) when factoring in FOV and eyebox, which is likely true. But then the ML1 was pretty inefficient with its support of dual focus planes (there is no free lunch). The "single SKU" is due to the "reserved" pixels discussed previously.
Curtis said the ML1 was about 150 nits (which roughly agrees with my measurements), whereas the ML2 is supposed to go to 2,000 nits (see #5 above-right). Achieving 2,000 nits with a diffractive waveguide and an ~70° FOV is a significant accomplishment. For comparison, the HL2 claims 500 nits but only does so in the center of the HL2's very non-uniform image, as I have measured it. Lumus Maximus is expected to deliver >4,000 nits for a 50-degree FOV per Watt of LED power. ML has not yet specified the power consumption of the ML2 at 2,000 nits.
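For a sense of scale, the brightness numbers quoted above work out as follows. The ML2 and HL2 figures are the companies' claims; the ML1 figure roughly matches my own measurements.

```python
# Ratios from the brightness numbers quoted above. ML2 and HL2 values
# are the companies' claims; the ML1 value roughly matches my measurements.
ML1_NITS, ML2_NITS, HL2_NITS = 150, 2000, 500

ml2_vs_ml1 = ML2_NITS / ML1_NITS  # ML2 claim is ~13x the ML1
ml2_vs_hl2 = ML2_NITS / HL2_NITS  # and 4x the HL2's (center-only) claim

print(round(ml2_vs_ml1, 1), round(ml2_vs_hl2, 1))
```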
Having the LED light go through the projection optics to illuminate the LCOS and avoid needing a beamsplitter may seem strange, but ML is not the only company trying this approach. While at the conference, I saw Avegant's new, very compact light engine. Avegant uses a waveguide structure to combine the r, g, & b LEDs and then, like the ML2, sends the light through the projection optics toward the LCOS (see right). Unlike the ML2, Avegant is designed for one- to three-layer waveguides that don't have spatially separated input gratings. Avegant demonstrates their current prototype with a single-layer Dispelix waveguide (I plan to cover Avegant's new designs in an upcoming article).
ML2's optical stack (after the projector) is shown on the right. On the surface, it looks a lot like the stack of the Hololens 1, other than the dimmer layer. Nothing very surprising. The ML2 uses very high-index (2.0) glass, only recently available, which helps support a wider FOV than the Hololens 2 without resorting to the HL2's complex and image-compromising "butterfly" design.
The depolarizing film (far left) reduces the problems with viewing typically polarized LCD monitors. The “eyepiece” is the set of r, g, & b waveguides with protective covers and coatings.
First, the elephant in the room is the 22% light transmission stated on Magic Leap's slide. They block 78% of the real world's light in its most transmissive state, making the ML2 effectively medium-dark sunglasses. And frankly, I think the 22% is likely a theoretical number that they probably can't achieve.
A typical "high transmissivity" reflective polarizer will block about 60% of unpolarized light: 50% for polarization plus another ~10% in efficiency loss. The dimmer structure has another polarizer, which will lose ~10% more. The various films and structures of the dimmer should lose ~15% or perhaps more. The stack of three diffractive waveguides will typically lose 25-35%. Throw in another ~10% for all the other films, coatings, and lenses, and I get a best case of about 22%, and likely it is worse. I have gone into a lot more detail about the light-blocking problems with segmented dimming in Magic Leap 2 (Pt. 3): Soft Edge Occlusion, a Solution for Investors and Not Users.
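Multiplying out these per-layer transmission estimates confirms the ~22% best case. All of the values below are my rough estimates, not measured data.

```python
# Multiplying out the estimated per-layer transmissions from the paragraph
# above (all values are my rough estimates, not measured data).
layer_transmission = {
    "reflective polarizer (polarization)": 0.50,  # 50% loss to polarization
    "reflective polarizer (efficiency)":   0.90,  # ~10% efficiency loss
    "dimmer's second polarizer":           0.90,  # ~10% loss
    "dimmer films and structures":         0.85,  # ~15% loss
    "3-waveguide diffractive stack":       0.70,  # ~30% loss (25-35% range)
    "other films, coatings, and lenses":   0.90,  # ~10% loss
}

transmission = 1.0
for t in layer_transmission.values():
    transmission *= t

print(f"{transmission:.1%}")  # ~21.7%, i.e. about the 22% best case
```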
As a practical matter, rooms are not lit five (5) times brighter than people want. In the operating-room example Magic Leap's CEO gave in her CNBC interview, the room is bright for a reason, and they will not want to make it five times brighter still.
Below are shown the ML1 with 85% light blocking, Avegant's prototype with highly transparent Dispelix waveguides (likely >85% transparent), a Lumus Maximus prototype with about ~85% transparency, and the Hololens 2 with 40% transparency. The ML2 will pass only about half as much real-world light as the Hololens 2 and about one-quarter as much as the Dispelix and Lumus waveguides.
Magic Leap didn't say anything about "front projection," the glowing-eyes effect seen with many AR glasses. The Magic Leap One and Hololens 1 & 2 are notorious for large amounts of front projection (see the CNET ML1 picture in the upper-left corner above). My rule of thumb is that if a company does not talk about an obvious issue, the answer is likely bad, so if Magic Leap isn't mentioning it, it probably isn't good. In contrast, with the Avegant/Dispelix glasses, I was hard-pressed to see any forward projection from any angle, and they are proud to say they are down to about 1% forward projection.
The "LED layer" is for the IR LEDs illuminating the eye. The ML2 requires inserts for vision correction, but these are different from those for the ML1 as they require cutouts (left-lower-corner) for the eye-tracking cameras. Quite a few people at the conference commented on the logistical nightmare that these inserts will cause users.
As discussed in my prior article Magic Leap 2 for Enterprise, Really? Plus Another $500M, the ML1 built a diopter adjustment into the exit grating of the waveguide, as shown in the figure (right) from page 164 of Bernard Kress's (excellent) "Optical Architectures for Augmented-, Virtual-, and Mixed-Reality Headsets." The book points out that ML1's exit-grating diopter adjustment method tends to degrade the image quality.
The ML2 uses a front and rear lens pair to adjust the focus point like Hololens 1 & 2 as I speculated in my prior article. The collimated light exiting most thin waveguides is focused near infinity. The lens nearest the eye moves the focus to a closer distance of about 1.5 to 2 meters. The lens on the world side of the waveguide compensates to keep the focus of the real world from changing. The lens method should help the ML2’s image quality over the ML1.
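The paired-lens focus trick can be expressed in diopters. The specific values below are my illustration based on the ~1.5 to 2 meter focus distance mentioned above; Magic Leap has not published the actual lens powers.

```python
# Illustration of the paired-lens focus adjustment. The diopter values
# are my illustration; Magic Leap has not published actual lens powers.
def diopters_for_focus(distance_m: float) -> float:
    """Lens power (diopters) needed to make collimated, infinity-focused
    light from the waveguide appear at the given distance."""
    return -1.0 / distance_m

eye_side = diopters_for_focus(2.0)  # -0.5 D pulls the virtual image to 2 m
world_side = -eye_side              # +0.5 D cancels the shift for real-world light

print(eye_side, world_side)
```

The world-side lens having equal and opposite power is why the real world stays in focus while the virtual image moves closer.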
In October 2021, I explained the inherent problems with Magic Leap's segmented dimming, known in the industry as "soft edge occlusion." I explained that the technique of using an LCD shutter to dim globally, or a pixelated array to dim areas, is both well known and has not been used because it loses over 70% of the incoming real-world light, as more briefly discussed above.
Beyond the loss of light, the segmented (soft edge) dimming is extremely imprecise/blurry, and trying to dim a single pixel could also dim thousands of surrounding pixels. While segmented dimming might sound like the local dimming in LCD televisions, the segmented dimming approach can be orders of magnitude less precise. Additionally, the segmented dimming pixels will cause diffraction problems with the real world, as pointed out in Magic Leap patents (see the October article).
Magic Leap had just one slide discussing the concept, without details and with just a few low-resolution pictures (below). Based on Magic Leap's patents and my analysis, it likely will not work very well. Note that the dimming starts by blocking 80% of the real-world light.
For nearly a decade, Magic Leap has been touting the issues with Vergence Accommodation Conflict and their solution of using multiple sets of waveguides with different focus planes. In his presentation, Curtis came right out and said that ML2 had dropped the feature. In Magic Leap 2 for Enterprise, Really? Plus, Another $500M in October 2021, I speculated that it had been dropped in favor of better image quality.
Curtis explained that there might be more important issues that affect visual comfort than VAC, as shown in the slide on the left. Curtis made the case that many of the problems are related to rendering issues rather than VAC and that many of these issues can be improved by more accurate eye tracking. Curtis explained that Magic Leap has improved eye-tracking and rendering.
A side effect of the much-improved eye tracking was the necessity of cutouts so the cameras can have an unobstructed view of the user's eyes (left).
The presentation also gave some interesting information on the need to correct binocular alignment due to even small mechanical movements. The ML2 includes sensors/cameras for detecting the bending of the headset so the binocular alignment can be corrected (below).
The presentation had one slide with "through the lens" pictures: three images much reduced to fit on a single 1080p-resolution slide. I took the picture below of the projected slide, with its attendant losses. All the images show severe vignetting (circular darkening at the edges). Still, it is impossible to know whether this was an intended effect, a problem with the camera not being inside the eyebox, or a problem with the ML2.
All three images in the slide have highly saturated colors, making it hard to tell whether the colors are accurate and whether they shift across the image. I would prefer to see pictures of people and some pure white content. Interestingly, the text was solid green rather than white, which would hide any color shifting across the waveguide. These images only show that the ML2's image quality is better than the horrible Hololens 2's; it is impossible to compare them with displays and waveguides from other companies.
I do not doubt that the ML2 has vastly better image quality than the ML1 or Hololens 1 and 2, but that is a very low bar. We will have to wait and see until an objective analysis can be made of the image quality.
While it seems obvious that the ML2 will blow away the HL2 in terms of image quality and brightness, the HL2 sets a very low bar, and beating it matters little when the ML2 misses on so many other aspects of the design, as I outlined in Magic Leap 2 for Enterprise? Plus Another $500M. The key list of misses relative to the HL2 includes having a cord, not enough eye relief to support normal glasses, and the lack of a flip-up screen.
Blocking ~80% of the light is an unrecoverable mistake. As I have previously written, the ML2 looks like a product designed for the consumer market; when it was seen that it would be far too expensive for that market, it was recast as an "enterprise" device.
The more I think back on the presentation and other aspects not covered in this article, such as Magic Leap's ability to design and manufacture their own waveguides, the more I agree with the people who think the presentation was more of a "For Sale" sign.