Magic Leap 2 (Pt. 2): Possible Answers from Patent Applications

Introduction – “Surely, they were not so desperate” — but they appear to be!

This article is a follow-up to my last article, Magic Leap 2 for Enterprise, Really? Plus Another $500M. When I first saw that Magic Leap was making a big deal out of dimming, I did a very quick search of the latest Magic Leap patent applications. The first thing that turned up was Magic Leap Patent Application 20210141229 (‘229), published on May 13, 2021, and titled “AMBIENT LIGHT MANAGEMENT SYSTEMS AND METHODS FOR WEARABLE DEVICES.” My first thought was, “surely they are not so desperate as to put an LCD panel for area dimming in front of the waveguide.” Putting an LCD in front of AR glasses is a very old and common idea and fraught with problems. Companies file patents all the time on these types of things just in case they become possible in the future. So I proceeded to write the article based on the CNBC interview and Magic Leap CEO Peggy Johnson’s Letter.

But after writing the first article, I decided to look at Magic Leap’s recent applications on dimming more seriously and found three more related applications (US20210048676, US20210003872, and US20200074724) with further details. The level of specificity in the patents, combined with Magic Leap putting dimming front and center in the CNBC interview, suggests that Magic Leap is indeed putting a pixelated LCD dimmer in the ML2. It turns out they were that desperate.😁 Explaining why pixelated LCD dimming is problematic will take some time and will be the subject of my next article on Magic Leap.

Following the trail of dimming-related patents, I came across some tidbits about other things Magic Leap might be doing that seem interesting, different, or out of place. I have not done an exhaustive search, as Magic Leap has filed 631 patent applications in just the last three years (and nobody has paid me to do it), so I may have missed something else that is important (let me know).

Is the FOV Taller Vertically? – Maybe Not

Magic Leap CEO Johnson’s article includes a picture comparing the FOV of the ML2 to the ML1. On the drawing below, I have added Hololens 1 and 2, plus the FOV as shown in the application (in green). The released drawing shows the FOV to be much taller than it is wide, which was different from every other bi-ocular headset and did not make technical sense. But application US20210141229 (filed in November 2020) has a figure showing an FOV of 55 degrees wide by 45 degrees tall.

At the same time, I have not come across any patent applications that talk about the FOV being taller than it is wide. It does make me wonder whether Magic Leap’s drawing was misdirection or was simply rotated to make it more visually obvious that the FOV is larger than the ML1’s. There is no confirmation, but it is a clue that the ML2’s FOV may be different from what Magic Leap released.

Single Waveguide with Front and Back “Lens Assemblies”

Last time, I speculated that with the wider FOV and “better image quality,” Magic Leap had likely gone to a single-focus-depth waveguide rather than the dual ones of the ML1. Waveguides require the image entering them to be focused at infinity, and thus, if not modified, the light will exit the waveguide focused at infinity. The ML1 had two sets of waveguides, one with an exit grating that made the light appear to come from about 2 meters away and a second set that made the light appear to come from about 0.5 meters away, to reduce what is known as “Vergence Accommodation Conflict” (VAC).
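
To put those focus distances in optical terms, optical power in diopters is the reciprocal of the focus distance in meters (a quick worked conversion using the distances above; the arithmetic is mine, not a number quoted by Magic Leap):

$$ P = \frac{1}{d}: \qquad \infty \rightarrow 0\,\mathrm{D}, \qquad 2\,\mathrm{m} \rightarrow 0.5\,\mathrm{D}, \qquad 0.5\,\mathrm{m} \rightarrow 2\,\mathrm{D} $$

So the ML1’s two sets of waveguides offered virtual content at roughly 0.5 D or 2 D of accommodation, while a single unmodified waveguide leaves everything at 0 D (infinity).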

Hololens 1 and 2 use a fixed lens between the waveguide and the eye to move the apparent focus to about 2 meters. Hololens then adds a second lens in front of the waveguide to cancel out the effect of the first lens so the focus and magnification of the real world do not change. Most of the Magic Leap dimming applications show a “Back Lens Assembly” (BLA) and a “Front Lens Assembly” (FLA) configured like the Hololens. Below, I am showing Fig. 15.10 from Bernard Kress’s “Optical Architectures for Augmented-, Virtual-, and Mixed-Reality Headsets” (left) compared to the ‘229 application’s Fig. 27.
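
The pairing works by simple thin-lens power addition (a sketch of the principle, ignoring the spacing between the assemblies; not a figure from Microsoft or Magic Leap): the back lens adds its power to everything passing through it, so the front lens must carry the opposite power for the real world to pass through unchanged, while the virtual image, which only goes through the back lens, gets refocused.

$$ P_{\mathrm{FLA}} + P_{\mathrm{BLA}} = 0 \;\Rightarrow\; P_{\mathrm{FLA}} = -P_{\mathrm{BLA}}, \qquad d_{\mathrm{virtual}} \approx \frac{1}{\left|P_{\mathrm{BLA}}\right|} $$

For the roughly 2-meter apparent focus mentioned above, that would imply a back lens of about -0.5 D and a front lens of about +0.5 D.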

Variable Focus (“Adaptive”) and Vergence-Accommodation (?)

But most of the patents related to dimming show “adaptive” lenses, a term they use synonymously with variable-focus lenses. The variable focusing would be used to address Vergence-Accommodation Conflict (VAC) with a single waveguide. Quoting Application 2021/0141229 (with my bold emphasis):

A variable focal element 2704 may comprise a depth plane variation/switching element between the eye 2700 and the eyepiece 2702 to act on the virtual display. In some embodiments, variable focus element 2704 is a back lens assembly (BLA) 2706. The BLA also invariably acts on the world light and therefore a front lens assembly (FLA) 2708 is added to cancel the impact on the world display

The ‘229 application, in later figures, describes the same type of BLA and FLA elements as being “adaptive” without saying they are variable focusing. In describing Fig. 28, the application calls the same structures an “adaptive BLA” and an “adaptive FLA”:

The OST-HMD may also include an eyepiece 2806, an adaptive BLA 2808, an adaptive FLA 2810, and an EPE 2840, to display light 2836 to the user’s eye 2800, as described herein.

What seems to be missing is an “out” for what happens if they don’t have an adaptive BLA and FLA combination. Frankly, from what I know to date about what it takes to make variable focus lenses, I’m skeptical that the ML2 has them. So this leaves three options:

  1. Magic Leap, while failing with the ML1, under great financial pressure, and laying off about 2/3rds of their workforce (including the people working on the lenses), developed adaptive lenses that are vastly better than anyone has done before (more on what it appears they were trying to make below).
  2. Magic Leap tried to develop adaptive lenses but abandoned them to get the ML2 out the door. They would then have either a fixed set of lenses and a single focus distance like Hololens, or no lens at all and a focus at near infinity. Optionally, they could have models with different focus distances and/or clip-in or clip-on optics.
  3. They could be sticking with the ML1 type dual waveguides and no BLA or FLA lenses.

Of the three options above, I tend to think some variation of #2 is most likely. Below I will go through the options for supporting adaptive/focusing optics and why I think #1 is unlikely.

Transmissive Variable Focus Devices

Magic Leap has discussed variable focus elements (VFEs) in some of its earliest applications, as covered in this blog’s 2016 article Magic Leap – Separating Magic and Reality. There are several known ways to make a VFE. The most common techniques are moving lenses, shape-changing lenses, and liquid crystal lenses. Other techniques, such as deformable mirrors, can act as VFEs, but they would not work between the waveguide and the eye.

Motor Moving One or More Lenses

The most common way to support variable focus is to use a motor to move one or more lenses, like a camera’s focusing mechanism, but this would be big and heavy given the size of the lenses required to match the size of the waveguide.

Optical Fluid with Variable Pressure on a Membrane

ADLens and Optotune, among others, vary the pressure of fluid between flexible membranes to vary the focus. Optotune (left) is well known as a supplier of liquid adjustable lenses. ADLens has eyeglass products that use a manually operated “pump” and has also been working on electrically driven variable focus liquid lenses such as patent US11,086,132 (right).

As discussed above, there is the need for an FLA to exactly compensate for the BLA, with membranes that would have to respond exactly oppositely. While a liquid-filled lens would be highly transmissive, the accuracy, durability, and bulk associated with the fluid pumping are drawbacks. I have also found no evidence of fluid lenses in Magic Leap’s recent patent filings.

Liquid Crystal Electronic Lens

The fact that liquid crystals can make variable focus lenses is well known in the industry. I first saw a working device with Deep Optics being used with a Lumus waveguide at CES 2018 (see: CES 2018 Part 1 – AR Overview). This blog has also covered Facebook/Oculus’s 2017 “Focus Surfaces” R&D effort that used a phase-modulated liquid crystal to produce variable focus.

Most pertinent here is that Deep Optics has a flat, transparent liquid crystal lens (pictured below). Deep Optics has started selling 32°N polarized sunglasses with electrically controlled focus (shown below left and demonstrated in a short video).

Quoting from the DeepOptics website:

“DEEP OPTICS Pixelated Liquid Crystal (LC) lenses:  The LC layer is split into millions of tiny pixels, capable of rotation of the LC molecules at every point across the panel. This creates an unlimited number of dynamic, high-quality lenses that can be changed at any moment.

With only one physical lens, different optical lenses can be realized, simply by driving different voltages

IPD: The distance between centers of active lenses can be controlled and changed according to the user’s inter-pupilary-distance (IPD)”

Something extremely interesting about Deep Optics LC technology is that the LC optics only act on one polarization of light. If the incoming light is polarized in one direction and the display light is polarized the opposite way, the lens will only change the focus of the display’s image but not the real world.

There is no need to undo the correction for the real world with Deep Optics’ technology. As I am about to show, Magic Leap has many patent applications that also use LC lenses, but they require optical correction for the real world. Also, several Magic Leap applications show using multiple cells to achieve different degrees of focus change, whereas Deep Optics can control the amount via voltage.

Like the Magic Leap applications, the Deep Optics approach requires the real-world light to be polarized. Generally, this results in about a 60% loss of light (50% due to polarization plus about 10% in other losses). As will be discussed, the approaches in the Magic Leap applications would block much more light.
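
For those wondering where the roughly 60% figure comes from, it is just the product of the two losses (my arithmetic, using the nominal numbers above):

$$ T \approx 0.5 \times 0.9 = 0.45 $$

That is, roughly 55% of the real-world light is lost before any further losses, which is where the “about 60%” comes from.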

Magic Leap LC Switchable Lens Patent Applications

Magic Leap has been looking at liquid crystal (LC) switchable optics for VAC. Below are some selected figures from Magic Leap patent applications 2021/0231986, 2021/0328556, 2021/0041703, and 2021/0132394, all filed in 2019. Unlike the Deep Optics technology, which only modifies the virtual image, the Magic Leap applications have both an LC switchable “front” lens (FLA) and an LC switchable “back” lens (BLA).

These LC-type lenses require the light to be polarized for them to work. Assuming they sit after the dimmer, which also requires polarized light, the light is likely already polarized. But I would be concerned about whether the waveguide and other optical layers have at least partially depolarized the light.

Patent application US2021/0132394 shows using multiple layers for each of the BLA and FLA adjustable lenses to select different amounts of focus change. It also shows the option of combining switchable lenses with traditional optical lenses.

Finally, the “Kitchen Sink” application 2020/0201026 includes two waveguides and two dimmers with front and back adaptive lenses. This structure has so many layers, with so much happening, that only a very small amount of real-world light, I would guess less than 10%, would make it to the eye even with both dimmers in their most transmissive state.

Light Losses – Death by Many Cuts

Each LC cell with glass, LC, alignment material, two ITO electrodes, and other films/layers will block about 10% of the light. Most of the Magic Leap patents show using many cells for each adaptable BLA and FLA structure, not to mention dimming. So a single BLA or FLA might block between 10% and 40% of the light, assuming the light is already polarized.

Typically, a “high throughput” polarizer (as opposed to one designed for “high contrast”) will block about 60% of unpolarized light when polarizing it (50% for polarization plus more than 10% of the desired polarization). When passing pre-polarized light, the polarizer will still block about 10%.

Multiplying 40% (100% minus the polarizer’s 60%) by a series of ~90% transmissions (100% minus ~10% for each of the other components) results in a very low percentage of the light getting through.
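
To make the “very low percentage” concrete, here is a minimal sketch of the stack-up arithmetic (Python; the layer counts and the ~40% polarizer / ~90%-per-cell transmissions are my illustrative assumptions based on the numbers above, not Magic Leap’s actual stack):

```python
# Rough see-through transmission for a polarizer followed by a stack of LC cells.
# All values are illustrative assumptions, not measured Magic Leap numbers.

def stack_transmission(num_lc_cells, polarizer_t=0.40, cell_t=0.90):
    """Polarizer transmission (vs. unpolarized light) times ~90% per LC cell."""
    t = polarizer_t
    for _ in range(num_lc_cells):
        t *= cell_t
    return t

# Example stacks: e.g., a dimmer cell plus multi-cell adaptive BLA and FLA assemblies.
for cells in (3, 5, 7):
    print(f"{cells} LC cells: ~{stack_transmission(cells):.0%} of real-world light gets through")
```

Add the waveguide layers, a second dimmer, and the other films on top of that, and it is easy to see how the “Kitchen Sink” configuration could end up below the ~10% I guessed above.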

Conclusion and Next Time a Deeper Dive Into Dimming

Magic Leap has made it clear that their device has some form of dynamic dimming, and there is a lot of evidence that it is pixelated dimming. Next time, I will detail the many patent applications I have found in this area.

As for whether the FOV is taller than it is wide: it is more likely that the ML2’s FOV is taller, as Magic Leap has shown. But the released drawing was only an artist’s conception, and there is some evidence that the FOV might be wider than it is tall. To put a number on it, I would say there is at least a 30% chance that it is wider.

Magic Leap has seriously looked at making switchable/variable-focus optics/lenses using liquid crystals. As addressing VAC has been presented by the company since 2013 as a reason for it to exist, it was important for Magic Leap to find a way to support it on the ML2. While the applications prove that significant time and money were spent, none of what I have seen looks practical, particularly when combined with LC-based pixelated dimming. I think there probably was an “ideology meets reality” moment in the development of the ML2.

I also think that Magic Leap would have made a point about VAC in their recent interview if they were still addressing it. As I wrote above, they may have some add-on approaches, but I doubt they are doing it with adaptive lenses.

Karl Guttag

7 Comments

  1. Hi Karl, it’s been hard to find an answer for this, but this post made me more curious. Why are waveguides so inefficient? I’ve read elsewhere that it’s the in-coupling. I notice you said light has to be coupled in at infinity. Is that why? LEDs have a very Lambertian emission. But if so, how do lasers perform then? What exactly makes waveguides of all types such inefficient combiners? Love your blog and keep up the great work.

    • Those are great questions and would take quite a while to explain. I keep wanting to write it up, and I will try and give a short version below:

      Both LCOS and DLP in AR headsets start with very small LEDs (sub-millimeter). Thus, even though they are roughly Lambertian emitters, the etendue (a function of the emitting area and the angles of the light emitted) is low enough to get decent coupling efficiency. But if you use, say, an OLED or even a MicroLED display, you get into very serious coupling problems, as they emit from the whole area of the display, which is many millimeters in size, making the etendue much larger. They can put microlenses over the pixels to help, but the area is a killer for coupling efficiency (a rough etendue calculation follows at the end of this reply).

      The next big loss comes from the ratio of the in-coupling area to the out-coupling area. The area difference is typically on the order of 50 to 1.

      Then you have the diffraction order losses, which can cost another factor of roughly 3x to 10x of the light. Note that on a Hololens headset, the waveguide projects almost as much light forward as it does to the eye.

      Lumus’s reflective waveguides are typically 3x to 10x more efficient but still lose about 95% of the light with the semi-reflective coating.

      Microsoft, as we have seen with Hololens 2 (see: https://kguttag.com/2020/07/17/hololens-2-display-evaluation-part-4-lbs-optics/), “solved” the etendue issue by using lasers, but then created a whole bunch of other problems. They have to “pupil expand” the laser image to work with the waveguides, which is either very inefficient or takes a lot of optics. Laser scanning itself is very power inefficient because the lasers have to be turned on and off in a few nanoseconds, which causes massive power surges. They never turn the lasers fully “off” but only down to a “sub-threshold” level because it would take too long to turn on from being fully off. Microsoft just traded one problem for bigger problems by going to laser scanning. Their scanning is about 4x too slow to support the resolution they claim (see my prediction based on the math: https://kguttag.com/2019/02/27/hololens-2-first-impressions-good-ergonomics-but-the-lbs-resolution-math-fails/ and the images: https://kguttag.com/2020/07/08/hololens-2-display-evaluation-part-2-comparison-to-hololens-1/)

      Simply put, AR is really hard to do.
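
      To put rough numbers on the etendue point above (my illustrative figures, not measured values): for an emitter of area A radiating into a cone of half-angle θ, the etendue is approximately

      $$ G \approx \pi A \sin^2\theta $$

      so going from a sub-millimeter illumination LED to a self-emitting panel several millimeters on a side, at similar emission angles, increases the etendue by roughly the ratio of the areas, easily 100x or more. Any light beyond what the in-coupling optics can accept is simply thrown away.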

  2. […] Some optics are required to couple the laser scanning into a waveguide, making display engine optics bigger than direct laser scanning designs. The waveguide designs are also no longer “Maxwellian” or focus-free, and they will focus at infinity unless other optics are added on both sides of the waveguide (see: Single Waveguide with Front and Back “Lens Assemblies”). […]
