Magic Leap 2 (Pt. 3): Soft Edge Occlusion, a Solution for Investors and Not Users

Introduction

In my last article, Magic Leap 2 (Pt. 2): Possible Answers from Patent Applications, I wrote that I had found four patent applications by Magic Leap discussing pixelated dimming: US20210141229 (‘229), US20210048676 (‘676), US20210003872 (‘872), and US20200074724 (‘724). As I wrote in the first article in this series, Magic Leap 2 for Enterprise, Really? Plus, Another $500M, Magic Leap’s CEO Peggy Johnson made a big deal out of electronic dimming, both in her CNBC interview and in her op-ed article.

This article will dig into how ML2’s pixelated dimming works and the many problems with the concept. As I go through explaining ML2’s dimming concept, I will jump between figures from the four patents cited above as some show various concepts and problems better than others.

As shown below, the dimming method Magic Leap appears to be using is so impractical that one has to ask why they did it. I will try to answer this question in the conclusion (and it is hinted at in the title).

Soft Edge Occlusion (Pixelated Dimming)

It is a very common idea to put a pixelated liquid crystal display/shutter on the front of AR glasses to dim ambient light. It is so common an idea that it even has a name: soft-edge occlusion. Quoting from Bernard Kress’s book (a concise encyclopedia of AR/VR/MR), “Optical Architectures for Augmented-, Virtual-, and Mixed-Reality Headsets,” page 215, Section 19.3, Pixelated Dimming, or “Soft-Edge Occlusion”:

While hard-edge pixel occlusion needs be processed over a focused aerial image, soft-edge occlusion can be done over a defocused image, for example, through a pixelated dimming panel on a visor. Such pixelated dimmers can be integrated as LC layers, either as polarization dimmers (only acting on one polarization, from 45% down to 0%) or as an amplitude LC dimmer, based on dyed LC layers (from 50% down to 5% dimming, typically).

There are good reasons why nobody has made it to market with a product (at least that I have seen) despite the broad concept being widely known. While it may seem I am picking on Magic Leap, I’m simply using their patents as a specific example to point out the major problems with pixelated dimming.

Hard-edge occlusion means blocking light on an image pixel by image pixel level. While hard-edge occlusion is trivial for pass-through AR (VR with cameras), it becomes infinitely complex to solve the general case with optical AR. I discussed some of these issues in Hard Edge (Pixel) Occlusion – Everyone Forgets About Focus, including concepts from Arizona State University licensed by Magic Leap.

Interestingly, Magic Leap applications filed as far back as 2015 (ex., US 2015/0178939) discuss (impractical) soft-edge and hard-edge occlusion concepts. It seems old concepts live on at Magic Leap.

In theory, soft-edge occlusion is used to selectively dim the real world in large areas so the virtual image will stand out without dimming the whole of the real world. Hard-edge occlusion can further let virtual objects look like they are in front of or behind objects in the real world.

Magic Leap’s Soft-Edge Occlusion Basic Concept

The basic idea seems simple: put “dimming pixels,” in the form of a transmissive LCD panel, in front of the waveguide to block light from the real world. But as we will see, if it were this easy, it would have been done many times by now.

Figure 7B from the ‘676 application (below right) shows more detail of the structure of the pixel dimming array. The structure uses a common polarization-based LC (ex., twisted nematic). Working from the outside (bottom of Fig. 7B), the real-world light is first polarized. The outer retarder, typically a quarter waveplate, slightly retards (in layman’s terms, rotates) the light for the best effect with the alignment of the LC. Then there is the outer glass or plastic forming the LC cell. The LC follows a common electrode, and the inner glass has the pixelated electrodes and thin-film transistors for controlling each pixel electrode formed on it. The inner retarder “rotates” the polarization from the LC so it will pass through the inner polarizer. Likely, when the LC is “off,” they would want the stack to be at its most transmissive, with the retarders tweaking the polarization into and out of the LC to improve transmission. In common display applications, retarders are instead used to ensure that light is blocked to improve contrast, so the blocking state is most important.

The dimming pixels shown in Fig. 6B are gigantic compared to display pixels, although they may not be to scale. They are on the order of 100,000 times or more the size of a virtual pixel.

Problems Viewing LCD Displays

[Image: operating room, St. Mary’s Health System, Maine]

LCDs, the dominant display technology in everything from computer monitors to most cell phones and most equipment with color displays, output polarized light (OLEDs typically don’t output polarized light). If AR glasses have a polarizer in the optics between the eye and the real world, they will dim and color-shift the output of these LCD monitors to a greater or lesser extent.

The example Magic Leap gave of the need for dimming in the CNBC interview was hospital operating rooms. But these rooms (right) are filled with LCD monitors around the room and in the equipment.

Simple Optical Physics Problems with Pixelated Dimming

Before diving into the Magic Leap applications, going through some of the basic issues will be helpful.

A lens collects light and focuses it

The figure on the right shows the basic concept of what a lens does in three cases.

  1. Without a lens and a wide aperture (top case), the light from the red dot (say a tiny red LED) will form just a blur on the image plane.
  2. If the aperture is reduced to a very small “pinhole,” only a small bundle of light rays will form a red dot at the image plane. Up to a point, the smaller the hole, the sharper but dimmer the image, with the brightness roughly proportional to the area of the hole. If the hole gets too small, diffraction will cause blurring and block light.
  3. A lens “collects” light from a wide aperture and focuses it down to a point. The larger the aperture, the brighter the image, roughly proportional to the area of the aperture. The whole lens collects light for every point in the image. If the aperture is made smaller with the same lens, the image stays the same size and is not cut off; the whole image just gets dimmer. Additionally, as the aperture gets smaller, the depth of focus gets larger.

A dimming pixel the size of a virtual pixel has almost no effect

Now let’s assume we put a pixel-sized black dot on a piece of glass just 20mm from the eye (roughly where it might be in front of the ML2’s waveguide). Doing some math and assuming a typical 1.5 arcminutes/pixel, a pixel-sized dot on the glass will be about 0.0087mm across.

The Magic Leap application examples commonly use a 4mm diameter pupil. Since it is easier to work with numbers than formulas, I will use a 4mm pupil in my examples (it varies based on the brightness and from person to person). The pixel-sized black dot has an area of about 0.00007615mm², versus ~12.566mm² for the 4mm pupil. The ratio of the areas is about 165,000 to 1, and the dot will also affect approximately 165,000 pixels. A pixel-sized dot has a near-zero effect on a very large number of pixels. This example is just the beginning of the massive problems for hard-edge optical occlusion.
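
For those who want to check the math, here is a minimal Python sketch of the numbers above, assuming the stated 1.5 arcminutes/pixel, 20mm eye relief, and 4mm pupil:

```python
import math

# Assumptions from the text: 1.5 arcminutes per pixel, glass ~20mm from the eye, 4mm pupil
arcmin_per_pixel = 1.5
eye_to_glass_mm = 20.0
pupil_mm = 4.0

dot_mm = eye_to_glass_mm * math.tan(math.radians(arcmin_per_pixel / 60))
pupil_area = math.pi * (pupil_mm / 2) ** 2  # ~12.566 mm^2
dot_area = dot_mm ** 2                      # square dot the size of one virtual pixel

print(f"pixel-sized dot: {dot_mm:.4f}mm across")         # ~0.0087mm
print(f"area ratio: {pupil_area / dot_area:,.0f} to 1")  # ~165,000 to 1
```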

Soft-edge dimming pixels are huge compared to virtual pixels

With soft-edge occlusion, it is better to use dimming pixels massively bigger than the image pixels they cover. As shown later, even relatively large dimming pixels will have their effect massively blurred out.

The ‘872 application mentions that the dimming pixel diameter is about 500 microns (0.5mm), while the earlier ‘229 application discusses a 200-micron (0.2mm) dimming pixel. While these may not be the ML2’s dimming pixel diameter, I will use the 0.5mm dimming pixel as my working example. The figure on the right shows a 4mm pupil and a 0.5mm dimming pixel drawn to scale, along with a tiny red dot subtending 1.5 arcminutes at 20mm from the eye. The inset enlarges the dimming and virtual pixels by 6x so you can see the relative size of a dimming pixel to the virtual image pixels. In this example, the dimming pixel is about 2,100 times bigger in area than a virtual image pixel.

Assuming the dimming pixel is a black 0.5mm square and the pupil is 4mm in diameter, it is going to block about 2% ( 0.5² / (2²×π) ) of the light from about 2,100 pixels in the virtual image. The other ~98% of the light goes around the dimming pixel, is collected by the cornea and lens, and is then focused on the retina. This is also why a person does not see every little speck on their glasses.
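
The same back-of-the-envelope blocking calculation in Python, again assuming a 0.5mm square dimming pixel and a 4mm pupil:

```python
import math

dim_px_mm = 0.5  # dimming pixel treated as a 0.5mm black square
pupil_mm = 4.0

blocked = dim_px_mm ** 2 / (math.pi * (pupil_mm / 2) ** 2)
print(f"fraction of light blocked: {blocked:.1%}")  # ~2.0%
```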

A more detailed example from the Magic Leap Application

The examples above are simplified cases where the light blocker and the pixel(s) being blocked are reasonably centered on each other. The example below is from the ‘229 application, and I have colored in three cases for light coming from three points, one each of red, blue, and green, in the real world. We only care about the light that would make it into the lens, and in this example, it is assumed the light is coming from far away. The bundle of light from each point in the real world is focused on the retina.

A light blocker of diameter “h” is introduced a distance “d” from the pupil. The red bundle is centered on the eye and the obstruction; the percentage of light blocked for the red point is simply the ratio of the area of the light blocker to the area of the pupil. Next, there is light from a blue point that is only partially blocked by the light blocker, so less of the light from that point will be blocked. Finally, the green point is at such an angle relative to the light blocker and the pupil that the light blocker does not block any green light that would reach the eye.

Figure 24 above shows what happens as the diameter of the light blocker “h” is varied at a distance of 17mm from the pupil. The scale on the bottom is in angular degrees. Added at the top are scales at 40 pixels/degree (1.5 arcminutes/pixel) and the area covered in terms of pixels (treating the linear dimension as the radius of a circle).

For example, if the light blocker were exactly the pupil’s diameter (assumed 4mm) and the point in the real world were centered on the pupil (the red case above), then the red point, and only the red point, would be completely blocked. Some light from the blue and green points would make it around the light blocker, and about 200,000 pixels would be affected. Figure 24 shows curves for 4, 3.5, 3, 2, and 1-millimeter diameters of “h” as the angle of the point in the field of view varies.
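
Curves like those in Figure 24 can be reproduced with simple geometry: the blocker’s shadow shifts across the pupil with field angle, and the fraction blocked is the overlap area of two circles divided by the pupil area. A minimal Python sketch, assuming the 4mm pupil and 17mm distance from the application’s example (the function names are mine):

```python
import math

def circle_overlap_area(r1, r2, d):
    """Area of intersection of two circles with radii r1, r2 and centers d apart."""
    if d >= r1 + r2:
        return 0.0
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2
    a1 = r1 ** 2 * math.acos((d ** 2 + r1 ** 2 - r2 ** 2) / (2 * d * r1))
    a2 = r2 ** 2 * math.acos((d ** 2 + r2 ** 2 - r1 ** 2) / (2 * d * r2))
    tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2) *
                          (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - tri

def fraction_blocked(h_mm, theta_deg, pupil_mm=4.0, dist_mm=17.0):
    """Fraction of a distant point's light blocked by a disk of diameter h_mm."""
    offset = dist_mm * math.tan(math.radians(theta_deg))  # shadow offset at the pupil
    blocked = circle_overlap_area(h_mm / 2, pupil_mm / 2, offset)
    return blocked / (math.pi * (pupil_mm / 2) ** 2)

for theta in (0, 2, 4, 6, 8):
    print(f"h=4mm, {theta} deg off-axis: {fraction_blocked(4.0, theta):.0%} blocked")
```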

Figure 25 (right) from the ‘229 application shows the blurry fall-off effect (2502) of a 200-micron (0.2mm) dark spot against a bright background. The single background pixel at the center will only be slightly dimmed, while massive numbers of pixels around it are also affected. While this is called “soft-edge occlusion,” the occlusion edge is extremely soft.

Amount of light blocking required for various ambient light conditions

The table on the right, created from various sources, shows the amount of real-world light in nits (cd/m²) that reaches the eye in various conditions. The human eye can see over a very wide range of brightness. Typically, in well-lit rooms, the things you look at are in the 20 to 150 nits range. Outdoors, much of what you see lit by sunlight is between 500 and 10,000 nits. At night or in a very dimly lit room, a person with time to adapt their eyes can easily see things at well under 0.1 nits.

Simple contrast is the ratio of lightest to darkest. When using AR glasses, the light of the real world adds to both the darkest and the lightest parts of the image. With most AR glasses, the real world’s brightness (I_world) is dimmed by the transmissivity of the glasses:

I_back = I_world × transmissivity,

where I_back is the net light from the real world at a given pixel area that reaches the eye, and the contrast is given by:

contrast = (I_display + I_back) / I_back

At about 1.5:1 contrast, text is barely readable in an image. At 2:1, text is more readable, but colors are extremely washed out. At 8:1, colors become moderately saturated. To watch a movie, one would like more than 100:1. There are two ways to improve contrast: 1) brighten the display, and 2) reduce the background. Flat panels like monitors and smartphones have screens with light-absorbing characteristics to greatly reduce ambient light, but then they are not trying to optically mix the real world with a virtual one.
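
To make the contrast formula concrete, here is a minimal Python sketch (the 1,000-nit display, 150-nit scene, and 20% transmission are illustrative values, not ML2 specifications):

```python
def ar_contrast(display_nits, world_nits, transmissivity):
    """contrast = (I_display + I_back) / I_back, where I_back = I_world * transmissivity."""
    i_back = world_nits * transmissivity
    return (display_nits + i_back) / i_back

# e.g., a 1,000-nit display over a 150-nit indoor scene through 20% transmissive optics
print(f"{ar_contrast(1000, 150, 0.20):.1f}:1 contrast")  # ~34.3:1
```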

The Magic Leap applications deal with this subject, but via a figure of merit, “V,” for visibility. I could not make sense of the formula for “V,” but it works out that the lines for a constant V=0.7 in Fig. 22 below are the same as a contrast ratio of ~7.667:1. Fig. 22 plots projector brightness versus transmission with both axes on a log scale. The diagonal lines show a constant contrast of 7.667:1 for selected ambient conditions and projector brightnesses.

For example, assuming you are outdoors looking at white concrete with a virtual image of 800 nits, the ambient light needs to be dimmed to 1-1.2% transmission (blocking ~98.8%) to get a contrast ratio of 7.667:1. With the same display brightness over green grass in the sun, the real world has to be dimmed by 94% to give the same contrast. For comparison, typical dark sunglasses block only about 75-80% of the light.
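
Running the formula in reverse gives the transmission required for a target contrast. A small sketch, where the ~10,000-nit sunlit concrete and ~2,000-nit sunlit grass luminances are my assumptions to reproduce the percentages above:

```python
def required_transmission(display_nits, world_nits, target_contrast):
    """Transmissivity needed so (I_display + I_back) / I_back hits the target contrast."""
    i_back = display_nits / (target_contrast - 1)  # background light the eye can tolerate
    return i_back / world_nits

for scene, nits in (("white concrete", 10_000), ("green grass", 2_000)):
    t = required_transmission(800, nits, 7.667)
    print(f"{scene}: transmit {t:.1%} (dim by {1 - t:.1%})")
# white concrete: transmit 1.2% (dim by 98.8%)
# green grass: transmit 6.0% (dim by 94.0%)
```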

These numbers show why using AR outdoors is such a challenge. A display of more than 4,000 nits, such as Lumus makes (well off the scale of the chart in Magic Leap’s application), would end up needing more reasonable amounts of dimming. But if the virtual display is less than 1,000 nits, the ridiculous amount of dimming required will make it impossible to see anything in the shadows.

ML2’s best-case real-world light transmission looks to be a big problem

Looking at Fig. 7B from the ‘676 application shows the basic light path.

Even a “high transmission” polarizer blocks about 60% of unpolarized/random light: ~50% due to polarization and about 10% absorbed/lost. Even if the light is well polarized and rotated to pass, a polarizer will still block about 10%. The optical retarders probably lose 1-2%, and the LC will lose a few percent. The two highly (but not perfectly) transmissive electrode layers are likely to lose 3-5% each. Then there is the light blocked by the transistors, wiring, and blackout material associated with the pixel dimming, which will be on the order of 5-10%.

The three layers of the diffractive waveguide are going to block about 30 to 40%. The front covers, even if they seem clear, will likely lose another 2-5%. Then any lenses and the inner and outer protective plastic shields will likely lose another 10-15%. All of these losses are multiplicative.

Multiplying the losses suggests that the transmissivity could easily be less than 10% (blocking more than 90%). By comparison, the ML1 transmitted only ~15% of ambient light, making it one of the worst of any optical AR headset.
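
A rough sketch of how those losses multiply out; the per-layer values below are my reading of the ranges above, not measured numbers:

```python
from math import prod

# Per-layer transmission (best case, worst case) from the estimates above (assumptions)
layers = {
    "outer polarizer":      (0.40, 0.40),  # ~60% loss for unpolarized light
    "retarders":            (0.99, 0.98),
    "LC layer":             (0.97, 0.95),
    "electrode layer 1":    (0.97, 0.95),
    "electrode layer 2":    (0.97, 0.95),
    "TFTs/wiring/blackout": (0.95, 0.90),
    "inner polarizer":      (0.90, 0.90),
    "3-layer waveguide":    (0.70, 0.60),
    "front covers":         (0.98, 0.95),
    "lenses/shields":       (0.90, 0.85),
}

best = prod(t for t, _ in layers.values())
worst = prod(t for _, t in layers.values())
# Slightly worse assumptions for each layer quickly push the total below 10%
print(f"total transmission: ~{worst:.0%} to ~{best:.0%}")  # roughly 13% to 19%
```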

Aligning the dimming with the eye

Another issue is how the dimming pixels will be aligned with objects/light in the real world. The distance from the dimming pixel to the eye is very small, say about 20mm, but many things in the real world will be meters away.

The first problem is parallax based on the movement of the eye relative to the dimming device. I have shown the eye in 3 positions (red, black, and green) and where the dimming would need to be centered based on eye movement.

Even more of an issue discussed at length in the Magic Leap patents is that as the eye rotates, the image will fall on a different part of the retina. Fortunately, as discussed previously, the dimming is so blurry that exact precision is not required.

The application puts quite a bit of effort into describing how the effective center of eye rotation varies based on the lighting. The application states:

 “Because cones are more sensitive to light in high light conditions and rods are more sensitive to light in low light conditions, as the detected ambient light decreases (e.g., the global light value), the origin of the gaze vector may be adjusted from a center position of the retinal layer corresponding to a high density of cones outward to one or more points along an annulus corresponding to a high density of rods.”

Then it goes on to state, in reference to Fig. 11, “high light conditions, for example, outdoor ambient light having 5000 NITS” and “FIG. 12 illustrates the same techniques illustrated in FIG. 11 but in low light conditions, for example outdoor ambient light having 100 NITS.” It then goes on to talk about how the center varies when the rods dominate vision.

These statements make no technical sense. First, photopic vision, where the retina’s cones support both high visual acuity and color vision, extends down to about 5 to 10 nits, so the rods are far from dominating at 100 nits. For reference, a typical computer monitor used indoors outputs between 100 and 200 nits. Second, as shown previously, the dimming has such a broad effect that precision as to whether the rods or cones are being used is unnecessary.

The application also describes changing the dimming pattern in addition to adjusting for the center of rotation (illustrated in Figs. 14 and 15 above). Once again, as a practical matter, this will make little difference.

Diffraction Problem

Application ‘872 is dedicated to a diffraction problem Magic Leap appears to be trying to solve. Light going through small openings in a grid is going to suffer diffraction. Figs. 7A-C show the effect on light with no grating, a square grating, and a curved grating. The main point of the application is that the curved grating causes less obvious diffraction problems.

I would also note that this application talks about a 0.5mm dimming pixel pitch versus the ‘229 application talking in terms of a 0.2mm pitch. A 0.5mm pitch would have much less diffraction and let significantly more light through due to fewer transistors and metal traces blocking light. And once again, the soft-edge occlusion has such a large blur region that the dimming pixel resolution will make little difference for practical purposes.
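
For a rough sense of scale, the pixel grid can be treated as a diffraction grating with sin(θ) = λ/pitch for the first order. A minimal sketch assuming ~550nm green light and the two pitches the applications mention:

```python
import math

def first_order_angle_deg(pitch_mm, wavelength_nm=550.0):
    """First-order diffraction angle of a periodic grid: sin(theta) = lambda / pitch."""
    return math.degrees(math.asin(wavelength_nm * 1e-9 / (pitch_mm * 1e-3)))

for pitch in (0.2, 0.5):
    theta = first_order_angle_deg(pitch)
    pixels = theta / (1.5 / 60)  # virtual pixels at 1.5 arcminutes each
    print(f"{pitch}mm pitch: {theta:.3f} deg (~{pixels:.1f} virtual pixels)")
# 0.2mm pitch: 0.158 deg (~6.3 virtual pixels)
# 0.5mm pitch: 0.063 deg (~2.5 virtual pixels)
```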

Conclusion – Impressing Investors Rather Than Users

In Magic Leap 2 for Enterprise, Really? Plus, Another $500M, I assumed that Magic Leap was only using a dimming “shutter” with no dimming pixels. Even a simple shutter is fraught with most of the problems above. But as I dug into the patents and with input from multiple sources, it appears that the ML2 will have soft-edge occlusion as described above.

The way Magic Leap addresses soft-edge occlusion reminds me of ML1’s approach to addressing Vergence Accommodation Conflict (VAC). Both are real issues, but the approaches they took are impractical and cause worse problems. It feels like VAC and dimming are “hooks” to sell investors on why to invest, rather than solutions to the most important problems for the user.

It appears that once again, Magic Leap has concentrated considerable resources on a gimmick they could market. Not only is it not a very good solution, it will make the problems worse for most serious applications.

Karl Guttag

16 Comments

  1. Thank you for a really interesting article!

    I just have to step in and correct you on one thing: you are wrong about the LCD screen color shift artifacts when using LC-based dimming. If you take, for example, Sony’s FX-line professional video cameras, they are known for their good variable LC-based dimming (vari ND). One of LC-based ND filters’ strengths is that they don’t suffer from color shift on LCD screens and blue skies. Analogue polarizer-based ND filters, on the other hand, do suffer from the problems you describe.

  2. Hi Karl,
    Appreciate your articles.

    Have an amateur question on LCoS – is there a reason LCoS has to be illuminated externally? i.e., why can it not be illuminated from the backplane?
    This would essentially be a TFT LCD backlit with a 3M-nit uLED, i.e., the CMOS part of LCoS would be replaced with a bright uLED.

    • You have a reasonable question that has a lot of factors to it, and I will try to give you some answers off the top of my head.

      The transistors need to be made on a silicon IC to make them small enough for a microdisplay. The transistors in an LCD are made of thin films on glass and are about 100 times bigger in area. Then you have the wires for power and control. With LCOS, all the transistors and “wires” are in layers behind the reflective top mirror. Also, silicon is very opaque.

      You might want to look at this article I wrote that talks about the difference in pixel sizes with different technologies: https://kguttag.com/2017/06/07/gaps-in-pixel-size This article also talks about other factors such as the size of transistors.

      High-temperature polysilicon (HTPS) devices are made on glass by companies such as Epson, but even their pixels are bigger. You lose a lot of light going through the “wires” and transistors. Kopin developed a “transparent LCOS” where the transistors are processed on one substrate and then “lifted” off and put on glass. But once again, the light has to pass through the wires and transistors, limiting how small the pixels may be before they block too much light. HTPS and Kopin’s lift-off LCOS dominated things like camera viewfinders when the resolution was low. HTPS is used in “LCD projectors” and most automotive Heads-Up Displays (HUDs). The automotive HUD has a very powerful LED behind it and a huge heat sink.

      Another big factor is that the LCOS used in high-resolution displays is “Field Sequential Color,” whereas HTPS and Kopin transmissive devices use color filters, meaning they need color (RGB) subpixels. Because LCOS is reflective, the LC can be half as thick for the same polarization change effect. And it turns out that the switching speed of LC tends to grow as the square of the thickness, so all things being equal, LCOS switches about four times faster. LCOS uses this faster switching speed to have a single mirror/pixel provide red, green, and blue sequentially. This gives another factor of three in resolution.

      I’m sure I left some things out, but I think those are the big reasons.

      • Thanks for explaining in detail with references to HTPS and Kopin.

        I came across a paper on transparent transistors made from ITO/ASZO i.e. same ITO as used in LCoS.
        https://www.nature.com/articles/s41598-017-01691-7
        In this case pixel pitch and transparency do not appear to be an issue as compared to conventional TFT.

        Does this look like a viable solution for backplane illumination in your perspective?
        Wondering if Kopin’s transparent LCoS has anything in common with ITO/ASZO pathway.

  3. The solution implies an LC layer sandwiched between ITO and ITO/ASZO, both of which are fully transparent. This opens doors to multiple “backlight on silicon” options with an integrated display driver IC.

  4. Thanks for the great writeup as always, Karl.

    Magic Leap claims ML2 at 20–2,000 nits, but it wasn’t clear to me if this is the output to the eye or the brightness of the projector. I assumed the former, but maybe I am wrong?

    • Ben,
      I’m pretty sure that is to the eye because Kevin Curtis said at that point in the presentation that the ML1 was 150 nits.

      It appears your article was based on the published photographs and you have not seen the whole talk or slides. I was at the conference.

      Karl

  5. […] A typical “high transmissivity” reflective polarizer will block about 60% of unpolarized light, 50% for polarization, and another ~10% in efficiency loss. The dimmer structure has another polarizer, which will lose ~10% more. The various films and structures of the dimmer should lose ~15% or perhaps more. The stack of three diffractive waveguides will typically lose 25-35%. Throw in another ~10% for all the other films, coatings, and lenses, and I get a best case of about 22%, and likely it is worse. I have gone into a lot more detail about the light-blocking problems with segmented dimming in Magic Leap 2 (Pt. 3): Soft Edge Occlusion, a Solution for Investors and Not Users. […]

  6. Hi, Karl. How do you estimate the ambient transmittance of the three layers of the diffractive waveguide? In your estimation, the three layers of the diffractive waveguide are going to block about 30 to 40%. I think that different types of grating out-couplers make a big difference in ambient transmittance. Thanks!

    • I think a “typical” 3-layer diffractive waveguide blocks more like 15-25%. The visor then increases the total light blockage (even untinted, it probably loses about 5%, and most end products have some tint). Then you have many layers of other “stuff,” including films and diopter focus-shift lenses. When all is done, something like a HoloLens blocks about 40% of the light. And 40% light blockage is roughly the starting point for Magic Leap, where they add other stuff.

  7. Hi Karl. Do you know if the “Tlens” from the Norwegian company Polight is used in the Magic Leap 2?

  8. Hello,
    I came across your blog post related to the display industry and would like to thank you for sharing your insights. I noticed that TN mode and VA mode are commonly used for dimming in displays, but I was curious to know if ML2 uses VA mode as well. I’m not very knowledgeable about this field, so any clarification would be greatly appreciated.
