Magic Leap CSI: Display Device Fingerprints

Introduction

I have gotten a lot of questions as to how I could be so sure that Magic Leap (ML) was using Micro-OLEDs in all their “Through Magic Leap Technology” videos and not, say, a scanning fiber display as so many had thought. I was in a hurry to get people to the conclusion, so for this post I am going to step back and show how I knew. When video and still pictures of display devices are taken with a camera, every display type leaves its own identifiable “fingerprint,” but you have to know where to look.

Sometimes in a video it is only a few frames that give the clue as to the device being used. In this article I am going to use cropped images from videos for most of the technologies to show their distinctive artifacts as captured by the camera, but for laser scanning the distinctive artifacts are best seen in the whole image, so for it I am going to use thumbnail-size images.

This article should not be new information for this blog’s readers; rather, it details how I knew what technology was in the ML “through the technology” videos. For the plot twist at the end, you have to know how to parse ML’s words: “the technology” in the videos is not what they are planning to use in their actual product. The ML “through the technology” videos use a totally different technology than what they plan to use in the product.

Most Small Cameras Today Use a Rolling Shutter

First it is important to understand that cameras capture images much differently than the human eye. Most small cameras today, particularly those in cell phones, have a “rolling shutter.” Photography.net has a good article describing a rolling shutter and some of its effects. A rolling shutter captures a horizontal band of pixels (the height of this band varies from camera to camera) as it scans down vertically. With “real world analog” movement, this causes moving objects to be distorted, most famously airplane propellers (above right). Each of the various display technologies reveals its own different effects.
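To make the mechanism concrete, here is a minimal Python sketch (my own illustration, not any camera’s actual specifications) of how each row of a rolling shutter samples a moving object at a slightly later time; the readout time and object speed are made-up numbers.

    # Minimal rolling-shutter sketch: each image row is read out slightly later
    # than the one above it, so a horizontally moving object lands at a different
    # x position in each row, producing the familiar skewed/leaning look.
    # All numbers below are illustrative assumptions, not measured values.

    ROWS = 10              # rows in a (tiny) sensor
    READOUT_TIME = 0.030   # seconds to scan from top row to bottom row (assumed)
    OBJECT_SPEED = 500.0   # horizontal speed of the object, pixels/second (assumed)
    OBJECT_X0 = 100.0      # object's x position when row 0 is captured

    for row in range(ROWS):
        t = row / (ROWS - 1) * READOUT_TIME   # time at which this row is sampled
        x = OBJECT_X0 + OBJECT_SPEED * t      # where the object is at that time
        print(f"row {row}: captured at t={t*1000:5.1f} ms, object at x={x:6.1f}")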

OLEDs (And color filter LCDs)

When an object moves on a display device, that object jumps in location between two successive frames. If the rolling shutter is open while the image is changing, the camera will capture a double image. This is shown classically with the Micro-OLED device from an ODG Horizon prototype. The icons and text in the image were moving vertically, and the camera captured content from two frames. Larger flat-panel OLEDs work pretty much the same way, as can be seen in this cropped image from a Meta 2 headset at right.

From a video artifact point of view, it is hard to distinguish between an OLED and a color filter LCD (the most common kind of LCD) with a rolling shutter camera. Unlike old CRTs and scanning systems, OLEDs and LCDs don’t have any “blanking” period where there is no image. They simply change the RGB (and sometimes white) sub-pixels row by row from one frame to the next (this video taken with a high speed camera demonstrates how it works).
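Here is a similar small sketch of the double image itself, again with made-up timing: a sample-and-hold display (OLED or LCD) flips from one frame to the next partway through the camera’s row-by-row readout, so the captured picture contains rows from both frames.

    # Sketch of the OLED/LCD "double image": the display switches from frame N to
    # frame N+1 partway through the camera's rolling readout. Rows sampled before
    # the switch see a moving icon at its old position; rows sampled after see it
    # at its new position. The timing values below are assumptions.

    ROWS = 12
    READOUT_TIME = 0.030        # camera readout, top to bottom, seconds (assumed)
    FRAME_SWITCH_TIME = 0.013   # display flips from frame N to N+1 here (assumed)
    ICON_Y_OLD, ICON_Y_NEW = 40, 55   # icon's vertical position in the two frames

    for row in range(ROWS):
        t = row / (ROWS - 1) * READOUT_TIME
        if t < FRAME_SWITCH_TIME:
            frame, icon_y = "N", ICON_Y_OLD
        else:
            frame, icon_y = "N+1", ICON_Y_NEW
        print(f"row {row:2d}: sampled at {t*1000:4.1f} ms, sees frame {frame}, icon at y={icon_y}")

    # The captured image mixes rows from both frames, so the icon shows up in two
    # places at once -- the double image described above.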

Color Field Sequential DLP and LCOS

DLP and LCOS devices used in near eye displays use what is known as “field sequential color” (FSC). They have one set of “mirrors” that in rapid sequence displays only the red sub-image while a red light source (LED or laser) is flashed, and then repeats this for green and blue. Generally they sequence these very rapidly and usually repeat the red, green, and blue sub-images multiple times per frame so the eye will fuse the colors together even if there is motion. If the colors are not sequenced fast enough (and for many other reasons that would take too long to explain), a person’s eye will not fuse the image and they will see fringing of colors in what is known as “field sequential color breakup,” also known pejoratively as “the rainbow effect.” Due to the way DLP and LCOS work, LCOS does not have to sequence quite as rapidly to get the images to fuse in the human eye, which is a good thing because it can’t sequence as fast as DLP.

In the case of field sequential color, when there is motion the camera can capture the various sub-images individually, as seen above left from the Hololens, which uses FSC LCOS. It looks sort of like a print where the various colors are shifted. If you study the image, you can even tell the color sequence.

Vuzix uses FSC DLP and has similar artifacts, but they are harder to spot. Generally DLPs sequence their colors faster than LCOS (by about 2x), so it can be significantly harder to capture them (that is a clue as to whether it is DLP or LCOS). On the right, I have captured two icons both when still and when moving, and you can see how the colors separate. You will notice that you don’t see all the colors because the DLP is sequencing more rapidly than the Hololens LCOS.
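As a back-of-the-envelope illustration of why the faster sequencing matters (the field counts below are assumptions for illustration, not the actual Hololens or Vuzix numbers), doubling the color field rate halves how long each color stays up, which gives a rolling shutter band less chance of catching a single separated color during motion.

    # Rough field-sequential-color timing comparison. The field counts are
    # illustrative assumptions; the point is only that a higher color field rate
    # means each color is displayed for less time, making separated color fields
    # harder for a camera to capture.

    FRAME_RATE_HZ = 60          # assumed output frame rate
    LCOS_FIELDS_PER_FRAME = 6   # assumed: R, G, B shown twice each per frame
    DLP_FIELDS_PER_FRAME = 12   # assumed: roughly 2x the LCOS field rate

    for name, fields in [("LCOS", LCOS_FIELDS_PER_FRAME), ("DLP", DLP_FIELDS_PER_FRAME)]:
        field_rate = FRAME_RATE_HZ * fields     # color fields per second
        field_time_ms = 1000.0 / field_rate     # how long each color stays displayed
        print(f"{name}: {field_rate} color fields/s, each up for ~{field_time_ms:.2f} ms")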

DLP and LCOS also have “blanking” between colors, where the LEDs (and maybe lasers in the future) are turned off while the color sub-images are changing. The blanking is extremely fast and will only be seen using a high speed camera and/or a very fast shutter time on a DSLR.

DLP and LCOS for Use with ML “Focus Planes”

If you have a high speed camera or other sensing equipment, you can tell even more about the differences in the way DLP and LCOS generate field sequential color. But a very important aspect for Magic Leap’s time sequential focus planes is that DLP can sequence fields much faster than LCOS and thus can support more focus planes.

I will be getting more into this in a future article, but to do focus planes with DLP or LCOS, Magic Leap will have to trade repeating the same color sub-images for displaying different images corresponding to different focus planes. The obvious problem, for those who understand FSC, is that the per-plane color field rates will become so low that color breakup (the rainbow effect) would seem inevitable.
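A rough calculation shows the trade-off (the total field budget below is an assumed number, since the actual device and its field rate have not been published): whatever number of color fields per second the display can produce gets divided among the focus planes.

    # Focus-plane budget sketch: a field-sequential display has a fixed number of
    # color fields per second to spend. Time-sequential focus planes split that
    # budget, so each plane's R/G/B repeat rate falls. Numbers are assumptions.

    TOTAL_COLOR_FIELDS_PER_SEC = 720   # assumed total field budget of the display
    COLORS = 3                         # R, G, B

    for planes in (1, 2, 3, 4, 6):
        fields_per_plane = TOTAL_COLOR_FIELDS_PER_SEC / planes
        rgb_cycles_per_plane = fields_per_plane / COLORS   # full RGB cycles per plane per second
        print(f"{planes} focus plane(s): {rgb_cycles_per_plane:6.1f} RGB cycles/s per plane")

    # As the per-plane color rate falls, field-sequential color breakup
    # (the "rainbow effect") becomes more and more likely.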

Laser Beam Scanning

Laser beam scanning systems are a bit like old CRTs: they scan from top to bottom and then have a blanking time while the scanning mirror retraces quickly to the top corner. The top image on the left was taken with a DSLR at a 1/60th of a second shutter speed, which reveals the blanking “roll bar” (called a roll bar because it will be in a different place from frame to frame if the camera and video source are not running at exactly the same rate).

The next two images of the exact same projector were taken with a rolling shutter camera. The middle image shows a wide dark roll bar (it moves), and the bottom image shows a thin white roll bar. These variations from the same projector and camera are due to differences in the frame rate of the content and/or the camera’s shutter rate.
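For what it is worth, the roll bar’s drift can be estimated with simple beat-frequency arithmetic; the refresh and frame rates below are illustrative assumptions, not measurements of this projector and camera.

    # Roll-bar drift sketch: the dark (blanking) bar appears to move at the "beat"
    # between the display refresh rate and the camera frame rate. If the two match
    # exactly, the bar stays in one place. The rates below are assumptions.

    DISPLAY_HZ = 60.00   # assumed projector refresh rate
    CAMERA_HZ = 59.94    # assumed camera frame rate (a typical NTSC-style rate)

    beat_hz = abs(DISPLAY_HZ - CAMERA_HZ)   # how fast the bar cycles through the frame
    if beat_hz == 0:
        print("Rates match exactly: the roll bar stays in one place.")
    else:
        seconds_per_cycle = 1.0 / beat_hz
        print(f"Bar drifts through the frame once every {seconds_per_cycle:.1f} s "
              f"({beat_hz:.2f} cycles/s).")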

Fiber Scanning Display (FSD) Expected Artifacts

FSD displays/projectors are so rare that nobody has published a video of one. Their scan rates are generally low and they have “zero persistence” (similar to laser scanning), so they would look horrible in a video, which I suspect is why no one has published one.

If they were videoed, I would expect a blanking effect similar to that of laser beam scanning, but circular: rather than rolling vertically, it would “roll” from the center to the outside or vice versa. I have put a couple of very crudely simulated whole-frame images at left.

So What Did the Magic Leap “Through The Technology” Videos Use?

There is an obvious match between the artifacts in all the Magic Leap “Through the Technology” videos and those of OLEDs (or color filter LCDs, which are much less common in near eye displays). You see the distinctive double image with no color breakup.

Field sequential color artifacts cannot be found on any frame, which rules out FSC DLP and LCOS.

Looking at the whole-frame videos, you don’t see roll-bar effects of any kind, which rules out both laser beam scanning and fiber scanning displays.

We have a winner: the ML “through the technology” videos could only have been made with OLEDs (or color filter LCDs).

But OLEDs Don’t Work With Thin Waveguides!!!

Like most compelling detective mysteries, there is a plot twist. OLEDs, unlike LCOS, DLP, and laser scanning, output wide-spectrum colors, and these don’t work with thin waveguides like the “Photonic Chip” that Rony Abovitz, ML’s CEO, likes to show.

This is how it became obvious that the “Through The Magic Leap Technology” videos were NOT made with the same “Magic Leap Technology” that Magic Leap is planning to use in their production product. And this agrees with the much publicized ML article from The Information.

Appendix – Micro HTPS LCD (Highly Unlikely)

I need to add, just to be complete, that theoretically they could use color filter HTPS LCDs illuminated by either LEDs or lasers to get narrow-spectrum and fairly collimated light that might work with the waveguide. They would have artifacts similar to those seen in the ML videos. EPSON has made such a device illuminated by LEDs that was used in their earlier headsets, but even EPSON is moving to Micro-OLEDs for their next generation. I’m also not sure that HTPS could support frame rates high enough to support focus planes. I therefore think that using color filter HTPS panels, while theoretically possible, is highly unlikely.

9 Comments

    • I have not seen it in person, but they have some “through the optics” videos up on it and I have of course seen pictures of the outsides.

      I don’t know their reasons, but I would suspect it has to do with the optical path and image contrast. OLED is simpler and results in better image quality. It is also not field sequential, so you won’t get color breakup. The downside is that the pixels are generally bigger and it is usually much more expensive than LCOS.

      On image quality alone, I think the ODG Horizon is likely to look much better than anything Hololens or ML is doing. A little of that has to do with the Micro-OLED, and a lot has to do with not cramming the image through a waveguide.

      • Karl, if projection technology based on an OLED source (like ODG) is much better than waveguide optics with LCOS, why did MSFT build waveguides, and why does ML also want to build gadgets based on waveguide optics?

        Is it possible to build more focus planes if we used an OLED source and projection?
        Is the right answer no, or can we build it with some tricks?

        Thank you.

      • I think the simple answer is that “Waveguides look sexier” and give the product a more “sunglasses look”. But this look is totally ruined if you need to put an outer shield to protect the waveguide both mechanically and from light; if you put a helmet with a shield on over the sunglasses, who cares what the sunglasses look like?

        The problem with waveguides is that they all seem to sacrifice image quality compared to simpler solutions, in terms of resolution, chromatic aberration, and stray light (waveguide glow). They are also much more expensive. They are “over-weighting” the final lens in the overall product equation. By the time you add all the sensors and electronics plus batteries, if you are not going to have a cable (or life will suck with a cable snagging on everything in the real world), it is not going to be that sleek. What difference does it make whether you use waveguides or a tilted beam splitter like ODG?

        I’m not totally clear on what you are asking about focus planes but I will give it a shot:

        I think focus planes are a dead end. The problem of “vergence” and accommodation will be solved in different ways (the problem is real, but focus planes are not the “right” solution). To support focus planes you have to be able to display full resolution at a series of focus depths. If you want 6 planes, then you have 6x the image information to display (still much less than true light fields). You either have to have one display per focus plane or time-sequentially multiplex multiple images onto a single display. OLEDs can just about get to 120 frames per second, so you can just about get 2 focus planes out of an OLED; if you want 3 or 4, you need a 2nd OLED, and that gets VERY complex and expensive optically.

        Next you have the issue of all the processing and the power associated with it. You have a lot more to process, which adds cost and consumes power. Also consider that if you “spend” the 120Hz on two focus planes, you could have spent it on faster frame update rates for less lag and smoother image motion; you don’t get something for nothing. So it looks like they put vergence and accommodation ahead of other human factors issues.

        Focus planes might make great demos, but they are a dumb way to go long term. I think vergence and accommodation will become a sensing (of the eye) and processing solution rather than having physical focus planes or light fields (looking way out in time, maybe light fields will happen someday). My opinion is that focus planes are neither here nor there. They add a lot of optical and processing complexity over not having them, and they don’t really solve the problem unless you add so many that they become totally impractical.

  1. Hey Karl, given your rather intimate knowledge of the ML patents, can you do a post on their computer vision/positional tracking solution? The most disappointing part of the Information article for me was that tracking seemed to be unstable for the author. Optics aside, I’m curious how their approach to solving for position compares with Hololens, Tango, and Realsense.

    • That’s a good question, but as I hope you can appreciate, I have been focused on the display/optics part of the problem. I have a “feel” and even a “mental model” of how things like display systems work. I will take a look at it when I get a chance and will let you know.

      Another area that is very important (witness Google’s purchase of EyeSense) is eye tracking. Eye tracking is extremely important to mixed reality. I think this is going to be a better way to solve for vergence/accommodation than ML’s focus planes (I think focus planes are a dead end; they give up too much to get focus planes).
