iFixit’s Magic Leap One Teardown Confirms KGOnTech’s Analysis From November 2016

Just a quick but timely note today. I was asked by iFixit to help them identify the optical components for their Magic Leap One (ML1) teardown published today. The picture above is from iFixit but with my labeling. It turns out the teardown confirms what I wrote in my November 20, 2016 article, Separating Magic and Reality.

In the Nov. 2016 article, I wrote that the ML patent application US 2016/0327789 Figure 6 (FIG 6) was the best fit for what Magic Leap was doing. As proven by the iFixit teardown today, this is indeed the case.

KGOnTech cut through the marketing hype of what Magic Leap called “photonic chips” and said they were diffractive waveguides. KGOnTech also correctly discounted the fiber scanning display (FSD) that other websites latched onto in the patents. KGOnTech reported that Magic Leap was likely using an LCOS microdisplay. iFixit was able to identify the LCOS microdisplay as being made by Omnivision.

The only difference between 2016’s FIG 6 and the actual ML1 is that the LCOS device and LEDs swap places in their locations on the polarizing beam splitter.

If I have time, I will try to go into more detail in one or more follow-up articles. I’m kind of busy with RAVN and some personal matters these days.

24 Comments

  1. Interesting… Surprised you can get a waveguide that thin without it deforming… Is the image coming directly out of the waveguide to the eye, or is it reflected off the spherical outer “lens”?

  2. Great prediction.
    So only 2 planes of depth?
    That sounds disappointing, after all the multi-focal hype.
    Given that the Magic Leap uses eye gaze anyway, wouldn’t some slow-moving variable-focus solution make more sense?

    Does anyone have any insight into how the Avegant Glyph implements multi-focus?
    Thanks,
    Rob

    • It doesn’t actually use gaze tracking for anything at all at this time.
      That’s been left as an exercise for the developers.

  3. It would be much appreciated if you could analyze the mechanism for demultiplexing the two depths’ full RGB images into the corresponding layers of the waveguide. It seems somehow related to the colorful patches at the entrance of the waveguide. Interestingly, that is the key to the optics of the ML1.

    • There are six LEDs (two sets of R, G, B) and six entrance gratings. See in particular step 10 of the iFixit article (https://www.ifixit.com/Teardown/Magic+Leap+One+Teardown/112245).

      The light entering a waveguide has to be nearly collimated for the waveguide to work, which means it is focused near infinity with the light rays traveling nearly parallel to each other. On the three (R, G, B) waveguides associated with the shorter focus, either the grating itself or some optical element after the exit grating slightly de-collimates the light so it acts like light coming from closer; this method is described in many of the earlier Magic Leap applications.
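
      To put rough numbers on the de-collimation, here is a simple vergence sketch in Python; the diopter values are my own illustrative assumptions and not Magic Leap’s specifications:

      def apparent_distance_m(added_power_diopters):
          """Distance (in meters) that de-collimated light appears to come from."""
          if added_power_diopters == 0:
              return float("inf")  # perfectly collimated = focused at infinity
          return 1.0 / abs(added_power_diopters)

      for power in (0.0, -0.5, -0.75, -1.0):
          print(f"{power:+.2f} D -> appears to come from {apparent_distance_m(power):.2f} m")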

      I don’t know of any experts in the field of optics not associated with Magic Leap who believe that two focus planes are really going to solve much. I have already heard reports that you can see the colors shift and other changes when they switch planes (they only present one plane to the eye at a time).

      • I tried to answer most of your question in the article https://www.kguttag.com/2018/01/03/magic-leap-2017-display-technical-update-part-1/ (look at the section “Two Focus Planes For VAC”).

        There are three major problems. 1) While the LCOS is likely capable of displaying 120Hz, the sequential color field rate would be low enough that the color breakup would be pretty severe when you move your eyes or head. 2) Seeing both the near and far images at the same time would look like a mess. 3) With only two planes, there would be a very large difference between the near and far planes, which would make #2 worse. Much of this is explained in the article referenced above and in the cited patent application (US 2017/0276948).
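
        As a back-of-the-envelope sketch of problem #1, here is a small Python calculation; every number below is an assumption I picked for illustration, not a measured Magic Leap spec:

        frame_rate_hz    = 120.0          # assumed raw LCOS frame rate
        focus_planes     = 2              # two depth planes, time multiplexed
        colors_per_plane = 3              # R, G, B shown sequentially

        plane_rate_hz  = frame_rate_hz / focus_planes              # 60 Hz per focus plane
        field_period_s = 1.0 / (plane_rate_hz * colors_per_plane)  # time per color field

        head_speed_deg_s  = 100.0         # a modest head/eye motion
        pixels_per_degree = 1280 / 50.0   # assuming roughly a 50-degree horizontal FOV

        breakup_deg = head_speed_deg_s * field_period_s
        print(f"Per-plane refresh: {plane_rate_hz:.0f} Hz, "
              f"color field every {field_period_s * 1000:.1f} ms")
        print(f"Color separation at {head_speed_deg_s:.0f} deg/s: "
              f"{breakup_deg:.2f} deg (~{breakup_deg * pixels_per_degree:.0f} pixels)")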

  4. Interesting stuff. Are they using a separate 4K GPU for the displays? I’m looking at the OmniVision website; do you think it’s their LCoS? Any more details you can provide on which chip, the 1080 or the 720?

    • I don’t know anything about the GPU use. The LCOS is apparently a device that is not listed on the Omnivision website; it is 1280 by 960, which is in between the two listed devices. Display chips are generally sold directly to customers, so companies often don’t list all their devices, and/or they made a special device for Magic Leap.
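
      For what it’s worth, here is a quick pixel-count comparison against the two resolutions you mention; only the 1280 by 960 figure comes from the teardown photos, the rest is just arithmetic:

      devices = {
          "listed 720p":    (1280, 720),
          "ML1 (reported)": (1280, 960),
          "listed 1080p":   (1920, 1080),
      }
      for name, (w, h) in devices.items():
          print(f"{name:>15}: {w} x {h} = {w * h / 1e6:.2f} Mpixels")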

  5. Karl, the diagonal measurements look off to me on the OmniVision 1080p LCoS. Maybe it’s the camera angle. Would you mind posting the dimensions of the ML LCoS display? Thanks!

      • Thanks Karl. Did you get to personally inspect the chip to verify that it is manufactured by OmniVision?

      • No, I was given access to a number of high-resolution photos. iFixit is the one that found the Omnivision 2222 part number. I had other “sources” prior to the teardown who said that Omnivision made the LCOS, and I told iFixit that the LCOS was likely going to be by Omnivision.

  6. In my opinion, this device does not warrant the $2.3B invested thus far, and I see no path for this device.
    It is too big for consumers to use with regularity.
    It is too big for businesses to use for productivity gains, and it’s not cost-effective either.
    Magic Leap’s cost to produce is likely higher than the price tag it has put on it. They will need a lot more cash to continue this game. I don’t see how they can get significant further investment dollars after all the money already sunk into it.

    • I tend to agree with all your points. The price they set on the “developer’s edition” has little to do with the cost to make it; they could wrap hundred-dollar bills around them if they thought it would help them raise more money. Magic Leap appears to be more about raising money than selling a product at a profit, from what I have seen so far. I have written that they appear to be going after a very small segment of the VR game market, as this product is more VR than AR.

  7. One more quick technical question for you, Karl: is it possible that they are using a 1280 x 960 CMOS backplane with the LCoS on top? If they’re already manufacturing a CMOS sensor of that dimension, what are the possibilities of creating an LCoS display, or is this something they would even consider?

    https://www.nature.com/articles/lsa201494

  8. Karl, I’m puzzled about the LCOS principle: could it be that the colours are spatially separated and not field sequential?
    – In the case of field sequential colour: how does the image end up in the right grating? This would require some sort of controlled projection system synchronised with the LCOS operation, I guess.
    – In the case of spatially separated colours: would this mean that each LED covers only a part of the LCOS area (a subset of 1280×960)? In such a case, all six colours could work simultaneously.
    But what would be the actual resolution for one colour?

    Also, I wonder: why do the in-coupling gratings appear octagonal in shape?

    • The LCOS acts like a mirror. The illumination LEDs are each in a slightly different location and thus reflect (through the injection optics) to a different injection waveguide. Each color gets the full resolution, but there will be some field-sequential artifacts (say, a white object breaking up into red, green, and blue) when a person moves their head and/or eyes.
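
      If it helps, here is a toy Python sketch of the kind of one-LED-at-a-time illumination schedule I am describing; the ordering is purely illustrative, since the actual drive scheme is not public:

      # Six LEDs: R, G, B for each of the two focus planes, lit one at a time so
      # each color/plane pair reflects off the LCOS toward its own entrance grating.
      leds = [(plane, color) for plane in ("near", "far") for color in ("R", "G", "B")]

      for field, (plane, color) in enumerate(leds):
          print(f"field {field}: only the {plane}-plane {color} LED is on "
                f"-> full 1280x960 image into that plane's {color} entrance grating")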
