Magic Leap has a way of talking about what they hope to do someday and not necessarily what they can do anytime soon. Their patent applications are full of things that are totally impossible or impractical to implement. I have read well over a thousand pages of Magic Leap (ML) patents/applications and various articles about the company, watched ML's "through the optics" videos frame by frame, and then applied my own knowledge of display devices and the technology business to develop a picture of what Magic Leap might produce.
If you want all happiness and butterflies, as well as elephants in your hand and whales jumping in auditoriums, or some tall tale of 50-megapixel displays and how great it will be someday, you have come to the wrong place. I'm putting the puzzle together based on the evidence and filling in with what is likely to be possible both in the next few years and over the next decade.
There have been other well-meaning evaluations such as "Demystifying Magic Leap: What Is It and How Does It Work?", "GPU of the Brain", and the videos by "Vance Vids", but these tend to start from the point of believing the promotion/marketing surrounding ML and finding support in the patent applications rather than critically evaluating them. Wired Magazine has a series of articles, and Forbes and others have covered ML as well, but these have been personality and business pieces that make no attempt to seriously understand or evaluate the technology.
Among the biggest fantasies surrounding Magic Leap is the Arrayed Fiber Scanning Display (FSD); many people think this is real. ML co-founder and Chief Scientist Brian Schowengerdt developed this display concept at the University of Washington, based on an innovative endoscope technology, and it features prominently in a number of ML-assigned patent applications. There are giant issues in scaling FSD technology up to high resolution and in what doing so would require.
In order to get on with what ML is most likely doing, I have moved to the Appendix the discussion of why FSDs, light fields, and very complex waveguides are not what Magic Leap is doing. Once you get rid of all the "noise" of the impossible things in the ML patents, you are left with a much better picture of what they actually could be doing.
What is left is enough to make impressive demos, and it may be possible to produce at a price that at least some people could afford in the next two years. But ML still has to live by what is possible to manufacture.
At the heart of all of ML's optics-related patents is the concept of vergence-accommodation, where the focus of the various parts of a 3-D image should agree with their apparent distances or it will cause eye/brain discomfort. For more details on this subject, see this information about Stanford's work in the area and their approach of using quantized (only 2-level) time-sequential light fields.
There are some key similarities between the Stanford and Magic Leap approaches. Both quantize to a few levels to make the system possible to implement, both present their images time sequentially, and both rely on the eye/brain to fill in between the quantized levels and to integrate a series of time-sequential images. Stanford's approach is decidedly not "see through": it is an Oculus-like setup with two LCD flat panel displays in series, whereas Magic Leap's goal is to merge the 3-D images with the real world in Mixed Reality (MR).
Magic Leap uses the concept of "focus planes," where they conceptually break up a 3-D image into quantized focus planes based on the distance of the virtual image. While they show 6 virtual planes in Fig. 4 from the ML application above, that is probably what they would like to do; due to practical concerns they are likely doing fewer planes (2 to 4).
Magic Leap then renders the parts of an image into the various planes based on their virtual distance. The ML optics make each plane appear to the eye to be focused at its corresponding virtual distance. These planes are optically stacked on top of each other to give the final image, and they rely on the person's eye/brain to fill in for the quantization.
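To make the quantization concrete, below is a minimal sketch (my own illustration with made-up plane distances, not anything from ML's patents) of how a renderer could assign each pixel to the nearest focus plane. Working in diopters (1/distance) rather than meters is the natural choice because accommodation is roughly linear in diopters.

```python
# Minimal sketch of focus-plane quantization (illustrative only, not ML's algorithm).
# Plane distances are assumed values; depths are compared in diopters (1/meters)
# because the eye's accommodation is roughly linear in diopters.

FOCUS_PLANES_M = [0.5, 1.5, float("inf")]  # hypothetical near, mid, and far planes

def to_diopters(distance_m):
    return 0.0 if distance_m == float("inf") else 1.0 / distance_m

def assign_plane(pixel_depth_m):
    """Return the index of the focus plane closest (in diopters) to a pixel's depth."""
    d = to_diopters(pixel_depth_m)
    return min(range(len(FOCUS_PLANES_M)),
               key=lambda i: abs(d - to_diopters(FOCUS_PLANES_M[i])))

print(assign_plane(0.55))  # -> 0: a pixel at 0.55 m lands on the 0.5 m plane
print(assign_plane(20.0))  # -> 2: a pixel at 20 m lands on the "infinity" plane
```

The eye/brain then has to blend between these discrete planes, which is why the number of planes ML can afford to generate matters so much.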
Magic Leap's patents/applications show various ways to generate these focus planes. The most fully formed concepts use a single display per eye and present the focus planes time sequentially in rapid succession, what ML refers to as "frame-sequential," where there is one focus plane per "frame."
Due to both the cost and the size of multiple displays per eye and their associated optics, including those needed to align and overlay them, the only possible way ML could build a product for even a modest-volume market is by using frame-sequential methods with a high-speed spatial light modulator (SLM) such as a DLP, LCOS, or OLED microdisplay.
Light rays coming from a far-away point that make it into the eye are essentially parallel (collimated), while light rays from a near point arrive over a wider set of angles. This difference in angles is what makes them focus differently, but at the same time it creates problems for existing waveguide optics, such as what Hololens is using.
The very flat and thin optical structures called "waveguides" only work with collimated light entering them, because of the way the light totally internally reflects to stay in the light guide and the way the diffraction works to make the light exit. So a simple waveguide would not work for ML.
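For a rough feel of the angles involved (my own back-of-the-envelope numbers, nothing from ML), here is the half-angle of the cone of rays a point source presents to a roughly 4 mm eye pupil at different distances. Distant points are essentially collimated while near points are not, and that angular spread is exactly what a simple waveguide cannot carry.

```python
import math

# Half-angle of the cone of rays from a point source that fills a ~4 mm eye pupil.
# Assumed pupil size; illustrative numbers only.
PUPIL_RADIUS_M = 0.002

def half_angle_deg(distance_m):
    return math.degrees(math.atan(PUPIL_RADIUS_M / distance_m))

for d in (0.25, 0.5, 2.0, 100.0):
    print(f"{d:6.2f} m -> {half_angle_deg(d):.3f} deg half-angle")
# 0.25 m -> ~0.458 deg, 100 m -> ~0.001 deg (essentially collimated)
```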
Some of ML's concepts use one or more beam-splitting mirror type optics rather than waveguides for this reason. Various ML patent applications show using a single large beam splitter or multiple smaller ones (such as at left), but these will be substantially thicker than a typical waveguide.
What Magic Leap calls a "Photonics Chip" looks to be at least one layer of diffractive waveguide. There is no evidence of mirror structures, and because it visibly bends the view of the wood in the background (if it were just a simple plate of glass, the wood in the background would not appear bent), it appears to be a diffractive optical structure.
Because ML is doing focus planes, they need not one waveguide but a stack of them, one per focus plane. The waveguides in ML's patent applications show collimated light entering each waveguide in the stack like a normal waveguide, but the exit diffraction grating then both causes the light to exit and imparts the appropriate focus-plane angle to the light.
To be complete, Magic Leap has shown in several patent applications some very thick "freeform optics" concepts, but none of these would look anything like the "Photonics Chip" that ML shows. ML's patent applications show many different optical configurations, and they have demoed a variety of different designs. What we don't know is whether the Photonics Chip they are showing is what they hope to use in the future or whether it will be in their first products.
Most of Magic Leap's patent applications showing optics contain what are more like fragments of ideas. There are lots of loose ends and incomplete concepts.
More recently (one was published just last week) there are patent applications assigned to Magic Leap with more "fully formed designs" that look much more like they actually tried to design and/or build them. Interestingly, these applications don't include as inventors the founders Rony Abovitz, the CEO, nor even Brian T. Schowengerdt, Chief Scientist, although they may use ideas from those prior "founders' patent applications."
While the earlier ML applications mention Spatial Light Modulators (SLMs) using DLP, LCOS, and OLED microdisplays and talk about Variable Focus Elements (VFEs) for time-sequentially generating focus planes, they don't really show how to put them together to make anything (a lot is left to the reader).
Patent Applications 2016/0011419 (left) and 2015/0346495 (below) show straightforward ways to achieve field-sequential focus planes using a Spatial Light Modulator (SLM) such as a DLP, LCOS, or OLED microdisplay.
A focus plane is created by setting the variable focus element (VFE) to one focus point and then generating the image with the SLM. The VFE focus is then changed and a second focus plane is displayed by the SLM. This process can be repeated to generate more focus planes; it is limited by how fast the SLM can generate images and by the level of motion artifacts that can be tolerated.
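A quick timing budget shows why the SLM's speed is the gating factor. The 360 color-fields-per-second figure below is an assumption I picked for illustration (roughly what a fast field-sequential-color microdisplay might manage), not a number from ML.

```python
# Back-of-the-envelope refresh budget for frame-sequential focus planes.
# Assumed SLM speed; illustrative only, not from Magic Leap.

COLOR_FIELDS_PER_SEC = 360   # assumed color-field rate of the SLM
COLORS = 3                   # red, green, blue

def full_color_refresh_hz(num_focus_planes):
    """Refresh rate once every color of every focus plane has been displayed."""
    return COLOR_FIELDS_PER_SEC / (COLORS * num_focus_planes)

for planes in (1, 2, 4, 6):
    print(f"{planes} focus plane(s): {full_color_refresh_hz(planes):.0f} Hz")
# 1 -> 120 Hz, 2 -> 60 Hz, 4 -> 30 Hz, 6 -> 20 Hz: motion artifacts grow quickly
```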
These are clearly among the simplest ways to generate focus planes. All that is added over a "conventional" design is the VFE. When I first heard about Magic Leap many months ago, I heard they were using DLPs with multiple focus depths, but a more recent Business Insider article reports that ML is using Himax LCOS. Either approach could easily be adapted to support OLED microdisplays.
The big issue I have with the straightforward optical approaches is the optical artifacts I have seen in the videos, plus the big deal ML makes out of their Photonics Chip (waveguide). Certainly their first generation might use a more straightforward optical design and save the Photonics Chip for the next generation.
As I wrote last time, there is a lot of evidence from the videos ML has put out that they are using a waveguide, at least for the video demos. The problem with bending light over a short distance using diffraction gratings or holograms is that some of the light does not get bent correctly, and this shows up as colors not lining up (chroma aberrations) as well as what I have come to call the "waveguide glow." If you look at R2D2 below (you may have to click on the image to see it clearly), you should see a blue/white glow around R2D2. I have seen this kind of glow in every diffractive and holographic waveguide I have looked at. I have heard that the glow might be eliminated someday with laser/very-narrow-bandwidth colors and holographic optics.
The point here is that there is a lot of artifact evidence that ML was at least using some kind of waveguide in their videos. This makes it more likely that their final product will also use waveguides and, at the same time, may have some or all of the same artifacts.
If you drew a Venn diagram of all existing information, the one patent application that best fits it all is the very recent US 2016/0327789. This is no guarantee that it is what they are doing, but it fits the current evidence best. It combines a focus-plane-sequential LCOS SLM (the application indicates it could also support DLP, but not OLED) with waveguide optics.
The way this works is that for every focus plane there are 3 waveguides (red, green, and blue) and a spatially separate set of LEDs. Because the LEDs are spatially separate, they illuminate the LCOS device at different angles, and after going through the beam splitter, the waveguide "injection optics" aim the light from the different spatially separated LEDs at different waveguides of the same color. Not shown in the figure below is an exit grating that both causes the light to exit the waveguide and imparts an angle to the light based on the focus associated with that focus plane. I have colored in the "a" and "b" spatially separated red paths below (there are similar pairs for blue and green).
With this optical configuration, the LCOS SLM is driven with the image data for a given color of a given focus plane, and then the associated color LED for that plane is illuminated. This process then continues with a different color and/or focus plane until all 6 waveguides for the 3 colors by 2 planes have been illuminated.
The obvious drawbacks with this approach:
The '789 application shows an alternative implementation using a DLP SLM. Interestingly, this arrangement would not work for OLED microdisplays, as they generate their own illumination, so you would not be able to get the spatially separated illumination.
Magic Leap is almost certainly using some form of spatial light modulator with field-sequential focus planes (I know I will get push-back from the ML fans that want to believe in the FSD; see the Appendix below), but this is the only way I could see them going to production in the next few years. Based on the Business Insider information, it could very well be an LCOS device in the production unit.
The 2015/0346495 approach with the simple beam splitter is what I would have chosen for a first design, provided there is an appropriate variable focus element (VFE) available. It is by far the simplest design and would seem to have the lowest risk. The downside is that the angled large beamsplitter will make it thicker, though I doubt by that much. Not only is it lower risk (if the VFE works), but the image quality will likely be better using a simple beam splitter and spherical mirror-combiner than many layers of diffractive waveguide.
The 2016/0327789 application touches all the bases based on the available information. The downside is that it needs 3 waveguides per focus plane. So if they are going to support, say, just 3 focus planes (say infinity, medium, and short focus), they are going to have 9 (3×3) layers of waveguides to manufacture and pay for, and 9 layers to look through to see the real world. Even if each layer is of extremely good quality, the errors will build up across so many layers of optics. I have heard that the waveguide in Hololens has been a major yield/cost item, and what ML would have to build would seem to be much more complex.
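To illustrate how the losses compound (with a made-up per-layer figure, not a measured one): even if each waveguide layer passed 97% of the real-world light, the stack multiplies those losses, before you even count the haze and glow each diffractive layer adds.

```python
# How per-layer transmission compounds through a stacked-waveguide combiner.
# The 97% per-layer figure is an assumption for illustration only.

PER_LAYER_TRANSMISSION = 0.97

def stack_transmission(num_layers):
    return PER_LAYER_TRANSMISSION ** num_layers

for layers in (3, 6, 9):
    print(f"{layers} layers: {stack_transmission(layers):.1%} of real-world light passes")
# 3 -> 91.3%, 6 -> 83.3%, 9 -> 76.0%
```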
While Magic Leap certainly could have something totally different, they can't be pushing on all fronts at once. They pretty much have to go with a working SLM technology and generate their focus planes time sequentially to build an affordable product.
I'm fond of repeating the 90/90 rule, that "it takes 90% of the effort to get 90% of the way there, then it takes the other 90% to do the last 10%"; someone once quipped back that it can also be 90/90/90. The point is that you can have something that looks pretty good and impresses people, but solving the niggling problems and making it manufacturable and cost effective almost always takes more time, effort, and money than people want to think. These problems tend to become multiplicative if you take on too many challenges at the same time.
As far as display technologies go, each of the spatial light modulator technologies has its pros and cons.
I would be very concerned about Magic Leap’s image quality and resolution beyond gaming applications. Forget all those magazine writers and bloggers getting all geeked out over a demo with a new toy, at some point reality must set in.
Looking at what Magic Leap is doing and what I have seen in the videos, the effective resolution and image quality are going to be low compared to what you get even on a larger cell phone. They are taking a display device that could produce a good image (either 720p or maybe 1080p) under normal/simple optics and putting it through a torture test of optical waveguides and whatever optics are used to generate their focus planes at a rational cost; something has to give.
I fully expect to see a significant resolution loss no matter what they do, plus chroma aberrations and waveguide halos provided they use waveguides. Another big issue for me will be the "real world view" through whatever it takes to create the focus planes, and how it will affect, say, seeing your TV or computer monitor through the combiner/waveguide optics.
I would also be concerned about field sequential artifacts and focus plane sequential artifacts. Perhaps these are why there are so many double images in the videos.
Not to be all doom and gloom: based on casual comments from people that have seen it, and the fact that some really smart people invested in Magic Leap, it must provide an interesting experience, and image quality is not everything for many applications. It certainly could be fun to play with, at least for a while. After all, Oculus Rift has a big following even though its angular resolution is so bad that they cover it up with blurring, and it has optical problems like "god rays."
I'm more trying to level out the expectations. I expect it to be a long way from replacing your computer monitor, as one reporter suggested, or even your cell phone, at least for a very long time. Remember that this has so much stuff in it that, in addition to the head-worn optics and display, you are going to have a cable down to the processor and battery pack (a subject I have only barely touched on above).
Yes, yes, I know Magic Leap has a lot of smart people and a lot of money (and you could say the same for Hololens), but sometimes the problem is bigger than all the smart people and money can solve.
The first step in understanding Magic Leap is to remove all the clutter/noise that ML has generated. As my father used to say, "there are two ways to hide information: you can remove it from view or you can bury it." Below is a list of the big things that are discussed by ML themselves and/or in their patents that are either infeasible or impossible any time soon.
It would take a long article on each of these to give all the reasons why they are not happening, but hopefully the comments below will at least outline the why:
A number of people have picked up on this, particularly because the co-founder and Chief Scientist, Brian Schowengerdt, developed it at the University of Washington. The FSD comes in two "flavors": the low-resolution single FSD and the Arrayed FSD.
1) First, you are pretty limited in the resolution of a single mechanically scanning fiber (even more so than with mirror scanners). You can only make it spiral so fast, and it has its own inherent resonance. It makes an imperfectly spaced circular spiral that you then have to map a rectangular grid of pixels onto. You can only move the fiber so fast; you can trade frame rate for resolution a bit, but you can't just make the fiber move faster with good control and scale up the resolution. So maybe you get 600 spirals, but that only yields maybe 300 x 300 effective pixels in a square.
2) When you array them, you then have to overlap the spirals quite a bit. According to ML patent US 9,389,424, it would take about 72 fiber scanners to make a 2560×2048 array (about 284×284 effective pixels per fiber scanner) at 72 Hz.
3) Let's say we only want 1920×1080, which is where the better microdisplays are today; that is about 1/2.5 the pixels of the 2560×2048 array, or about 28 fiber scanners. This means we need 28 x 3 (red, green, blue) = 84 lasers. A near-eye display typically outputs between 0.2 and 1 lumen of light, and you then divide that output across the 28 scanners. So you need a very large number of really tiny lasers that nobody I know of makes (or may even know how to make). You have to have individual, very fast switching lasers so you can control them totally independently and at very high speed (on-off in the time of a "spiral pixel").
4) So now you need to convince somebody to spend hundreds of millions of dollars in R&D to develop very small and very inexpensive direct lasers, green in particular (the cheap green lasers you find in laser pointers won't work because they switch WAY too slowly and are very unstable). Then, after they spend all that R&D money, they have to sell them to you very cheap.
5) Laser combining into each fiber. You then have the other nasty problem of getting the light from 3 lasers into a single fiber; it can be done with dichroic mirrors and the like, but it has to be VERY precise or you miss the fiber. To give you some idea of the "combining" process, you might want to look at my article on how Sony combined 5 lasers (2 red, 2 green, and 1 blue for brightness) for a laser mirror scanning projector: https://www.kguttag.com/2015/07/13/celluonsonymicrovision-optical-path/. Only now you don't do this just once but 28 times. This problem is not impossible, but it requires precision, and precision costs money. Maybe if you put enough R&D money into it you can make it on a single substrate. BTW, in the photo of the Magic Leap prototype (https://www.wired.com/wp-content/uploads/2016/04/ff_magic_leap-eric_browy-929×697.jpg), it looks to me like they didn't bother combining the lasers into single fibers.
6) Next, to get the light injected into a waveguide, you need to collimate the arrays of cone-shaped light rays. I don't know of any way, even with holographic optics, to collimate this light, because you have overlapping rays of light going in different directions. You can't collimate the individual cones of light rays, or there is no way to get them to overlap to make a single image without gaps in it. I have been looking through the ML patent applications, and they never seem to say how they will get this array of FSDs injected into a waveguide. You might be able to build one in a lab by diffusing the light first, but it would be horribly inefficient.
7) Now you have the issue of how you are going to support multiple focus planes. 72 Hz is not fast enough to do it field sequentially, so you have to put in parallel arrays, multiplying everything by the number of focus planes. The question at this point is how much more than a Tesla Model S (starting at $66K) it would cost in production.
I think this is a big ask when you can buy an LCOS engine at 720p (and probably soon 1080p) for about $35 per eye. The theoretical FSD advantage is that it might be scalable to higher resolutions, but you are several miracles away from that today.
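Pulling the arithmetic from points 2 and 3 above together (starting from the 72-scanner, 2560×2048 figure in the '424 patent; the rest is simple scaling on my part):

```python
import math

# Scaling arithmetic for an arrayed fiber scanning display (FSD),
# anchored to the ~72 scanners for 2560x2048 figure from US 9,389,424.

REF_SCANNERS = 72
REF_PIXELS = 2560 * 2048                        # ~5.2 Mpixels from 72 scanners
PIXELS_PER_SCANNER = REF_PIXELS / REF_SCANNERS  # ~73k effective pixels per fiber

def scanners_needed(width, height):
    return math.ceil(width * height / PIXELS_PER_SCANNER)

def lasers_needed(width, height):
    return 3 * scanners_needed(width, height)   # one red, green, and blue laser per fiber

for w, h in ((1280, 720), (1920, 1080), (2560, 2048)):
    print(f"{w}x{h}: ~{scanners_needed(w, h)} scanners, ~{lasers_needed(w, h)} lasers")
# 1920x1080 works out to ~29 scanners and ~87 lasers, in line with the ~28/84 above.
```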
There is no way to support any decent resolution with light fields in something that is going to fit on anyone's head. It takes about 50 to 100 times the simultaneous image information to support the same resolution with a light field. Not only can't you afford to display all the information needed to support good resolution, it would take an insane level of computer processing. What ML is doing is a "shortcut" of multiple focus planes, which is at least possible. The "light wave display" is insane-squared: it requires the array of fibers to be in perfect sync, among other issues.
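To put a rough number on that multiplier (my own illustrative arithmetic): a light field replaces each displayed pixel with a grid of directional views, so the raw data rate scales with the number of views per pixel.

```python
# Rough data-rate comparison: a conventional 2-D display vs. a light field with
# an 8x8 grid of directional views per pixel. Illustrative numbers only.

WIDTH, HEIGHT = 1920, 1080
FPS = 60
BYTES_PER_PIXEL = 3  # 24-bit color

def data_rate_gb_per_sec(views_per_pixel=1):
    return WIDTH * HEIGHT * FPS * BYTES_PER_PIXEL * views_per_pixel / 1e9

print(f"2-D display:          {data_rate_gb_per_sec(1):.2f} GB/s")
print(f"8x8-view light field: {data_rate_gb_per_sec(64):.1f} GB/s")
# ~0.37 GB/s vs ~24 GB/s, i.e. a 64x jump, consistent with the 50-100x range above.
```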
ML patents show passive waveguides with multiple displays (fiber scanning or conventional) driving them. It quickly becomes cost prohibitive to support multiple displays (2 to 6, as the patents show), all with the resolution required.
Several of their figures show electrically controlled variable focus element (VFE) optics on either side of the waveguides, with one set changing the focus of a frame-sequential image plane while a second set of VFEs compensates so the "real world" view remains in focus. There is zero probability of this working without horribly distorting the real-world view.
Active Switching Waveguides – ML patent applications show many variations, and these have drawn attention from other articles. The complexity of making them and the resultant cost is one big issue. There would likely be serious degradation of the view through all the layers and optical structures to the real world. Then you have the cost, both in displays and in optics, of getting images routed to the various planes of the waveguide. ML's patent applications don't really say how the switching would work, other than that they might use liquid crystal or lithium niobate, but there is nothing to show they have really thought it through. I put this in the "unlikely" rather than "impossible" category because companies such as DigiLens have built switchable Bragg gratings.
> This means we need 28 x 3 (Red, Green, Blue) = 84 lasers.
Not true; with optical switching you can send the same lasers down different pipes very rapidly over time.
No, I was correct for two reasons:
1) Optical switching would be more complex and costly than having separate lasers.
2) Remember that the lasers have to turn on and off on a (spiral) pixel by pixel basis for each FSD to work.
> 1) Optical switching would be more complex and costly than having separate lasers.
I’ll trust you on the expense, but remember you said green lasers that can modulate on and off fast enough basically don’t exist, and would have to be invented. Maybe they wouldn’t be cheaper =).
> 2) Remember that the lasers have to turn on and off on a (spiral) pixel by pixel basis for each FSD to work.
Could it be possible to switch within each pixel? You have ~170ns to play with. Tunable lasers can change wavelength in a couple nanoseconds in some cases. Then if you could selectively filter, you’d have a toggle. Just throwing that out, I don’t know the feasibility of that for the different colors, etc.
If not, you could also potentially do interlacing patterns; eyes are less sensitive to red and blue, so you could run them at half resolution, giving a third fewer lasers.
Or how about this alternative:
What if the fibers oscillated on both ends, with the end on the belt pack acting as a camera instead of a projector (the invention was originally used as a surgical camera, if I remember right), both oscillating in sync? Then it would just have DLP or LCOS chips in the belt pack but still use fiber projectors for output to the waveguide and eye.
Despite ML's secrecy, you succeeded in writing a very thorough review, even by your standards. Thanks.
However, it seems you have an agenda against ML. My attitude, given that smart people invested in them, is that they are not guilty unless proven otherwise. Concretely, I tend to agree with you regarding everything you wrote in the appendix, but your skepticism regarding the image quality of the feasible technologies might prove wrong. From my experience, human vision knows how to compensate for a few types of image flaws. Moreover, a smart VR/AR system could potentially take more advantage of human vision's ability to compensate for a poor image if the system is able to probe the eye state (direction of gaze and focal distance) in real time.
Do you believe each waveguide in the stack is a surface relief grating (like Hololens)?
Do they really need a waveguide per focus plane? Could they not time sequence focal planes through the same waveguide?
I have no idea how they might make the waveguide.
The problem, as I tried to point out, is that a waveguide wants collimated light to work. Collimated light is essentially light focused at infinity. So "close focus"/non-collimated light won't work in a waveguide. There are alternatives to using a waveguide, such as mirrors/prisms that don't depend on TIR, and they show some of these in their patents.
Hello. Just wondering if the technique used in this recent volumetric display is covered in some of your descriptions above:
https://www.google.ca/amp/s/www.engadget.com/amp/2016/09/28/volume-is-a-1-000-holographic-display-for-your-home/
Thanks
No, that is something quite different; it is a volumetric display. It reminds me of any of a number of volumetric displays I have seen through the years. Most of them involved a rotating plate that you project onto, such as shown at this link: http://forum.allaboutcircuits.com/threads/a-3d-volumetric-display-project.111648/. It would take some time to try and figure this one out, as it does not appear to have a moving head.
Sorry, I wasn’t clear enough.
I know it's a volumetric display, and most are swept.
I've seen many in person, in fact, but this one stood out to me in that it is stationary. It takes 10 or so horizontal strips from a standard screen and stretches and layers them in Z, using some type of optics I didn't quite follow.
It loosely covered some of the challenges you addressed in terms of optic quality, when ML may be attempting to stack multiple images together.
Here’s another vid showing more of the optics:
https://youtu.be/NKTfP56rpDA?t=358
Cheers
What I've heard regarding the output in some articles is that people were mistaking the virtual elements for real objects, and that a glow and other measures have been put in place to make the virtual elements more distinguishable and harder to mistake for real.
I don’t mind “artistic effects” but I don’t think that is what these are. I have seen these halos in the “A New Morning” video as well and those were simple shapes.
Despite ML's confidentiality, you succeeded in writing a very thorough review, even by your standards. Thanks.
However, it seems you have an agenda against ML. My attitude, given that smart people invested in them, is to give them credit unless proven otherwise. Concretely, everything you wrote in the appendix sounds solid to me, but I don't share your skepticism regarding the image quality of the feasible technologies. From my experience, human vision knows how to compensate for a few types of image flaws. Moreover, a smart VR/AR system could potentially take more advantage of human vision's ability to compensate for a poor image if the system is able to probe the eye state (direction of gaze and focal distance) in real time.
I'm trying to evaluate what ML is doing. It is a puzzle I am trying to solve. It is a somewhat iterative process, and I am putting stuff out there and some people are challenging my conclusions. I am now investigating one of the options I initially discounted that is certainly worth a second look.
It may come off as an agenda, but I am trying, as an engineer, to get what they are saying to fit the evidence. You have to challenge some assumptions and realize that companies often have a good bit of marketing spin in what they are doing. It is the "scientific method," so to speak. I do start to wonder when someone makes big claims but has not shown them publicly.
There are lots of smart people in the world, and I assume that ML has smart people. Microsoft Hololens has a lot of smart people and so did Google Glass, but those smart people could not do what ML is talking about doing. Maybe Magic Leap has made a great breakthrough; it is possible. It is also possible that while they have some big discoveries in one area, they will find they need other breakthroughs to support them.
It is very possible that Magic Leap is taking advantage of knowing where the eye is looking.
They are reported to be using Eyefluence for eye tracking. Would it be possible to reduce the number of lasers by only illuminating the portion of the screen being looked at?
There is definitely a chance that they could be using eye tracking, but trying to do as you suggest would be a nightmare to build and control.
They have an application, US 2016/0328884, where they use a single FSD and change the density of the spiral lines based on where the eye is looking. I discarded this application in my original analysis, and I am circling back to look at it. On first and now second look, I'm not sure it has enough "lift" to give a single FSD enough resolution where it is needed.
It is almost certain that they will use eye tracking and foveated rendering. They have looked at both SMI and Eyefluence and seem to be going with Eyefluence. Also, they only need multiple planes where the eye is looking and one plane everywhere else. The public videos they post cannot use foveated rendering so there is probably a difference in technology used even though they say “through the optics”.
Thank you!
It's great to have someone with the expertise to look into the patents and hard information and give a report.
Even though I only understood 1/10th of what you were saying about the technical side, the marketing is off the charts: A secret sucks people in like nothing else! I agree that expectations need to be muted significantly.
It is a twisted path. To some degree I am learning by doing. To really understand something, it helps to try and explain it to someone else. I find as I do that, I'm forced to fill in holes that are easy to mentally gloss over. Hopefully it is not too tough to read.
I also employ a loose version of the "Socratic Method" (famously used in the movie The Paper Chase), basically challenging things and having a back-and-forth discussion where theories have to be backed up with facts/evidence.
Out of that back-and-forth discussion has come a different direction. I finally can see a path for coming close to ML's claims. I still have some issues with it, but there was a bit of an epiphany yesterday in going back and forth with someone on Reddit. I looked at a patent the person suggested, one I had looked at before and too quickly dismissed; looking at it more seriously, it appears it could work, and it would dramatically alter my prior conclusions.
Hi Karl,
I write over at GPU of the Brain, and I just wanted to say that these articles are really great. Keep them coming! I've posted about them here: http://gpuofthebrain.com/blog/2016/11/21/opposing-evidence-is-magic-leap-actually-boring
Thanks,
I enjoyed your site and pointed to it. I'm trying to take a more skeptical and engineering approach. If it is "real," then you should be able to prove it. The array of fibers is total fantasy and a misdirection in trying to understand ML. But there is a path (I don't know if it is manufacturable yet, but if it is, there is a path) to get the "effective resolution" via a single fiber and eye tracking. There is still the issue of whether the optics are up to it.
BTW, the "through the optics" videos ML shows don't use this technique, as no camera could video it properly (it would just be a blurry mess) and it requires eye tracking. The picture of what ML is doing is becoming clearer.
I hope to get an article up explaining this in a few days.
Thanks, Karl. I always appreciate critical thinking applied to market-driven claims of utility or performance. One of the most difficult problems I think about is object obscuration. One can project a virtual object at some distance, but getting it to truly obscure a more distant real object is a problem I don’t see addressed. The ML videos I’ve seen simply use low ambient light levels to mask the problem.
Have you looked at this problem?
Hi Steve, I hope all is well with you.
The only occlusion they are doing is to put their virtual objects behind real ones. They are scanning the real world in real time. In the videos they have shot, there are some registration issues of a few pixels. For showing virtual objects on top, they are just combining light and therefore just dominating the real-world light, which, as you suggest, is why the real-world light levels are kept low.
ML's main claim to fame is supporting "focus planes" so that the focus can be based on where the eye is focused, whereas (as you know) most near-eye displays focus at or near infinity.
Hololens also does "mixed reality" by scanning the real world so they can put virtual objects on top of things (I don't know if it deals with hiding behind real-world objects, but I don't see how that would be much different in terms of processing).
Thanks for the article. Excellent summary and sound technical insight, but can you please proofread/correct it?
There are so many grammatical errors that it’s really hard to focus on the content.
I have to read everything 3 times.
Fixing those, would take it from a good article to a superb one!
Gabor, please! In reading your post, I could also find fault with not only punctuation but present and past tense. Karl did a great job as an engineer, and any grammatical errors were minor.
Thank you Karl. I had also spent some time on their patents back in 2015-2016 and came to many of the same conclusions. Not being a substrate engineer, I wasn't sure at what level they could achieve this, but one thing that did stick out was the computational power needed to handle parallel layers as well as all the positional/rotational tracking and eye tracking (as is now known) in a mobile package. As with the Hololens, many of its limitations are due to what can be done in a small package with a mobile CPU/GPU, even if they are using one of the best on the market currently, namely the NVIDIA TX2. Even then, that will not have the processing power to provide more than a small image with poor dynamic range, with a less-than-desirable update rate for the tracking, occlusion, and plane switching that is already being seen in many of the demos. All this would be okay, but I feel that the engineering hurdles are still many years off and may dampen and disappoint those who believed in their magical marketing.
PS I plan to bring up your blog tomorrow (9/24/2018, 2:30 pm PST) as a great read for my viewers.
Thanks. As it turned out (this article was written nearly 2 years ago), they were only able to support 2 planes and they only display them one at a time. Based on eye tracking and where the eyes appear to verge, they select one of the two planes. Displaying both at the same time would cause visual problems.