So what is Magic Leap doing? That is the $1.4 billion question. I have been studying their patents as well as videos and articles about them, and frankly a lot of it does not add up. The “Hype Factor” is clearly off the chart, with major and high tech news/video outlets covering them and a major marketing machine spending part of the $1.4B, yet no device has been shown publicly, only a few “through the Magic Leap” online videos (6 months ago and 1 year ago). Usually something this over-hyped ends up like the Segway (I’m not the first to make the Segway comparison to Magic Leap) or, more recently, Google Glass.
Magic Leap appears to be moving on many different technological fronts at once (high resolution fiber scanning display technology, multi-focus combiner/light fields, and mega-processing to support the image processing required), which is almost always a losing strategy even for a large company, no less a startup, albeit a well funded one. What’s more, and the primary subject of this article, they appear to be moving on many different fronts/technologies with respect to the multi-focus combiner.
The image above from Wired in April 2016 and other articles talk about a “photonic chip,” a marketing name for their combiner that is not used in any of their patent applications that I could find. By definition, a photonic device would have some optical property that is altered electronically, but based on other comments made by Magic Leap and looking at the patents, the so-called “chip” is just as likely a totally passive device.
It is also well known that Magic Leap is working on piezo scanned laser fiber displays, a display technology initially developed by Magic Leap’s CTO while at the University of Washington (click left for a bigger image). Note that it projects a spiraling cone of light.
A single scanning fiber display is relatively low resolution, so achieving Magic Leap’s resolution goals will require arrays of these scanning fibers, as outlined in their US Application 2015/0268415.
Magic Leap is moving in so many different directions at the same time. I plan on covering the scanning fiber display in much more detail in the near future.
A key concept running through everything about Magic Leap is that their combiner supports at least multiple focus depths at the same time. The term “Light Fields” is often used in connection with Magic Leap, but what they are doing is not classic light fields such as Nvidia has demonstrated (a very good article and video is here), or even what Stanford’s Gordon Wetzstein describes in his work on compressive light field displays (example here) and several of his YouTube videos, in particular this one that discusses light fields and the compressive display. (More on this background at the end.)
A key thing to understand about “light fields” and Magic Leap’s multi-focus planes is that they are based on controlling the angles of the rays of light, because the angle of the rays controls the apparent focus distance. The rays of light that will make it through the eye’s pupil from a point on a far away object come in nearly parallel, whereas the rays from a nearby point have a wider range of angles.
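To make the geometry concrete, here is a minimal sketch (my own illustration, not from any Magic Leap document) of the half-angle of the cone of rays from a point that makes it through a typical 4 mm eye pupil:

```python
import math

def ray_half_angle_deg(distance_m, pupil_diameter_mm=4.0):
    """Half-angle (in degrees) of the cone of rays from a point source
    that can make it through the eye's pupil at a given distance."""
    pupil_radius_m = (pupil_diameter_mm / 1000.0) / 2.0
    return math.degrees(math.atan(pupil_radius_m / distance_m))

for d in (0.25, 0.5, 1.0, 2.0, 10.0, 100.0):
    print(f"{d:6.2f} m -> half-angle {ray_half_angle_deg(d):.4f} degrees")
```

At 25 cm the half-angle is roughly half a degree; at 100 m it is about a thousandth of a degree, which is to say the rays are effectively parallel.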
Magic Leap’s patents show a mix of related and very different types of waveguide combiners. Most in line with what Magic Leap talks about in the press and videos are the ones that include multi-plane waveguides and scanned laser fiber displays. These include US patent applications US20150241705 (‘705) and the 490 page US20160026253 (‘253). I have clipped out some of the key figures from each below (click on the images to see larger images).
Fig. 8 from the ‘705 patent uses a multi-layer electrically switched diffraction grating waveguide (but they don’t say what technology they expect to use to cause the switching). In addition to switching, each diffraction grating makes the image focus differently, as shown in Fig. 9. While this “fits” with the “photonic chip” language by Magic Leap, I’m less inclined to believe this is what Magic Leap is doing based on the evidence to date (although Digilens has developed switchable SBGs in their waveguides).
Fig. 6 likely comes closer to what Magic Leap seems to be working on, at least in the long term. In this case there are one or more laser scanning fiber displays for each layer of the diffraction grating (similar to Fig. 8 but passive/fixed). The grating layers in this setup are passive, and whichever display is “on” selects the grating layer and thus the focus. Also note the “collimation element 6” between the scanning fibers 602a-e and the waveguide 122; it takes the cone of rays from the spiral scanning fiber and turns them into an array of parallel (collimated) rays. Below shows a prototype from the June 2016 Wired article with two each of red, green, and blue fibers per eye (6 total), which would support two simultaneous focus points (in future articles I plan on going into more about the scanning fiber displays).
Above I have put together a series of figures from Magic Leap’s US patent application 2015/0346495. Most of these are different approaches to accomplish essentially the same effect, namely to create 2 or more images in layers that appear to be in focus at different distances. In some approaches they generate the various focused images time sequentially and rely on the eye’s persistence of vision to fuse them (the Stanford compressive display works sequentially). You may note that some of the combiner technologies shown above are not that flat, including what is known as “free form optics” (Fig. 22G above), which would be compatible with a panel (DLP, LCOS, or Micro-OLED) display.
To the left is patent application 2015/0346495, which shows a very different optical arrangement with a totally different set of inventors from the prior patents. This device supports multiple focus effects via a Variable Focus Element (VFE). What they do is generate a series of images sequentially, change the focus between images, and use the persistence of the human visual system to fuse the various focused images.
This is a totally different approach to achieve the same effect. It does require a very fast image generating device, which would tend to favor DLP and OLED over, say, LCOS as the display device. I have questions as to how well the time sequential layers will work with a moving image and whether there would be a temporal breakup effect.
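As a back-of-the-envelope check on why the display has to be so fast (the numbers here are my assumptions, not Magic Leap’s), consider the field rate needed when the focal planes are time sequential on top of field sequential color:

```python
def required_field_rate(focal_planes, frame_rate_hz=60, color_fields=3):
    """Fields per second the display must deliver when focal planes are
    time sequential and color is also field sequential."""
    return focal_planes * color_fields * frame_rate_hz

for planes in (2, 4, 6):
    print(f"{planes} focal planes -> {required_field_rate(planes)} fields/s")
```

Six focal planes at 60 Hz already implies over 1,000 color fields per second, which I would expect to be within reach for a DLP but well beyond typical LCOS field rates.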
There are also a number of patents with totally different optical engines and totally different inventors (and not principals of Magic Leap) with free-form (very thick/non-flat) optics, 20160011419 and 20160154245, which would fit with using an LCOS (or DLP) panel instead of the laser fiber scanning display.
I have heard from more than one source that at least some early prototypes by Magic Leap used DLPs. This would suggest some form of time sequential focusing.
“Edge injection” waveguide – There needs to be an area to inject the light. All the waveguide structures in Magic Leap’s patents show “side/edge” injection of the image. Compare this to Microsoft’s Hololens (at right), which injects the image light in the face of the waveguide (highlighted with the green dots). With an edge-injected waveguide, the waveguide would need to be thicker for even a single layer, no less the multiple layers with multiple focus distances that Magic Leap requires.
Lumus (at left) has a series of exit prisms similar to a single layer of the Magic Leap ‘495 application Figs. 5H, 6A, 8A, and 10. Lumus does edge injection but at roughly a 45 degree angle (see circled edge), which gives more area to inject the image and gets the light started at an angle sufficient for Total Internal Reflection (TIR). There is nothing like this in the Magic Leap chip.
Looking at the Magic Leap “chip” (right), there is no obvious place for light to be “injected.” One would expect to see some discernible structure, such as an angled edge or a structure like the one in the ‘705 application Fig. 8, for injecting the light. Beyond this, what about injecting the multiple images for the various focus layers? There is a “tab” at the top which would seem to be either for mounting, or it could be a light injection area for surface injection like Hololens, but then I would expect to see some blurring/color or other evidence of a diffractive structure (like Hololens has) to cause the light to bend about 45 degrees for TIR in such a short distance.
Another concern is that you don’t see any structure other than some blurring/diffusion in the Magic Leap chip. Notice in both the Lumus and Microsoft combiners you can see structures, a blurring/color change in the case of Hololens and the exit prisms in the case of Lumus.
Beyond this, if they are using their piezo scanned laser fiber display, it generates a spiraling cone of light that has to be collimated (the light rays made parallel, which is shown in the patent applications) so they can make their focus effects work. There would need to be a structure for doing the collimation. If they are using a more conventional display such as DLP, LCOS, or MicroOLED, they are going to need a larger light injection area.
My conclusion is that at best this Magic Leap chip shown is either part of their combiner (one layer) or just a mock-up of what they hope to make someday. I haven’t had a chance to look at or through it, and anyone that has is under NDA, but based on the evidence I have, it seems unlikely that what is shown is functional.
I’m curious to see how small/critical the pupil/eyebox will be for their combiner. On the one hand they want light at the right angles to create the focusing effects, and on the other hand they will want diverse/diffused light to give a large enough pupil/eyebox, which could be at cross purposes. I’m wondering how critical it will be to position the eye in precisely the right place. This is a question and not a criticism, per se.
I had been studying the various patents and articles for some time, and then last week’s Business Insider article (see: http://www.businessinsider.in/Magic-Leap-could-be-gearing-up-for-a-2017-launch/articleshow/55097808.cms) threw a big curve ball. The article quotes KGI Securities analyst Ming-Chi Kuo as saying:
“the high cost of some of Magic Leap’s components, such as a micro projector from Himax that costs about $35 to $45 per unit.”
I have no idea whether this is true or not, but if true it suggests something very different. Using a Himax LCOS device is inconsistent with just about everything Magic Leap has filed patents on. Even the sequentially focusing display would at best be tough with the Himax LCOS, as it has a significantly lower field sequential rate than DLP.
If true, it would suggest that Magic Leap is going to put out a “Magic Leap Very Lite” product based around some of their developments. Maybe this will be more of a software, user interface, and developer device. But I don’t see how they get close to what they have talked about to date. The highest resolution Himax production device is 1366×768.
Both are based on greatly reducing the image content from the general/brute force case so that a feasible system might be possible. The Stanford approach is different from what Magic Leap appears to be doing. The Stanford system has a display panel and a “modulator” panel that selects the light rays (by controlling the angle of light that gets through) from the display panel. In contrast, Magic Leap generates multiple layers of images with a different focus associated with each layer in an additive manner. This should mean that the two approaches will have to handle things like “occlusion,” where parts of an image hide what is behind them, differently (occlusion would seem to be more easily dealt with in the Stanford approach, I would think).
A key point that Dr. Wetzstein makes is that brute force light fields (à la Nvidia, which hugely sacrifices resolution) are impractical (too much to display and too much to process), so you have to find ways to drastically reduce the display information. Dr. Wetzstein also comments (a passing comment in the video) that the problems are greatly reduced if you can track the eye. Reducing the necessary image content has to be at the heart of Magic Leap’s approach as well. All the incarnations in the patent art and Magic Leap’s comments point to supporting two or more focus points simultaneously. Eye tracking is another key point in Magic Leap’s patents.
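To put a rough number on “too much to display,” here is the raw data rate of a brute-force light field under purely hypothetical parameters of my choosing:

```python
def light_field_gbits_per_s(spatial_pixels, view_count, fps=60, bits_per_pixel=24):
    """Raw data rate of a brute-force light field: every spatial pixel is
    repeated for every view direction, every frame."""
    return spatial_pixels * view_count * bits_per_pixel * fps / 1e9

# Assumed numbers: 1920x1080 spatial resolution, an 8x8 grid of views, 60 fps
print(f"{light_field_gbits_per_s(1920 * 1080, 8 * 8):,.0f} Gbit/s")  # ~191 Gbit/s
```

Around 190 Gbit/s of raw pixel data for even a modest light field is why everyone is looking for ways to drastically reduce the content.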
One might wonder: if you can eye track and tell the focus point of the eyes, could you eliminate the need for the light field display altogether and generate an image that appears focused and blurred based on the focus point of the eye? Dr. Wetzstein points out that one of the big reasons for having light fields is to deal with the eye’s focus not agreeing with where the two eyes are aimed.
Summing it all up, I am skeptical that Magic Leap is going to live up to the hype, at least anytime soon. $1.4B can buy a lot of marketing as well as technology development, but it looks to me that what Magic Leap wants to do is not going to be feasible for a long time. Assuming they can make it work at all (I wonder about the fiber scanning display), there is then the issue of feasibility (the Concorde SST airplane was “possible,” but it was not “feasible,” for example).
If they do enter the market in 2017 as some have suggested, it is almost certainly going to be a small subset of what they plan to do. It could be like Apple’s Newton that arguably was too far ahead of its time to fulfill its vision or it could be the next SST/Segway.
Next time I am planning on writing about Magic Leap’s scanning fiber display.
Karl,
Great article.
If you check the USPTO site regarding the prosecution of ML patent application 20150268415, you will see that it received a Final Rejection of all claims on 4/1/16. An RCE (Request for Continued Examination) was then filed by ML on 10/3/16. On 11/4/16 another Non-Final Rejection of all claims was made by the examiner.
Do you think this could affect the timeline of a product using their piezo scanned laser fiber display?
Some on this Reddit thread suggested that not having this patent wouldn’t be a factor.
https://www.reddit.com/r/magicleap/comments/5auzat/any_comments_regarding_key_patent_rejection/
Thanks,
I was just about to check out Reddit. I spent a lot of time preparing for the ML article trying to figure out how to explain it to someone that does not know about “light fields.” I’m shooting for a semi-technical audience that may not understand optics. I looked at a number of other articles and videos about ML and had to cringe sometimes, but even I had to generalize a lot to try and keep from going down a details rabbit hole.
I know a good bit about patent prosecution (disclaimer: I am NOT a lawyer nor an agent). As a matter of background, my father was and two brothers are patent lawyers, I am an inventor on 150 US patents, I have been intimately involved in filing some of my patents, and I have been a technical expert in patent litigation. I don’t know your background in patents, but in typical patent prosecution (the act of getting a patent) it is very routine to get rejected and even “finaled.” In fact, the system by which examiners get credit rewards “finally rejecting” patents; all a final means is that you lost with the examiner twice and you have to pay more fees to get a “continuation.” Sometimes the rejections are solid/good, but many times they are almost ludicrous. A few years ago a case known as KSR v. Teleflex https://en.wikipedia.org/wiki/KSR_International_Co._v._Teleflex_Inc. had the net effect of making it much easier for examiners to reject claims without providing a solid foundation. This has been a gift to some examiners to final patents with less-than-good reasons and get more “office actions” credited (and the patent office more fees). Without looking through the patent file wrapper (which is public) and reading everything, it would be hard to have an opinion on the specific case.
A company would be crazy to delay a product introduction based on patent prosecution, as the length of time to get a patent is totally unpredictable. Some patents can take many years to make it through. You can file continuations and divisionals (new claims) throughout the 20 years after filing. It should have ZERO impact on when they bring the product to market.
I’m planning my next article on the laser fiber scanning display, which is closely related to mirror-based laser beam scanning (both originally came out of the U of Washington). Frankly, I think the thing holding the fiber scanning display back is its feasibility and practicality, and it has zero to do with patents. Like old LBS, it seems a lot simpler to do than it is, and scaling it up for higher resolution (it is an inherently low resolution process with electro-mechanical fiber scanning) by arraying multiple displays seems fraught with problems.
I will go and check out the Reddit Thread.
Karl,
Thanks for the clarification re patents.
In your next article, can you address whether you believe laser safety will be an issue with a piezo scanned laser fiber solution for AR? My concern is that no testing has been done for this application. Oftentimes the progression for new tech is from the military to the consumer, which I don’t believe has happened in this situation, other than perhaps some very limited usage of the Microvision NOMAD. AR devices will be used by people for long periods of time. Also, there is the negative perception of laser light being projected into the eye, whether it is a safety issue or not.
TIA
Good point, but laser safety is probably NOT one of their problems technically, though it could be a marketing issue. The problem with a laser projector is that it sends a large amount of lumens out that hit a screen and then scatter around a room, and less than 0.000001% of the light makes it into your eye in NORMAL use. The danger is if your eye were directly looking into the lens.
With a near eye display, a high percentage of the light (not all of it, due to supporting an “eyebox”) that is generated goes into the eye. You are typically looking at less than 1 lumen of total light, and it is spread over the eyebox. Ironically perhaps, because it is designed to shoot into your eye (and the light levels are tremendously reduced accordingly), it should be safe.
Using an array of scanning lasers, as some Magic Leap information suggests, would further reduce the risk by spreading out the light (each sub-projector would cover only a part of the total area and thus be individually dimmer than a single projector), a bit more like a panel display (LCOS, DLP, or OLED).
There still may be a perception problem, and I know of technical people that did not want to even try out the Microvision near eye devices (“let someone else be the Guinea Pig”). You would still have the safety concern of shutting down the laser if the scanning stopped (making sure whatever detects the problem is 100% foolproof). But it is a much more tractable problem than for a projector.
So of ALL the things I might fault Magic Leap for, this is not one of them.
On patents. The Patent Litigator (which I believe he/she was) commenting on Reddit definitely knew his/her stuff so all I could do is agree (and I pretty much wrote the same thing before seeing what he/she wrote).
Bottom line, the patent process is a crap shoot, and they keep changing the rules. You can’t as a business let the patent process affect you or you will sit there and do nothing.
The whole process has been upended by the KSR ruling by the Supreme Court and the “America Invents Act” (written by anti-patent lobbyists) that screwed up everything even more. My dad, who was a patent lawyer, always said, “The Supreme Court rarely rules on a patent case, and when they do they always get it wrong.” On top of this, the current Patent Commissioner (and former head patent counsel of Google) is very anti-patent. The impact of these actions is very anti-small companies/inventors, and it will take at least 10 years to figure out how they will affect things (until they have played out in court, which usually happens 10 or more years after the patents are filed).
Karl ,
A couple more points regarding lasers.
You mentioned reducing the laser light down to 1 lumen, for example, to make it safe. Would this be bright enough for an AR device? From what I can gather, around 3,000 nits is what’s needed for a device that would be workable outside. Perhaps even more nits would be needed with more sunlight.
Also, Figure 3 in your initial post shows an RGB combiner. Would the combiner be a MEMS or LCOS, or is there some other method?
Good questions.
One lumen should be way more than enough to support 3,000 nits because the eyebox/pupil is small. For an automotive HUD, which requires about 15,000 nits in direct sunlight, I was able to get over 1,000 nits per lumen supporting a comparatively huge eyebox of more than 20 square inches. For near eye you have an eyebox/pupil typically much less than a square inch (or less than 1/20th the area). So supporting more than 20,000 nits with 1 lumen for near eye should be no problem, at least for LCOS and DLP (it was still difficult for transmissive displays and MicroOLED to reach these levels of nits, the last I heard).
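Here is a minimal sketch of the scaling I am describing, anchored to my automotive HUD numbers (the inverse-area scaling is a simplification that assumes a fixed field of view):

```python
def nits_per_lumen(eyebox_sq_in, ref_nits_per_lumen=1000.0, ref_eyebox_sq_in=20.0):
    """Rough nits per lumen, scaled inversely with eyebox area from my
    HUD datapoint (~1,000 nits/lumen over a ~20 square inch eyebox)."""
    return ref_nits_per_lumen * ref_eyebox_sq_in / eyebox_sq_in

print(f"1.0 sq-in eyebox: {nits_per_lumen(1.0):,.0f} nits per lumen")  # 20,000
print(f"0.5 sq-in eyebox: {nits_per_lumen(0.5):,.0f} nits per lumen")  # 40,000
```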
The R, G, B combiner is passive optics, generally using dichroic (color selective) mirrors. It is what any laser scanning projector has to do (see my article on the Sony engine that combines 5 lasers: https://www.kguttag.com/2015/07/13/celluonsonymicrovision-optical-path/). It is a very common thing to do. With laser beam combining in the case of Magic Leap, you have to be very precise in all 6 axes of freedom so that all of the lasers will enter the fiber reasonably parallel to each other.
Magic Leap’s patent application US 2015/0268415 shows using 84 sets of 3 lasers per eye (2 eyes, so 168 sets in total) to get their ~50 megapixel display. That is a LOT of very precise aligning of very small optical elements. Imagine the Sony engine linked to above, but rather than combining one set of 5 lasers, there are 168 sets of 3 lasers to combine. It certainly could be done more simply per set than what Sony did, but it is still a huge job.
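Just counting the parts makes the point (the 84-fibers-per-eye figure is my reading of the ‘415 application; the rest is simple arithmetic):

```python
fibers_per_eye = 84      # per US 2015/0268415, as I read it
lasers_per_fiber = 3     # one red, green, and blue laser combined per fiber
eyes = 2

print(f"{fibers_per_eye * lasers_per_fiber * eyes} lasers in total")        # 504
print(f"{fibers_per_eye * eyes} precision RGB combiner assemblies")         # 168
print(f"~{50_000_000 // fibers_per_eye:,} pixels needed per fiber scanner") # ~595,238
```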
Note that in the engine that Magic Leap has shown that I copied into this article, they are not combining the lasers but rather using 2 of each R, G, B color per eye. This would suggest that combining the lasers into a single fiber just 4 times was non-trivial (otherwise why didn’t they do it?).
And by the way you didn’t ask about speckle. A good bit of the Sony engine I think is dedicated to reducing speckle. This could be an issue if the lasers are used directly and don’t have these extra optical elements.
It’s easy (say on the Reddit Board) to trivialize all the “hidden” things that have to be done to make it work and there is a lot to do after you get the laser light in the fiber to A) produce the sub-displays and B) combine the sub-displays into a single reasonably uniform image.
Summarizing: brightness/nits I don’t see as being an issue. Combining the light is a big issue.
Karl, too bad you’re not ever going to get invited to Magic Leap’s headquarters to see what they have built. It’s way more than you could ever imagine, so sit back and keep speculating, because you sir show that you’re truly irrelevant.
I will put you down for having drunk the red Kool-Aid or as one who wants to remain totally ignorant. If you knew anything, you would give a cogent argument rather than just an ad hominem attack.
I must be relevant if what I am writing troubles you so much.
Excellent answer!!!!
Karl’s unwhimsical writings are refreshing and absolutely preferred, especially when the other option is either tremendously full of whim, or whackoism, or whatever they are up to.
Re laser safety and Microvision, the CEO again tried to deflect by blaming competition “tactics”.
If the laser safety issues are lies spread by the competition, why have they never refuted the well-known studies that show they have a problem?
I’m not sure what studies you are referring to, but below is a quote from the Seeking Alpha transcript of the conference call, with my bold highlight:
Alexander Tokman: “A final point that we’ve been getting a lot of inquiries about is laser safety, in particular on selling products with Sony engine in Europe. Using laser safety to combat LBS technology is an old tactic from our competition.
The standards are very clear and we believe selling products with Sony LBS engine in Europe is permissible under current standards as long as the guidelines on marketing for children are observed and the products are properly labeled.”
So who is going to make a high volume product like a cell phone where you have to make sure that it can’t be sold to children? Back when I was meeting with cell phone companies, they were nervous about anything above Class 1 laser products (totally eye safe under all conditions).
Studies see here:
http://www.laserpointersafety.com/picoprojectors/picoprojectors.html
Strange that MVIS didn’t respond if it’s just a tactic from the competition.
Microvision certainly dodges around the subject, but they did respond in their last conference call. Laser scanning projectors are measured differently than laser pointers due to the scanning, and this is taken into account (the laser inside a Class 3R projector is a Class 4 laser), which is what the papers by Ed Buckley (pointed to in your link) describe.
Quoting Mr. Tokman, CEO of Microvision, on their last conference call, “A final point that we’ve been getting a lot of inquiries about is laser safety, in particular on selling products with Sony engine in Europe.
Using laser safety to combat LBS technology is an old tactic from our competition. The standards are very clear and we believe selling products with Sony LBS engine in Europe is permissible under current standards as long as the guidelines on marketing for children are observed and the products are properly labeled.”
LBS becomes Class 3R a bit above 20 lumens, which gets into a safety class that is troublesome for selling to consumers in many countries. Even taking Mr. Tokman’s statement at face value, it is problematic to have a high volume product that you can’t market as being usable by children. One would think that cell phones and small projectors would be used by children.
Karl,
Thank you for taking the time to write such a thoughtful overview.
Quick question: You say that $1.4B can buy a lot of marketing. What is the marketing they have done? All I’ve seen thus far, really, are articles/interviews (which are certainly a form of marketing, but not one that requires a budget). So was just curious!
Thanks for your time.
bjh
There are lots of ways to market. You don’t think they got all that publicity without a professional marketing team effort? Certainly they have not bought a lot of advertising, but all the demos and placements cost money.
It is just a relatively small amount when they have a $1.4B investment. Of course, the fact that they raised so much money without having sold a product is a good free marketing magnet.
Hi Karl, I am a PhD student and have been following AR optics since the hype of Google Glass. Thanks for your insightful blog on AR-related optics; I learnt a great deal.
I am just wondering, have you noticed that the background (the wood at the right of the image) has been distorted/magnified? If this is real, then it means this combiner is a kind of lens, which will affect the real-world view. What do you think?
Based on your analysis and this strange behaviour, I agree with you that this is not a real functioning one. Let’s keep digging:)
Thanks,
A few things.
1. Could you be more specific about the “wood on the right side of the image”? In the “A New Morning” video they pan the image around, so the angular wood blocks on the table (if these are what you are referring to) move around from left to right in the frame. It would help if you could give a time window for when you are seeing what you think you are seeing. It would be very interesting if there are any changes in the apparent magnification, as this would suggest perhaps a free-form or curved combiner versus a flat waveguide, but it could also be caused by a diffraction grating.
2. I think you may be misinterpreting my comments. The videos appear to me to be REAL and functioning, shot through some kind of combiner optics. It may or may not be what they will use in a final product; it could be shot through some huge monster prototype for all we know. There is also evidence that they are indeed able to control the focus relative to depth (I’m working on a post that shows this), but I can’t be conclusive that it was done optically versus computer generated, because the optics are so poor/blurry and because of the resolution of the camera. What I am trying to do is discern what they are doing (and not doing). It’s very clear that the videos are not using the Fiber Scanning Display (FSD), but they have not said what they are using for the display device.
He’s referring to the still image of the photonic chip in Rony’s hand. It both diffracts and warps the wood grain of the wall.
Thanks Sean,
I thought he was talking about the wood blocks in the “A New Morning” video which I have been staring at for a week and was preparing to make a post about 🙂
Yes, definitely what Rony is holding up has something like a diffraction grating or other optical structure in it to bend the wood and blur it slightly; totally plain glass would not do that. Since it looks flat and Magic Leap talks a lot about diffraction gratings, I would lean toward a diffraction grating. It may be just one element/layer of what they are making.
In looking at their online videos, I have yet to see anything that indicates a diffraction grating (and it could be hard to tell), but they could be doing something totally different in their videos from what they plan to use in a final product. As I point out in these articles, a big issue for waveguides or any combiner is that it must direct the generated image light toward the eye while letting the real world light pass through without “damaging” it; invariably, these two requirements are in conflict; the question is in what way it hurts the real world image (darken, polarize, geometrically distort, a combination of issues, etc.).
Thanks Karl and Sean. Yeah, that’s what I was referring to: the lens in his hand. Definitely you can make a lens with a diffraction grating. From their patents, they have mentioned this diffractive lens (or optical combiner) could generate a spherical wavefront to make an image perceived as coming from different focal planes.
What I was trying to say is he is probably just holding a plain lens haha :)
Karl, as you mentioned, an optical combiner is necessary for AR optics, and it always hurts the real world view to some extent (darkening, polarization, color). So everyone is just trying to choose which property to sacrifice, e.g. a diffraction grating needs to compromise on color aberration.
Compared to darkening and color, human eyes are less sensitive to polarization. So it seems that an optical combiner based on polarization will potentially have the least distortion of the real world view.
As far as I know, Lumus seems to build their waveguide combiner based on polarised light. Does this make it a more promising approach in terms of distortion of the real world? Or what’s your perception on this?
I’m about to post an article on what I believe Magic Leap is doing. It turns out they are only using a single combiner and generating the focus effect time sequentially.
All the stuff about multiple layers of combiners is just Magic Leap covering all the possible ways of doing it and/or some of the things they hope to do someday.
If the polarizer directs the light toward your eyes via reflection, it will also polarize the light coming through it from the real world. This will cause only half of the randomly polarized light in the real world to reach your eye. The other issue is that if you look at something that uses polarized light, like an LCD monitor or TV, then the colors/brightness will get very distorted.
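A minimal sketch of the loss (this is just basic Malus’s law arithmetic, nothing specific to Lumus or Magic Leap):

```python
import math

def transmitted_fraction(source_polarization_deg=None):
    """Fraction of real-world light passing a polarizing combiner: 50% for
    unpolarized light; Malus's law (cos^2) for already-polarized sources
    such as an LCD monitor."""
    if source_polarization_deg is None:
        return 0.5
    return math.cos(math.radians(source_polarization_deg)) ** 2

print(f"Unpolarized real world: {transmitted_fraction():.0%}")   # 50%
print(f"LCD aligned (0 deg):    {transmitted_fraction(0):.0%}")  # 100%
print(f"LCD crossed (90 deg):   {transmitted_fraction(90):.0%}") # 0%, monitor goes black
```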
I don’t know what Lumus uses to make the light bend. The one I looked through used Himax panels, which use polarized light, so it is possible. A big issue with multi-prism waveguides like Lumus’s is discontinuities at the segment boundaries. What diffraction gratings and holographic waveguides do is make the structures that cause the light to exit extremely small and uniform, so the effect is consistent with no discontinuities.
Note that there is no free lunch, even with diffraction gratings and holograms. To make light at one angle exit the waveguide, they have to have some effect on the light passing through it. As you have noted, the diffraction grating on the Magic Leap “Photonic Chip” is bending the pass-through light; my guess is that it will also have other effects.
This is in reply to your final comment to Frankenberry about beam combining and splitting, the comment didn’t have a reply button as it was too many levels deep:
What do you think about pigtailing laser sources and joining the 3 fibres, which I believe has been shown in some papers?
Also, what do you think of an optical circuit “nanophotonic” slab waveguide with both beam mixing and then splicing, which further could have phased-array beam steering of these precisely aligned sources?
I was recently talking to a photonic circuit start-up about such a pie-in-the-sky display source, and they seem confident of the beam splicing (even 10s of channels) but less so of the phased array (although phased-array laser beam steering and focus has been demonstrated by DARPA/MIT among others).
I’m planning on an article discussing the Fiber Scanning Display (FSD).
There are two ways in which the array of FSD sub-displays could be used: what I will call the “conventional way,” where it generates an image, and the “light field array.”
In the conventional use, the conical nature of the scan is problematic in that they will want the light rays collimated (made parallel), and this would seem to be EXTREMELY difficult (as in I don’t think even diffraction gratings and holograms could do it) while generating a seamless image. The key problem is how you bend the light rays from one FSD in one direction while bending the light rays from the one next to it in the other direction, WHILE at the same time having a seamless image.
With the direct light field you have, in some ways, the opposite problem. You need a wider diversity of light rays than would come from the conical scan. This is why they go to putting triple (or more) fibers per FSD, to create more different angles. But now you need even more lasers. You almost need a low to medium resolution laser display that is time multiplexed to drive the light field array. You end up with hundreds of lasers, a bunch of optics to combine the RGB into each fiber, and then you have to control all of this.
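Extending the earlier parts count to the light field array case (the FSD and fiber counts here are my assumptions based on the patent art, not anything Magic Leap has confirmed):

```python
fsds_per_eye = 84    # assuming the same array size as the '415 application
fibers_per_fsd = 3   # triple fibers per FSD for more ray angles
colors = 3           # one R, G, and B laser per fiber

lasers_per_eye = fsds_per_eye * fibers_per_fsd * colors
print(f"{lasers_per_eye} lasers per eye, {2 * lasers_per_eye} in total")  # 756 / 1512
```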
I’m trying to go through and look at what it takes to make these displays, and it just does not add up without assuming a number of near miracles to build even an extremely expensive one, no less something a consumer could afford anytime soon. The only published picture they have shown is of an FSD with only 2 each of R, G, and B per eye (I assume for 2 different depths and not for an array), where they are not even combining the RGBs into a single fiber.
Uh Oh!
Engineers at $4 billion Magic Leap are ‘scrambling’ ahead of a big board meeting next week
http://www.businessinsider.com/magic-leap-engineers-scramble-prototype-february-board-meeting-2017-2?utm_source=dlvr.it&utm_medium=twitter