Flat, thin waveguides are certainly impressive optical devices. It seems almost magical: you put light into what looks a lot like a thin plate of glass, a small image goes in on one side, and after bouncing along inside the glass via total internal reflection (TIR), the image comes out in a different place. They are coveted by R&D people for their scientific sophistication and loved by industrial designers because they look so much like ordinary glass.
But there is a “dark side” to waveguides, at least every one that I have seen. To make them work, the light follows a tortuous path: it often has to be bent by about 45 degrees to couple into the waveguide and then by roughly 45 degrees to couple out, in addition to rattling off the two surfaces while it TIRs. The image is just never the same quality after it goes through all this torture. Some of the light does not make all the turns and bends correctly and comes out in the wrong places, which degrades the image quality. A major effect I have seen in every diffractive/holographic waveguide is what I have come to call “waveguide glow.”
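The TIR that traps the image inside the plate follows directly from Snell’s law. A rough sketch (the refractive index here is a typical value for optical glass, not a figure from any particular waveguide):

```python
import math

def critical_angle_deg(n_glass: float, n_outside: float = 1.0) -> float:
    """Angle of incidence (measured from the surface normal) beyond which
    light is totally internally reflected at a glass/air boundary."""
    return math.degrees(math.asin(n_outside / n_glass))

# Typical optical glass (n ~ 1.5): rays hitting the surface at more than
# about 41.8 degrees from the normal stay trapped and bounce along the plate.
print(round(critical_angle_deg(1.5), 1))
```

Any ray steeper than this critical angle leaks out of the plate, which is why the in- and out-coupling gratings have to bend the light so sharply in the first place.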
Part of the problem is that when you bend light, whether by refraction, diffraction, or holograms, the various colors bend slightly differently based on wavelength. The diffraction gratings/holograms are tuned for each color, but invariably they have some effect on the other colors; this is a particular problem if the colors don’t have a narrow spectrum that is exactly matched by the waveguide. Even microscopic defects cause some light to follow the wrong path, and invariably a grating/hologram meant to bend, say, green will also affect the direction of, say, blue. Worse yet, some of the light gets scattered, and that causes the waveguide glow.
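The wavelength dependence comes straight from the grating equation, sin(θ) = mλ/d. A quick sketch (the 1,000 nm pitch is purely illustrative) shows how even a ±10 nm spread around a green design wavelength shifts the first-order diffraction angle:

```python
import math

def diffraction_angle_deg(wavelength_nm: float, pitch_nm: float, order: int = 1) -> float:
    """Diffraction angle for normal incidence: sin(theta) = m * lambda / d."""
    return math.degrees(math.asin(order * wavelength_nm / pitch_nm))

pitch = 1000.0  # nm, an illustrative grating pitch, not a real product spec
for wl in (510, 520, 530):  # +/-10 nm around a nominal green wavelength
    print(wl, "nm ->", round(diffraction_angle_deg(wl, pitch), 2), "degrees")
```

A spread of a fraction of a degree per bounce may sound small, but the light bounces many times inside the waveguide, so the angular error compounds and the colors smear.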
To the right is a still frame from a “through the lens” video taken through a Hololens headset. Note, this is actually through the optics and NOT the video feed that Microsoft and most other people show. What you should notice is a violet-colored “glow” beneath the white circle. There is usually also a tendency to have a glow or halo around any high-contrast object/text, but it is most noticeable when there is a large bright area.
For these waveguides to work at all, they require very high-quality manufacturing, which tends to make them expensive. I have heard several reports that Hololens has very low yields on its waveguides.
I haven’t, nor have most people who have visited Magic Leap (ML), seen through ML’s waveguide. What ML shows most if not all of its visitors are prototype systems that use non-waveguide optics, as I discussed last time. Maybe ML has solved all the problems with waveguides; if they have, they will be the first.
I have nothing personally against waveguides. They are marvels of optical science that require very intelligent people to design and very high-precision manufacturing to make. It is just that they always seem to hurt image quality, and they tend to be expensive.
Microsoft acquired its waveguide technology from Nokia. It looks almost as if they found this great bit of technology that Nokia had developed and decided to build a product around it. But then when you look at Hololens (left), there is the shield to protect the lenses (often tinted, but I picked a clear shield so you could see the waveguides). On top of this there are all the other electronics and the frame to mount it on the user’s head.
The space savings from using waveguides over a much simpler flat combiner is a drop in the bucket.
I’m picking Osterhout Design Group (ODG) for comparison below because they demonstrate a simpler, more flexible, and better-image-quality alternative to using a waveguide. I think it makes a point. Most people probably have not heard of them, but I have known of them for about 8 or 9 years (I have no relationship with them at this time). They have done mostly military headsets in the past and burst onto the public scene when Microsoft paid them about $150 million for a license to their I.P. Beyond this, they just raised another $58 million from V.C.s. Still, this is chump change compared to what Hololens and Magic Leap are spending.
Below are the ODG R7 LCOS-based glasses (with one of the protective covers removed). Note the very simple flat combiner. It is extremely low-tech and much lower cost compared to the Hololens waveguide. To be fair, the R7 does not have as much in the way of sensors and processing as the Hololens.
The point here is that by the time you put the shield on the Hololens, what difference does having a flat waveguide make to the overall size? Worse yet for the waveguide, the image quality from the simple combiner is much better.
Next, below are ODG’s next-generation Horizon glasses, which use a 1080p Micro-OLED display. They appear to have a somewhat larger combiner (I can’t tell whether it is flat or slightly curved from the available pictures) to support the wider FOV, and a larger outer cover, but pretty much the same design. The remarkable thing is that they can use a similar optical design with OLEDs and the whole thing is about the same size, whereas the Hololens waveguide won’t work at all with OLEDs due to the broad-bandwidth colors OLEDs generate.
ODG put up a short video clip through the optics of the Micro-OLED-based Horizon (they don’t come out and say that it is, but the frame is from the Horizon and the image motion artifacts are from an OLED). The image quality appears to be much better than anything I have seen from waveguide optics (you can’t be too quantitative from a YouTube video). There is none of the “waveguide glow.”
They were even willing to show a text image with both clear and white backgrounds that looks reasonably good (see below). It looks more like a monitor image except for the fact that it is translucent. This is hard content to display because you know what it is supposed to look like, so you know when something is wrong. Also, that large white area would glow like mad on any waveguide optics I have seen.
The clear text on a white background is a little hard to read at small sizes because it is translucent, but that is a fundamental issue with all see-through displays. The “black” is whatever is in the background, and the “white” is the combination of the light from the image and the real-world background. See-through displays are never going to be as good as opaque displays in this regard.
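The effect on contrast is easy to put numbers on: a see-through display can only add light, so the darkest “black” it can show is whatever the background delivers through the combiner. A sketch with made-up but plausible luminance values (the transmission figure is an assumption, not a measured spec):

```python
def effective_contrast(white_nits: float, ambient_nits: float,
                       transmission: float = 0.8) -> float:
    """Contrast a viewer sees on a see-through display: the displayed 'white'
    adds to the transmitted background, while 'black' is just the background."""
    background = ambient_nits * transmission
    return (white_nits + background) / background

# A 200-nit image over a dim 50-nit indoor wall vs. a bright 5,000-nit scene:
print(round(effective_contrast(200, 50), 2))    # usable contrast indoors
print(round(effective_contrast(200, 5000), 2))  # nearly washed out
```

The same display that looks fine in a dim room drops to almost 1:1 contrast against a bright background, which is why demos are usually shown in darkened rooms.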
It looks to me like Hololens and Magic Leap both started with a waveguide display as a given and then built everything else around it. They overlooked that they were building a system. Additionally, they needed to get it into many developers’ hands as soon as possible to work out the myriad other sensor, software, and human-factors issues. The waveguide became a bottleneck and, from what I can see of Hololens, an unnecessary burden. As my fellow TI Fellow Gene Frantz and I used to say when we were on TI’s patent committee, “it is often the great new invention that causes the product to fail.”
I (and few if any people outside of Magic Leap) have seen an image through ML’s production combiner; maybe they will be the first to make one that looks as good as a simpler combiner solution (I tend to doubt it, but it is not impossible). But what has leaked out is that they have had problems getting systems to their own internal developers. According to Business Insider’s Oct. 24th article (with my added highlighting):
“Court filings reveal new secrets about the company, including a west coast software team in disarray, insufficient hardware for testing, and a secret skunkworks team devoted to getting patents and designing new prototypes — before its first product has even hit the market.”
From what I can tell of what Magic Leap is trying to do, namely focus planes to support vergence/accommodation, they could have achieved this faster with more conventional optics. It might not have been as sleek or “magical” as the final product, but it would have done the job, shown the advantage (assuming it is compelling), and gotten their internal developers up and running sooner.
It is even more obvious for Hololens. Using a simple combiner would have added trivially to the design size while reducing the cost and getting the SDKs into more developers’ hands sooner.
It looks to me that both Hololens and likely Magic Leap put too much emphasis on using waveguides, which had a domino effect on other decisions, rather than making a holistic system decision. The way I see it:
Hololens and Magic Leap appear to be banking on getting waveguides into volume production to solve all of their image-quality and cost problems. But that will depend on a lot of factors, some of which are not in their control, namely how hard the waveguides are to make well and at a price people can afford. Even if they solve all the issues with waveguides, it is only a small piece of their puzzle.
Right now ODG seems to be taking more of the original Apple/Wozniak approach; they are finding elegance in a simpler design. I still have issues with what they are doing, but in the area of combining the light and image quality, they seem to be way ahead.
Is the waveguide glow as bad in laser source displays or for resonant metamaterial waveguides?
That’s a good point and one I forgot to mention in the article. The short answer is no; it should be better for laser light sources. I don’t know if it will fix everything (I tend to doubt it until I see it), but the narrower the spectrum/line-width of the colors, the better the hologram or diffractive optics will work.
Here is a video from AWE looking through Project Horizon
Reality is starting to hit the fan!
https://www.theinformation.com/the-reality-behind-magic-leap & http://www.theverge.com/2016/12/8/13894000/magic-leap-ar-microsoft-hololens-way-behind
If you read “The Information” article you will find it dovetails on the business side with what I have written about on the technical side.
Congratulations, it looks like your research is pretty close to the mark.
Is a “combiner” a curved lens element that refracts the image from the micro display to a virtual focus distance, as in Google Glass or the Epson BT-200? If so, isn’t the problem that these distort the real world image coming through them? A distorted world behind the projected content makes so-called mixed reality impossible. It’s fine for random pop-up display info, but rules them out for what Hololens and Magic Leap are attempting: making the displayed content appear to intermingle with real world stuff.
“Meta” is an example of an AR company that is probably doing what you’re talking about: lower cost, non-waveguide AR.
A “combiner” is anything that takes a generated image and combines it with the real world. The simplest one is a semi-mirror tilted at about 45 degrees. It lets light from the real world through to the eye while redirecting light coming from the other direction, from the generated image, to the eye. Google Glass and the BT-200 have a combiner with a 45-degree prism in it.
The combiner can also be curved. For example the Meta 2 uses a large curved combiner. The inside of the curved combiner has a thin mirror coating that reflects about 20% to 40% of the light back to the eye (the rest is lost). Because the combiner is curved it will act to magnify the image and make the focus appear to be further away. Light from the “real world” goes more or less straight through with minimal distortion other than some light loss/darkening.
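The magnifying effect of a curved combiner can be sketched with the ordinary mirror equation, 1/f = 1/d_o + 1/d_i, where a concave mirror of radius R has focal length R/2 (the dimensions below are made up for illustration, not Meta 2 specifications):

```python
def virtual_image_distance_cm(radius_of_curvature_cm: float,
                              display_distance_cm: float) -> float:
    """Concave semi-mirror: f = R/2. Solving the mirror equation for the
    image distance; a negative result means a virtual image behind the
    mirror, i.e. the display appears to be focused farther away."""
    f = radius_of_curvature_cm / 2.0
    return 1.0 / (1.0 / f - 1.0 / display_distance_cm)

# A display panel 8 cm from a combiner with a 20 cm radius of curvature
# (object inside the focal length, so we expect a magnified virtual image):
print(round(virtual_image_distance_cm(20.0, 8.0), 1))  # negative => virtual
```

With the panel inside the focal length, the eye sees an enlarged virtual image tens of centimeters away instead of a tiny panel a few centimeters from the face, which is the whole trick of a birdbath/curved-combiner design.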
As my article shows, ODG is using a simple plate combiner at about 45 degrees. Their headsets are smaller than Hololens even though they use simple combiners. They can intermingle with the “real world stuff” if they have the sensors/cameras to figure out where things are in the real world. There can be an issue as the combiner is moved further from the eye (say, on the other side of a person’s glasses), as the combiner has to get bigger and takes up space with the 45-degree tilt.
Waveguides are the “high tech” solution because they are flat. But they are expensive to make and hurt the image quality.
You haven’t addressed my main point.
A combiner, whether flat (which means the focus distance will be very close) or curved, is still a source of both distortion (especially a prism with a further virtual focus distance) and discontinuity at its edges. This breaks the illusion of mixed content.
A mirror in theory could avoid the distortion problem, but a mirror is not purely additive. The greater the reflectivity of the mirror, the more light from the background is held out. This is a huge problem since bright content is a key goal of mixed reality.
Do you agree that a waveguide is the only current technology that is both purely additive (minus some constant of light loss) and doesn’t distort the background (minus some very small amount) and doesn’t introduce a discontinuity in the field of view (e.g. the edge of a prism)?
I think it’s fair to say that this is the reason both Hololens and Magic Leap (presumably) are pursuing waveguides, for now.
I do agree that the market might end up preferring a cheaper solution at the cost of just “putting up with” these issues. But I think it does a disservice to your readers to ignore this point.
Karl, could you please add your insight about this? It is a rather interesting point ArmF is making.
Thanks, things have gotten very busy around here and I missed his question so I just answered it.
Sorry, I got busy and missed replying to this earlier.
With a “flat combiner” such as what ODG is using on their R7, and I suspect the R9 as well, they “move” the focus point with refractive (lens) optics prior to the mirror. So there is no problem moving the focus out.
EVERY combiner, no matter the type, has some effect on the real-world image. The only question is how much it degrades the real-world view.
Both flat and curved combiners are going to cause a loss in brightness. But the eye is very adaptable, and a 20% to 30% loss is not significant except in a very dark environment. The human perception of brightness is non-linear.
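Why a 20% to 30% luminance loss is hard to notice can be sketched with Stevens’ power law, under which perceived brightness grows roughly as the cube root of luminance (the exponent is a textbook approximation for extended fields, not a measured value for any headset):

```python
def perceived_brightness_ratio(luminance_ratio: float,
                               exponent: float = 1.0 / 3.0) -> float:
    """Stevens' power law sketch: perceived brightness ~ luminance ** (1/3),
    so a measured luminance loss looks smaller than it sounds."""
    return luminance_ratio ** exponent

# A combiner passing only 70% of real-world light (a 30% luminance loss):
print(round(perceived_brightness_ratio(0.70), 2))  # ~0.89, only ~11% dimmer perceptually
```

So a combiner that measures 30% darker only looks around a tenth dimmer, which matches everyday experience with lightly tinted sunglasses.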
With a waveguide, the real-world light has to pass through whatever causes the light to exit the waveguide. Diffractive/holographic elements can cause light from the real world to get bent and captured (I often see overhead lights captured). I’m less familiar with the “thin mirror” type waveguides from Lumus, but note they have in effect a set of beam-splitting mirrors and work similarly to the plate waveguide.
I think Hololens and ML are using waveguides due to “Industrial Design,” as in how it makes the product look. I firmly believe that they could achieve better optical results with simpler optics like ODG is using.
I was not deliberately ignoring your point. Hopefully the above clears things up some.
Great article as always… Just a quick correction: the ODG Horizon is not 1080p; it is actually 2K x 2K for each eye:
To other readers: Ron went back, checked, and corrected this on his website. ODG’s “Horizon,” just named the R-9, does indeed use two 1080p (1920×1080) Micro-OLED microdisplays, one per eye.
Karl, thanks for the great article. Can you comment on waveguides’ other potential benefit, pupil expansion?
Thanks, I know a bit about optics and a lot of “tricks” that can be used, but I’m more an I.C. and systems person by training. I am far from an optical designer, so I don’t know how well I can talk about pupil expansion. I believe in the ML patents they plan on supporting this with the exit diffractive elements. Basically, to expand the pupil you need to somehow scatter the rays slightly.
2K by 2K must mean a display by eMagin.
Actually, I discussed this with Ron Mertens of http://www.oled-info.com and he now agrees that ODG Horizon is most likely going to use a 1080p (1920×1080 also known as a 2K display). Ron thinks he may have jumped to the wrong conclusion when he saw 2K and like you assumed eMagin.
You can see his correction here: http://www.oled-info.com/odg-raises-58-million-what-kind-oled-will-it-use-its-horizon-consumer-ar-glasses
Three issues with ODG, and all AR glasses. First, the light engine is generally placed in a spot that occludes peripheral vision, which can have problematic effects with prolonged use. Second, the reflective combiner distorts natural light, like looking through a glass bottle, which when not using AR makes just wearing the ODGs tough after 30 minutes or more of wear. Third, the form factor of glasses limits your system to 150 grams, since anything heavier is too hard on the bridge of your nose and your ears. I could also have mentioned adjustable focal distances, but I think that is discussed. My main issue is that I never hear or read about prolonged use of these systems. I would like to see authors wear the devices for 4-6 hours and address the issues I raise in my comments. Human factors will play an important role in adoptability.
There are a lot more than just 3 issues. I wrote a piece for Display Daily about a year ago about the many factors: http://www.displaydaily.com/display-daily/34129-vr-and-ar-head-mounted-displays-sorry-but-there-is-no-santa-claus . The key point of that article was that once you have a great display, you are really only at the starting line and not the finish line. I’m totally with you on human factors and long-term use.
To be fair to ODG, the idea of a “good” AR headset is to not occlude peripheral vision, but invariably they all must to some degree. ODG appears to try to reduce the blockage of peripheral vision by injecting the image from the top and using a fairly open frame to the sides, at least below the eye line. A good reflective combiner should not distort the real world much. In the case of the ODG R7, they have a flat tilted-plate combiner that should have minimal distortion; I have not seen the new Horizon, and none of the pictures I have seen show the beam-splitting mechanism conclusively. I’m not saying they are acceptable, because I have not tried them on and evaluated them, but ODG appears to have at least tried in these two areas.
People want to think the flat waveguides are a panacea, but they have their problems as well; just look at Hololens. What difference did the flat waveguides make?
I’m totally with you on weight. I went from glass lenses over the years to high-index plastic. I also know from my experience in near-eye displays that the nose bridge and ears get sore if even a small amount of weight is left on them for a long time.
I notice that you are from RealWear.com, I assume the CEO, and looked at your website. Back in 1998-2000 I worked on a monocular near-eye display device that was going to use a band around the back of the head with no weight on the nose; very similar to one of your models.
Yes sir! This is Andy Lowery. I also was a president at Daqri, so I have a wide range of experiences. For example, Lumus years ago developed a reflective waveguide. It is a simple and good idea, but they never pulled the cost out, which hopefully they are now doing with the $15M A round that Ben, the new CEO of Lumus, just raised. RealWear is a spin-out of Kopin, so the design is based on the legendary Golden-i.
Thanks, I was going to say that RealWear looked a lot like a slimmed down version of Golden-I. You may know that Golden-I was in Syndiant’s CES 2011 suite, but we never closed the deal with them beyond doing some prototypes.
I remember seeing Lumus at SID in 2011. There are pros and cons to all the various types of waveguides and other combiner optics. One thing I always wondered about Lumus is that it is in effect segmented, and no matter how perfect you try to make it, one would think there would be problems at the segments; with the slightest imperfection you get either a gap or an overlap.
Hi Karl, Great review of Hololens, Magic Leap, and ODG. You might include Lumus Optical in your review. Their display, while it uses a flat combiner, does not use holograms. Rather, they use physical embedded reflectors. This avoids the halo effects that you describe for Hololens and Magic Leap.
The ODG glasses with their simple angled flat combiner, are bulky. The extra bulk creates more forward center of gravity which I find annoying after an extended period of use. I needed to be careful not to tip my head down as the glasses would have fallen off. To be sure, Hololens has the same problem. Hololens has put so much stuff above the waveguides for tracking and gesture recognition that they have defeated the CG benefit of the waveguide. Building an AR display is hard as the acceptable solution space is de minimis.
I have mentioned Lumus only once in this series (https://www.kguttag.com/?s=Lumus), so you are right that they have not been given anywhere near the same level of coverage. I am definitely interested in Lumus, and seeing their technology at CES next week, where they have a booth, is high on my list of to-dos.
Lumus to me is more of a “component” than an AR system company, and I do gravitate to the companies making the most noise. I first became aware of Lumus at SID 2011, but I have not seen their technology being used in systems. If it were clearly the best solution, why haven’t more system companies started using it? I would think that every company inside the industry would at least know of Lumus even if they are not a name known by the public at large.
I definitely want to understand the pros and cons of the technology, including which display devices it can work with (I have only known it to work with LCOS and likely DLP). It would seem to overcome some of the major drawbacks of diffractive/holographic waveguides, but it would still seem to need collimated light for the waveguide to work.
I conceptually think of Lumus as having a “piecewise prism,” sort of the prism equivalent of a Fresnel lens. I wonder what happens at the discontinuities through a range of conditions. The eye is moving around and so is not always looking perfectly perpendicular to the waveguide’s flat surface, and I wonder what happens under various conditions.
I very much appreciate the size, weight, and industrial design (aesthetics) advantages of waveguides in general over say prisms. The advantages of waveguides are obvious, the problems which could include image quality issues and cost, are generally not so obvious.
You make some good points about ODG and the others. If they want to do “Mixed Reality” (MR) or what Microsoft has dubbed (and what others have started to mistakenly call) “Holograms” then they need all these sensors (and maybe more to do it well). As they add everything they need to do MR you start to wonder what difference does it make how big the optics are as they will be buried under a bunch of other stuff. I would tend to agree with your comments on the ODG glasses, they weigh about 5X more than my glasses which is a lot of weight to be supported only by your nose and ears and they are front heavy which is a problem in multiple ways.
The other way to go, as you stated, is a minimalist AR solution, essentially a “better Google Glass,” rather than trying to do everything. The question here is whether it does “enough” to warrant a large number of people having the device. There is definitely a need for hands-free operation, but is it a big market? BTW, I’m not saying MR is necessarily a big market.
Anyway keep reading, I plan on writing about Lumus after I get back from CES. I’m hoping to take pictures through their optics.
Would you happen to know what display technology Atheer Air is using? I have looked around quite a bit but have not found any specifications.
Thanks for the post.
Where does Avegant fit in the picture – waveguide or combiner?
Are waveguides and combiners both capable of producing multiple focal depths?
First, it is important to understand that the light rays from a point on a faraway object that make it to the eye are all moving parallel. When you “collimate” light you make all the rays move parallel (or as parallel as is practical), and thus collimated light (what Magic Leap in their patents call “flat rays,” indicated by flat lines in their figures) appears to be coming from far away (perfectly collimated light would be coming from infinity). As the object and the points on it move closer to the eye, more and more diverse angles of light from each point on the object will make it through the pupil of the eye. Thus the light is less collimated, or what Magic Leap calls in the patents “convex rays” (indicated by curved lines in their figures).
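To put numbers on “flat” versus “convex” rays: vergence is just the reciprocal of distance (in diopters), and the spread of ray angles through the pupil shrinks rapidly with distance. A sketch (the 4 mm pupil diameter is a typical assumed value):

```python
import math

def vergence_diopters(distance_m: float) -> float:
    """Optical vergence of light from a point source: collimated light
    (object at infinity) is 0 D; closer objects give larger values."""
    return 1.0 / distance_m

def pupil_half_angle_deg(distance_m: float, pupil_diameter_mm: float = 4.0) -> float:
    """Spread of ray angles from a point source that make it through the
    eye's pupil: far objects -> nearly parallel rays; near -> a wider cone."""
    half_aperture_m = (pupil_diameter_mm / 1000.0) / 2.0
    return math.degrees(math.atan(half_aperture_m / distance_m))

for d in (0.25, 2.0, 100.0):  # meters
    print(d, "m:", round(vergence_diopters(d), 2), "D,",
          round(pupil_half_angle_deg(d), 3), "deg half-angle")
```

An object at 25 cm differs from one at optical infinity by 4 diopters, which is exactly the kind of difference the focus planes are trying to reproduce so that the eye’s accommodation agrees with its vergence.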
You might at this point want to go to my article https://www.kguttag.com/2016/11/04/magic-leap-a-riddle-wrapped-in-an-enigma/ and scroll down to the section “Magic Leap Patents” and their Fig. 8. Magic Leap is using waveguides, and waveguides require collimated light to work, or else the image will come out in the wrong place along the waveguide. So they must inject collimated light into the waveguides. For focus planes they have a stack of two sets of waveguides (waveguides are also tuned to a color, so they need multiple waveguide layers for a single color image). One set of exit diffraction gratings just bends the light to make it exit still collimated, and thus appearing to come from far away; the other set of exit diffraction gratings, in addition to bending the light to make it exit, slightly decollimates the light, which makes it appear to come from much closer. The problem for Magic Leap is that each focus plane needs another set of waveguides, and then you must look through all of this stuff to see the real world; plus the images from the waveguides farther from the eye must pass through the other waveguide layers to reach the eye, and this has a negative impact on the image.
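The penalty for stacking layers compounds multiplicatively. As a back-of-the-envelope sketch (both the per-layer transmission and the layer count here are assumptions for illustration, not Magic Leap figures):

```python
def stack_transmission(per_layer: float, layers: int) -> float:
    """Real-world light passing through a stack of waveguide layers loses
    a little at every layer; the losses compound multiplicatively."""
    return per_layer ** layers

# Six layers (e.g. 2 focus planes x 3 colors) at an optimistic 97% each:
print(round(stack_transmission(0.97, 6), 3))  # ~0.833, i.e. ~17% total loss
```

And this only counts transmission loss; each extra layer is also another set of surfaces and gratings that can scatter light and add glow.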
Avegant is doing “focus planes” a different way than Magic Leap. Avegant uses “birdbath” optics with a curved semi-mirror. This type of combiner, whether curved or flat, will work with either collimated or non-collimated light. They vary the collimation, and thus the apparent distance, of the light before the combiner, taking into account that the curved mirror will have an optical effect as well. It is not currently public how Avegant changes the collimation/apparent focus of the light (I have my guesses, but I don’t know for sure). If you scroll down to Magic Leap’s Fig. 9 in my same blog article, you will get some idea of how Avegant’s approach works. But note, they say they are NOT using a variable-focus-element lens and that their method is “all electronic,” so the way they change the focus of the light is different.
The cost, complexity, and degradation of image quality would seem to favor the Avegant approach as you add more focus planes, but the waveguide approach gives thinner-looking optics. There are a lot of other pros and cons to each approach, but that is it at the high level.