I have been wanting to get this article out for some time but have been very busy with some personal matters and catching up from a lot of traveling. To get this article out ahead of the likely Hololens 2 announcement at MWC on February 24th, 2019, this is going to be mostly a text article without the usual figures and cited references. I will try to follow up after the announcement with a more detailed technical analysis.
I bought some Microvision stock (a very small percentage of my holdings) based on the rumors. I expect the stock to go up in spite of my analysis below, and I will likely sell after I think it peaks. I saw this behavior when I was the first to report that Himax was in Google Glass: the stock went crazy even though any rational person should have known that it would have a trivial business effect on Himax.
I work for RAVN as Chief Science Officer (CSO). RAVN is working on military AR equipment and could be considered a competitor to Hololens' military program. The views on this blog are my own and not those of RAVN. Additionally, I am focused on the Hololens 2's use as a volume consumer and enterprise product and not on its military use.
It looks to me like Hololens 2 (not the official name as far as I know) is going to combine diffractive waveguides, with all their problems in terms of poor image uniformity and capturing light sources in the real world, with the poor image quality of laser beam scanning (LBS). Furthermore, it is likely that Microsoft Hololens is using Microvision's LBS (not totally sure, but it seems very likely). Anyone that has read this blog knows I have found many problems with both diffractive waveguides and laser beam scanning. About the only thing worse I could think of doing with diffractive waveguides is to use a laser beam scanning display with them, but that appears to be what Microsoft has chosen to do.
I often say, “when smart people do something that looks dumb, the alternative at the time must have seemed worse to them.” Microsoft has some very smart people working on Hololens but so did Magic Leap and Google Glass (some of them are the same smart people that moved from program to program). Sometimes it is internal politics or business pressures, sometimes it is the people you have hired with expertise in a particular area (like hiring a bunch of people with LBS expertise), sometimes it is simply optimizing for one set of criteria while ignoring others, and sometimes they are trying to do the impossible and are grasping at straws. In this case, I think this might be some combination of all of the above.
Even as the evidence was building that Hololens 2 was going to use laser beam scanning, I kept saying to myself, "surely they are not that desperate." But they appear to have proven me wrong. My expectation is that, like every other foray into LBS in the past, this one will be short-lived.
Microsoft had kept a pretty good lid on what they were doing up until CES other than the patent applications being published. At CES I had a few sources tell me that Hololens 2 was going to use Laser Beam Scanning (LBS). By Photonics West, a few weeks after CES, it seemed to be an open secret that Hololens 2 is going to be using LBS. I think this leaking has been reflected in Microvision’s stock price which was just 51 cents on December 17, 2018, and has more than doubled since that time.
Earlier in 2018, there was a lot of patent activity that has been documented on the Microvision Stock Forum on Reddit, and there have been many articles by other sources referencing these same patents/applications. Laser beam scanning is often a "throw-in" on patents, included in a list of possible display devices alongside LCOS, DLP, LCD, and OLED, and large companies file patents on many concepts that will never go to market. So a patent, in and of itself, is not an indicator of a serious effort. The Microvision Reddit Forum has also identified a number of people that used to work for Microvision that now work at Microsoft (hard to know the difference between cause and effect). Both companies are in Redmond, Washington, and the original technology on which Microvision started was developed at the University of Washington. The problem with the Microvision Reddit Forum is that it is, by and large, the ultimate place for confirmation bias. Still, what started as a trickle of patent applications early in 2018 grew through the year, and the patents became more detailed. I even argued against it on these forums, as it seemed like a technically poor solution (and it still is IMO), and pointed to the lack of positive movement in Microvision's stock (suggesting the secret was being kept).
Soon after CES, I got to see a Creative MicroSystems Corporation (CMC) prototype that successfully couples laser beam scanning into a near-to-eye display waveguide. CMC is a quiet player in the AR industry that developed what they call Imageguide™ technology. They are receiving funding from the U.S. military and have already delivered a daylight-viewable, 110° binocular display to an undisclosed U.S. government customer. According to their CEO, Bill Parker, they have overcome significant obstacles to using a laser approach and are continually improving its performance.
The CMC system was an early prototype, but it did demonstrate coupling laser beam scanning into a waveguide, which is not at all easy to do. For a waveguide to work, the light rays from the image being injected into the waveguide need to be moving parallel to each other, but with LBS, each light ray is moving at a different angle as part of the scanning process as the mirror(s) tilt. With LCOS or DLP, the light rays illuminating their mirrors are highly collimated, and then the image goes through collimation optics.
The "easy" way to deal with the various angles from the LBS process would be to scatter the light rays with essentially a small rear-projection screen, also known as a "pupil expander," to make in effect a tiny rear-screen television. Optics would then be used to collimate the resulting image. Several Microsoft patents/applications show using a pupil expander after the laser scanning, including US20180292654 and US10,025,093.
As can be seen in Figure 8 from Microsoft Patent 10,025,093 below, there are considerable optics required around the pupil expander (EPE). This optics diagram in the patent also shows that Microsoft was taking this issue very seriously and that it is not just a high-level hand-wave as is often seen in patents. Still, there is a lot to go wrong with so many optical elements (chances for reflections and distortion).
Using a pupil expander would seem to be very optically inefficient, but it would work. Another serious issue with a pupil expander is that it will introduce speckle and other noise/grain into the image, similar to a projection screen.
I want to be clear here; I am judging this on the basis of whether it will ever (not just now but in the future) be a mass-market (greater than 1M units per year) product. I have other serious technical issues with it for applications like industrial and military use. I'm not judging on a curve and giving an A for effort.
I have seen a lot of “early phase” technology in my 42 years in the high tech industry, and IMO, Hololens’ diffractive waveguides and electromechanical laser scanning displays are technological dead ends. The many different aspects of image quality will be very poor by any objective measurement, and the physics challenges to fixing them are daunting and likely impossible.
I don't go for the "celebrity and rich people endorsement" rationale. By this rationale, Google Glass should have been a great success. Oftentimes rich people (even otherwise smart ones) make poor investments in things they don't fully understand. Being smart in one area of technology does not make you smart in other areas. I look at the technology, the physics, and the practical issues for volume manufacturing.
I have been writing about the serious technical problems with Laser Beam Scanning (LBS) since this blog started back in 2011. A couple of earlier articles I would suggest reading are from 2012 and 2015, but I will give a quick review below. There have been dozens of failed attempts to use LBS in display products by Microvision and other companies. Some of the companies that have played with LBS over the years include Motorola, Sony, Pioneer, Samsung, and Hitachi, not to mention many smaller companies. While the advantages of LBS are readily apparent, like an iceberg, the serious problems are hidden below the surface. Even Microsoft and hundreds of millions of dollars cannot change the laws of physics.
Until 2018, Microvision used a single two-axis mirror for the scanning process, but recently Microvision announced a dual single-axis mirror engine, claiming 1440p resolution. I have no information on whether Hololens is using the single- or dual-mirror version, and Microsoft has patent applications showing both configurations. In theory, the resolution should be better with the dual mirror, but there have yet to be any products on the market to test. At the same time, it would seem that the dual-mirror version would couple worse to a waveguide than a single-mirror LBS (this is a complicated "angle of light" thing).
LBS uses an electromechanical scanning process which inherently limits the speed and accuracy of the scan. The mirror movement is dictated by the natural resonant frequency of the scanning mirror, which is then driven by electromagnetic or electrostatic fields. The LBS scan line is not the same as a row of pixels. LBS roughly scans a series of curves, usually approximating a sine wave, that do not match the square grid of pixels. Every pixel must be resampled, which results in a single pixel blurring over several scan lines.
With LBS, the mirror's tilt, and thus the beam scan, must accelerate from zero on the left or right side of the image to its maximum velocity at the center and then decelerate back to zero at the other side. On the left and right sides of the image, the beam is moving slowly, so the beam is turned on for longer but must be at lower brightness to compensate. In the center, the beam has to be very bright but for a very short duration to produce the same width and net brightness to the eye. Much of the dynamic range and intensity control of the laser beam has to be given up to compensate for the beam scanning speed variation.
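The tradeoff above can be sketched with a simple model: a resonant mirror sweeps sinusoidally, so the beam's angular velocity follows a cosine across each half-sweep, and the laser power must track that velocity for uniform perceived brightness. (This is my illustrative model, not Microvision's actual drive waveform.)

```python
import numpy as np

# One half-sweep of a resonant scanning mirror (normalized units).
t = np.linspace(-np.pi / 2, np.pi / 2, 1001)
velocity = np.cos(t)  # beam speed: zero at the edges, maximum at center

# For uniform perceived brightness, instantaneous laser power must be
# proportional to beam speed: bright-and-brief at the screen center,
# dim-and-long near the turnaround at the edges.
required_power = velocity / velocity.max()

center = required_power[len(t) // 2]  # at screen center
edge = required_power[10]             # near the left turnaround
print(f"relative laser power: center={center:.2f}, near edge={edge:.3f}")
```

The spread between those two numbers is the dynamic range the laser driver must spend just on scan-speed compensation, before it can encode any image content.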
In the case of Microvision's products that have made it to market to date, the scan speed is so marginal that they resorted to bidirectional and interlaced scanning (see figure on the right and my 2012 article). With bidirectional scanning, they turn the laser on in both directions, unlike old CRTs that only turn the beam on in one direction, resulting in variable distances between scan lines and thus variable resolution from the center to the outside of the display. The devices only ran at 60Hz interlaced, which means they only refreshed the entire display at 30Hz, like an old CRT but without any of the persistence of the phosphors. Thus they had massive amounts of 30Hz flicker. To legitimately support the resolution Microvision claimed, and at a refresh rate that reasonably eliminates flicker, they would need a mirror going more than 8 times faster. Based on Microvision's history, with two mirrors they are likely "cheating" on both the 1440p resolution claim and the refresh rate; these would be the first things I would look for in a new LBS product.
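One hedged way to see where a "more than 8 times faster" mirror comes from: each of the shortcuts above roughly halves the required horizontal mirror rate, so undoing all of them roughly multiplies it by eight. (This decomposition is my own; treat the individual factors as assumptions, not Microvision's figures.)

```python
# Each fix roughly doubles the required horizontal mirror rate
# (my own rough decomposition of the factors discussed above).
interlace_fix = 2   # progressive scan: write every line each frame, not every other
direction_fix = 2   # unidirectional scan, for evenly spaced lines like a CRT
flicker_fix = 2     # double the refresh rate to get rid of the 30Hz flicker

total_speedup = interlace_fix * direction_fix * flicker_fix
print(total_speedup)  # 8 -> "more than 8x" once retrace overhead is added
```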
Laser light, which is coherent, is prone to causing speckle. Speckle is caused by the coherent light interfering with itself as it hits any surface. If you scan a laser directly into the eye, you don't see speckle, but if you introduce a surface to, say, expand the pupil of the laser light (an Exit Pupil Expander or EPE), as is necessary to have a decent "eye box," you will get speckle.
The other serious issue with LBS is eye safety, both real and imagined. At any given instant in time, the entire brightness of the display is concentrated into a single dot over a very short period of time, relying on the persistence of the eye to average it out. You can't just talk brightness in terms of candelas per square meter (cd/m2 or nits) to measure eye safety; you also have to look at the peak energy concentrated on the eye at any instant in time. The "good news" is that the image quality is so bad with LBS that we probably won't have to worry about its eye safety issues.
There are many other issues with LBS such as beam alignment that both drive up cost and hurt image quality. Let’s just say, there are many reasons why LBS has failed many times and failed so badly that few know about them. Microvision is a 26-year-old “startup” that has survived by finding suckers, both in terms of investors and R&D groups paying NRE, thinking there is a pot of gold at the end of the LBS rainbow. OK Karl, but tell us what you really think about LBS 😊 .
Assuming Microvision's LBS is being used, one can almost count on the irrational exuberance that will follow such an announcement (as I wrote in my Disclosure, I placed a small bet on it). Few people seem to actually put pen to paper (or keyboard to spreadsheet) and see what it means. I saw this effect first hand with Himax and Google Glass.
Hololens is reportedly selling at a 25K unit per year rate, or 50K display engines for both eyes per year. Dropping a laser engine into the product is not going to change things dramatically in terms of cost and size, and thus potential unit volume. Making some very rough estimates, Hololens will likely pay somewhere on the order of $100/eye for the display engine (lasers, laser beam scanning mirror(s), combining the R, G, and B lasers into a single beam with alignment, and related optics). Most of the cost would go to the lasers and other optics. Microvision's cut might be on the order of $25/display (just a reasonable guess).
Using the numbers above, it only translates to 50K times $25, or about $1.25M in revenue per year for Microvision; good thing it looks like they got NRE. And at ~$200 for the display engines alone, the price of the Hololens is going to stay far too high for a consumer product, even ignoring the poor image quality that I expect will also limit sales. Then you have the cost of the waveguides, SLAM, computer, battery, headset case, etc.
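Spelling out the arithmetic above (all inputs are this article's rough guesses, not reported figures):

```python
units_per_year = 25_000      # reported Hololens sales rate (units/year)
engines_per_unit = 2         # one display engine per eye
engine_cost = 100            # $/eye for the full display engine (rough estimate)
microvision_cut = 25         # $/display to Microvision (just a reasonable guess)

engines_per_year = units_per_year * engines_per_unit        # 50,000 engines/year
microvision_revenue = engines_per_year * microvision_cut    # $1,250,000/year
display_cost_per_headset = engines_per_unit * engine_cost   # ~$200/headset

print(f"Microvision revenue: ~${microvision_revenue:,}/year; "
      f"display engines: ~${display_cost_per_headset}/headset")
```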
One can make different sets of assumptions, and you still don't get to a very big number of dollars flowing to Microvision. Double, triple, or quintuple the current Hololens volumes and you don't get to a sustainable business. To get the cost lower and drive volume, Microvision has to receive fewer dollars per unit, which means needing even higher volumes. To justify building display devices, you need unit sales in the hundreds of thousands if not millions of units.
I have yet to see an LBS display that could compete on image quality with any other popular display technology. The combination of having to re-sample the image due to the non-linear scanning process and laser speckle is a huge hurdle.
In theory, laser scanning should improve on the Hololens 1 contrast, but by how much remains to be seen. While the lasers can turn completely off if the whole screen is black or nearly black, if there is some content on the display, then the light will scatter off the other optics, including say a pupil expander, which will reduce the contrast. One needs to measure contrast, which translates into transparency, with typical image content.
In theory, lasers should work better with diffractive waveguides due to their narrower line (frequency) width. But we should still expect to see the color uniformity problems evident in all diffractive waveguides. See for example the picture taken of the current Hololens below:
I have seen many different laser projectors, and by any objective measure, the image quality is poor. I'm expecting to see speckle and noise from the pupil expander. The effective resolution will be low due to the scanning process and resampling.
Hey, why not; lasers still sound new and modern.
I have had several sources tell me that Microsoft went to lasers to get brighter. I'm not sure I buy this rationale, but I do think Hololens wants to get brighter. Hololens 1 puts out about 300 nits with its current LCOS design, which is not nearly enough for outdoor use; by comparison, Lumus's Vision 1080, also using LCOS, is specified at over 7,000 nits. I don't know if a laser scanner could go to 7,000 nits without having eye safety issues, since laser scanning works by having an extremely bright spot over a very short period, but I would be concerned about it.
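A rough sketch of why average nits understate the peak exposure with a scanned laser (the resolution and brightness numbers here are my illustrative assumptions, not measured Hololens or Lumus data):

```python
average_nits = 7_000             # outdoor-class brightness target (assumed)
h_pixels, v_pixels = 1_280, 720  # assumed panel-equivalent resolution

# In LBS, the beam lights roughly one spot at a time, so each spot is dark
# for ~(N-1)/N of the frame; its instantaneous brightness must then be on
# the order of N times the average (ignoring scan-speed variation and
# retrace, which push the peak even higher).
n_spots = h_pixels * v_pixels
peak_factor = n_spots
print(f"instantaneous spot brightness ~{peak_factor:,}x "
      f"the {average_nits:,}-nit average")
```

This is why eye-safety analysis has to consider peak energy at the spot, not just the time-averaged cd/m2 figure.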
Switching to LBS should have almost no positive effect on the size, weight, or cost of Hololens. Even if you took the current LCOS-based optical engine to zero in each of those categories, it would barely move the needle. Thus even if the LBS engine were less expensive (which it is not) and smaller (not significantly, if at all), it is not going to budge the needle much. We are talking the effect of a fly on an elephant when it comes to size, weight, and cost. Any improvement in size, weight, and cost will have to come from other parts of the system.
If you look at the billions of dollars of R&D money Microsoft has thrown at Hololens, it dwarfs the potential revenue, which will measure in the tens of millions of dollars. In this weird world of Magic Leap and Hololens, where you spend many times the potential revenue on R&D, the sales price of the product becomes just a marketing concept. Thus the final sales price is somewhat arbitrary and based on how much money the company is willing to lose.
This one is not directly attributable to using LBS, as it could be done with other display technologies; rather, it is speculation that Hololens 2 may in effect have two waveguides in one for each eye, what is being called a "butterfly waveguide."
Unfortunately, there are some physics issues with simply making the waveguide bigger. These fundamental physics issues with diffractive waveguide technology are outlined in Microsoft's US Patent 9,791,703. Quoting from the patent:
in optical waveguides that include an intermediate-component used for pupil expansion, which is distinct from the input-coupler and output-coupler of the waveguide, the intermediate-component typically limits the diagonal field-of-view (FOV) that can be supported by an optical waveguide based display to no more than 35 degrees
What this means is that in order to have more than about 35 degrees, Hololens needs to in effect have two waveguides side by side. This results in a look that resembles a butterfly (see left).
The butterfly waveguide concept has been discussed in numerous Microsoft patents/applications, including US 9,791,703 and US 20170363871, and was mentioned by Bernie Kress, Partner Optical Architect at Microsoft, in his Photonics West presentation. This could support up to a 70-degree FOV, or roughly double that of the Hololens 1. In effect, they would be splitting the image into the two waveguides and joining it back. It is hard to believe that there will not be a visible seam or other artifacts where the images from the two halves of the butterfly waveguide join.
I had many reports that one of the biggest cost factors in the original Hololens was the yield of the diffractive waveguide. The butterfly waveguide would seem to make manufacturing even more difficult as they have to keep tolerances and yield over a much wider area.
As always, I look forward to a technical discussion and welcome any corrections. Please spare us the conspiracy theories and accusations. There are plenty of news sources that will just republish the marketing spiels if you just want “good” news.