Hololens 2 is Likely Using a Laser Beam Scanning Display: Bad Combined with Worse

I have been wanting to get this article out for some time but have been very busy with some personal matters and catching up from a lot of traveling. To get this article out ahead of the likely Hololens 2 announcement at MWC on February 24th, 2019, this is mostly going to be a text article without the usual figures and cited references. I will try to follow up after the announcement with a more detailed technical analysis.


I bought some Microvision stock (a very small percentage of my holdings) based on the rumors. I expect the stock to go up in spite of my analysis below, and I will likely sell after I think it peaks. I saw this behavior when I was the first to report that Himax was in Google Glass; the stock went crazy even though any rational person should have known that it would have a trivial business effect on Himax.

I work for RAVN as Chief Science Officer (CSO) which is working on military AR equipment and could be considered a competitor to Hololens’ military program. The views on this blog are my own and not those of RAVN. Additionally, I am focused on the Hololens 2 use as a volume consumer and enterprise product and not for military use.

“Bad Combined with Worse”

It looks to me like Hololens 2 (not the official name, as far as I know) is going to combine diffractive waveguides, with all their problems in terms of poor image uniformity and capturing light sources in the real world, with the poor image quality of laser beam scanning (LBS). Furthermore, it is likely that Microsoft Hololens is using Microvision’s LBS (not totally sure, but it seems very likely). Anyone who has read this blog knows I have found many problems with both diffractive waveguides and laser beam scanning. About the only thing worse I could think of doing with diffractive waveguides is to use a laser beam scanning display with them, but that appears to be what Microsoft has chosen to do.

I often say, “when smart people do something that looks dumb, the alternative at the time must have seemed worse to them.” Microsoft has some very smart people working on Hololens but so did Magic Leap and Google Glass (some of them are the same smart people that moved from program to program). Sometimes it is internal politics or business pressures, sometimes it is the people you have hired with expertise in a particular area (like hiring a bunch of people with LBS expertise), sometimes it is simply optimizing for one set of criteria while ignoring others, and sometimes they are trying to do the impossible and are grasping at straws. In this case, I think this might be some combination of all of the above.

Even as the evidence was building that Hololens 2 was going to use laser beam scanning, I kept saying to myself, “surely they are not that desperate.” But they appear to have proven me wrong. My expectation is that, like every other foray into LBS in the past, this one will be short-lived.

Comparison of Laser Illuminated LCOS to Laser Beam Scanning

Information Pointing to Hololens 2 Using Microvision LBS

Microsoft had kept a pretty good lid on what they were doing up until CES, other than the patent applications being published. At CES, I had a few sources tell me that Hololens 2 was going to use Laser Beam Scanning (LBS). By Photonics West, a few weeks after CES, it seemed to be an open secret that Hololens 2 is going to be using LBS. I think this leaking has been reflected in Microvision’s stock price, which was just 51 cents on December 17, 2018, and has more than doubled since that time.

Earlier in 2018, there was a lot of patent activity that was documented on the Microvision Stock Forum on Reddit, and there have been many articles by other sources referencing these same patents/applications. Laser beam scanning is often included in patents as a “throw-in” on a list of display devices along with LCOS, DLP, LCD, and OLED; large companies file patents on many concepts that will never go to market. So a patent, in and of itself, is not an indicator of a serious effort. The Microvision Reddit forum has also identified a number of people who used to work for Microvision and now work at Microsoft (hard to know the difference between cause and effect). Both companies are in Redmond, Washington, and the original technology on which Microvision started was developed at the University of Washington. The problem with the Microvision Reddit forum is that it is, by and large, the ultimate place for confirmation bias. Still, what started as a trickle of patent applications early in 2018 grew through the year, and the patents became more detailed. I even argued against it on these forums, as it seemed like a technically poor solution (and it still is, IMO), and pointed to the lack of positive movement in Microvision’s stock (suggesting the secret was being kept).

Soon after CES, I got to see a Creative MicroSystems Corporation (CMC) prototype that successfully couples laser beam scanning into a near-to-eye display waveguide. CMC is a quiet player in the AR industry that developed what they call Imageguide™ technology. They are receiving funding from the U.S. military and have already delivered a daylight-viewable, 110° binocular display to an undisclosed U.S. government customer. According to their CEO, Bill Parker, they have overcome significant obstacles to use a laser approach and are continually improving its performance.

The CMC system was an early prototype, but it did demonstrate coupling laser beam scanning into a waveguide, which is not at all easy to do. For a waveguide to work, the light rays of the image being injected into the waveguide need to be moving parallel to each other, but with LBS, each light ray is moving at a different angle as part of the scanning process as the mirror(s) tilt. With LCOS or DLP, the light rays illuminating their mirrors are highly collimated, and the resulting image then passes through collimation optics.

The “easy” way to deal with the various angles from the LBS process would be to scatter the light rays with essentially a small rear projection screen, also known as a “pupil expander,” to make in effect a tiny rear-screen television. Optics would then be used to collimate the resulting image. Several Microsoft patents/applications show using a pupil expander after the laser scanning, including US20180292654 and US10,025,093.

As can be seen in Figure 8 from Microsoft Patent 10,025,093 below, there are considerable optics required around the pupil expander (EPE). This optics diagram in the patent also shows that Microsoft is taking this issue very seriously and is not just doing a high-level hand-wave, as is often seen in patents. Still, there is a lot that can go wrong with so many optical elements (chances for reflections and distortion).

Using a pupil expander would seem to be very optically inefficient, but it would work. Another serious issue with a pupil expander is that it will introduce speckle and other noise/grain into the image, similar to a projection screen.

Not the Future, But at Best a Stop Gap

I want to be clear here; I am judging this on the basis of it ever (not just now but in the future) being a mass-market (greater than 1 million units per year) product. I have other serious technical issues with it for applications like industrial and military use. I’m not judging on a curve and giving an A for effort.

I have seen a lot of “early phase” technology in my 42 years in the high tech industry, and IMO, Hololens’ diffractive waveguides and electromechanical laser scanning displays are technological dead ends. The many different aspects of image quality will be very poor by any objective measurement, and the physics challenges to fixing them are daunting and likely impossible.

I don’t go for the “celebrity and rich people endorsement” rationale. By this rationale, Google Glass should have been a great success. Oftentimes rich people (even otherwise smart ones) make poor investments in things they don’t fully understand. Being smart in one area of technology does not make you smart in other areas. I look at the technology, the physics, and the practical issues for volume manufacturing.

Laser Beam Scanning “Iceberg”

I have been writing about the serious technical problems with Laser Beam Scanning (LBS) since this blog started back in 2011. A couple of earlier articles I would suggest reading are from 2012 and 2015, but I will give a quick review below. There have been dozens of failed attempts to use LBS in display products by Microvision and other companies. Some of the companies that have played with LBS over the years include Motorola, Sony, Pioneer, Samsung, and Hitachi, not to mention many smaller companies. While the advantages of LBS are readily apparent, like an iceberg, the serious problems are hidden below the surface. Even Microsoft and hundreds of millions of dollars cannot change the laws of physics.

Until 2018, Microvision used a single two-axis mirror for the scanning process, but recently Microvision announced a dual single-axis-mirror engine, claiming 1440p resolution. I have no information on whether Hololens is using the single- or dual-mirror version, and Microsoft has patent applications showing both configurations. In theory, the resolution should be better with the dual mirror, but there have yet to be any products on the market to test. At the same time, it would seem that the dual-mirror version would couple worse to a waveguide than a single-mirror LBS (this is a complicated “angle of light” thing).

LBS uses an electro-mechanical scanning process, which inherently limits the speed and accuracy of the scan. The mirror movement is dictated by the natural resonant frequency of the scanning mirror, which is then moved by electromagnetic or electrostatic fields. An LBS scan line is not the same as a row of pixels. LBS roughly scans a series of curves, usually approximating a sine wave, that do not match the square grid of pixels. Every pixel must be resampled, which results in a single pixel blurring over several scan lines.

With LBS, the mirror’s tilt, and thus the beam scan, must accelerate from zero on the left or right side of the image to its maximum velocity at the center and then decelerate back to zero at the other side. On the left and right sides of the image, the beam is moving slowly, so the beam is turned on for longer but must be at lower brightness to compensate. In the center, the beam has to be very bright but for a very short duration to produce the same width and net brightness to the eye. Much of the dynamic range and intensity control of the laser beam has to be given up to compensate for the beam scanning speed variation.
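To make the speed-compensation trade-off concrete, here is a small Python sketch (my own illustration, not anything from Microsoft or Microvision) of how a sinusoidal resonant scan forces the laser power to track the beam velocity:

```python
import math

# For a resonant mirror, the beam position across a line is roughly
# x(t) = sin(phase), with phase running from -pi/2 (left edge) to
# +pi/2 (right edge). The beam velocity is then proportional to
# cos(phase): maximal at the center, zero at the edges.

def beam_velocity(phase):
    """Relative horizontal beam velocity at a given scan phase."""
    return math.cos(phase)

def relative_laser_power(phase):
    """Laser power needed for uniform perceived brightness.

    Energy per pixel = power * dwell time, and dwell time is
    inversely proportional to velocity, so power must scale with
    velocity to keep the energy per pixel constant.
    """
    return beam_velocity(phase)

# Center of the line: full power, shortest dwell.
print(relative_laser_power(0.0))                 # 1.0
# 80% of the way to the edge: under a third of center power.
print(relative_laser_power(0.8 * math.pi / 2))   # ~0.31
```

This is why so much of the laser’s dynamic range is consumed just compensating for the scan: the drive electronics must modulate power over a wide range within every single line, on top of the modulation needed for the image content itself.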

Microvision’s Bi-Directional and Interlaced Scanning

In the case of Microvision’s products that have made it to market to date, the scan speed is so marginal that they resorted to bi-directional and interlaced scanning (see figure on the right and my 2012 article). With bi-directional scanning, they turn the laser on in both directions, unlike old CRTs that only turn the beam on in one direction, resulting in variable distances between scan lines and thus variable resolution from the center to the outside of the display. The devices only ran at 60Hz interlaced, which means they only refreshed the entire display at 30Hz, like an old CRT but without the persistence of the phosphors. Thus they had massive amounts of 30Hz flicker. To legitimately support the resolution Microvision claimed, at a refresh rate high enough to reasonably eliminate flicker, they would need a mirror going more than 8 times faster. Based on Microvision’s history, even with 2 mirrors they are likely “cheating” on both the resolution claim of 1440p and the refresh rate; these would be the first things I would look for in a new LBS product.
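A rough sketch of the mirror-speed arithmetic behind that “more than 8 times faster” claim. The specific numbers here are my own assumptions for illustration (an older 720-line engine at 60Hz interlaced versus a legitimate 1440-line progressive scan at 120Hz):

```python
def fast_axis_mirror_khz(lines, rate_hz, interlaced, bidirectional=True):
    """Approximate fast-axis mirror frequency for an LBS display.

    lines:       scan lines in a full frame
    rate_hz:     field rate (interlaced) or frame rate (progressive)
    interlaced:  each field only draws half the lines
    """
    lines_per_second = lines * rate_hz / (2 if interlaced else 1)
    # With bidirectional scanning, one mirror cycle draws two lines.
    cycles_per_second = lines_per_second / (2 if bidirectional else 1)
    return cycles_per_second / 1000.0

old = fast_axis_mirror_khz(720, 60, interlaced=True)     # ~10.8 kHz
new = fast_axis_mirror_khz(1440, 120, interlaced=False)  # ~86.4 kHz
print(round(new / old, 1))  # ratio of about 8
```

Note that resonant MEMS mirrors cannot simply be driven faster at will; higher frequency at the same scan angle means much higher mirror acceleration, which is exactly why past LBS products cut corners with interlacing and bidirectional drawing instead.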

Laser light, which is coherent, is prone to causing speckle. Speckle is caused by the coherent light interfering with itself as it hits a surface. If you scan a laser directly into the eye, you don’t see speckle, but if you introduce a surface, say to expand the pupil of the laser light (an Exit Pupil Expander or EPE), as is necessary to have a decent “eye box,” you will get speckle.

The other serious issue with LBS is eye safety, both real and imagined. At any given instant in time, the entire brightness of the display is concentrated into a single dot for a very short period of time, relying on the persistence of the eye to average it out. You can’t just talk brightness in terms of candelas per square meter (cd/m2 or nits) to measure eye safety; you also have to look at the peak energy concentrated on the eye at any instant in time. The “good news” is that the image quality is so bad with LBS that we probably won’t have to worry about its eye safety issues.
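As a rough illustration (emphatically not a real laser-safety calculation, which involves pulse durations, spot sizes, and wavelength-dependent limits), here is why average nits alone don’t capture the exposure of a scanned beam:

```python
def peak_to_average_ratio(h_pixels, v_pixels, active_fraction=0.7):
    """Very rough peak-to-average power ratio for a scanned laser spot.

    The whole frame's light comes from one spot at a time, so while
    the beam dwells on a pixel, its instantaneous power is roughly the
    pixel count times the average per-pixel power, increased further
    because part of each frame is blanking (active_fraction < 1).
    All numbers here are illustrative assumptions.
    """
    return (h_pixels * v_pixels) / active_fraction

# Even a modest 720p-class image concentrates the instantaneous power
# by a factor of over a million relative to the per-pixel average.
print(f"{peak_to_average_ratio(1280, 720):,.0f}")
```

The point of the sketch is only the order of magnitude: two displays with identical average luminance can have wildly different instantaneous beam power, which is what the safety standards actually regulate.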

There are many other issues with LBS such as beam alignment that both drive up cost and hurt image quality. Let’s just say, there are many reasons why LBS has failed many times and failed so badly that few know about them. Microvision is a 26-year-old “startup” that has survived by finding suckers, both in terms of investors and R&D groups paying NRE, thinking there is a pot of gold at the end of the LBS rainbow. OK Karl, but tell us what you really think about LBS 😊 .

My Expectations and Conclusions

Microvision’s Stock Price is Likely to Go Up . . . for a While

Assuming Microvision’s LBS is being used, one can almost count on the irrational exuberance that will follow such an announcement (as I wrote in my Disclosure, I placed a small bet on it). Few people seem to actually put pen to paper (or keyboard to spreadsheet) and see what it means. I saw this effect firsthand with Himax and Google Glass.

Hololens is reportedly selling at a 25K unit per year rate, or 50K display engines (both eyes) per year. Dropping a laser engine into the product is not going to change things dramatically in terms of cost and size, and thus potential unit volume. Making some very rough estimates, Hololens will likely pay somewhere on the order of $100/eye for the display engine (lasers, laser beam scanning mirror(s), optics to combine the R, G, and B lasers into a single aligned beam, and related optics). Most of the cost would go to the lasers and other optics. Maybe Microvision’s cut would be on the order of $25/display (just a reasonable guess).

Using the numbers above, it only translates to 50K times $25, or about $1.25M in revenue per year for Microvision; good thing it looks like they got NRE. But ~$200 for the display engines alone means the price of the Hololens is going to stay far too high for a consumer product, even ignoring the poor image quality that I expect will also limit sales. Then you have the cost of the waveguides, SLAM, computer, battery, headset case, etc.

One can make different sets of assumptions, and you still don’t get to a very big number of dollars flowing to Microvision. Double, triple, or quintuple the current Hololens volumes and you still don’t get to a sustainable business. To get the cost lower and drive volume, Microvision would have to receive fewer dollars per unit, which means needing even higher volumes. To justify building display devices, you need unit sales in the hundreds of thousands, if not millions, of units.
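Putting the rough numbers above into a few lines of Python (all figures are the article’s guesses, not reported financials):

```python
def microvision_revenue(units_per_year, engines_per_unit=2, cut_per_engine_usd=25):
    """Annual Microvision revenue under the article's rough assumptions:
    two display engines per headset, ~$25 to Microvision per engine."""
    return units_per_year * engines_per_unit * cut_per_engine_usd

# Scale the assumed 25K units/year base by various multiples.
for multiple in (1, 2, 3, 5):
    units = 25_000 * multiple
    print(f"{units:>7,} units/year -> ${microvision_revenue(units):>9,}/year")
# Even quintupling volume only reaches $6.25M/year under these guesses.
```

Changing any single assumption (price per engine, attach rate) shifts the result linearly, so no plausible combination gets to the hundreds of millions needed to justify a display-device business.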

Worse Overall Image Quality

I have yet to see an LBS display that could compete on image quality with any other popular display technology. The combination of having to re-sample the image due to the non-linear scanning process and laser speckle is a huge hurdle.

In theory, laser scanning should improve on the Hololens 1 contrast, but by how much remains to be seen. While the lasers can turn completely off if the whole screen is black or nearly black, if there is some content on the display, then light will scatter from the other optics, including, say, a pupil expander, which will reduce the contrast. One will need to go back and measure contrast, which translates into transparency, with typical image content.

In theory, lasers should work better with diffractive waveguides due to their narrower line (frequency) width. But we should still expect to see the color uniformity problems evident in all diffractive waveguides. See, for example, the picture taken of the current Hololens below:

Hololens Color Uniformity Issues

I have seen many different laser projectors, and by any objective measure, the image quality is poor. I’m expecting to see speckle and noise from the pupil expander. The effective resolution will be low due to the scanning process and resampling.

Microsoft Marketing Will Leverage “Laser Display”

Hey, why not; lasers still sound new and modern.

Expecting a Brighter Image than Hololens 1

I have had several sources tell me that Microsoft went to lasers to get brighter. I’m not sure I buy this rationale, but I do think Hololens wants to get brighter. Hololens 1 outputs about 300 nits with its current LCOS design, which is not nearly enough for outdoor use; by contrast, Lumus has specified their Vision 1080, also using LCOS, at over 7,000 nits. I don’t know if a laser scanner could go to 7,000 nits without having eye safety issues, since laser scanning works by having an extremely bright spot over a very short period, but I would be concerned about it.

No improvement on the Hololens’ Size, Weight, and Cost Due to LBS – Can’t Predict the Sales Price

Switching to LBS should have almost no positive effect on the size, weight, or cost of Hololens. Even if you take the current LCOS-based optical engine to zero in each of these categories, it will barely move the needle. Thus, even if the LBS engine were less expensive (which it is not) and smaller (not significantly, if at all), it is not going to budge the needle much. We are talking about the effect of a fly on an elephant when it comes to size, weight, and cost. Any improvement in size, weight, and cost will have to come from other parts of the system.

If you look at the billions of dollars of R&D money Microsoft has thrown at Hololens, it dwarfs the potential revenue, which will measure in the tens of millions of dollars. In this weird world of Magic Leap and Hololens, where you spend many times the potential revenue on R&D, the sales price of the product becomes just a marketing concept. Thus, the final sales price is somewhat arbitrary and based on how much money the company is willing to lose.

Bigger Field Of View – Butterfly Waveguide?

This one is not directly attributable to using LBS, as it could be done with other display technologies, but is rather speculation that Hololens 2 may in effect have two waveguides in one for each eye, in what is being called a “butterfly waveguide.”

Unfortunately, there are some physics issues with simply making the waveguide bigger. These fundamental physics issues with diffractive waveguide technology are outlined in Microsoft’s US Patent 9,791,703. Quoting from the patent:

in optical waveguides that include an intermediate-component used for pupil expansion, which is distinct from the input-coupler and output-coupler of the waveguide, the intermediate-component typically limits the diagonal field-of-view (FOV) that can be supported by an optical waveguide based display to no more than 35 degrees

What this means is that in order to have more than about a 35-degree FOV, Hololens needs in effect to have two waveguides side by side. This results in a look that resembles a butterfly (see left).

The butterfly waveguide concept has been discussed in numerous Microsoft patents/applications, including US 9,791,703 and US 20170363871, and was mentioned by Bernie Kress, Partner Optical Architect at Microsoft, in his Photonics West presentation. This could support up to a 70-degree FOV, or roughly double that of Hololens 1. In effect, they would be splitting the image into the two waveguides and joining it back together. It is hard to believe that there will not be a visible seam or other artifacts where the images from the two halves of the butterfly waveguide join.

I have had many reports that one of the biggest cost factors in the original Hololens was the yield of the diffractive waveguide. The butterfly waveguide would seem to make manufacturing even more difficult, as they have to hold tolerances and yield over a much wider area.

Looking Forward To Your Comments

As always, I look forward to a technical discussion and welcome any corrections. Please spare us the conspiracy theories and accusations. There are plenty of news sources that will just republish the marketing spiels if you just want “good” news.

Karl Guttag


  1. There is a reason for laser scanning that you’ve not commented on. That is to scan objects in view and get a fine degree of resolution. Both for printing and recognition, as well as for visual overlays.

    Current hololens room scan has perhaps 1 inch resolution, taking that to 1 mm will add tremendous value, and no one is thinking about matching up with 3D printing.

    • He is talking about laser beam scanning display, not outside facing laser to scan objects. That’s a totally unrelated thing.

      It is also pretty much a nonstarter, because the scan would need to be extremely fast (like maybe 1/200s or faster) in order to avoid blurring the data into an unusable mess – keep in mind that this wouldn’t be a metrology grade scanner sitting on a tripod but something on the user’s moving head! There are no laser scanners on the market that can do anything even close to this. A laser scanner capable of 1mm (or better) resolution and accuracy is a fairly large unit transported in a suitcase and connected to a laptop. The fastest scanners available are the various automotive LIDARs – which have none of the resolution needed, are power hungry and still quite large (and extremely expensive) units.

      • Are you sure?

        Microvision is showing this advanced, relatively tiny (13 cc) LBS lidar at MWC in the STMicroelectronics booth.

        Their IP shows that an LBS scanner can be used simultaneously to provide display, SLAM and eye-tracking, which saves on cost, size and number of components. It is not hard to imagine that MSFT’s efforts and unlimited resources can allow a slimmed down and integrated version of the above discussed devices to provide all 3 functions in Hololens 2. That is not to say it is in Hololens 2 (given even Karl’s grudging article is still speculation) but it is not impossible as you say. And, even if not in Hololens 2, what about Hololens 3 or 4?

      • The 3D LIDAR is an entirely separate product from the laser scanning display. It has literally nothing in common with it, and it certainly can’t do both tasks. You would need completely different optics, lasers, and electronics — the only thing it could have in common is the MEMS mirror. And Microvision doesn’t have anything that’s even theoretically capable of doing eye tracking. They might have put some kind of crazy stuff like that in a patent application, but that doesn’t mean it actually is workable or would make sense. Not to mention, there are many time of flight 3D camera technologies that image the whole frame at a time instead of scanning. They don’t work outdoors, but neither does Microvision’s system.

      • It’s been more than theoretical for a very long time. For example, here’s a foundational VRD with eye tracking patent from UW in 2001 employing the scanner for both functions.


        Microvision was spun out of the UW in 1993 with the original VRD patent (Furness, et al) with a licensing agreement from UW for all future VRD IP that continued for a long time. This is just a piece of it.

    • Microsoft said that they will use their ToF based depth camera (“Project Kinect for Azure”) in the next Hololens, so I find it quite unlikely that they will use a LBS solution instead.

      • This is about the waveguides on the lens that the eyes see through. You’re talking about scanning the room objects with sensors (Kinect). Two different applications of lasers.

      • Microvision LBS uses ToF for 3D sensing (it can also use structured light, even simultaneously). That’s not to say that Hololens 2 uses LBS ToF for depth sensing (there are other non-LBS ToF solutions) but it’s not an either/or proposition as your post implies.

      • Yes, it appears they are using their ToF solution for SLAM. The LBS would be used for the display feeding the diffractive waveguide.

      • Did you go to the Photonic West Show and use Hololens there and talk to the engineers and Bernard or is your opinion based on reddit’s patents only ?

      • I went to Photonics West. My conclusion that the next generation Hololens is using Laser Beam Scanning is due to multiple sources at Photonics West, none of which were working for Microsoft. The patents are only one factor in my conclusion.

      • Did you try their Hololens that was being demonstrated and was it Hololens 2 or Hololens shown last year? If you went to the AR VR MR demonstration did you talk to MSFT’S engineers or Bernard?

      • From the pictures it looks like they are still using four monochrome cameras for SLAM. The depth camera in the center of the device is for spatial reconstruction and gesture tracking, just like in the HoloLens 1.

  2. Regarding seam – because the image is in angle space, providing there is some field overlap, I do not think you will perceive it. Waveguides, even reflective, can vary in luminance by over 50%, but the human eye being logarithmic does not see it.

    Another problem with lasers and diffractive guides is the latter actually requires a broad spectrum to cover the field-of-view – monochromatic sources will not diffract over the full angular range, resulting in holes (bands) in the field. The butterfly guide might be partly to address this without delivering much more FOV over Hololens 1, or perhaps they are using more lasers with slightly different wavelengths, or else red and blue SLEDs… (green LDs are already quite broad) Of course I could be wrong 😉

  3. I think Microsoft’s real goal here is to be at forefront of creating the Operating System of the future, not making hardware.

    Microsoft missed the boat on the web, social networks, and mobile platforms and has had to battle hard to be relevant in the cloud computing business. For a lot of users, their platform is their browser now, not their Operating System. If you look at VR gaming platforms, they are all introducing 3D launching environments so that you can switch between applications without having to take your headset off. This is becoming their desktop environment, just like the browser is for many people today. All of these developments could further push the Operating System into the background and out of relevance. An operating system that is truly 3D and spatially aware could fight off this encroachment into Microsoft’s core territory.

    I think this ridiculous level of investment is just about making hardware that is good enough for them to get ahead in creating their vision for the 3D Operating System. If that is the case it doesn’t really matter if any particular hardware route is a dead end or not, as long as it keeps the OS progressing as they don’t want to miss the next big thing again. If someone else eventually comes up with a better hardware solution they are still in position to produce the software platform for it. If they end up backing the right hardware solution then they have a load of IP they can license.

  4. There are some public strong hints, both from a recently published patent filing and especially from Microsoft’s Bernie Kress, that HL2 will incorporate eye tracking.

    Your CES sources could be correct about the presence of LBS but if so it is more likely for use in IR eye tracking than for use in sourcing color video for a waveguide display.

    • I find it highly unlikely that they are using LBS for eye tracking. Everyone that seems to know says they are using it for the display, and this seems supported by the patent activity.

      • Two of the very recent published MS patent filings disclose LBS-based eye tracking, and it seems a nifty way to navigate the patent minefield. The receiver in the MS filings is particularly elegant – a few discrete photodiodes around the periphery of the eyebox – but counter-balanced by kludges to attain a decent perspective for an LBS to scan from. Scanning would need to be 2-D, but it might be quite sparse.

        If scanning is used for the display, perhaps HL2 is mechanically 1-D scanning a projection from iLED arrays rather than lasers?

      • I wouldn’t read too much into patent filings. MS is a very large company. Not every patent is aligned with business strategy and not every patent is a good one. Sometimes employees just file patents due to pressure of filing something…

    • Just wondering if you made it to the Photonics West Show or not and if so did you use Hololens and talk to the engineers or Bernard?

  5. Hello Karl,

    thank you very much for your new article. We are less than two days ahead of the Microsoft event in Barcelona, so we will see soon what Microsoft will really present.

    I am very sorry, but your article unusually contains some very important wrong figures, and you made some mistakes about the AR market and what Microsoft will likely present.

    First, even if it may be named Hololens 2, if Microsoft presents a new Hololens, this will be the Hololens 3. It is well known that Microsoft skipped the Hololens 2 to go directly to a major update, the Hololens 3. Four years of development will show more improvements than you think. Please let me explain later.

    First, to Microvision: Unfortunately, you only repeat the speculations of the last two years, combined with your skepticism about LBS, etc. You were correct with your analysis of LBS in the past. But you missed important facts.
    1) Refresh rate: The new Microvision engine has an (official) 120 Hz refresh rate at 1440p. So, Microvision has not only significantly increased the resolution, it also doubled the refresh rate. I think this will have a major (positive) impact on the display quality.
    2) Laser safety: I think this would be a laser safety class 1 product and not 3R like the projectors.
    3) If Microsoft uses LBS, then it is the new engine. No way that it uses the old, in reality low-resolution, engine.
    4) The current (= old) engine, still used for LiDAR and display products, is a ten-year-old development. Microvision never had the money to improve it significantly, so it is still at the level it was when developed. Major improvements were only made by Sony and were not back-ported to Microvision.
    5) So, you cannot extrapolate from a ten-year-old product to a completely (!) newly developed engine that now even uses two mirrors instead of one. Completely different.
    6) But of course, historically, Microvision’s tech specs never match reality. However, Microsoft would likely not have accepted solutions below its targets.
    7) If Microvision is inside the new Hololens, this would be the first design win for Microvision in their history. Sony was a win, too, but Sony never used the engine in a major or even flagship product. Microsoft would be the first company to use a Microvision engine in a flagship product. Even if it is a very low-volume product, it is a, if not the, flagship product of Microsoft’s hardware division.

    Now to Microsoft: I still think this will be a major update, a completely new and completely different Hololens compared with the existing one. Not a me-too product like Magic Leap’s. So, I am also not in line with your expectations here. Hololens is still primarily a business device for industrial applications. The major drawback that prevents wide use in industry is not the display quality. Besides the field of view, it is the weight! No worker can wear a device as heavy as a Hololens eight hours a day while, e.g., assembling or repairing a machine, car, etc. So, I think Microsoft will address this with a new Hololens design, and such a new Hololens would likely be built more like traditional glasses, maybe by moving the battery, processor, etc., into a separate case. I also think the preview video posted by Microsoft indicates that they will use Kevlar instead of plastics because of this requirement. I think your figures about sales are also wrong: Microsoft sold approx. 50,000 units in four years, so only 12,500 per year. Also, you missed that Microsoft is not focusing on hardware. The primary target for Microsoft is to establish the state-of-the-art and dominant AR software platform, to prevent a disaster like with smartphones (lost against Android/iOS).

    Business impact for Microvision:
    1) Maybe you remember: the last shares you bought (posted here, as far as I remember), eMagin, are now at an all-time low, having lost 80% (?).
    2) Your article was posted too late, so it had no impact at all on the Microvision share price, and the share price moves on Monday will be based on news from the MWC. Even if it had been posted earlier, I think it would have had no impact, because everything you wrote has been known for weeks, months, years.
    3) Even if Microvision won a flagship design, this would only prove that Microvision can get a customer, but only a very low volume customer. Microvision has not received any follow-up order for nearly two years from the AR customer, which could be Microsoft. So, likely the initial production is included in the development contract. $0 for Microvision.
    4) This unfortunately means for Microvision: no new money. But according to its filings plus the last share sales, Microvision runs out of money in approximately April.
    5) Shareholders are waiting for new, big orders, not for an event that will not generate a single cent.
    6) The future share price will likely more reflect order entries and dilutions.

    Those are only my thoughts.

    • You brought up a lot. I will try and answer as much as I can quickly.

      There really is no name yet for the next-generation Hololens, as I pointed out in the article. Product generations that don’t make it to market don’t count.

      1) Good point on the refresh rate; I should have pointed that out. If it is 120Hz INTERLACED the same way the single mirror was 60Hz interlaced, then that is still too slow to be flicker-free, particularly considering there is zero persistence. Note that CRT computer monitors had to go higher than 60Hz to eliminate flicker (it was even mandated in Europe). There is still no documentation or product available with the 1440p scan process. As I pointed out in the article, a “scan” with LBS is NOT the same as a row of pixels. IF they are bidirectionally scanning as they did with the single mirror, then the resolution at the outside edges of the display is about 1/2, and combined with having to resample the image, the effective resolution will be even less.

      2) My laser safety reference was with respect to outdoor use requiring very high nits.

      3) Most of the Microsoft patents show using a single mirror. There are also issues coupling the resultant light from a two-mirror scanning process into the light guide. The light rays are nothing like rays radiating from a single center point.

      7) Is a fair point, but I think if Microsoft were really serious, they would have bought Microvision. From what I am seeing, LCOS, DLP, OLED, and LBS are all stopgaps waiting for something like MicroLEDs to be perfected. This is not just my opinion but that of a large number of experts. MicroLEDs have the technical high ground in just about every aspect.

      They are trying to cram a lot into the Hololens devices. Part of what causes the frontal area is the support for SLAM. Put a stack of thin, fragile diffractive waveguides in front of the eyes and you have to encase them in thick polycarbonate plastic to protect both them and the user’s eyes. By the time you do this, you are well on your way to the current Hololens. From a user-friendliness point of view, I much prefer the current Hololens over Magic Leap. The cord is a pain. My recommendation to nReal was to offer a cap/headband and a shorter cable so they didn’t have a tether that snags.

      The 50KU estimate for Hololens sales came out 10 months ago; they had very restricted sales for about the first 6 months and didn’t go on sale generally until Oct 2016. Therefore, the 50KU number covers about a 2-year period.

      1) Yep, I lost money on eMagin. Magic Leap “faked” their demo videos using technology that they were not going to use in a product.
      2) Why do I care about the market? I am talking about technology. I’m not pumping the stock; if I were, I would not have written what I did about them.
      3) As I wrote, I think there will be irrational exuberance. Maybe not, I don’t bet much.
      4) They will use the Microsoft win (assuming it does happen) as a reason to sell more stock like they always do with any good news.

      The one ray of hope for Microvision might be LiDAR. I don’t follow that market, but at least it potentially has some volume, unlike LBS projectors. Still, with all the LiDAR players in the market, you have to wonder what Microvision has that the others haven’t already done. Dual-mirror scanning is very old, and many companies have the technology from barcode scanners. I don’t follow this space, but it seems crowded and Microvision is late to enter.

      • Honestly, I was just lucky, and maybe a bit lazy, in simply using “Hololens 2” as a stand-in until we know the name.

  6. Beyond optics & operating system, Hololens 2 will truly be a generational leap over Hololens 1:

    1) custom AI chips
    2) 5G integration
    3) volumetric video upgrades, with improved content produced at the half dozen Microsoft Mixed Reality Capture studios globally. Hololens 2 content is captured like a movie rather than produced like a video game. That type of content can exceed 50GB and couldn’t be rendered in full resolution on an untethered Hololens 1.

  7. Technical comment: none of the figures/images appear in the post; I’m getting white rectangles on iPhone and nothing on desktop.

  8. It seems to me that if CMC’s product can work with LBS, then it would also be suitable for use with OLED. In fact, in this CMC military contract they proposed this:

    “CMC proposes to develop an image overlay see-through display for a 4-tube night vision goggle system by integrating four of CMCs proprietary Holographic Imageguide Displays (HID) to provide a conformal full field of view (FOV) over a GPNVG.”


    GPNVG is an L3 product that used eMagin OLEDs:


    I’m not suggesting that Hololens 2 will use OLED, but I’m having a hard time understanding how they will make LBS & diffractive waveguides suitable for military use, particularly when OLED dominates night vision.
    Incidentally, eMagin OLEDs seem to be capable of achieving high enough brightness for the F-35:

    This ManTech project is integral to the initial operational capability (IOC) for the F-35C, which is scheduled to be achieved in early 2019. A quantity of 62 OLED flight helmets have been ordered for IOC trials. F-35 prime contractor Lockheed Martin has planned additional qualification testing, which will take the helmet to a Release Authorization Notice 6 production helmet in 2019.

    page 77


    • OLED will “work” with some technologies, but OLEDs are not bright enough for outdoor AR. Most diffractive waveguides only let through about 3% of the light that goes in; Lumus is more like 5 to 6%. If you have a combiner that is largely transmissive to the real world, the light throughput from the display is terrible. That is why for outdoor AR they want to start with over 100,000 nits, which is possible with LCOS, DLP, and MicroLEDs.
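      To make the throughput argument concrete, here is a minimal sketch of the nit budget. The 3% and 100,000-nit figures are the approximate numbers from the discussion above, not measured specs:

      ```python
      # Rough nit budget through a diffractive waveguide (illustrative numbers only).
      display_nits = 100_000        # starting brightness (LCOS/DLP/MicroLED class)
      waveguide_throughput = 0.03   # ~3% for a typical diffractive waveguide

      to_eye = display_nits * waveguide_throughput
      print(f"Reaches the eye: ~{to_eye:.0f} nits")  # about the minimum cited for outdoor AR
      ```

      Run it with the Lumus figure (0.05 to 0.06) instead and you get 5,000 to 6,000 nits from the same starting brightness, which is why the starting display brightness matters so much.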

      OLED does not dominate military night vision. The military uses light-amplification tubes, made by L3, that have phosphors in them.

      • eMagin supplies the OLEDs used in ENVG-II & ENVG-III and is currently, to my knowledge, the only company supplying displays for ENVG-B.


        OLED is the superior choice for night vision due to its high contrast ratio and the fact that no backlight is used. The backlight is what caused the “green glow” in the F-35 HMD and is why they are migrating from the Kopin LCD to the eMagin OLED.

        This is from an L-3 night vision patent :

        The benefits of OLED displays over LCD displays are known. OLED displays are lighter weight than their LCD counterparts, can provide greater flexibility in the display, can have a wider viewing angle and a faster response time than corresponding LCD displays. Additionally, as described above, OLED displays are preferred in low-light conditions as OLED displays have a higher contrast ratio than their corresponding LCD displays. Additionally, OLEDs do not require a backlight which provides the thinner and lighter display than a corresponding LCD. At its most basic, an OLED display comprises a single organic layer between the anode and cathode. However, an OLED display having multiple layers of organic material is another possibility. Further, one of the most common OLED display configurations is a bilayer OLED comprising a conductive and emissive layer as described above.


        So yes, eMagin OLED is the dominant display choice for night vision, and I fail to see how Hololens will be able to implement a night vision solution using LBS and/or diffractive waveguides in their military device.

      • I’m not sure what you are looking at, but the military ground forces I’m familiar with only use photomultiplier tubes with phosphors, such as the unit shown in the article you linked to. There is no other display, at least for the primary image. They want totally passive (no IR emitters), and they don’t want the delay between a camera and the display.

        Take a look at the L3 night vision catalog. https://www.l3t.com/integratedlandsystems/assets/2017_L3-IT_Catalog_LowRes_r2h.pdf . The only thing listed with an OLED is for an IR based range finder.

      • ENVG-I uses a KOPN LCD display; ENVG-II & III use an eMagin OLED display.

        FWS-I (contract split between KOPN & EMAN displays) integrates with ENVG-III.

        FWS-I wirelessly transmits to the ENVG-III display so soldiers can shoot around corners. FWS-I also has a display.


        ENVG-B is binocular and replaces the monocular ENVG-III. ENVG-B also integrates with FWS-I.

        ” the ENVG-B also can superimpose tactical data over the user’s field of view”


        The HUD 3.0 mentioned in the above article is now IVAS, the Microsoft contract.

        ENVG-B is being fast-tracked as part of the Army Modernization effort.

        L3 won the initial no-bid award of $391M for 13,000 ENVG-B units; the program total is expected to be 100,000 units.

        L3 2Q 2018 CC:

        Christopher Eugene Kubasik – L3 Technologies, Inc.

        One highlight for Sensors was a three-year $390 million next-generation night vision goggle award for 13,000 units; 10,000 for the Army and 3,000 for the Marines. Consistent with our customers’ desire to move rapidly, the Army and L3 is in alpha contracting process to negotiate and complete the deal in about 60 days. This 10,000 unit initial order positions us to compete for the Army’s planned purchase of 100,000 night vision goggles in the years ahead. This is a case in point of bringing the disruptive and innovative technology fast to market at an affordable price.


        Christopher Eugene Kubasik – L3 Technologies, Inc.

        I think that night vision goggle example is a perfect case. That’s, again, $391 million for 10,000. The opportunity for 100,000, you can do the math, those need to be competed over the next several years, the three phases or whatever the Army wants to do. You can get some pretty big numbers with billions in them, and they backed it up. I’ve personally been in the building as well as our executive leaders and we’re responsive, we’re committed. I think these guys are doing a great job and they need people like us that can align and prove that these types of things could be done.


        Extrapolating the ~$390M initial contract to 100,000 units puts it at a ~$3B program.

        eMagin 3Q 2018 10Q :

        We supported multiple prime contractors with display deliveries for pre-production units for the US Army Enhanced Night Vision Goggle program and are providing additional engineering support for the Binocular (ENVG-B) program. This program is anticipated to commence production in 2019 with an overall acquisition objective by the US Army of 190,000 systems. (should be 100,000 systems = 200,000 displays)

        We received a follow-on contract for the Family of Weapon Sight – Individual (FWS-I) program following the delivery of displays for the LRIP phase of the program late last year. We are currently in the process of finalizing another follow-on contract for the FWS-I.

        So the progression is ENVG-I, II, & III, then ENVG-B, to IVAS.

        I fail to see how night vision will be integrated into IVAS (the military Hololens) without eMagin being part of the equation.

    • I should have been clearer in the article that while I was sure Microsoft was using LBS, I was not sure whether they are buying units from Microvision. This is why the title did not include Microvision. I would think they would at least do some license with Microvision, simply because Microvision has a very large patent portfolio on LBS.

      I don’t know how much you know about patent law, but even if you get a patent, you can still violate other patents. A patent is a “negative right”: it only allows you to prevent other companies from doing something; it does NOT give you the right to make even your own invention if it infringes others’ patents.

      All this said, it could very well be that Microsoft got a license but is making the unit themselves.

      • Thanks for the comments and insight.

        Patent lawsuits are a game for kings (the one with the most gold usually crushes the feeble, underfunded peasant), unless you are a patent troll and just wait for a way to capitalize on a patent violation.

        Here’s the patent for Microsoft’s MEMS laser scanning display, which they designed and developed in house. They are using a manufacturer with the know-how and IP protection to build the MEMS system… and it’s not Microvision.


        By the way, I think you’re going to be surprised when you get to look through the H2. It’s amazing! There’s a reason they won the military contract and it wasn’t based on H1.

        Cheers and I’m looking forward to seeing what you guys come up with at the new venture.

      • No, it must be a Microvision projector, because Microvision mentioned that it would also use the new 2K display in their other business units (projectors, consumer and automotive LiDAR) sometime in the future. They would not be able to do so if this were a Microsoft engine.

        Also, I think it is impossible for a company to develop all this in only 21 months, regardless of how big the company is. An engine contains not only the mirrors and the principles mentioned in patents; there is also the firmware to drive the mirrors and convert an image signal into colors. Remember that Bosch has been working for several years, likely more than five, on its low-resolution engine, which is still not market-ready.

        I think Microvision developed the projection engine while Microsoft developed the other parts that combine the projector and waveguides. See my other reply about The Verge article, which describes everything that was developed.

  9. Hello Karl,

    what do you think about what The Verge mentions:


    “Lasers and mirrors


    The lasers in the HoloLens 2 shine into a set of mirrors that oscillate as quickly as 54,000 cycles per second so the reflected light can paint a display. Those two pieces together form the basis of a microelectromechanical system (MEMS) display. That’s all tricky to make, but the really tricky part for a MEMS display is getting the image that it paints into your eyeball.

    Microsoft doesn’t want any of those problems, so it turned to the same thing it used on the first HoloLens: waveguides. They’re the pieces of glass in front of your eye that are carefully etched so they can reflect the holograms in front of your eyes. The waveguides on the HoloLens 2 are lighter now because Microsoft is using two sandwiched glass plates instead of three.

    When you put the whole system together — the lasers, the mirrors, and the waveguide — you can get a brighter display with a wider field of view that doesn’t have to be precisely aimed into your eyes to work. Zulfi Alam, general manager for Optics Engineering at Microsoft, contends that Microsoft is way out ahead with this system and that waveguides are definitely the way to go for mixed reality. “There’s no competition for the next two or three years that can come close this level of fidelity in the waveguides,” he argues.

    Do you want a wider field of view? Simple. Just increase the angle of the mirrors that reflect the laser light. A wider angle means a bigger image.

    Do you want brighter images? Simple again. Lasers, not to put too fine a point on it, have light to spare. Of course, you have to deal with the fact that waveguides lose a ton of light, but the displays I saw were set to 500 nits and looked plenty bright to me. Microsoft thinks it could go much brighter in the final version, depending on the power draw.

    Do you want to see the holograms without getting specifically fitted for your headset? Simple yet again. The waveguide doesn’t require specific fitting or measurement. You can just put the headset on and get going. It also can sit far enough in front of your eyes to allow you to wear whatever glasses you need comfortably.

    Simple, simple, simple, right? In truth, it’s devilishly complex. Microsoft had to create an entirely new etching system for the waveguides. It had to figure out how to direct light to the right place in the waveguides nearly photon by photon. “We are simulating every photon that comes from the laser,” Alam says. The light from the lasers isn’t just reflected; it’s split apart in multiple colors and through multiple “pupils” in the display system and then “reconstituted” into the right spot on the waveguides. “Each photon is calculated where it’s expected to go,” Alam says. That takes a ton of computing power, so Microsoft had to develop custom silicon to do all of the calculations on where the photons would go.

    And though alignment is much easier with the waveguide, that doesn’t mean it’s perfect. That’s why there are two tiny cameras on the nose bridge, directed at your eyeballs. They will allow the HoloLens 2 to automatically measure the distance between your pupils and adjust the image accordingly. Those cameras will also allow the HoloLens 2 to vertically adjust the image if it gets tilted or if your eyes are not perfectly even. (They are not. Sorry.)”

    • 1. The 54,000 cycles per second seems too slow. To begin with, divide by 60 and that is only 900 “cycles” per 1/60th of a second. If the laser is on in both directions, then they get 1800 scans that are not uniform (there is sort of a “Z shape” where the pixels on the right and left sides are spaced oddly). Some of these “scans” will occur during “retrace,” so maybe there are 1440 “useful” scans, but that is not the same as 1440 rows of pixels. The effective resolution is about 1/2 of that, or ~720. I would still be worried about 60Hz flicker, particularly with there being zero persistence in LBS.

      2. They are only talking about 500 nits, which is about 200 nits more than Hololens 1 but not nearly enough for outdoor use. You need over 3,000 nits, and more like 7,000 nits, for outdoor use. I wonder if they could get to 7,000 nits and still be “eye safe.”

      3. The waveguide, in essence, is a massive pupil replicator (Bernard Kress talked about this at Photonics West), so the eyebox should be much bigger than, say, North Focals or Vaunt.

      4. “Microsoft had to create an entirely new etching system for the waveguides. It had to figure out how to direct light to the right place in the waveguides nearly photon by photon.” Marketing puffery.

      5. Using the cameras to look at the pupils and then adjust the IPD “electronically” seems like a good idea.
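      The scan-rate arithmetic in point 1 can be sketched as follows. The retrace loss and the ~1/2 effective-resolution factor are rules of thumb from the discussion, not measured values:

      ```python
      # Back-of-envelope LBS scan-line arithmetic (illustrative, not a spec).
      mirror_hz = 54_000    # fast-mirror cycles per second (figure from The Verge)
      frame_rate = 60       # assumed frame rate

      cycles_per_frame = mirror_hz // frame_rate   # 900 mirror cycles per frame
      bidirectional_scans = cycles_per_frame * 2   # 1800 scans with the laser on both ways
      useful_scans = 1440                          # rough estimate after subtracting retrace
      effective_rows = useful_scans // 2           # ~720 effective rows of resolution
      print(cycles_per_frame, bidirectional_scans, effective_rows)  # 900 1800 720
      ```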

      BTW, they seem to be all over the place with the FOV. One place says they “more than doubled it,” and I read in another place that it is 54 degrees. Hololens 1 had about a 35-degree diagonal FOV. Maybe they are playing a marketing game and saying they doubled the AREA of the FOV, which would be misleading. BTW, 54-squared/35-squared = 2.38x. Just looking at the size of the waveguides and guessing at the eye relief (remember, they still support wearing glasses), it does not look to me like they are big enough to support a 70-degree (diagonal) FOV (but I have not analyzed this in detail).
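      The area comparison above is just the square of the diagonal ratio, assuming the same aspect ratio for both displays:

      ```python
      # FOV "area" ratio as the square of the diagonal ratio (same aspect ratio assumed).
      hl1_diag = 35   # approximate HoloLens 1 diagonal FOV, degrees
      hl2_diag = 54   # diagonal FOV reported in one place for HoloLens 2, degrees

      area_ratio = (hl2_diag / hl1_diag) ** 2
      print(f"{area_ratio:.2f}x")  # ~2.38x the area, i.e., "more than doubled"
      ```

      With CNET’s figure of 52 degrees instead of 54, the same calculation gives about 2.2x, still "more than double" the area but a much smaller linear improvement.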

      I’m still waiting to look through one and see what the real resolution is and if there is any flicker or other issues.

    • The Verge says they are aiming for factory/indoor use, and if the pictures are anything to go by, it looks like they’ve upped the see-through transmission to get close to ANSI (a rough measurement suggests ~85%, on the nail). Given that diffraction gratings also diffract the outside world (losing up to 30%), to increase the transmission they would have had to reduce the diffraction efficiency. This explains the move to lasers: not just because Hololens 1 wasn’t bright enough, but because an LED-illuminated approach would be even dimmer with these guides. They must also be using AR coatings on the waveguides, tailored not to impact TIR, as per some of their other patents. If they weren’t so committed to diffractive waveguides, then they wouldn’t have been forced into this LBS compromise.

      • Why do you think they are letting 85% of the light through? Every picture and video I have seen makes it look like they are blocking well more than 50% of the light. I have seen pictures of different visors, but they all seem to block a lot of light.

      • The bottom pic in the Verge article… of course, we won’t really know until we see one in the flesh.

  10. CNET is reporting 52, but it is not clear whether that’s horizontal or diagonal. In their YouTube video today, they state they visited Microsoft HoloLens engineering before today’s press event, and right at the beginning they show 30 for HL1 compared to 52 for HL2.

  11. Hi, Karl.

    If I may ask you to put your investor/trader/speculator hat on for a moment… All else being equal, do you have any guesses as to what kind of a “peak” the MVIS share price will see? I understand that you’ll probably be feeling it out, in real time, as the share price increases, and I obviously won’t hold you to it, because situations change over time and we only have a limited view on the facts. But, I just wanted to get a feel for what kind of impact, in your opinion and based on the data available, a reveal like this could have on the MVIS stock price.

    I believe the Sony footnote a few years ago sent MVIS above $4 briefly before coming back down to the lower $3s, but that was with a lot fewer shares outstanding — IIRC, 50-60 million at the time vs. the ~100 million outstanding now.


    • There is so much that is irrational about the stock market that it can be a fool’s errand to try and predict. As you wrote, the Sony design “win” drove MVIS way up but it proved to be a nothing deal and the stock crashed. The Hololens win will have more legs, but still, it is hard to see how MVIS will make a lot of money anytime soon with the deal (there are not enough potential dollars going to MVIS). Frankly, I would have thought the Hololens announcement would have driven MVIS stock price higher already due to irrational exuberance. Maybe MVIS’s track record is holding it down.

  12. […] While the small text is the “canary in the coal mine” in terms of showing up resolution problems, I would expect to see aliasing and other artifacts in any image with sharp edges. They are rendering objects at a higher resolution than the display can properly display. And as discussed in the Math Fails article, the interlaced LBS scanning process is distorted/non-rectilinear, with a varying resolution across the display, which only adds to the problems. The inherent scanning resolution problems are then compounded by the non-uniformity issues of diffractive waveguides (see my article: Hololens 2 is Likely Using Laser Beam Scanning Display: Bad Combined with Worse). […]
