Hololens 2 Display Evaluation (Part 2: Comparison to Hololens 1)

Introduction – Answering a Simple Question

In response to Hololens 2 Display Evaluation (Part 1: LBS Visual Sausage Being Made), someone asked, “How does the Hololens 2 (HL2) compare to the Hololens 1 (HL1) in terms of image quality?” I had also read someone else comment that while he liked many of the new features and ergonomics of the HL2, he thought the image quality of the HL1 was much better.

I decided to fire up the Hololens 1 and see for myself. Sure enough, the Hololens 1 was dramatically sharper. I then shot pictures of the HL1 display under conditions as close as possible to those I used for the HL2, with the exact same lens, camera, and test pattern.

The pictures below give a fair impression of how blurry/soft the HL2 is relative to the HL1. The difference is dramatic when you go directly from wearing an HL2 to an HL1; the HL1 is so much sharper. After looking through the HL1, looking through the HL2 is like looking through a fogged-up window.

HL2 Uses Laser Beam Scanning (LBS) and the HL1 Uses LCOS

For those that don’t know, the HL2 uses laser beam scanning (LBS) and the HL1 uses LCOS microdisplays as the display device. The last article explained the HL2’s LBS process. There is plenty of information available on how LCOS (Liquid Crystal on Silicon) works (such as here).

Comparison Images

On the HL2, the image was sized and the headset positioned so that there was one scan line per pixel in the test pattern in the center of the image (see Part 1). On the HL1, the headset was positioned to have one pixel in the image equal one pixel in the test pattern.

Below is a side-by-side comparison taken with the same camera and lens with very similar setups. If you click on the image below, it will open a larger version that is still scaled down by about 4X. Even scaled down, the HL1 is noticeably sharper. The HL2’s FOV is slightly wider than the lens’s, so about 6% of the right side of the HL2’s FOV is cut off, but the lens roughly covers the HL2’s vertical FOV.

Hololens 2 to Hololens 1 – (click for larger version — still scaled down by ~4x in H and V)

Below are some full-camera-resolution image crops from the center of the pictures above that show the sub-pixel-size detail. The leftmost image shows a single field taken at 1/125th of a second from the HL2, using the same focus as the middle HL2 image, which averaged 15 fields over 1/8th of a second. The rightmost image is a crop from the HL1.

If you put the above image on a monitor and back away, you will get a good idea of how it looks in real life. If anything, this comparison flatters the HL2, as the HL2 also has temporal artifacts, including flickering edges and rolling effects from the 4-way (plus) shifting.

Below are the HL2 and HL1 images in just the green channel to better show the difference in resolution.

In the next set of image crops, the HL2 image has been resized so that the pixel sizes in the test pattern match those of the HL1. A green-only version has been included as well.

The HL2 Has Less Than 20 Pixels Per Degree

As reported last time, the Hololens 2 has about 854 scan lines in the middle of the screen, and the effective resolution is less than the number of scan lines. The HL2’s vertical FOV is about 30 degrees, so they only get 854 scan lines / 30 degrees ≈ 29 scan lines per degree. As I wrote, the effective resolution is more like 600 lines, or 20 pixels per degree, for the HL2. The HL1 has 720 pixels vertically over about 17.5 degrees, or roughly 41 pixels per degree (41 ppd).

With 60 arcminutes in a degree, 20 ppd works out to 3 arcminutes per pixel (3 am/p). Human vision is a little better than 1 am/p (60 ppd), roughly what Apple called a “Retina display.” Most people will agree that 1.5 am/p (40 pixels per degree) is good enough for most purposes. As the resolution drops to 2 arcminutes per pixel (30 ppd), people start to notice problems such as screen-door effects or a blurry image.
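
To make the unit conversions easy to check, below is a quick back-of-the-envelope sketch in Python; the line counts and FOV figures are the estimates from Part 1, not precise measurements:

```python
# Pixels-per-degree (ppd) and arcminutes-per-pixel arithmetic.
# Line counts and FOV estimates are from Part 1 of this series.

ARCMIN_PER_DEGREE = 60

def ppd(pixels: float, fov_degrees: float) -> float:
    """Pixels (or scan lines) per degree across the given field of view."""
    return pixels / fov_degrees

def arcmin_per_pixel(ppd_value: float) -> float:
    """Angular size of one pixel in arcminutes."""
    return ARCMIN_PER_DEGREE / ppd_value

hl2_scan_lines = ppd(854, 30)    # ~28.5 scan lines/degree (before losses)
hl2_effective  = ppd(600, 30)    # ~20 ppd, using the ~600-line effective figure
hl1            = ppd(720, 17.5)  # ~41 ppd

print(f"HL2: {hl2_scan_lines:.1f} scan lines/deg; "
      f"{hl2_effective:.0f} ppd effective = "
      f"{arcmin_per_pixel(hl2_effective):.1f} arcmin/pixel")
print(f"HL1: {hl1:.1f} ppd = {arcmin_per_pixel(hl1):.2f} arcmin/pixel")
# Reference points: 60 ppd = 1 arcmin/pixel ("retinal"),
# 40 ppd = 1.5 arcmin/pixel, 30 ppd = 2 arcmin/pixel.
```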

Microsoft’s Alex Kipman statement on HL2: “47 pixels per degree of sight is an important number to remember”

In the Hololens 2 announcement in February 2019, Microsoft’s Alex Kipman made claims about the resolution that appear to be provably false. Quoting (with added highlights) starting at 29:55 in the video:

 47 pixels per degree of sight is an important number to remember. Now, this is important because this is what allows us to read an 8-point font on a holographic website. This is what allows us to have precise interactions with holograms and ultimately, this allows us to create and be immersed in mixed reality.

Hololens 1 is the only headset in the industry capable of displaying 47 pixels per degree of sight. And today, I’m incredibly proud to announce that with Hololens 2, we more than doubled our field of view while maintaining 47 pixels per degree of sight on Hololens.

To put it in perspective, and to highlight the generational leap, this is the equivalent of moving from 720p television to 2K television for each of your eyes. Now, no such technology exists in the world. So, in the same way, we had to invent time of flight sensors for Kinect, waveguides for Hololens, and holographic processing for the edge, with Hololens 2 we invented an industry-defining MEMS display. Now, these are the smallest and most power-efficient 2K displays in existence.

Alex Kipman, Microsoft at MWC19 Barcelona

And these were not some off-the-cuff remarks or misquotes. There were prepared visuals to go along with each claim as shown with clips from the video below.

Some of the Hololens 2 False Claims from its Announcement Video

Let’s start with the easy one. The Hololens 1 has a vertical FOV of about 17.5 degrees and a display with 720 pixels vertically. Dividing 720 pixels by 17.5° gives about 41 ppd, so he is fudging to say the HL1 had 47 pixels per degree. One could say this is within the “marketing margin of error.”

But as the pictures taken through the HL2 prove, the HL2 has less than half the claimed resolution. They could try to play some word games, as they did with the 2X FOV (more on that in a bit), but he also stated that an 8-point font would be readable. There is absolutely no way that an 8-point font, as commonly defined, would be readable by anyone on a display with less than half of the claimed 47 pixels per degree. BTW, the default font on the Microsoft Edge browser is the size of the 16-point font in this image.

The images below were taken with the same test pattern. The top image shows the HL2 viewed at 29 pixels/degree (setting the test pattern height to equal the number of scan lines). The second image shows the HL2 with the test pattern set up to display at 47 pixels/degree. The final picture shows the first generation (HL1) viewed at 41 pixels/degree (pixels in the test pattern equal to pixels on the LCOS display). The HL1 image gives an idea of what 8-point text should look like; note that it is sharper and more readable than the HL2 at 29 pixels/degree.

The test pattern above is the first I have shown with some color content. The pictures give a hint that the HL2’s LBS display has problems with color as well (a subject for a later article).

Kipman’s statement, “this is the equivalent of moving from 720p television to 2K television for each of your eyes,” is just absurd. Commonly, 2K means 1920 by 1080 pixels. The HL1 does have about 720p resolution (though its color uniformity is very poor), but the HL2’s resolution is lower than the HL1’s. The HL2 has about 2.4X less effective resolution horizontally and 1.8X less vertically than “2K.” Or, since Kipman likes to use area calculations (see the next paragraph), more than a factor of 4X fewer pixels.
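
As a sanity check on those factors, here is a short sketch comparing a 1920×1080 “2K” image against the roughly 800-by-600-pixel effective resolution estimated in Part 1 (the actual effective resolution is, if anything, lower):

```python
# Comparing "2K" (1920x1080) against the HL2's estimated effective resolution.
# The 800x600 figure is the rough estimate from Part 1, not a measurement.
two_k = (1920, 1080)
hl2_effective = (800, 600)

h_ratio = two_k[0] / hl2_effective[0]  # horizontal shortfall: 2.4x
v_ratio = two_k[1] / hl2_effective[1]  # vertical shortfall: 1.8x
area_ratio = h_ratio * v_ratio         # by Kipman's area math: ~4.3x

print(f"{h_ratio:.1f}x horizontally, {v_ratio:.1f}x vertically, "
      f"{area_ratio:.1f}x fewer pixels by area")
```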

The “more than 2X field of view” fib was covered by many websites back in Feb. 2019. Kipman later “clarified” that he was talking about “area” when everyone else measures FOV linearly (see my Hololens 2 First Impressions: Good Ergonomics, But The LBS Resolution Math Fails!, RoadtoVR, and UploadVR among others).

Conclusion & Comments: Why the Unforced Error By Microsoft?

Hopefully, the pictures speak for themselves and reinforce what was written in Part 1, namely that the effective resolution of the HL2 is something less than 800 by 600 pixels. Others were so focused on Kipman’s misleading (being generous) claim of a 2X FOV improvement over the HL1 that they never considered that the statements before and after it were categorically false.

Back in February 2019, most people who took the Hololens 2 to task on the “2X FOV” assumed that Microsoft did it to make it sound like the HL2’s FOV was significantly better than the Magic Leap One’s, when it was about the same (see the image on the right from UploadVR).

It is less obvious why they out-and-out lied about the resolution, and I can only speculate. On the surface, it seems like an unforced error. Maybe they were trying to sound better than Magic Leap again. Maybe it was just seen as “good” marketing. Maybe they were trying to convince people, perhaps upper management at Microsoft, that they made the right choice to go with laser beam scanning (LBS) displays and spend hundreds of millions of dollars on development and manufacturing setup when they could have simply bought much higher resolution displays from multiple companies. One thing is for sure: there were many people at Microsoft who knew they were lying about the resolution.

Also, please don’t buy into the Emperor’s New Clothes argument tweeted by Kipman that the HL2 does something magical that only works when seen directly by your own eyes. Yes, a camera works differently than the human visual system, but with careful picture taking, you can get representative images, as shown above.

Appendix: The Photography Setup

The same camera (Olympus E-M1 Mark III) and lens (Olympus 25mm F1.8 prime) were used for both headsets. The camera was positioned relative to each headset to give the best image possible. The HL2 was shot at ISO100, F11, and 1/8th of a second to average out as much as possible of the pixel-shifting/wobulation (see Part 1) for the “Interlaced” test pattern. The color pictures of the text and “Elf” with the HL2 were shot at 1/15th of a second at F8 and ISO100. The HL1 was shot at ISO200, F8, and 1/30th of a second to average out the color-field-sequential effects. The pictures were all shot with manual focus. All images were shot in the Olympus RAW (.ORF) format and converted to JPEG just before uploading.
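
For those curious how the shutter speeds relate to the field averaging, here is a small sketch. The HL2’s ~120 fields per second is implied by the 15 fields captured in 1/8th of a second mentioned earlier; the HL1’s 360 color fields per second is my assumption of a typical field-sequential LCOS rate (see the comments below), not a measurement:

```python
# How many display fields each camera exposure averages together.
# Assumed field rates: ~120 fields/s for the HL2 (15 fields in 1/8 s)
# and 360 color fields/s for field-sequential LCOS like the HL1's.
from fractions import Fraction

def fields_averaged(field_rate_hz: int, shutter: Fraction) -> Fraction:
    """Number of display fields captured during one camera exposure."""
    return field_rate_hz * shutter

hl2_fields = fields_averaged(120, Fraction(1, 8))   # 15 fields per exposure
hl1_fields = fields_averaged(360, Fraction(1, 30))  # 12 color fields
                                                    # (= two full RGBx2 frames)
print(f"HL2: {hl2_fields} fields, HL1: {hl1_fields} color fields")
```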

Karl Guttag

21 Comments

  1. Hi Karl

    Thanks for your experiments; it is really useful for letting people know the difference in optical performance between LBS and traditional displays (e.g., DLP or LCoS).
    I believe that for AR glasses, or even HMDs, there are always many design tradeoffs, so do you have any idea why LBS was selected for the HL2? Better color through the waveguide, power savings, or something else?

    • Based on what I have heard, the main reason for the HL2 using LBS was etendue coupling into the waveguide. Basically, it is difficult/inefficient to get light to couple into the small slit of a pupil-expanding waveguide, and a laser starts with near-zero etendue (angular variability). But the LBS engine itself is very power-inefficient. The lasers have to be kept at subthreshold current, but still on, even when “black,” so they will switch on fast enough when needed. There is also a lot of processing work/power to control the scanning process, as well as all the image correction and resampling for the scanning process.

      A secondary issue was eliminating the field sequential color breakup effect, but they traded this issue for having visible flicker all the time. The color control is horrible with the HL2 as I will be showing.

      Hololens seems to me to be an R&D project that “escaped the lab too soon.” Also, they like playing with technology more than building a product. They probably thought a few hundred million dollars could solve the problems with LBS and didn’t understand that they were peeling an onion with many layers of problems.

      [One example, but it is speculation]: Maybe they had a lead on a dual (stacked) laser on a single die/chip and thought it would solve the problem of the fast mirror being too slow to support the resolution they wanted. But while “solving” the speed problem, it made the problem with the scan paths worse, as the Part 1 article shows; so they “fixed” that with the 4-way interleaving. But then this made the flicker and fuzziness of the image worse. This is the way it has always been with LBS: each “fix” opens up new problems.

      • Thank you @KarlG for the detailed analysis. Had I found it earlier, I would have postponed purchasing the HL2. But unfortunately, I have already got my HL2 directly from Microsoft. Now I can compare both the HL1 and HL2, and I can assure you without sophisticated measurements that the image quality of the HL2 is 10x lower than that of the HL1 (which I have been using since 2017). That’s why, even before starting to program for the HL2, I jumped onto the internet to see if it was only me who was so unlucky, or if it is a common problem with the HL2. My only hope now is that my unit was from one of the earlier batches of the HL2 and Microsoft will exchange it for a better/newer one…. But if all units are like mine, then unfortunately it is a big failure for Microsoft. Not sure if anyone will buy it even for $1000. Looking at your research and analysis, I’m afraid the problem is fundamental and MS may not have better units… but I will try.

      • There is a component of unit-to-unit variability, but many of the major issues are inherent in the design and won’t be getting much, if any, better.

        The resolution issues and flicker/ripple described in Parts 1 and 2 are inherent and will not get significantly better from unit to unit. The image is fuzzy, and it wiggles and blinks. The lack of resolution is due to the way the HL2’s laser scanning process works and will not get any better with the HL2.

        All units will have color variability across the field (what is dubbed “the rainbow problem”). That is inherent in the HL2’s diffractive waveguide design. The “trapezoid” area (in Part 3) is likely caused by the butterflied waveguide. How bad the color rainbow looks will vary somewhat from unit to unit.

        Another thing I’m sure you have noticed, particularly if you wear glasses, is that the “eye relief” on the HL2 is much smaller and not adjustable as it was on the HL1. The HL2 at least has enough eye relief for me with my glasses, but I suspect it could be a problem for some people where it would not be with the HL1.

        Some of this comes down to what you are expecting to do with the HL2, the “use model.” Nobody would buy an HL2 or HL1 for watching a movie; the image quality is horrible compared to the cheapest TV you can buy.

        The HL2 is for overlaying fuzzy and not-color-accurate information on the real world. It may, for example, find use in industrial/enterprise applications to aid an assembly worker in doing their job, where the cost of the HL2 can be justified if it makes the worker even, say, 10% more efficient. But if you are trying to overlay detailed information or things like pictures, then the HL2 is probably the wrong tool.

  2. “It is less obvious why they out and out lied about the resolution and I can only speculate”

    Maybe they wanted to land a half-billion-dollar contract with the DOD at the expense of Magic Leap?

  3. Thank you very much for your in-depth analysis. You mentioned that the FOV and brightness are improved from the HL1 to the HL2; can you provide some insight into what challenges LCoS would face in achieving a similar FOV and brightness? Also, do you think there is any ergonomic effect of LBS over LCoS (e.g., form factor, heat, etc.)?

    • I don’t know of any challenges to LCOS achieving the same or greater FOV and brightness, as companies such as WaveOptics, Dispelix, and Lumus have demonstrated the capability. Lumus, using a different waveguide technology, has achieved thousands of nits with about the same FOV. Dispelix and WaveOptics have demonstrated large FOVs using similar diffractive waveguide technology. While the optics are less efficient with LBS, the light (photon) generation is more efficient with LEDs illuminating LCOS.

      While the laser scanning companies like to say that they turn the lasers off when displaying black, this is not true. What they do is set them to a subthreshold current so they don’t lase. If they turned the lasers all the way off, they would not be able to turn on fast enough. When I did power studies years ago on laser pico projectors, I found the power consumption for a black screen to be higher than that of an LCOS engine producing a white or any other image. I have not done a power study with the HL2 headset (as it would likely be destructive), but I suspect it is still true. Heat, in turn, is a direct result of power inefficiency.

      The main downside to LCOS is that high-resolution, more power-efficient LCOS uses field sequential color. In this method, a single LCOS device displays the red, green, and blue fields sequentially in time. Most LCOS works at 360 color fields per second (I think this is what Hololens 1 used, but I haven’t measured it), which is basically R, G, and B color fields at 60 times per second, shown two times each (60Hz X 3 (RGB) X 2 = 360). Compound Photonics has an LCOS with 1440 fields per second, which is 4 times faster and should significantly reduce, if not eliminate, the sequential color breakup (I have not tried it nor seen studies to prove it).
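
      The rate arithmetic above, written out as a quick sketch (the 1440 fields-per-second figure is Compound Photonics’ claim, not something I have measured):

      ```python
      # Field-sequential color rate arithmetic from the comment above.
      frame_rate_hz = 60   # frames per second
      colors = 3           # R, G, B fields per frame
      repeats = 2          # each color field is shown twice per frame

      fields_per_second = frame_rate_hz * colors * repeats
      print(fields_per_second)         # 60 * 3 * 2 = 360

      # Compound Photonics' claimed rate relative to the common 360-field rate:
      print(1440 / fields_per_second)  # 4.0 (four times faster)
      ```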

  4. Karl Guttag, how did you get your USAF target test images to stay in the field of view of the HL1 and HL2? As soon as I attempt to take off the headset (to position a camera in the eye box area), the images disappear. Currently, I’m loading similar test images onto the OneDrive app of the Hololens and opening these images for viewing. But yes, I wanted to ask how you got around the fact that as soon as the eye pupil is off the glasses, the images disappear. In addition, how do you get the display to project the full-screen image?

    • You need to go into the Hololens (1 and 2) device portal and change the sleep setting (see: https://docs.microsoft.com/en-us/windows/mixed-reality/develop/platform-capabilities-and-apis/using-the-windows-device-portal and search for “sleep”). I think the longest you can set sleep to is 30 minutes (as I remember it).

      I’m not sure how you are mounting the HL2, but I also remember having issues if the headset got too near the big metal ball head on my tripod. It may have been a coincidence, or some sensor in the headset was getting messed up by all the metal around it, but I found it better to have the headset hang from an L-bracket arrangement. I let the headset “float” from the over-the-head straps.

      In the beginning, I did a lot of trial and error with different resolution images. I picked my final test pattern resolution based on the scan lines and the spacing of the dual lasers per color. I used a blackout cloth material to give the Hololens 2 a good background with the lights on in the room (the Hololens 2 needs light for SLAM to work). I then put patches of masking tape just outside the FOV for the SLAM to work with (see: https://kguttag.com/2020/09/06/hololens-2-display-evaluation-part-6-microsofts-fud-on-photographs/ and in particular this image through the HL2’s video feed: https://i2.wp.com/kguttag.com/wp-content/uploads/2020/09/91568-hl2-camera-feed-view-20200904_164234_hololens-scaled-1.jpg?resize=1290%2C870&ssl=1).

      I then navigated to my website’s test pattern page (https://kguttag.com/test/). I “stuck” the browser onto my black screen with the tape-marking setup. I then sized and expanded the browser window to match the FOV of the HL2. I had the HL2 on one tripod with a large ball mount and an L-bracket on top of the ball mount, and I used the image I could see with my eye to align the HL2 while moving the image. The visor was partially up, as you would wear it to get the headband out of the way, dangling from the over-the-head straps. I found it best to position the browser window, and thus the tripod with the HL2, at a height where I could roll my chair under the headset with my head slightly ducked down and then raise my head for alignment. I then used my Olympus camera (which is a very small mirrorless camera) on a separate tripod to take the picture. You will get complaints from the headset wanting to recalibrate, but I just kept saying “cancel” to these requests.

      The article links above will hopefully be helpful. I tried to add a few other little tips above. If you need more help, let me know.

