Hololens 2 Display Evaluation (Part 1: LBS Visual Sausage Being Made)

Introduction

A Hololens 2 (HL2) was purchased directly from the Microsoft store online, and I have been evaluating its display technology for a couple of weeks. Unfortunately, this further pushed out my article on MicroLEDs that I have been working on. Since the HL2 uses laser beam scanning (LBS), seen by some as a display technology competing with MicroLEDs, discussing the HL2 provides some foundational information for the coming MicroLED article.

This blog posting follows up on two articles I wrote in 2019; namely, Hololens 2 is Likely Using Laser Beam Scanning Display: Bad Combined with Worse and Hololens 2 First Impressions: Good Ergonomics, But The LBS Resolution Math Fails!. Regarding the second of the two articles, there is a slight change in the “math” because the HL2 uses dual, vertically stacked lasers per color, but the conclusion is the same because the mirror is scanning at half the speed that was reported.

As is the practice of this blog, all the test patterns are available so that anyone can repeat my results. All the test patterns used can be found on the Test Patterns page. If you want to see what is going on, you will want to click on the images to see them in higher resolution. Even these images are typically scaled down to less than half of what the camera captured, for web viewing.

The Laser Beam Scanning Display – No Color “Rainbow” Pictures . . . Yet

I have taken thousands of photographs of the HL2 displays. Most of these pictures were experiments to get the images that best conveyed what the eye sees, as well as to capture things the eye cannot see and to decipher the scanning process from the images.

Yes, there are problems with uniformity and color “rainbows” across the display related to the waveguide and other optics; these problems are built into the “physics” of a diffractive waveguide. The HL2 unit I have is not nearly as bad as some of the pictures I have seen posted. I suspect that part of the issue is that it is difficult to get a representative picture.

I have taken representative full-color, full-frame pictures, for use later. This article is going to concentrate on how the HL2 laser beam scanning process works. Understanding how it works helps in understanding the resolution, artifacts, and temporal (time changing) issues (including flicker) of the LBS display engine.

For the LBS tests, I am only going to use two reasonably simple test patterns that test resolution. The original patterns are on my “Test Pattern Page” and are shown below (click on the thumbnails for a full-size version):

These test patterns have “targets” in nine locations, the center, top, bottom, sides, and the four corners, that are based on the well-known USAF-1951 Standard Target. The names of the target locations are included in the pattern.

Definition of Field vs. Frame

For scanning displays, a “field” is one top-to-bottom scan of the image. A “frame” represents the whole image at a point in time. For example, old U.S. CRT televisions used “interlaced scanning” with two 60 Hz fields, covering the odd and even lines respectively, to build up one 30 Hz frame. What constitutes a “frame” is not as clear today, as some display systems will switch to showing field(s) from the next frame rather than building up a complete single “frame/image.” As will be shown below, the HL2 has four (4) identifiable “fields,” each displayed at 120 Hz, but the HL2 likely changes the image content at 60 Hz or 120 Hz.
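To put some numbers on this, below is a minimal sketch of the field timing, assuming the 120 Hz field rate and the four field types documented later in this article:

    # Minimal sketch of the HL2 field timing (assumes the 120 Hz field rate
    # and the four field types documented later in this article).
    FIELD_RATE_HZ = 120
    FIELDS_PER_FULL_IMAGE = 4  # 4-way interlacing

    field_period_ms = 1000 / FIELD_RATE_HZ  # ~8.33 ms per field
    full_image_rate_hz = FIELD_RATE_HZ / FIELDS_PER_FULL_IMAGE  # 30 Hz

    print(f"Field period: {field_period_ms:.2f} ms")
    print(f"A complete 4-field image repeats at {full_image_rate_hz:.0f} Hz")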

Scan Lines vs. Pixels

With scanning displays, scan lines are not the same as rows of “pixels.” In the case of the HL2, the scan lines pseudo-randomly cross through what should be a single pixel. The effective vertical resolution in “pixel rows” of the HL2 is less than the number of scan lines. For this article, a “pixel” means the size at which a pixel of the source image should be displayed.

How HL2 Builds Up an Image (How the Sausage is Made)

The following subsections walk you through how the HL2 builds up an image. It is a complicated process.

120 Hz Field Rate

As widely reported, the HL2 has a 120 Hz field rate, and my tests support this conclusion. For capturing a single field/scan, the camera was set to 1/125 of a second shutter speed.

Two vertically-stacked lasers per color

The HL2 has two lasers per color that are stacked vertically, one pixel apart. Bernard Kress of Microsoft gave an excellent presentation, captured on video, about AR Optical challenges paving the road… in February 2020. In the talk at about 54:40, he mentions that the HL2 had “Two Lasers per color.” BTW, Kress is also the author of the recently published excellent AR headset optical reference book “Optical Architectures for Augmented-, Virtual-, and Mixed-Reality Headsets.”

Below is the photographic evidence of the dual vertically stacked lasers. The picture is a close-up of 2- and 1-pixel wide vertical and horizontal lines with a crop of the test pattern enlarged to match. The picture shows a single field, and the red and blue are filtered out (in Photoshop) to show the individual laser scans more clearly.

The HL2 was positioned, and the test pattern was sized such that a single-pixel wide line in the test pattern would be the same height as one scan line from one laser.

Looking at the 2-pixel wide vertical lines of the test pattern in the photograph, notice there is a thin black line separating the two vertically stacked green lasers. The overlaid small red squares are the size of a single-pixel in the test pattern.

You should also notice that the 2-pixel-wide vertical lines are zig-zagging in pairs of two pixels. If you look carefully at the red pixel-size squares, you should see that each pair of 2 pixels is shifted left or right by about half a pixel. Each zig or zag of 2 pixels comes from a different direction of the scan in a single field. The lasers are being turned on in both directions of the horizontal scanning.

Bi-Directional Scanning with the Lasers Turned On

Only in the horizontal center of the display are the scans and pixels so neatly stacked. The figure below has crops near the left side and the center from a single field (one scan). Notice how the separations of the lasers are relatively even in the center of the display and then overlap, causing a bright centerline, with gaps between scans.

The bright center and gaps are caused by the lasers being turned on in both directions (“bi-directional scanning“). An old CRT TV or monitor only turned the electron beam on when scanning from left to right, which results in more evenly spaced scan lines.
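A simple model shows why bi-directional scanning spaces the lines evenly only in the center. The sketch below is purely illustrative; it ignores the bowing and the dual lasers and just assumes the vertical mirror advances at a constant rate while the beam draws in both horizontal directions:

    # Illustrative model of bi-directional scanning (not HL2-exact): the
    # vertical mirror advances continuously, so a left-to-right pass and the
    # return right-to-left pass land close together at one edge and far apart
    # at the other, while the spacing stays even in the center.
    PITCH = 1.0  # vertical advance per horizontal pass (arbitrary units)

    def scan_y(pass_index, x):
        """Vertical position of scan pass `pass_index` at horizontal position x (0..1)."""
        t = x if pass_index % 2 == 0 else 1 - x  # odd passes scan right-to-left
        return (pass_index + t) * PITCH

    for x, label in [(0.0, "left edge"), (0.5, "center"), (1.0, "right edge")]:
        ys = [scan_y(n, x) for n in range(4)]
        gaps = [round(b - a, 2) for a, b in zip(ys, ys[1:])]
        print(f"{label:10s} gaps between successive scans: {gaps}")

At the center, the printout shows even gaps of one pitch; at either edge, the gaps alternate between zero (overlapping scans) and double-size, which matches the overlapped bright lines and gaps seen in the photographs.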

The figure on the left gives further evidence of bidirectional scanning. The HL2’s red lasers are slow to turn on from black/off. It can sometimes take more than a dozen pixels for them to turn on (this is true in both displays on the HL2 being evaluated). I first noticed this with cyan (blue/green) appearing where there should be white in areas of the test patterns. In this color close-up from near the center of the display, you can see how every other scan pair is very cyan (lacking red). Where the laser pair is more cyan indicates the laser scanning from black to white (left to right in this example).

Only about 854 scan lines (not pixels) per field

With the HL2 displaying an image with pixels that closely matched the scan lines in the center of the display, it then becomes a simple matter to calculate the total number of scan lines visible by seeing how much of the test pattern is evident. Doing the math, it turns out that the HL2 has roughly 854 visible scan lines per field, counting the dual-stacked lasers as two “scans” as well as counting both directions of the scan. This is the “maximal counting” with one “cycle” back and forth of the horizontal mirror counting as four scan lines. As will be shown, the effective vertical resolution in “pixels” of a laser scanning display is less than the number of scan lines.

Curved, overlapping, variable speed, and 4-way interlaced scanning

The HL2 scanning process is, to say the least, very complex. As I wrote back in February 2019 in “The LBS Resolution Math Fails!”, the “fast” mirror is not moving fast enough, by at least a factor of four, to come close to Microsoft’s resolution claims for the HL2. While the HL2 has stacked two lasers per color, which would double the number of scan lines, the mirror is running at half the horizontal scan speed that was reported, so the net result is the same. Dual-stacked lasers also cause new image problems.

Non-uniform curved scanning

US Patent Application 2018/0255278 to Microsoft discusses interlaced scanning with dual lasers. It also talks about the beams from the two lasers crossing during the scanning process. The patent application’s Fig. 6 is shown below, along with a colorized version of it on the left. The figures are greatly simplified and only show a few lines of the scans.

The upper left part of the figure shows the scanning process for a single field/scan with the dual-stacked lasers. A scan of a single laser is a distorted, “bowed” sine wave. The distortion of the sine wave comes from one or more mirrors, at least some of which are curved, being hit “off-axis.” A key thing to note (see yellow ovals) is that when the scanning reverses horizontally, the lower laser’s path from the prior scan will cross the upper laser’s path for the next scan.

In the upper right corner of the figure, I have included a crop of 1 field of the green scan showing the gaps between the scans on the outsides of the displayed image for a single field. The display’s scan lines are much closer together than shown in the figure, and the dual-stacked lasers noticeably overlap for more than a third of the display on each side.

In the lower-left corner of the diagram above, I have added the “interlaced field” in red and magenta. The interlaced field is shifted down by about 2 pixels and will somewhat fill in the gaps on the left and right sides of the first field.

4-Way Interlacing

One slight surprise is that the HL2 is not using the common two-field interlacing, but rather four variations of interlacing, or field types. The 4-way interlacing appears to be necessitated by the use of the dual-stacked lasers, which have to skip down two lines (in the center of the display) when interlacing. With the bowed bidirectional scanning process, there would still be significant visible gaps if they didn’t also have four fields.

In the figure below, I have cropped the same area on the far left side of the four field types found to date. Four white horizontal reference lines that are one scan line apart are overlaid for reference. Each of the four fields starts one laser scan width down from the last. While I have numbered the fields in numerical order from top to bottom, I would expect that they probably go in a different order to reduce temporal artifacts.

Bowed Scanning

Shown below is a full-field image (green only). On it, I have drawn three curved red lines that follow a scan line at the top, middle, and bottom of the image. The scan lines exhibit the bowed scanning shown in the Microsoft patent application. I have included straight horizontal lines in light blue for reference.

The next figure shows crops from nine “targets” of the white on black test pattern in a single field. It gives some idea as to how the scanning varies across the display.

Four-Way Interlaced (Wobulation) – Papering Over the Cracks

In 2005, HP, then in the rear-projection TV business, used a vibrating mirror to move around a DLP image very quickly with a technique they called “wobulation.” While officially the term “wobulation” died with HP’s exit from the TV business, many in the industry still call it “wobulation.” Today there are several companies making projectors that do a 4-way shift, which has been dubbed “Faux-K” (see True 4K vs. Faux-K). The figure below shows the concept of 4-way shifting/wobulation from Kress’s book.

The marketing people try to claim that shifting a low-resolution pixel 4-ways improves the resolution by two times in X and Y or four times the pixels. But this is categorically untrue (thus the derisive Faux-K nickname). You simply can’t paint a 1-pixel wide line with a 2-pixel wide brush. The 4-way shifting primarily makes the image smoother and hides screen door effects.

The HL2 uses a variation of 4-way pixel-shifting/wobulation. I have captured the four primary shifts below, taken from four different fields. Fields 1 and 3 shift every other vertical pair of pixels left or right by 1/2 pixel with opposite zig-zag effects (look at the vertical lines under the “2” in the figure below). Similarly, fields 2 and 4 zig-zag in opposite directions. Below, in Photoshop, I have averaged together fields 1 & 3, fields 2 & 4, and all four fields.

Averaging fields 1 and 3 or fields 2 and 4 produces a reasonable image. But note, there is still a considerable zig-zag effect when averaging just two fields.
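Pulling these observations together, below is a rough summary of the four field types as I currently read them from the photographs. The vertical offsets come from the field crops shown earlier; the zig-zag directions are my interpretation, and the HL2 likely sequences the fields in a different order than numbered:

    # Rough summary of the four observed field types (my interpretation of
    # the photographs; the display likely sequences them in a different order).
    field_types = {
        1: {"start_offset_scan_lines": 0, "zigzag_even_pairs": "+1/2 pixel"},
        2: {"start_offset_scan_lines": 1, "zigzag_even_pairs": "-1/2 pixel"},
        3: {"start_offset_scan_lines": 2, "zigzag_even_pairs": "-1/2 pixel"},
        4: {"start_offset_scan_lines": 3, "zigzag_even_pairs": "+1/2 pixel"},
    }
    for n, ft in field_types.items():
        print(f"Field {n}: starts {ft['start_offset_scan_lines']} scan line(s) down, "
              f"alternate 2-pixel pairs shifted {ft['zigzag_even_pairs']}")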

Using a flat white background shows where there are gaps or unwanted textures caused by the scanning process. The two sets of images from the center and the left side below show the four fields. As before, the various fields are averaged together to show the net effect at 30 Hz.

If you look carefully at the “All 4” images above, you will notice that the image still has some wiggling. Also, in large flat shaded areas, you can still see horizontal lines/textures even when adding all four fields together.

To verify the averaging process above, I took a series of pictures at different shutter speeds (see below). When shot at 1/30th of a second, the camera is averaging four fields together (120 fields/30 = 4 fields). Notice there are still lines/textures in the image similar to the Photoshop averaging. I found I had to slow the shutter speed down to about 1/8th of a second, or 15 fields, to get the center smooth with no wiggles and to reduce the lines/textures to where they would not be noticeable. The figure below shows crops from the center and center-left of a test pattern (both sides and the corners act similarly).
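For reference, the number of fields a given exposure averages together is just the field rate times the exposure time; a quick sketch matching the shutter speeds used above:

    # Number of 120 Hz fields a given camera exposure averages together.
    FIELD_RATE_HZ = 120
    for shutter_s in (1 / 125, 1 / 30, 1 / 8):
        n_fields = FIELD_RATE_HZ * shutter_s
        print(f"1/{round(1 / shutter_s)}s exposure averages ~{n_fields:.2f} fields")

The 1/125th-second exposure captures approximately one field, 1/30th averages four fields, and 1/8th averages 15 fields.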

A close inspection of various samples of the same field type shows that the HL2 has minor variations from field to field of the same type. On the left are two variations. You should notice that their overall appearance is similar. But if you look carefully, you will find minor differences. For example, the two-pixel-wide horizontal lines pointed at by the red arrows differ in intensity but are in the same location.

I have also seen some sub-pixel movement of the fields but have not identified what is going on. Perhaps there is some vibration (something to explore later).

While the pixel shifting hides some artifacts such as the “screen door effect,” it also softens/blurs sharp edges. In the case of the HL2, it is using pixel shifting to paper over the holes caused by the bidirectional scanning.

Temporal Artifacts Including 60 Hz and 30 Hz Flicker

The problem with pixel shifting is that because it presents the whole image over time, it causes “temporal artifacts” when people move their eyes (which is constantly), the best known being flicker. With 4-way shifting (“wobulation”), it takes four fields to get a mostly complete image. With the base field frequency being 120 Hz, this means there are 60 Hz and even some 30 Hz flicker components.
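To see where the 60 Hz and 30 Hz components come from, consider a spot on the screen that is only drawn during some of the four field types. A short sketch (the two duty cycles are illustrative):

    # Why 4-way shifting at a 120 Hz field rate produces 60 Hz and 30 Hz
    # flicker components: content drawn in only some of the 4 field types
    # repeats at 120/2 = 60 Hz or 120/4 = 30 Hz.
    import numpy as np

    FIELD_RATE_HZ = 120
    fields = np.arange(480)  # 4 seconds worth of fields
    signals = {
        "lit in 2 of 4 fields": (fields % 2 == 0).astype(float),
        "lit in 1 of 4 fields": (fields % 4 == 0).astype(float),
    }
    for name, sig in signals.items():
        spectrum = np.abs(np.fft.rfft(sig - sig.mean()))
        freqs = np.fft.rfftfreq(sig.size, d=1 / FIELD_RATE_HZ)
        peaks = [float(f) for f in freqs[spectrum > 0.5 * spectrum.max()]]
        print(f"{name}: flicker components at {peaks} Hz")

Content drawn in alternate fields flickers at 60 Hz; content drawn in only one of the four field types has 30 Hz (and 60 Hz) components.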

As I wrote in Hololens 2: How Bad Having Tried It?, the industry learned in the 1990s that scanning computer displays, typically with 200 nits, should have a non-interlaced refresh of about 85 Hz. The HL2 is specified to have up to 500 nits, and the graph below suggests it should have a non-interlaced update rate of better than 95 Hz.

These studies resulted in the ISO-9241-3 recommendations for computer monitors in 1992. It was found that even 60 Hz progressive scanning was not fast enough and that the perception of flicker also varied with screen brightness (among other factors). The ISO committee put out a recommendation based on a formula, which simplified down to about an 85 Hz refresh for most practical uses. See the graph below, based on the ISO-9241-3 standard, from the article The Human Visual System Display Interfaces Part 2 on the What-When-How website.

Another issue for the HL2 is that humans are more susceptible to flicker in their peripheral vision. And it is the sides of the display that are flickering the most as the black gaps come and go with the interlaced scanning processes.

The slower temporal artifacts will give a rippling appearance. I particularly notice rippling in horizontal lines. The user will see solid areas become striped when they move their head or eyes. Many people have complained that small text is hard to read due to the rippling.

Red lasers can’t go from black/off to on in a single pixel, or even several, in the center of the display

As shown earlier, there is a problem with the red lasers turning on from full black. White lines on black backgrounds come out cyan, which means they lack red. The problem gets worse at the lower brightness levels on the HL2.

Below are the full-color image and just the red component from a single image. There are crops from the left and the center of the frame. Notice inside the yellow ovals that the white lines are much more cyan in the crop from the center than from the left side. In the red component image, it can be seen that there is almost no red in the center crop. The reason for this difference is the scanning process, which moves more slowly on the outsides (the speed drops to zero when the scan reverses) than in the center, where the beam is moving at maximum speed and the lasers have to switch faster for the same width pixel.
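The speed difference is easy to quantify. A rough sketch, assuming a purely sinusoidal horizontal sweep (approximately how a resonant MEMS mirror moves):

    # Relative beam speed for a sinusoidal horizontal scan, x(t) = sin(w*t).
    # Speed is w*cos(w*t) = w*sqrt(1 - x^2): maximum at the center (x = 0)
    # and zero at the edges (x = +/-1) where the scan reverses.
    import math

    def relative_speed(x):
        """Beam speed at normalized position x (-1..1), relative to the center."""
        return math.sqrt(1 - x * x)

    for x in (0.0, 0.5, 0.9, 0.99):
        s = relative_speed(x)
        print(f"x={x:4.2f}: speed {s:.2f}x center, pixel dwell time {1 / s:.2f}x center")

Near the edges, the beam lingers many times longer over each pixel, giving the slow red lasers time to turn on; in the center, a pixel passes in the shortest time, so the turn-on lag smears across many pixels.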

Mirror Scanning Speed is 27 kHz and not 54 kHz as Previously Reported

Quoting The Verge’s Hololens 2 announcement article, based on information they got from Microsoft during the February 2019 announcement:

The lasers in the HoloLens 2 shine into a set of mirrors that oscillate as quickly as 54,000 cycles per second so the reflected light can paint a display.

Quoting the 2018/0255278 patent application to Microsoft with my highlight:

However, current MEMS technology places an upper limit on mirror scan rates, in turn limiting display resolution. As an example, a 27 kHz horizontal scan rate combined with a 60 Hz vertical scan rate may yield a vertical resolution of 720p. Significantly higher vertical resolutions (e.g., 1440p, 2160p) may be desired, particularly for near-eye display implementations, where 720p and similar vertical resolutions may appear blurry and low-resolution

This statement suggests a problem with getting the “fast/horizontal” mirror to oscillate faster than 27 kHz and thus a reason to stack the two lasers. The rest of the math in the application seems a bit “fuzzy.” It is interesting as well for saying that “720p and similar vertical resolutions may appear blurry and low-resolution” when the HL2 is lower resolution than 720p.

The HL2 has 120 fields per second. The photographic evidence shows that there are 854 (plus or minus about 4) laser scans per field, counting the two stacked lasers as two scans and counting the bidirectional scan. The mirror thus sweeps 854/4 = 213.5 cycles per field. Multiplying 213.5 by 120 Hz gives ~25.6 kHz, or about 27 kHz after including about 5% for vertical retrace.
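Reproducing that math in a short sketch (the ~5% vertical retrace allowance is my assumption):

    # Mirror-speed math: 854 scan lines per field with "maximal counting"
    # (2 stacked lasers x 2 scan directions = 4 scan lines per mirror cycle),
    # 120 fields per second, plus ~5% assumed for vertical retrace.
    SCAN_LINES_PER_FIELD = 854
    SCANS_PER_MIRROR_CYCLE = 4  # 2 lasers x bidirectional scanning
    FIELD_RATE_HZ = 120
    RETRACE_OVERHEAD = 1.05  # ~5% vertical retrace (assumption)

    cycles_per_field = SCAN_LINES_PER_FIELD / SCANS_PER_MIRROR_CYCLE  # 213.5
    mirror_hz = cycles_per_field * FIELD_RATE_HZ * RETRACE_OVERHEAD
    print(f"{cycles_per_field} cycles/field -> mirror ~{mirror_hz / 1000:.1f} kHz")

The result is ~26.9 kHz, i.e., about 27 kHz, half of the 54,000 cycles per second reported at the announcement.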

The Field of View Checks Out As Claimed at About 43 by 29 Degrees

The main lens I used was an Olympus 25mm (prime) 4/3rds system lens with a FOV of 37.6 by 28.2 degrees. The HL2 image just overfilled the camera’s horizontal FOV by about 5%.

Conclusion: Amazing Technology Used To Produce A Low-Resolution Image that Flickers

All the technology Microsoft used in the HL2 is impressive, in particular the precision of the laser alignment. I have been evaluating laser beam scanning (LBS) displays since before this blog started in 2011, including in Cynic’s Guide to CES — Measuring Resolution. In 2015, I wrote a series of articles on Sony’s LBS engine, which used Microvision mirrors, in the Celluon LBS projector (see: Celluon Laser Beam Scanning Projector Technical Analysis – Part 1, Celluon Laser Beam Steering Analysis Part 2 – “Never In-Focus Technology,” Celluon LBS Analysis Part 2B – “Never In-Focus Technology” Revisit, and Celluon/Sony/Microvision Optical Path).

It has been proven beyond any doubt (see: https://www.kguttag.com/2020/05/18/teardown-shows-microvision-inside-hololens-2/) that the HL2 is using Microvision’s laser scanning mirror technology. In my 2011 and 2015 evaluations, it was evident that the red, green, and blue lasers were not perfectly aligned and that Microvision (and their 2015 partner Sony) was digitally resampling/scaling to try and align the lasers. The HL2 has orders of magnitude better laser alignment.

Still, for all the technology and likely hundreds of millions of dollars spent, the laser scanning engine produces a terrible image by today’s standards. With all the resampling and “wobulation” going on, it is hard to put an exact number on the resolution. In my experience, an 800-by-600 fixed-pixel display would look better and sharper than the HL2’s display.

For all the technology and money that was spent, as the saying goes, “they are still putting lipstick on a pig.”


Comments

  1. Hello, Karl,

    Amazing analysis. In fact, it’s the only analysis I know of that deals with Hololens 2’s technology.

    1) Do you know of any technology that could do it better? It’s easy to say that something is not good enough, but the question is always: Is there something that could make it better, or is it the best technology available today?

    2) Why do you think “Still, for all the technology and likely hundreds of millions of dollars spent”? Microsoft only paid Microvision $15 million for the development. Microvision may have spent an additional $50 million (but not Microsoft), as mentioned in this article: https://seekingalpha.com/instablog/47545695-high-tech-stock-review/5415152-possible-effects-of-new-ftc-Investigation-versus-Microsoft-on-Microvision

    Thanks again!

    Kind regards

    • It is amazing to me how little critical thinking and analysis are applied sometimes, even by technical people. My biggest expense was an Olympus mirrorless interchangeable-lens camera that I bought several years ago for this purpose, and some lenses (the Olympus 25mm F1.8 lens is the best I have found for taking pictures of Hololens, although you can do a pretty good job with the Olympus 14-42mm kit lens).

      1. Hololens is trying to work with their diffractive waveguide. Once you assume a diffractive waveguide, the options get very limited. I think they could do better with LCOS today. The most impressive on spec (and I have heard good things about it) is the new Compound Photonics LCOS. It’s not clear that even if MicroLEDs were available (and they are not today), they could couple enough light into the Hololens-type diffractive waveguide.

      2. The money spent on Microvision was just the down payment for royalties and support. As far as I know, Microvision had only developed the mirrors for making a front projector. It looks like Microsoft spent the big money on perfecting the technology to align the lasers, developing optics to couple it into their waveguide, and getting the manufacturing going. They also had to spend a lot of money on custom and high precision optics. Considering all the delays and problems Microsoft has had making the HL2 and all the junk units they were making, they likely have spent/lost hundreds of millions in development.

  2. An excellent and decisive teardown of the HL2.
    Thank you, it explains everything I saw when we used it for a couple of months. It is an impressive piece of technology with display limitations; what a pity the marketing department tried to fudge the numbers.

    Thank you Karl.

  3. “You simply can’t paint a 1-pixel wide line with a 2-pixel wide brush.” – True, but you assume the brush is indeed 2 pixels wide. I don’t know about LBS, but when it comes to DLP or LCoS there are pixel gaps as well as variations of luminance from the center of the pixel to its edge. You know this. The DLP470TP takes advantage of this and draws extra pixels in those gaps. You end up with significant overlap regions, but claiming the pixels have completely overlapped and the image is not 4K is simply not true.
    Faux-K is not really 4K for projectors when there are only two 1080p images shown by shifting the glass plate between two states, but when 4 1080p images are displayed at 4 different offsets, it is for all intents and purposes really 4K. You could argue the MTF is not good at all due to overlap/bleed, and I’d agree, but the fact would remain that a 1-pixel wide black and white strip would still be visible as a strip rather than a gray region, albeit with the blacks being dark grey and the whites being bright grey. I’m sure pixel shifting will look better on microLED with much smaller emitter dimensions/bigger pixel gaps. Not really sure about LBS, but as far as “Faux-K” is concerned with regards to 4-state DLP pixel shifting, it’s fake news. The DLP470TP delivers.

    • If you look at the pictures in the article, the laser scans are about 80% of the width of a pixel. With all the various shifting, they are more or less randomly crossing pixels.

      If you look at the 1/30th of second pictures you get an idea of the resolution when looking at the single-pixel lines. You can just barely tell they are there on the horizontal lines (less than 10% modulated) but are not there at all on the vertical lines, at least in the center of the screen.

      Then you have the temporal artifacts that you see on the HL2. I have not tested the DLP470TP, and its mirrors are much faster than the scan rate of the HL2, so it probably doesn’t have those artifacts.

      • Criticize pixel shifting in the HL2 all you want, or any near-eye display for that matter, but saying that “shifting a low-resolution pixel 4-ways improves the resolution” “is categorically untrue” in general is, to use your words, categorically untrue, and the people at Optotune, Texas Instruments, and a few others, including upcoming microLED (very small emitter diameter vs. pixel pitch) pico projector developers, aren’t going to be happy about it and may even consider it slander. Just a friendly suggestion that maybe you should reword that paragraph to make it about HL2 pixel shifting rather than pixel shifting as a concept and a more general technique.

      • Leo,
        Please get the quote right. I wrote (with added bold emphasis), “The marketing people try to claim that shifting a low-resolution pixel 4-ways improves the resolution by two times in X and Y or four times the pixels. But this is categorically untrue.”

        Pixel shifting does improve the effective resolution some, many would say by about 1.4x (approximately the square root of 2), but not by 2X in each direction. The main advantage of pixel shifting is in terms of removing “screen door effects.”

        It is also a very different matter if the emitters/mirrors are much smaller than the size of a pixel. If you shift, say, an emitter that is 1/2 the size in each direction by 1/2 pixel, then you are just time multiplexing.

        It also does make a difference if the field rate (or mirror change rate in the case of DLP) is much faster than 60 Hz. At the low rates that the HL2 is doing the pixel shifting, the effect is one of flicker/blinking and image breakup.

        In the specific case of the HL2, the shifting seems to mostly be about filling holes due to the scanning process. They have a case of an almost random sampling of the pixels by the scanning process, which by Nyquist would put their resolution at about half the scan lines. The shifting process gets them back to about 1.4X times half the number of scan lines (or about 600 pixels for 854 scan lines).

      • Hi Karl,
        with all due respect,

        “The marketing people try to claim that shifting a low-resolution pixel 4-ways improves the resolution by two times in X and Y or four times the pixels. But this is categorically untrue.”

        This claim is still false, and my response was against it, which you still deny.
        As far as the video projection industry is concerned, virtually nobody is using it to reduce screendoor but to achieve 4K. Screendoor is not a serious issue in the video projection world. This is exactly the way, and the only way, the DLP470TP 1920×1080 DMD achieves 4K. It’s not 1080p x1.4, it *is* 4K. You can argue about how bad the pixel overlap is or how poor the MTF is, but it is 4K; you can see each pixel. Texas Instruments and others are not lying. As I explained, the “Faux-K” myth came from the old 2-state pixel shifting era, where the pixels on the screen were indeed not 4K, and now people who don’t understand how the tech works parrot the term because the DMD source does not have 4K pixels even when the shifting is 4-state.

        Again, your claim is pretty much slander towards Optotune and TI, among others.
        Here’s your evidence:
        https://imgur.com/a/NJSOPta

      • Thanks very much for the photos of what I assume is the DLP pixel-shifting 4K. I would agree that most consumers wouldn’t know the difference, but then most consumers would not know the difference between very good 1080p/2K with HDR and 4K. As we used to say, changing the sound system will affect the perception of resolution.

        We will probably disagree on this, but I think that your pictures support the case that it is somewhere between 2K and 4K. It looks best in the isolated 4 lines in the background target (your image https://i.imgur.com/1yFk5je.jpg). You can clearly see 4 lines, and from a distance they look about right (I put your picture on a high-resolution monitor and then backed up from the monitor to see the effect). But if you look at the long horizontal lines, they look more like a single fat line with a screen door effect. Also, the two long horizontal sets of black lines on white lines look different.

        If you look at the white lines on black, they look like they are solid with a screen door effect. The mark-to-space ratio is about 90% to 10%. Also, note that both sets of white lines are different colors. The color effect is particularly dramatic in https://i.imgur.com/bm60N2c.jpg

        The test pattern is constructed such that the long sets of a pair of horizontal 4 lines are each on odd or even lines (there is a 2-pixel gap between the two sets of 4 lines). I would be curious to see what would happen if the vertical test patterns were also odd and even. This pattern, as the name implies, was designed to check out interlaced scanning.

        I suspect that there will be some temporal artifact if you move your head. I notice that at least some of the projectors doing this have different frame rates based on whether they are doing the pixel shifting. At the “4K” resolution, they have a lower frame rate.

        ALSO, can I have permission to use your photos if I decide to do a follow-up article on this issue?

      • Sorry Karl, but I can’t give you permission to use the photos and participate in this false “Faux-K” narrative.
        This isn’t about opinion or what the average consumer will feel is 4K. We have facts here. The fact is, if you have 1-pixel wide strips in a 4K image, you will see them with a 4-state pixel shifting DLP, and you won’t with another imager with 1080p or 1440p or somewhere below 2160p pixels, where the strips will be merged. If the strips are visible, regardless of their thickness or their blurriness, then the resolution has been met, and we are simply discussing how bad the micro-contrast (MTF) is, which is a different question. https://i.imgur.com/uwTiiuS.jpg

        If you interpret black pixel strips as a screen door, then that’s due to your experience working with these technologies. A consumer is not going to intentionally zoom or get close to an image and assume a black pixel is actually a gap between pixels, and it’s not going to reduce the detail of the image they are able to see.

        The color variation of the lines you mentioned is due to the rolling shutter camera not merging the different color monochrome subframes into one like our eyes and not due to the DMD itself. There is some chromatic aberration due to cheap projection lenses.
        These projectors can do 4K at 60Hz by shifting 240Hz 1080p frames or can display 1080p frames at 240Hz. 60Hz is enough for video projectors.

        I can agree with you about how bad HL2 is but as far DLP470TP is concerned, if the individual pixels can be perceived, it’s 4K and not up to our personal opinion regardless of how sharp or big/small they look from up close.

  4. If only the military were spending a 10th as much time as you checking if it’s worth buying before spending huge money on it…

  5. Karl,

    For the benefit of us simple folk who just want to see what the Hololens 2 image looks like, could you not just include some of those “representative full-color, full-frame pictures, for use later” you took.

    I mean, given that you tease us up front with “the HL2 unit I have is not nearly as bad as some of the pictures I have seen posted” but then end with “the laser scanning engine produces a terrible image by today’s standards” and “In my experience, an 800-by-600 fixed-pixel display would look better and sharper”, couldn’t you just throw us little people a bone and show us the photos so we can judge for ourselves.

    It’s not that we don’t appreciate all the fancy analysis and wiggly lines (we do) and know that you are very smart (you are), it’s just that we wouldn’t mind seeing with our own eyes some “representative full-color, full-frame pictures”. That would be fabulous, especially given that, based on your report, the crappy ones we’ve been shown to date are NOT representative.

    C’mon, be a mensch and let us have a little peek so we can decide for ourselves. What have you got to lose?

    • David,

      I was going to sneak one into the article, but there is so much talk about “rainbows” with the HL2 that I thought it might upstage the information on how the technology works.

      Compared to color control, which varies significantly as you adjust the visor on the HL2, the resolution is relatively cut and dried.

      In terms of “resolution”, it is not going to get better than the all-green pictures in the article. Due to chromatic aberrations, image quality is only going to get worse with full color.

      • It would hardly be “sneaking” it into the article to have included one.

        Anyhow, it’s probably too late. The alarm bells have already started ringing, unfortunately.

  6. You can get a lot done with 800×600 pixels though. For now this seems to be the state of the art.

    • 800×600 is not nearly state of the art; there are many true 1080p and beyond devices available. I just turned back on the HL1 that I have, and in terms of resolution and sharpness, it blows away the HL2. The FOV is bigger, and the ergonomics and touch interface are better on the HL2, but the image quality in multiple ways is much better on the HL1. I’m thinking about taking some new pictures with the HL1 for a direct comparison. BTW, the “sweet spot” where the image looks good is much better on the HL1 as well.

      • This is what I wanted to ask – is it worse than HL1. Now that you’ve answered, will you hazard a guess as to why they would release an inferior display for a 2.0? The mind boggles!

        Thanks for yet another amazing analysis.

      • Thanks,

        I think it was a bit of the “grass being greener on the other side of the hill” and that Hololens has always been a bit of an R&D project that “escaped the lab.”

        The HL2 has a wider FOV (by about 1.4x linearly) and seems to be brighter by about 1.5X to 2X (I will have to calculate it, as my light meter is not reading right with laser scanning). The black/transparency where there is no image at all is better on the HL2. The “official” position from Microsoft that I heard was FOV and brightness. But there are companies like Waveoptics and Dispelix getting much higher resolution, and I think they are at least as bright.

        The HL1 is definitely sharper and has higher effective resolution. The color control is vastly better on the HL1 as well (it is bad on the HL2). The HL1’s waveguide has a much wider “sweet spot” where the image looks relatively good. And note, the HL1 has what is considered a mediocre LCOS device (there are at least 3 or 4 LCOS devices I would look at before the one in Hololens 1).

      • Yes, mind boggling, to the extent that words and phrases like incredible, ridiculous and maybe not actually inferior come to mind. If only it came from a source without a long, documented history of hostility to LBS who in 2018 described people who predicted that MSFT would use LBS in H2 as Alices living in Wonderland.

      • No, that’s not me but yes, as you point out, I have been a Microvision investor for a long time. And I’ve long enjoyed reading your analyses, Karl, and not just re. LBS. You have real talent. But you do yourself and your readers a disservice when it comes to LBS, such that it leaves them questioning the objectivity of your overall conclusions. By any measure, you have savaged the technology and its proponents for years, despite its relentless and demonstrable improvement over time. When fundamental assertions or predictions you make are disproven by events, you refuse to acknowledge the error or at least give the technology its due. Instead, you mount a new attack from a different angle. One would hope that a skilled technologist such as yourself would acknowledge this, even grudgingly, but you simply refuse. No technology is perfect and all have tradeoffs, but clearly LBS is in its ascendency. It is constantly improving and enabling remarkable new applications. Is that not true? Is that evidence not staring you in the face? You should of course point out its current limitations and areas that need improvement, but at least give the thing its due.

      • It may be hard for you to accept it, but the image quality of the HL2 is very bad. It is not just the flickering display and fuzzy image, the color control is terrible (things will not get better when I show full-color images). I have been going back and forth with the HL2 and HL1 this morning and the display’s image is vastly better on the HL1. The HL2 has a wider FOV and maybe blacker blacks (hard to tell with the waveguides issues) but that is about it. I “savage” LBS as you put it because it produces a bad image and because there are many problems with it that may be impossible to solve.

        We will see if Hololens goes the way of Pioneer, Sony, Celluon, Ragentek/VOGA V and Motorola (if you go that far back) and any of the other “one and done” products.

        Every so often someone gives LBS another go, like the movie Groundhog Day. Unlike what the Microvision investors want to think, everyone in the industry knows about Microvision. There have been dozens of R&D efforts with it. LBS is a favorite of researchers. This time Microsoft tried giving it a go and spent likely hundreds of millions (on top of the >$500M Microvision has lost), with significant manufacturing problems and program delays, to build a very poor quality display by any objective measure. I call it “tickling the dragon’s tail”: many a person has tried to tame the dragon, and they all end up fried to a crisp.

        For most industrial/enterprise AR uses, you really don’t need a very high-quality display. Your “black” is whatever you are looking at in the background. So the fact the image is crappy may not be that noticeable.

  7. Hi Karl,
    Thank you for the fantastic analysis. I have a quick question about the resolution. In your calculation, the number of scans equals the number of cycles. However, in my understanding, there are two scan lines in a single cycle, which means the number of scans should be twice the number of cycles. Please correct me if I am wrong about this. Thanks in advance!

    • I thought I was clear about how I was “counting” scan lines, but I would be happy to clarify some more. Quoting from the article:

      With the HL2 displaying an image with pixels that closely matched the scan lines in the center of the display, it then becomes a simple matter to calculate the total number of scan lines visible by seeing how much of the test pattern is evident. Doing the math, it turns out that the HL2 has roughly 854 visible scan lines per field, counting the dual-stacked lasers as two “scans” as well as counting both directions of the scan. This is the “maximal counting” with one “cycle” back and forth of the horizontal mirror counting as four scan lines. As will be shown, the effective vertical resolution in “pixels” of a laser scanning display is less than the number of scan lines.

      The lasers are turned on in both directions of the scan. Thus, with one “cycle” of the mirror, each laser scans both left and right. But the scans are not evenly spaced except near the center of the display. Then there are two lasers “stacked” per color. So with one back and forth motion of the mirror, one could say there are 4 scans (counting as high a number as possible).

  8. Karl,
    Can you think of any companies that would buy
    MVIS? Or a piece of the company?

    • There are many companies that could buy Microvision; this is the only thing that supports the company’s stock price above near zero. Whether they “would” buy is strictly conjecture. All the “fundamental patents” on laser scanning displays are long past expired. Microvision is a 26-year “startup,” and patents only last about 20 years. So now you get into a more crowded field of improvement patents and whether they can be gotten around. And all this assumes that some company has some executives that can be convinced that laser beam scanning has a long-term future AND that they need Microvision. Most people in the AR display field greatly favor MicroLEDs as the long-term display technology.

      As I have pointed out in this series of articles, even with Microsoft spending likely hundreds of millions more on LBS with the HL2, the image quality is very poor. I have seen big companies with big egos spend large amounts of money on worthless technology. Google spent big on Magic Leap (after Google Glass), and Qualcomm bought the technology behind Mirasol (https://goodereader.com/blog/electronic-readers/the-rise-and-fall-of-qualcomm-mirasol-e-readers#:~:text=Qualcomm%20bought%20his%20company%20in,four%20products%20were%20ever%20released.&text=In%202013%20Qualcomm%20announced%20that,discontinued%2C%20after%20losing%20%24300%20million.) BTW, as a cautionary tale, it eventually cost the Qualcomm CEO his job over the Mirasol vanity project.

      The other concept heavily promoted by Microvision is the use of LiDAR. I don’t know what, if anything, new Microvision brings to LiDAR. The words most associated with LiDAR are “crowded field.” Seriously, do a word search on: LiDAR “crowded field” and you will get over 16,000 hits. So the question in LiDAR is what MVIS, as a late entrant to the field, has that any of the dozens of other companies working in the field needs.

      As for the down-shooting front projector stuff, it is not very practical as I have pointed out many times.

      Still, there are a lot of people out there with more money than knowledge about displays, so it is not impossible.

  9. Hi Karl,
    Do you think transparent OLED is a possible choice for an HMD display system?
    Xiaomi released the world’s first transparent OLED screen TV two days ago, and it got me thinking about this question.

    • Not to be too harsh, as it is a common mistake to ignore the focus issue. As far as I see it, transparent OLEDs do nothing for Head-Mounted Displays (HMDs). I think you may be ignoring the issue of the apparent focus distance to the eye. The human eye can only focus at about 25 cm (~10 inches) when young and healthy, and this gets worse with age. The focusing muscles in the eye only relax when focusing at about 2 meters.

      Normal eyeglasses have the center of the lens at about 13mm or 1.3cm from the eye. Some AR wear pushes this to 18mm to 25mm, but this is getting pretty far out from the eye and looks funny (see for example a side view of Nreal). So to see an image from something that is only 1.3cm to 2.5cm away, there has to be some significant focusing optics. Typically these optics are in the “projector,” with the waveguide or combiner optics moving the image to the eye. Typical waveguides (Hololens 1&2, Magic Leap, Lumus, Digilens, Waveoptics, and Dispelix as common examples) only work if the image is collimated, which means it focuses at infinity. To the eye, there is not that much difference between focusing at 2 meters and at infinity, and the eye can cope.

      In the case of Hololens, they put a lens on the protective shield between the waveguide and the eye to move the focus to 2 meters, and have a compensating lens on the waveguide so it does not defocus the real world. But the optics would be too big to move the focus from, say, 1.3cm to 1.8cm out to 2 meters and then have correction optics for the real world in the space available.

  10. […] Following the link to Microsoft’s “Recommended font sizes” will lead you to a section that recommends font sizes in the 0.65° to 0.8° or 14.47 to 17.8 points when viewed from 45cm to be clearly legible or about double the 8-point font size claimed in Microsoft’s 2019 announcement (link to my discussion of this claim here). Microsoft also says to “Avoid using light or semilight font weights for type sizes under 42pt since thin vertical strokes will vibrate and degrade legibility.” This wiggling is a by-product of the 4-Way interlacing discussed in Part 1 of this series. […]

  11. Hi Karl,

    Could I ask about the resolution of wobulation (pixel shifting)?
    As you said, many would say the wobulated resolution would be about 1.4x (approximately the square root of 2) higher than before.
    I wanted to find the exact reason for this, but I couldn’t find a document where the root-2 factor is correctly derived. Do you know the basis for wobulated resolution?
    Could you let me know the reference or paper?

    Thank you Karl

    • I discussed the issues of wobulation and resolution way back in 2012 (https://kguttag.com/2012/02/09/ti-dlp-diamond-pixel/). All the papers I know of on wobulation/pixel-shifting are promoting the concept and generally don’t talk about the problems. The 1.4x is simply my observation based on experiments and simulations (as shown in that article) and a rough estimate. The result varies significantly based on the content.

      There is a fundamental issue that you can’t draw a single sharp pixel with a “brush” that is 2 pixels wide with wobulation. The net effect is a slightly blurry blob that covers an area of 3×3 to 4×4 pixels.

      In terms of smoothness and reduction of jaggies (aliasing), wobulation works well most of the time and may look as good as a display with 2x the pixels. But if you have something like line pairs, you tend to get a blurry mess at full resolution (back to painting 1-pixel wide lines with a 2-pixel wide “brush”).

      Another big problem that can occur, and I saw it with DLP wobulation, is a temporal (time-based) breakdown. If your eye happens to be moving in the direction of the wobulation, you can see the flash of the image at half the resolution for a split second (very distracting when you see it). The human vision system does not work like a camera, taking essentially a series of snapshots. The problem is worse if something happens to be moving onto the screen at the “right” rate, but additionally, the eye is constantly moving (saccades).

      I would suspect this “flash breakdown” of the wobulation is worse with non-persistent display technologies like DLP and Laser Scanning than it would be with more persistent technologies like LCOS and OLEDs, but I have not tested this theory.

  12. […] Most laser beam scanning (LBS) displays to date have used a raster scanning-like approach. The horizontal scanning is typically a much faster sinusoidal scan in the kilohertz range (typically 5kHz to 54kHz depending on the resolution), with a slower, somewhat linear vertical scan with a driven fast return. I discussed the Microvision scanning process and its many problems as far back as 2012 (in Cynic’s Guide to CES — Measuring Resolution), and the Hololens 2’s more complex variation of it in Hololens 2 Display Evaluation (Part 1: LBS Visual Sausage Being Made). […]
