Hololens 2: How Bad Is It, Having Tried It?

Introduction

I am working on a pipeline of articles on AR systems and technologies, including the many things I saw at CES 2020 in January and Photonics West (PW) in February, plus some related topics. Between CES and PW, I have been going in circles trying to figure out what to write about first. Having gotten to see the Hololens 2 (HL2) with my own eyes two times, I decided to start with my observations on it.

I was able to try the HL2 the first time for 20 minutes, and while I could view any content, I was asked not to take any pictures. I was able to confirm a few things, but it will take more time, taking pictures and making some measurements, to more completely evaluate the HL2. The second time I got to try the HL2 was at a public demonstration at Photonics West, where the content was tightly controlled and the session lasted about 10 minutes. This second time at least let me see a different unit, which had similar image quality to the first unit.

Most of the people getting units to date have a vested interest in not reporting bad things about Hololens. Many are hoping to develop for Hololens, and they don’t want to risk hurting their relationship with Microsoft.  

Additionally, I want to go point by point through some information/misinformation that was tweeted by Alex Kipman, Microsoft Technical Fellow for AI and Mixed Reality. Kipman was tweeting about the photos taken through the optics of the Hololens 2 that were the subject of this blog’s article, “Hololens 2, Not a Pretty Picture.” Kipman’s tweets were posted on the same day as my article, December 18th, 2019.

Whether the HL2 is “Good Enough” Depends on What You Are Doing

Compared to the worst television being sold today, the image quality of the HL2 is terrible just about any way you could measure it. Color uniformity and saturation are poor, small text is hard to read, and, as I will discuss, the image flickers.

There are applications for a mixed reality headset in enterprise/business settings, as demonstrated by the likes of Toyota working with the HL2. Generally, these applications either require hands-free use or involve visualization where the user must move around a room. The ability to use SLAM to lock visual content to the real world is of interest to many companies, particularly for industrial use. I’m a bit more dubious of the use in sales and marketing of products in showrooms to the general public, where it will likely just be an expensive gimmick once the novelty wears off. In industrial applications, my concern is with safety, in particular how the headset may obscure the view and perhaps cause eye strain issues.

First Reaction to Seeing Through the Hololens 2

The first time I used the HL2 was unplanned, but I was able to navigate to the test patterns on this blog at www.kguttag.com/test. I had generated some patterns at 1440p, which is the native resolution of the Hololens 2. Using test patterns with which I was familiar helped me identify issues with color uniformity, color saturation, resolution, and flicker.
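For readers who want to run a similar check, below is a minimal sketch of the kind of test pattern I am describing, using Python with Pillow. The 1440p size matches the resolution mentioned above, but the grid pitch and layout are illustrative and are not my actual patterns.

```python
# Sketch: generate a simple 1440p test pattern (solid white field with a
# single-pixel grid and an alternating-line region) for checking color
# uniformity, resolution, and interlace flicker. The pitch values below
# are illustrative, not the patterns used on kguttag.com/test.
from PIL import Image, ImageDraw

W, H = 2560, 1440                         # 1440p, matching the size above
img = Image.new("RGB", (W, H), "white")   # a large white field shows color non-uniformity
draw = ImageDraw.Draw(img)

# Single-pixel grid: high-contrast detail that reveals resolution limits
# and lines disappearing with an interlaced refresh.
grid_pitch = 32                           # illustrative value
for x in range(0, W, grid_pitch):
    draw.line([(x, 0), (x, H - 1)], fill="black", width=1)
for y in range(0, H, grid_pitch):
    draw.line([(0, y), (W - 1, y)], fill="black", width=1)

# Alternating single-pixel horizontal lines in one corner: with a 60Hz
# full-frame (120Hz field) interlaced display, adjacent lines land in
# different fields, so this region makes interlace flicker easy to spot.
for y in range(0, H // 4, 2):
    draw.line([(0, y), (W // 4, y)], fill="black", width=1)

img.save("hl2_test_pattern_1440p.png")
```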

The human visual system does a lot of processing to produce the image that we think we see, including a form of automatic white balancing. Thus we have different color temperatures with things like a cool-white and a warm-white, and both will look “white” if that is all we see. But we will notice if we put up a solid white area and the colors vary.

Ergonomics

In terms of the physical comfort of one’s head and neck, the HL2 is a big improvement. Much of this comes from balancing the weight and the ability to flip up the display. The hand tracking and gesture control are vastly improved over the HL1, and this leads to much less arm strain when making gestures.

Still, for all the talk about ergonomics, the fact that the HL2 uses a 120-Hertz interlaced (60Hz full-frame) refresh is ergonomically poor and should, IMO, be disqualifying. Based both on my calculations (see my article on Hololens 2 Interlacing) and my observations, the refresh rate falls far below the flicker recommendation of the ISO-9241-3 standard set in 1992 (see the Appendix: Some History of Flicker in Computer Monitors). Humans vary widely in their ability to perceive flicker and in their adverse reactions to it. Some problems with the interlaced refresh cause clearly perceptible artifacts, such as lines occasionally disappearing (to be discussed later). Other problems are less obviously perceptible but can cause eye strain, soreness, and nausea with use.

Ouch, Hololens 2 Made My Eyes Hurt

In the longer session with the HL2, I felt mild to moderate pain in my eyes. It didn’t keep getting worse, but it also didn’t go away until after I took off the Hololens 2. It felt almost like my eyes were swelling. I don’t remember this type of pain in my eyes with the Hololens 1, Magic Leap, or other headsets.

My one-off experience (I didn’t notice it the second time, but my exposure was shorter) and the fact that I have not heard of others reporting this problem mean it is not definitive, but it is something I will be checking out more in the future. My best guess is that it could be an adverse reaction to the flicker.

Lack of Color Uniformity – How Bad is Bad?

HL2 uses diffractive waveguides, and every diffractive waveguide has problems with color uniformity. As expected, even the best HL2 will have problems with, say, large areas of mostly white (such as a typical web page). So the question becomes, what constitutes a “good” versus a “bad” unit? The answer to this question is a function of the content that is shown, a person’s tolerance for poor image quality, and perhaps their desperation to work with a Hololens 2.

Using a large white test pattern, the variation in color across the image was very noticeable. Both units I tried had the significant color uniformity problems I had expected, having seen many diffractive waveguide-based headsets. With the first unit I tried, I did take the time to go through the “eye calibration” that, according to Alex Kipman’s tweets, should improve image quality. While it could be considered usable, the color uniformity was clearly not very good.

The two HL2 units I have used are much better than the pictures that have been posted. Still, they were significantly worse than diffractive waveguides from WaveOptics, for example, or even the Hololens 1. By all reports, there is a wide distribution in color uniformity among HL2 units. I would like to note that if you just stick a cell phone up to a waveguide, you can exaggerate the problems with a diffractive waveguide. But also remember, the pictures are being taken after a person has seen problems with their own eyes, so it is not just a problem with the way the pictures are taken. It helps if you have a small “mirrorless” interchangeable-lens camera where you can control the focal length, f-number, shutter speed, and ISO (I use an Olympus E-M10 Mark III). It is not a perfect analog for the human eye, but it is much closer than a cell phone camera.

Lack of Color Saturation

In my online test patterns, I included a picture of a Christmas Elf, which has nice skin tones plus some very saturated colors in the background. I have been using this picture for about 10 years, and I know what it is supposed to look like.

It was evident to my eye that the colors lacked saturation. The Elf sub-image is repeated four times in the 1440p test pattern, and nowhere did it look well saturated. It seemed to be the least saturated in the center of the screen, which is where the laser scanning process is moving the fastest.

Flickering Lines

Prior to ever seeing the Hololens 2, I had received multiple reports of “flickering lines,” and indeed, I saw lines flickering/disappearing occasionally. High-resolution, high-contrast detail on the screen, such as text or the lines in my test patterns, will tend to randomly appear and disappear. I also noticed that if you stare at what should be a solid area, every other line will occasionally disappear. You can also see the individual scan lines if you concentrate on an area.

I believe that the main issue is a temporal one due to interlacing (see my article on Hololens 2 Interlacing) at too low a refresh rate. Another cause could be an aliasing problem due to the combination of small movements of the head, and the resultant changes in the image, with the laser scanning process and the conversion to and from rectangular pixels rather than raster scan lines. The Microsoft Hololens engineers probably know all the issues and their causes, but they are not telling 😊.

Conceptually, the human vision system takes a complex series of snapshots, with the eye constantly moving (known as saccades) and vision effectively blanked between movements. With interlaced video sources and the saccadic movement of the eye, there will be occasions where the human vision system puts the interlaced fields together wrong or even misses a field altogether, resulting in missing lines.

The other likely source of the flickering is that with a scanning system, LBS scan lines are not the same as rows of pixels. Pixels in the image have to be scaled and remapped onto where the laser beam is scanning. When doing the remapping, there is also the classic resampling problem of either making the image softer/blurrier or accepting some level of temporal aliasing (moving jaggies).
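As a rough illustration of that resampling tradeoff (and not the actual Hololens pipeline), the sketch below remaps one row of image pixels onto the nonuniformly spaced sample positions of a sinusoidal, resonant horizontal scan. Linear interpolation gives the softer result, while nearest-neighbor keeps the detail sharp but invites the “moving jaggies.”

```python
# Sketch (not Microsoft's algorithm): resample one image row onto the
# sample positions of a resonant, sinusoidal horizontal scan. The scan
# samples are uniform in *time*, so they bunch up at the left/right edges
# and spread out in the center where the mirror moves fastest.
import numpy as np

W = 64                                    # pixels in the source row (illustrative)
row = np.zeros(W); row[::2] = 1.0         # alternating black/white pixels (worst case)

N = 64                                    # scan samples per line (illustrative)
t = np.linspace(-np.pi / 2, np.pi / 2, N)
scan_x = (np.sin(t) * 0.5 + 0.5) * (W - 1)    # beam position for each time sample

# Option 1: linear interpolation -- softer, less temporal aliasing.
soft = np.interp(scan_x, np.arange(W), row)

# Option 2: nearest neighbor -- sharper, but detail near the Nyquist limit
# "beats" against the scan positions and shimmers as the head moves.
sharp = row[np.round(scan_x).astype(int)]

print(soft[:8], sharp[:8])
```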

Gestures and User Interface Vastly Better than Hololens 1 (HL1)

My first test for any headset is typing in a WiFi password and then typing in a web address. In this case, the WiFi was already connected, but I still needed to type in a web address. On the older HL1, typing was quite literally painful and very time-consuming with the “aim the dot and then use a pinch gesture” method. The Hololens 2 has a floating keyboard where you push the keys with your finger, and the way the keyboard works is actually rather pleasing. While you are not going to want to type much this way, it does work well enough for things like web addresses and short messages. It still reduces you to hunting and pecking one key at a time.

There is a bit of a user-interface dilemma with MR/XR with respect to being hands-free. Hololens (1 and 2) has so far focused on gestures the headset can see, which, while freeing up the hands, requires the user to look at their hands while they interact with the display. Many people have noted that they wish they had a controller like the Magic Leap One’s, where you could keep your arms down and have some tactile feedback, but that then fills up a hand. Alternatives like data gloves and wrist muscle/nerve sensors may work in some applications but not others. I would think many applications will end up relying on a cell phone, data tablet, or similar device for any significant text input.

Alex Kipman’s Tweets in Response to HL2 Image Problems

On the same day I published Hololens 2, Not a Pretty Picture, Alex Kipman made a five (5) part set of tweets in response to the images I cited in the blog post (see left). I would like to go through each part and respond to them.

Kipman Part 1: “Friends, we have a binocular system that forms an image at the back of your eyes, not in front of it. Eye tracking is fully in the loop to correct comfort which also includes color.”

At best, this is a half-truth. First, we should note that the people complaining about the color uniformity problem are seeing it with their own eyes after having done the “calibration.” Many people, including myself, who have seen the HL2 and other waveguide displays have judged the HL2 to be very poor in terms of color uniformity. So if the HL2 is correcting for it, either they are not doing a very good job of correcting or the displays are so bad that the correction is ineffective.

It also does make a difference what both eyes are seeing, as the human visual system will tend to average the two, but with one eye dominating. In my limited observations, I saw a similar run-out in color in both eyes and didn’t notice a big difference between using one or both eyes (something I will want to look at more in the future).

The half-true part is that some level of color uniformity correction is possible with waveguides. WaveOptics, in their presentation at the Photonics West AR/VR/MR conference, showed a before and after with correction (see below).
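To be concrete about what such a correction can and cannot do, here is a minimal sketch of the general idea, assuming a per-pixel, per-channel calibration map measured through the waveguide with a camera. It is not WaveOptics’ or Microsoft’s actual method, and the synthetic falloff numbers are purely illustrative.

```python
# Sketch of flat-field color-uniformity correction (illustrative only, not
# the WaveOptics or Hololens implementation). A calibration capture of a
# full-white field is taken through the waveguide; each frame is then
# pre-compensated by the inverse of that response.
import numpy as np

def build_correction(white_capture: np.ndarray) -> np.ndarray:
    """white_capture: HxWx3 float array, the waveguide's response to full white."""
    response = np.clip(white_capture, 1e-3, None)      # avoid divide-by-zero
    # A display can't drive above full scale, so instead of boosting dim
    # regions we dim everything down to the weakest region/channel --
    # uniformity is bought with brightness, which limits how far this can go.
    return response.min() / response                    # per-pixel, per-channel gain <= 1

def precompensate(frame: np.ndarray, gain: np.ndarray) -> np.ndarray:
    return np.clip(frame * gain, 0.0, 1.0)

# Synthetic example: a waveguide whose output rolls off toward one side.
H, W = 4, 6
falloff = np.linspace(1.0, 0.5, W)[None, :, None]      # illustrative non-uniformity
white_capture = np.ones((H, W, 3)) * falloff

gain = build_correction(white_capture)
displayed = precompensate(np.ones((H, W, 3)), gain) * falloff
print(displayed.min(), displayed.max())                 # uniform, but at half brightness
```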

Kipman Part 2: “Eye relief (the distance from lens to your pupil) changes the image quality. Further out you are, worse the image quality becomes in terms of MTF as well as color uniformity.”

Eye relief is going to vary from person to person based on the shape of the head, whether they are wearing glasses, and other factors, including perceived comfort. One of the big advantages of the HL2 over any other AR/MR headset is the amount of eye relief. It looks like Kipman is saying that the HL2’s eye relief comes at the expense of color uniformity.

Kipman Part 3: “Taking monocle [sic] pictures from a phone (or other camera) is completely outside of our spec and not how the product is experienced.”

While it is true that a camera can exaggerate the effects, Kipman is being non-responsive to the issue with his “outside our spec.” I would like to know the color uniformity spec and how they measure it and correct for it. How are HL2 units tested for quality before shipment? Based on reports, the testing is pretty loose, to say the least.

It is true that a phone can be a poor model for the eye, particularly in the hands of “amateur photographers.” Phones have much smaller sensors than the eye’s retina and much smaller numerical apertures, and the phone camera is usually put in the wrong place. The phone also works differently than the human visual system. All of these facts can lead to exaggerating bad effects. But still, if done properly, the right camera with the proper setup can give a reasonable representation of what the eye sees.

The size of the aperture and the location of the camera will have an effect. Personally, I like using a 4/3rds (Olympus) camera, as it seems to better match the eye’s parameters. Cell phone cameras/lenses/apertures are too small, and full-size DSLRs are too big. One also wants full control over shutter speed, aperture, and ISO (gain) to get a representative picture.

While I recognize and agree that a camera works quite differently than human vision, you can still get a picture that fairly represents what the eye sees (once again, as WaveOptics has shown can be done). I think it is a marketing waffle, to cover up problems, to say that you can’t.

Kipman Part 4: “When you look at it with both eyes, at the right eye relief (somewhere between 12-30 mm from your eyes) with eye tracking turned on, you experience something very different.”

Another half-truth. Once again, people are seeing problems with their own eyes even after having been “calibrated.”

Kipman Part 5: “if you are having issues experiencing our product, first our apologies, second please get a hold of us (akipman@microsoft.com is your friend) and let’s engage on how we can solve your issues.  Team is fully leaned in and listening.”

There are numerous complaints online that Microsoft is unresponsive to problems, and the field support representatives don’t know what to tell people. Also, the people that have been getting the HL2 to date are very “select,” and most have vested reasons not to speak out against the HL2 (including the risk of being cut off from support altogether), and yet complaints are still filtering through.

Alex Kipman is acting more like a marketing person than a technical expert. As I pointed out in my blog post, Hololens 2 Video with Microvision “Easter Egg” Plus Some Hololens and Magic Leap Rumors, I’m sure he is a very intelligent person in some areas, but his understanding of displays seems superficial. He has also been disingenuous in saying that Microsoft invented the laser beam scanning engine when there is plenty of evidence it was developed by Microvision.

Appendix: Some History of Flicker in Computer Monitors

“Those who cannot remember the past are condemned to repeat it.” (Santayana, 1905). I know some think my criticism of the flicker in the HL2 is a bit harsh, but I have personal history developing graphics circuits in the days when there were only CRT displays, with all of their issues with flicker. There must also be many people at Microsoft working on the HL2 who know about the flicker issue.

Flicker Issue

With a scanning-type display such as a CRT or Laser Beam Scanning (LBS), the screen refresh rate is how often any given spot on the screen is re-illuminated, and it sets the flicker frequency. It should not be confused with the frame rate, which is how often the image content is changed. Modern LCD and OLED flat-panel displays usually change from one image to the next without a period of blanking, and so they shouldn’t flicker. Unfortunately, some LCD backlights and some OLEDs have flicker problems due to PWM dimming at too slow a rate.

Interlaced refresh is where every other line is refreshed on each scan of the image. It is a trick that was used in the days of CRT televisions to try to reduce flicker. It helps primarily when the TV is viewed from far away (so the lines blur together) and when the content keeps changing. But as was found when CRTs were used as computer monitors, flicker becomes more of an issue as you get up close.
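To make the refresh arithmetic concrete, here is a tiny sketch of the line-refresh pattern of an interlaced display; the 120Hz field rate mirrors the HL2 numbers discussed above, while the line count is just for illustration.

```python
# Sketch: line-refresh pattern for an interlaced display with a 120 Hz
# field rate (HL2-style numbers used for illustration). Each individual
# line is only refreshed at the 60 Hz full-frame rate, so a single-pixel
# horizontal line that lands entirely in one field blinks at 60 Hz.
LINES = 8                 # tiny example; a real display has ~1000+ lines
FIELD_RATE_HZ = 120

for field in range(4):                        # four consecutive fields
    start = field % 2                         # even field: lines 0,2,4...; odd: 1,3,5...
    refreshed = list(range(start, LINES, 2))
    print(f"field {field} ({FIELD_RATE_HZ} Hz field rate): refresh lines {refreshed}")

print(f"any given line refreshes at {FIELD_RATE_HZ / 2:.0f} Hz")
```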

There is a very wide range in humans’ ability to notice and/or feel ill effects from flicker. Thus, for the same display, one person might have no problem with the flicker while another person may have a very adverse reaction to it.

Early High-Resolution Computer Monitor History

I was the lead architect of the TMS34010 (1986) and TMS34020 (1988) graphics processors, as well as the first VRAM (1984). The VRAM, a precursor to both the SDRAM and GDRAM, was specifically created to support the refreshing of (then) higher resolution CRT computer monitors.

In 1987, IBM introduced the 8514/A graphics card, which supported an ~87Hz interlaced (~43.5Hz full-frame refresh) display with an IBM custom monitor using longer-persistence phosphors. Up until the introduction of the 8514/A, most people felt that 60Hz progressive refresh rates were necessary to avoid flicker. It turned out that many people had problems with the flicker of the IBM 8514/A’s interlaced refresh even with the longer-persistence phosphor monitors. This was reported, for example, in PC Magazine’s April 10, 1990, issue on graphics accelerators.

It so happens that the same PC Magazine issue reported on page 175 (right) that “In this roundup, all the fastest adapters use TMS34010 coprocessors.”

The fact that so many people were having trouble with the flicker from the 8514/A and its 87Hz interlace led to studies of flicker with CRT computer monitors. These studies resulted in the ISO-9241-3 recommendations for computer monitors in 1992. It was found that even 60Hz progressive scanning was not fast enough and that the perception of flicker also varied with screen brightness (among other factors). The ISO committee put out a recommendation based on a formula, but it simplified down to about an 85Hz refresh for most practical uses. See the graph below, based on the ISO-9241-3 standard, from the article The Human Visual System Display Interfaces Part 2 on the website What-When-How.
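I won’t reproduce the ISO formula here, but the brightness dependence can be sketched with the well-known Ferry-Porter relationship, in which the critical flicker frequency rises roughly with the log of luminance; the constants in the sketch below are illustrative assumptions, not the standard’s values.

```python
# Rough sketch of why brighter displays need faster refresh (Ferry-Porter
# style relationship, NOT the actual ISO-9241-3 formula; the constants are
# illustrative only). Critical flicker frequency rises roughly linearly
# with the log of luminance, so a display that looks flicker-free in a dim
# demo room can visibly flicker at full brightness.
import math

def critical_flicker_freq(luminance_nits: float, a: float = 37.0, b: float = 12.5) -> float:
    """CFF ~ a + b*log10(L). The constants a and b are assumed, not ISO values."""
    return a + b * math.log10(luminance_nits)

for nits in (10, 100, 500, 2000):
    cff = critical_flicker_freq(nits)
    print(f"{nits:5d} nits -> ~{cff:4.0f} Hz needed; "
          f"{'OK' if 60 >= cff else 'flicker'} at a 60 Hz full-frame refresh")
```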

And the ISO-9241-3 studies assumed CRTs with phosphors having some persistence (say, on the order of 1 to 2 milliseconds). With the HL2 and LBS, there is zero persistence and thus more of a tendency to flicker and more of a chance for the eye to see all or nothing in the case of the disappearing lines.

31 Comments

  1. Couldn’t MSFT have used the engineers they lured from MVIS and hired to design and patent around MVIS’ technology after they used it in their PROTOTYPE??? There are other companies selling LBS besides MVIS and MSFT loves to design and make in house.

    • Do you still own the MVIS shares you bought thinking MVIS is inside Hololens2??? A lot of people got caught owning during the latest news and drop while waiting for a HOLOLENS2 reveal of MVIS insider. The CEO of NREAL thought MVIS is inside Hololens2. I asked him if he got it from Reddit MVIS, where you got it from, and he said he got it from Karl’s Blog….

  2. I received my headset a couple weeks ago and I agree with pretty much everything in this post, including the flickering white lines which do seem to be somewhat sequential in rhythm. I did email the address Kipman posted in that tweet (hope he isn’t answering each of these directly) and somebody did get back to me quickly at least. They had me fill out a questionnaire and one of the things they asked me to identify was whether there were “Curved vertical bands show across the center of the display. It looks like an upside-down Atari logo”, which does happen to be the shape that this rainbow bleeding seems to take on especially when looking at something with a white background. After I confirmed that I did have this, which has still never happened on any of my HL1s, the response I got was just a description of how the waveguides can be faulty but that they were interested in the flickering line issue.

    Basically, they do seem to be aware of the rainbows but, as you mentioned, their tech support doesn’t really have a good answer, especially as to why the HL1 didn’t have these issues. If they could honestly just fit the HL2 guts into the HL1 body I’d be a happy camper; it is a shame that the added interactivity with the hand and eye tracking can’t be enjoyed as much as it should be. It certainly is more comfortable to wear but definitely not so to look at/through.

  3. I waited more than 1.5 hours in line at Photonics West – just to get a glimpse of the HL2.
    What I saw in comparison to the HL1 – the waveguides are less susceptible to external lighting (rainbows are less pronounced). Tracking and gestures – yes, better than the HL1, but not like night and day. Tracking also had some issues and wasn’t perfect. BUT what was most disappointing (to me at least) – the unit didn’t have a uniformly sharp image across the whole FOV. At least to me – the top and bottom parts appeared sharp (crisp), while the center (at a normal eye rotation) had some fog – a loss of detail (in comparison to the top and bottom). Was it due to the waveguide or the way the laser is scanned – hard to say – but it was very obvious and did impact the way text was perceived.
    Has anyone else experienced this? Because I haven’t seen similar observations online.

  4. Karl,

    thanks for sharing your insights.
    Questions: I’m looking to take some pictures of AR headsets (Hololens/Meta/Magic Leap). Would you still recommend using an Olympus FourThirds with a 7.5mm fisheye? I currently own a Sony a7 III (full-frame) camera and could go for a 12mm f/2.8 fisheye.

    Kind regards

    • I only used a fisheye to show the view out of a headset including how the glass’s frame blocks the person’s peripheral vision. I would not recommend using a fisheye if you want to see details in the image.

      My “go-to” lens with the Olympus FourThirds is their stock 12-42mm lens. When shooting the HL1 and Magic Leap One, I rotated it 90 degrees (i.e., portrait orientation) so it would fit in the headset. I shot the Magic Leap One with the lens at about 15mm. Given the openness of the HL2 and the ability to flip the display up, I’m thinking it could be shot in portrait mode.

      The big problem with using a full-frame camera is getting it to fit. Unless you can take the display out of the frame, or flip it up in the case of the HL2, you can’t get the camera into the area where the eye would be. I normally shoot Canon SLRs (and might have gone with the M-series but didn’t) but bought the Olympus because the distance from the bottom of the camera to the center of the lens is less than the distance from the side of a typical person’s temple to the center of their eye. Thus it will fit in most headsets if rotated to portrait mode.

      The physical geometry of the FourThirds system is also closer to the human eye than a full-frame SLR. With my APS-C-size 70D, I could not get the lens where I needed it to be. You might be able to with an HL2, but I can’t see it happening with an ML1 or HL1. If somehow it did fit (say, with some disassembly of the headset), in theory you would want a 15mm x 2 = 30mm lens to get roughly the same FOV in portrait mode, and could use 50mm or thereabouts in landscape orientation.

      • I 3-D printed a “hollow head mount” that was roughly sized to the human head. You can see how the Olympus camera (just) fits into the head when rotated sideways.
        https://www.kguttag.com/wp-content/uploads/2018/03/CAMFF_Back_IMG_8280-1.jpg

        BTW, I abandoned the “head-simulator-tripod” approach as the Magic Leap One had a time-out circuit if you took it off. Instead, I just handheld everything. Another very good feature of the Olympus is its built-in optical stabilization, which seems to work well, as I had to shoot at low shutter speeds due to the LCOS display’s color field sequential operation.

  5. Hi Karl, You mention near the beginning the safety issues and obscuration of the field of view, but you don’t go any further. Could you perhaps expand on this? From what I’ve seen, the strong back reflections off the waveguide are enough to cause some obscuration / outside-world contrast problems when looking through the displays. Is the display image itself also obscuring the outside-world view? Thanks in advance, as always a highly informative article 😉

    • The safety issues are of at least several types:

      1. The headset totally blocking the FOV. HL2 is relatively good in this respect. Magic Leap One is among the absolute worst. Bernard Kress in his new book and in an article I wrote about his papers (https://www.kguttag.com/2019/10/07/fov-obsession/) discusses this issue.
      2. Display blocking the user – This can be a problem of the user interface. To make things stand out, the displayed content has to be much brighter than the real world (after any “sunglasses” tinting in the visor). It must “dominate” the real-world light or else it will be too transparent to be recognizable. A big user-interface problem can be overlaying objects on top of things in the real world that you need to see (say, stairs or your hands).
      3. Darkening of the real-world – All AR headsets block some amount of light, most block 40% or more. I don’t have a spec on the HL2.
      4. Artifacts introduced by the combiner – Diffractive waveguides including HL1&2 have the issue of capturing light from the real world. If there are bright lights overhead, you can get some very bright colors obscuring your view.
      5. Attention Distraction – This is the classic problem that if you are concentrating on what is on the display, you are tending to ignore the real world.

  6. Just wondering about your opinion on my question I’ve asked a couple of times you haven’t responded to…Don’t you think that the engineers MSFT lured from MVIS could have designed new patents around MVIS’s patents and made their light engine in house?…Why would MSFT hire MVIS engineers to work for MSFT and design new patents instead of just using MVIS engineers and paying a royalty?

    • I responded on 3/2/2019 in the CES Meeting and Commuting thread. I have copied my response below:

      It looks to me like Microsoft got a deal from Microvision where they paid for royalties that were a drop in the bucket to their program. Most likely they have a deal that would survive Microvision. Why risk stealing it when they could get it from Microvision at what was for them a trivial cost? My guess is that Microvision was pretty desperate and thought that having Microsoft using their technology would boost their business.

      I think most people in the industry think Microvision is involved in some way. The question is whether it makes a difference for Microvision.

      The problem for Microvision is how many dollars per unit they could be getting. If Microsoft is taking on all the engineering and manufacturing risk, maybe they kick back about $20 to $50 per unit on top of, say, a yearly fee. I don’t see Hololens 2 shipping many units. I also think that the long-term roadmap for everyone goes through MicroLEDs.

  7. I used the Hololens 2 at SPIE’s AR VR MR Show in SF recently after using the Hololens 1 the prior year. I too was surprised at seeing text at the edges blurred. People were having a hard time with gestures controlling the display, and one of the two units was overheating. Certainly not for the masses for a while…like Kipman said.

    • Hololens 2 is far from a product that is ready for the market. It looks like the result of a series of cascaded decisions. Hololens 1 was announced too early and was getting long in the tooth. They canceled the original HL2 as being too small a step (based on reports) and tried to push up what is now called the HL2. But then they announced the HL2 in Feb 2019, long before it was ready to go to production. They then felt pressure to announce shipping before it was really ready to ship. To make even these shipments, they are setting their quality standards so low that it seems like if it puts up an image, they ship it. It is not clear how good/bad their best units can be and whether they can “dial it in” in production.

      On top of these manufacturing issues, there are the designed-in problems with using Laser scanning in general and having far too low a scan rate specifically.

  8. Karl, I have some questions.
    As a gamer, I can say that our community today values CRT TVs and monitors more than LCD and OLED ones due to instant input times (no input lag), no motion blur, and very low persistence.
    I could say that the fact that it doesn’t have pixels per se is also fantastic for image quality and upscaling.

    I was looking for a successor to our beloved CRT and then I found LBS. After many days of avid reading, I’ve found you and your writings.

    You absolutely trash LBS, completely contrary to everything I’ve read about it (no input lag, no motion blur, no persistence, no pixels, 100% color reproduction).

    I understood your arguments, but then, I have some questions:
    1. Is LBS incapable of doing progressive scan? Can it only do interlaced?
    2. Why is the color reproduction so bad in what we’ve seen so far? Is it LBS’ fault? MicroVision’s fault?
    3. The flaws we’ve seen with this technology are a result of itself or due to MicroVision’s incompetence?
    4. Can it be saved? Can it be good? Can it be the best?

    Thank you very much.

    • I’m not Karl 😉 but…
      1. Interlacing is a hack that originated with old-school analog TV transmission as a way to reduce the amount of radio bandwidth needed to send a picture with a certain perceived vertical resolution. In LBS the same hack can be used (it isn’t inherently required) but is employed as a way to avoid having to scan the beam as fast as you’d otherwise need to achieve a certain resolution. Beam scanning at high speeds is hard because you’re physically moving mechanical mirrors (albeit very small ones with as little mass as possible) and you want the positioning to be precise and repeatable – obviously this is harder the faster you’re throwing the mirrors around.
      Interlacing the display just makes your problems easier without being too visually objectionable.

      Old-school TV cathode-ray tubes used magnetic deflection (electromagnet coils) to move the electron beam, which has its own engineering challenges at high speeds, but those were largely solved 50 years ago.

      • Thank you.

        So, this mirror is a problem due to its speed and inherent mechanical fragility (fragile as in “prone to not work correctly under non-perfect circumstances”).
        We need a way to redirect the beam.

        And also, we should get rid of mirrors altogether. As Karl pointed out, they also reduce efficiency.

        I’ll be researching more.

      • I think I figured it out, pretty fast actually.

        For the combining mirrors, which turn the RGB beams into a single one, we could use a specific lens, simple as that:
        https://global.canon/en/technology/s_labo/light/003/02.html

        The “infinite focus” aspect of it would be lost, but I don’t care. This technology should be used to make TVs or fixed projectors, with a calculable light combining distance.

        About the mirror scanning:
        It would be great if we could use something like the Cathode Ray Tube, to make a “Laser Beam Tube”, but this approach was a dead end.

        My solution:
        Make an irregular disc. This disc’s spin (like a wheel, spinning very fast, achieving incredible rpm) would redirect the combined laser’s light to different points and do the scanning.

        No mirrors, only 2 lenses.
        1 for combining the lasers
        1 spinning irregular one to redirect the beam

        Thoughts?

    • First, sorry to be so long in responding. I was traveling followed by getting a bad case of the flu.

      There are similarities and differences between Laser Beam (raster) Scanning (LBS) and CRTs. I discussed some of the major differences back in 2012 (https://www.kguttag.com/2012/01/09/cynics-guild-to-ces-measuring-resolution/). Fundamentally, LBS is moving a mirror with mass, and the horizontal scan is usually near “resonant,” with the electronic pulses providing energy and minor corrections. This means that the LBS horizontal scan speed starts at near zero on the far left and right sides and accelerates to maximum speed in the center. A CRT electron beam, in contrast, can “fly back” much faster than the active scan.

      With the LBS “interlace,” they also illuminate in both directions. The result is a bit of a mess where the gaps between scan lines vary between the two fields. See this figure from the Microvision patents that I colorized to show the two fields more clearly: https://www.kguttag.com/wp-content/uploads/2012/01/patent-showing-interlaced-scanned1.png. It should be noted that the resolution is not really doubled by the interlacing. Additionally, the scan process is so nonuniform that significant resampling is required to map “pixels” onto the scan lines.
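      As a rough illustration (and not Microvision’s actual drive scheme), the little sketch below shows how the beam speed of a resonant, sinusoidal scan varies across a line and how much relative laser drive would be needed for uniform brightness; the sample count is arbitrary.

      ```python
      # Sketch (illustrative, not Microvision's drive scheme): with a resonant
      # horizontal scan the mirror angle follows a sinusoid, so the beam speed
      # is near zero at the left/right edges and maximal at screen center. For
      # equal brightness per pixel, the laser drive has to scale with the local
      # beam speed, which is where much of the color-depth control gets spent.
      import numpy as np

      N = 11                                     # sample points across one scan line
      t = np.linspace(-np.pi / 2, np.pi / 2, N)  # time within one half-period of the scan
      position = np.sin(t)                       # -1 (left edge) .. +1 (right edge)
      speed = np.abs(np.cos(t))                  # relative beam speed at each point

      relative_drive = speed / speed.max()       # laser power needed for uniform brightness
      for x, d in zip(position, relative_drive):
          print(f"x = {x:+.2f}   relative drive = {d:.2f}")
      ```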

      Directly to your questions:
      1. It could do a progressive scan. The issue is that they could only drive the lasers less than 1/2 the time and they would have to admit that the resolution would be 1/2 as much for a given mirror speed. Or they have to make the mirror go 2X faster which has its own set of problems plus makes controlling the lasers that much more difficult.

      2. There are several factors causing issues with color control. The laser beam speed is highly variable from near zero on the left and right sides of the image to maximum speed in the center. The faster the beam moves, the harder you have to drive the laser for the same brightness. Thus much of the color depth control goes into speed compensation. Then you have the issue that the laser drivers must be switched on and off or between two levels very quickly during the scanning process. Lasers also have a cut-off threshold where they are “on” but put out no light which limits the ability to produce the dynamic range of colors and this threshold is different for all 3 colors.

      3. The flaws are because it is a very hard problem.

      4. I’m doubtful that LBS will be a good long-term technology. It is an electromechanical scanning process that will forever have problems at higher resolutions. It can’t possibly compete with LCD and OLED flat panels for almost any application. I think it will lose out to display technologies like MicroLED for microdisplays for near-to-eye use. The recent resurgence in interest for near-eye use has to do with etendue coupling into small optics (a big topic for another day), but while LBS addresses this issue, it brings a lot of technical baggage with it and is part of what is causing the manufacturing problems with the Hololens 2.

      • It seems indeed that there are deep problems. It’s not only about the mirrors or their speed, but the lasers as well.
        This is a shame; this tech seemed so promising.

        Thank you very much.

      • I have said many times that like an iceberg, the virtues of LBS are obvious and above the surface while the massive problems lie beneath the surface.

  9. Hey Karl, I was wondering if you saw the news about Plessey and Facebook signing an exclusivity deal for a few years.

    What do you think this means for the future of micro LEDs and other companies like Apple, Microsoft, startups, etc. Who else do you think will be a big player in the micro LED scene?

    • First, it is not clear how “exclusive” the deal between Plessey and Facebook is. The report is somewhat ambiguously written, as Facebook did not buy out Plessey.

      Plessey has certainly given the impression of making better progress than most of the other players. They have acted more like a “semiconductor company” than just a research lab. I considered it significant that Plessey has shown not only green and blue MicroLEDs but now has red LEDs in gallium nitride. This would seem to set them up for better process compatibility than companies using GaAs red LEDs to get to an integrated R, G, and B MicroLED. As far as I have heard, the only other company with red GaN MicroLEDs is Ostendo. From what I have seen of late, other players include Jade Bird Display out of China, Ostendo in Carlsbad, California, and Lumens LED out of Korea. There are several others like GLO and PlayNitride, but they don’t seem to be demonstrating much new lately in the microdisplay area.

      The really big players with LED technology have been on the sidelines with respect to “Microdisplay MicroLEDs” with pixel sizes of less than 12 microns. I suspect that they are more focused on “direct view” MicroLED with pixel pitch sizes in the 20 to 50-micron range.

      IMO, MicroLEDs are necessary but not sufficient in terms of technology for AR. There are still many tough problems including optics.

      I look at Microsoft Hololens as a very large R&D project with too much money that “escaped the lab” before it was ready. The use of laser scanning was, IMO, an act of desperation. I think Apple is going to stay on the sidelines until they find something that they feel will be a product and not just a technology demo like Hololens and Magic Leap. So far all they have done is some verbal puffery in terms of AR glasses hardware. They will keep fooling around in the labs until they have something they think is worthwhile.

      • Thank you for your informative response Karl. I always love reading your blog and eagerly await your next posts.

      • Karl,

        I’ve read your study on laser+LCoS in SID ’11 and would like to know your opinion of this architecture after 9 years. It seems like VCSEL technology is now in a booming phase thanks to Face ID, and I’ve heard that there is also some progress on RGB VCSELs. Do you think it’s still a potent microdisplay solution for future AR/MR devices? Thank you very much.

      • I have not seriously followed laser developments since 2011. My understanding is that it was, and still is, relatively easy to make infrared VCSELs, but making native blue and green is still a bit difficult. The market need for RGB VCSELs is also very limited, as projector applications for lasers have not proven to be a very large market. The pico-projector market never really materialized due to the ever-decreasing cost of flat-panel display technology, combined with people accepting touch screens (which tripled the area for the display) as well as larger (in X and Y) phones. The theory of projectors being the next thing for phones after cameras just didn’t work out (see: https://www.kguttag.com/2013/08/04/whatever-happened-to-pico-projectors-embedding-in-phones/). AR is also proving to be a slowly developing market. Even if the displays are there, the optics for AR are very tough, and I’m finding that most people believe that MicroLEDs are the display technology of the future for AR.

        Fundamentally, projectors have to create massive amounts of photons with all the issues of power and heat dissipation. Then you have the issue of what you are going to display onto or the optics to direct the light into the eye in the case of AR. Camera sensors just sit there and absorb light, at a small fraction of the power and volume. Even with various laser sensing technology, you only have to project enough light for the return sensor to sense and not for a human eye to see.

        The use of lasers for 3-D sensing has, of course, become huge. But if I can make the analogy to camera sensors, a big market for sensing does not translate into a big market for projection. Using UV (the more eye-safe part of the UV spectrum) would give better resolution for sensing applications, but it is harder to imagine a big market for visible VCSEL lasers.

    • I have no evidence that it is not true. I would suspect that the number shipped would be in the low “tens of thousands.” By all reports and my personal observations, the image quality is still pretty poor, and the quality varies substantially from unit to unit. Some people are more accepting of poor image quality than others.

      I have no numbers for the money spent by Microsoft, but most people think they are spending at least hundreds of millions per year (some have said over $1B/year) and have a total sunk cost of more than Magic Leap (over $2.5B). Right now, it is a kind of R&D vanity project with no relationship between the cost of production and the selling price. Microsoft can afford to heavily subsidize it. Even if you write off the R&D cost, it is likely they are losing money on each unit shipped, so the number of units shipped is at least partly a function of how much Microsoft is willing to lose.

  10. Hololens 2 display issue comment at SPIE talk from former Senior Director of Engineering – Hololens
    Svetlana Samoilova – https://www.linkedin.com/in/svetlanasamoilova/

    ” So is there any way to qualify, like when you’re building a product, early enough so you don’t get surprises along the way? I’m just kind of referring to HoloLens 2, not picking up on HoloLens, but the color display issues that surfaced out, I doubt that it was planned. Of course, it wasn’t, right?”

    https://www.spiedigitallibrary.org/conference-proceedings-of-spie/11310/1131027/Panel–How-Do-We-Build-the-AR-VR-World/10.1117/12.2566419.full?SSO=1
