Hololens 2, Not a Pretty Picture

Update 12-19-2019 – Added missing hyperlink for one of the photographs to the text.

In editing the document, an important hyperlink pointing to where I got one of the photos was accidentally lost. The link was included in the label of the photograph but was missing from the text. In subsequent online discussions, it appears that this error caused people to assume I had gotten the photo from a different online forum that cited the same photographs. I added the link to the section “Hololens 2 Problems are Now an Open Secret.”

Introduction

Someone sent me one of the pictures in this article a few days ago, but I was waiting for permission and trying to get more and better pictures. Since the pictures were published today in the Reddit discussion “Hololens 2 waveguides in the wild. Color banding is very close to in headset experience,” I decided to go ahead and write about them now.

I hope to get some much better pictures in the near future, but it could be after CES in January 2020. From the pictures available, you can tell there are horrible color uniformity problems, but they are not of high enough quality, nor taken with the right test content, to show the resolution issues. I also hope to prove/show some of the other issues, like flicker and the disappearing-lines temporal artifact, that are discussed below.

I wrote this article in a hurry to be timely, so there may be more than the usual typos and grammatical errors.

Hololens 2 Problems are Now an Open Secret

It is now an open secret that Microsoft is having serious problems making the Hololens 2. There are threads on various groups discussing the problems. In addition to the thread with the pictures, the Hololens sub-Reddit discussions “The elephant in the room: HL2 display” and “Hololens 2 waveguides in the wild. Color banding is very close to in headset experience” [edit: this second link was included with the photograph but was accidentally left out of the text] confirm the information presented in this blog’s articles Hololens 2 is Likely Using Laser Beam Scanning Display: Bad Combined with Worse and Hololens 2 (HL2): “Scan Lines” Making Text Hard to Read and Quality Issues with Waveguides.

Prior to the Reddit threads, there was a Facebook thread reported on by this blog in Hololens 2 (HL2): “Scan Lines” Making Text Hard to Read and Quality Issues with Waveguides. That thread was quickly deleted after I published, but I had already captured the content in the article. The new pictures confirm what the drawing showed.

The Pictures – Yes, It Really is This Bad

Below are the two pictures from the Reddit thread (the second one was originally published on Twitter). Both public and private information sources are telling me that the first picture is “typical” of what people see with the Hololens 2. There are units that are somewhat better and those that are considerably worse (as described in the Facebook post above). Yes, Microsoft is really shipping products this bad.

https://kguttag.com/wp-content/uploads/2020/09/14413-2019-12-02-hl2-waveguide-fb03.png

When Smart People Do Something that Looks Dumb, the Alternative Seemed Worse

I’m often asked why otherwise smart people did something that looks dumb, and my response is “because the alternative seemed worse.” In this case, Microsoft had to choose between shipping obviously defective devices and making it clear they could not ship.

Based on all the evidence, Microsoft felt pressure to announce the Hololens 2 (HL2) before they had figured out how to manufacture it. Worse yet, they were having not just one but at least two serious yield problems, namely the laser scanning engine and the butterfly waveguide (see Hololens 2 is Likely Using Laser Beam Scanning Display: Bad Combined with Worse).

Likely a Hierarchy of Customers and Binning

I would suspect there is a hierarchy where very important customers (e.g., the US Army and Toyota) are getting the best units, while “less important” companies are getting units from the lower “bins.” These lower-binned units may put up an image but have poorer image quality.

The yields are so low and the costs so high that they don’t want to trash headsets where the laser display engine works and that are therefore “functional.” Perhaps the waveguide problem is showing up late in the assembly process, say at the point of attachment to the rest of the optical assembly. Once it is glued together (as all optics like this must be), there is no chance to rework. They are left to choose between shipping bad-looking units that “function” or shipping nothing at all.

Putting all this together, I suspect that Microsoft is shipping a higher percentage of bad-looking but functional units to “lesser” companies. But it is also possible that a large percentage of all units have serious image quality problems. We don’t currently have a large sample size of published images.

Background Information on Hololens 2 Quality Issues

I’ve known for over a year that they were having serious problems with the laser scanning engine. It sounds like they may have even had some infant mortality problems, which is the worst kind of problem, as it means units could go dead after expensive waveguides are glued to them.

Another open secret that I have discussed many times is that Microsoft got the Laser Beam Scanning (LBS) technology from Microvision. Strangely, in spite of this being an open secret, Microsoft keeps claiming to have invented the LBS engine (see Hololens 2 Video with Microvision “Easter Egg” Plus Some Hololens and Magic Leap Rumors) when it is using Microvision’s laser beam scanning technology.

I have multiple sources that have told me that Microvision was having trouble making the fast scanning mirror. It sounds like Microsoft thought it could solve the problems, but apparently, the problems were bigger than Microsoft could readily solve.

Until recently, I was less aware that they were having serious yield/manufacturing problems with the diffractive waveguide as well.

Hololens Diffractive Waveguides, Bad or Really Bad

Even “good” diffractive waveguides are notoriously bad (it is baked into the physics) in terms of image uniformity, with colors shifting across the field.

The question with Hololens diffractive waveguides is not whether they produce a good image; they don’t. Realistically, the best Hololens (1 or 2) looks very bad compared to the cheapest TV you could buy today. You would return a TV as defective if it looked as bad as the Hololens 1 or 2.

So we are already grading on a curve when it comes to Hololens’ image quality. What a “dumb consumer” would call a bad looking image is considered “good” by Hololens standards.  

As nothing looks very good, what is considered bad by Hololens’ “standards” becomes subjective. A person with multiple units can judge that some look worse than others.

Reports of Another Temporal Artifact, Disappearing Lines

I have reported for some time that the HL2 has flicker caused by too slow a refresh rate, and I continue to get confirmation. Another temporal artifact has now been reported: “disappearing lines.”

In talking with someone who has been using the HL2, they said that in addition to having problems reading text, occasionally every other line disappears. I believe this is caused by the interlaced scanning process combined with eye movement. The human visual system blanks out vision while the eyes are moving (proven in many studies). If the eyes happen to be moving when one of the interlaced fields is presented, the viewer will momentarily see the same field twice and miss the lines from the other field.
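
For readers who want the timing intuition, below is a minimal sketch in Python of this parity argument (the saccade durations are illustrative assumptions on my part, not measured HL2 values):

    # 120Hz interlaced: even/odd fields alternate every ~8.3 ms, while a
    # saccade (during which vision is suppressed) can span several fields.
    field_ms = 1000.0 / 120.0        # ~8.3 ms per interlaced field
    for saccade_ms in (8, 17, 25):   # assumed saccade durations
        blanked = round(saccade_ms / field_ms)
        # If an odd number of fields is blanked, the last field seen before
        # the saccade and the first seen after it have the same parity, so
        # the same set of lines shows twice and the other lines vanish.
        same = "same-parity repeat" if blanked % 2 == 1 else "parity preserved"
        print(f"{saccade_ms} ms saccade -> {blanked} field(s) blanked, {same}")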

I expect there are other artifacts yet to be reported caused by the interlaced scanning process. As I first reported in Hololens 2 First Impressions: Good Ergonomics, But The LBS Resolution Math Fails!, the scanning rate of the Hololens 2 is too slow for the claimed resolution not to cause problems.

Some Good News for Hololens 2 – Ergonomically Great Compared to Magic Leap

The one comment I keep hearing and reading about the HL2 is that ergonomically it is great compared to the Magic Leap One and better than the Hololens 1. There are developers who can look past the image quality problems and see uses for the Hololens 2. They particularly like that it can be worn with glasses and that it is fairly open.

Magic Leap has recently announced a pivot to the enterprise market. The problem is that the Magic Leap One has a terrible ergonomic design compared to even the Hololens 1, much less the better ergonomically designed HL2. Magic Leap went for steampunk styling and marketing hype and made a complete mess of utility. I don’t see people taking the Magic Leap One’s pivot to enterprise seriously.

Conclusions and What to Expect Going Forward

It is hard to say how long it will take for Microsoft to solve their manufacturing problems. Having announced the product in February only to ship very limited quantities of still pretty bad units 10 months later, they are obviously having much bigger problems than they expected.

I would expect that eventually they will reduce the worst-case problems with the waveguides. But they will still be diffractive waveguides and will still have serious color uniformity problems, as they did with the Hololens 1, or perhaps worse due to the wider FOV. By consumer standards, they will still look bad, but they could be useful for industrial purposes where image quality is less important.

The refresh rate is still too low by a factor of 2 to reduce flicker and a factor of 4 to both reduce flicker and support the resolution that Microsoft has claimed for the HL2. I think this will be a problem that will plague the product in some applications. There are some people that are very adversely affected by flicker (including causing headaches and nausea) and this problem is built-in by design.
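
As a back-of-the-envelope check on those factors (this is my reading of the claim, using the 1440-line, 120Hz figures discussed in this article, not official Microsoft numbers):

    # Factor-of-2 / factor-of-4 line-rate arithmetic for the claim above.
    lines, frame_hz = 1440, 120
    interlaced_lps  = (lines // 2) * frame_hz  # 86,400 lines/sec drawn today;
                                               # any given line repaints at 60Hz
    progressive_lps = lines * frame_hz         # 172,800 lines/sec: 2x, so every
                                               # line repaints at 120Hz (flicker)
    claimed_res_lps = 2 * progressive_lps      # 4x: flicker-free AND enough scan
                                               # lines for the claimed resolution
                                               # (scan lines are not pixel rows)
    print(interlaced_lps, progressive_lps, claimed_res_lps)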

I am also concerned about the long-term effects of shining a laser beam into the eye for very long periods of time. The HL2 claims to display about 2560 x 1440 = ~3.7 million pixels (not all of which are visible at the same time due to digital IPD adjustment). Assuming they are displaying a 500-nit average image, it means that at any instant in time the laser beam has ~3.7 million x 500 nits = ~1.8 billion nits or more (since the spot is smaller than a pixel), brighter than looking directly at the sun. The only thing that keeps it from blinding you is that the beam is rapidly moving. I’m sure they have circuitry to shut down the laser if it stops (assuming it works 100% of the time), but that only covers the obvious problem. I have not seen any study on the long-term effects of scanning a laser into the eye for many hours a day, 300 or so days a year. As an analogy, smoking a cigarette for a short period of time may not give you lung cancer; it is the long-term effect that causes the problem.
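
The arithmetic behind that estimate is easy to check; a minimal sketch (the 500-nit average and the sun’s luminance figure are my assumptions):

    # Duty-cycle estimate of the instantaneous luminance of the scanned spot.
    pixels    = 2560 * 1440         # ~3.7 million claimed pixels
    avg_nits  = 500                 # assumed average image luminance
    spot_nits = avg_nits * pixels   # one spot paints every pixel in turn
    sun_nits  = 1.6e9               # approximate luminance of the sun's disk
    print(f"{spot_nits:.2e} nits, ~{spot_nits / sun_nits:.1f}x the sun")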

Karl Guttag

52 Comments

  1. Does it also make you fat, Karl?

    Your glee is showing. It’s quite unseemly. One gets the distinct impression that you so badly want them to fail, you can’t even remember to proofread. There are plenty of reports of good image quality from others, so what we are talking about here appears to be, as you acknowledge, the usual temporary difficulties experienced ramping new technology to scale.

    • Let’s see some objective pictures of these “good units.” I do point out in the article that Microsoft is likely cherry-picking the best units for their key accounts and that other customers are likely getting units from a lower “bin.”

      I also point out that I expect that they will improve the quality of the waveguides. But there are things that will never be fixed with a design using diffractive waveguides and LBS, particularly with 120Hz interlaced scanning. These problems will not go away with manufacturing ramps.

      This is a product that Microsoft was dropping hints about in 2018 but kept delaying. They then announced it in February 2019, and 9 months later they sort-of announced they were shipping, but not really. With the pictures becoming available, we can now see why.

      With the Billions spent on Hololens 1 and 2 and the millions on marketing, why worry about my little blog?

      • Hi Karl,
        Thanks for this, really useful. We have 2 HL2s, and both are equally bad when it comes to flicker and the “colour fan”. We did not experience any of these issues with the HL1 and are now questioning if we should stay with the old tech until this issue is solved.

        Have you received any reports of any headsets that have good colour grading? You mention tier one and tier two; is this a fact, or just hope?

        I would like to see some screenshots of good units

        KR

      • I don’t have a very large sample size and there are not a lot of people reporting results. Part of the problem is the limited units shipping and part of the problem is that those that are getting units usually have a vested interest (sometimes a contractual one) to not give results publicly.

        I recently got to see a HL2 with my own eyes. I did the “calibration step” for my eyes, and the color uniformity was poor. I also found the color saturation to be significantly worse than the HL1. Unfortunately, I was not allowed to take any pictures. I noticed the flickering text. I also found it made my eyes sore (not sure of the cause).

        What I have heard so far is that the HL2’s color uniformity seems to range from not great to very bad. Diffractive waveguides at their best have problems with color uniformity, and going with a wide FOV makes color uniformity more difficult. I would expect that they will improve it over time, but there is a limit to how good it can get. I would think that it will eventually look similar to, or a little worse than, the HL1.

        As for the flicker, I think this is “baked in” by the design of the HL2. It may have different effects on different people.

    • I agree that Karl tends to focus the main parts of his Hololens articles largely on the negative. He didn’t even care to include Alex Kipman’s tweets replying to the pictures that surfaced with this issue. I quote from Alex’s Twitter response:

      “Friends, we have a binocular system that forms an image at the back of your eyes not in front of it. Eye tracking is fully in the loop to correct comfort which also includes color. Eye relief (the distance from lens to your pupil) changes the image quality. Further out you are, worse the image quality becomes in terms of MTF as well as color uniformity. Taking monocle pictures from a phone (or other camera) is completely outside of our spec and not how the product is experienced. When you look at it with both eyes, at the right eye relief (somewhere between 12-30 mm from your eyes) with eye tracking turned on, you experience something very different.”

      Which of course Karl tried to shoot down in a reply to said tweets, calling it marketing bunk. Sure, obviously colors aren’t going to be perfect, we know this. But this post makes it out to seem that Microsoft is just shipping products that are “really this bad” not even taking into account what Alex stated. Somewhat spreading biased information in my opinion.

      Regardless, I’m interested to see for myself once I get my hands on my own device because there are reports from either side of the coin saying opposing things.

      • First, I just realized that I had left a link out of the text that went with the photograph (the link was only in the photograph but not in the text of the original article). My source for the photograph was a Reddit posting: https://www.reddit.com/r/HoloLens/comments/ecbbii/hololens_2_waveguides_in_the_wild_color_banding/. The Alex Kipman quote you cited came from a LATER Twitter thread citing the same photograph, which I did not see until AFTER I posted the article: https://twitter.com/cesarberardini/status/1207326255974166528. So your implication that I was hiding something or afraid to deal with Alex Kipman’s comments is misplaced. I would have gladly used and responded to Alex Kipman’s comments if I had known about them at the time I wrote the article. I commented in the Tweet thread you cited only AFTER I published the article.

        Second, I hate to break it to you, but there is no Santa Claus. As in, you are either disingenuous, parroting what you saw elsewhere, or very gullible (or some combination). They certainly could compensate somewhat, and if both waveguides had mirrored problems, it would look a little more uniform. There are reports that it looks better when worn, so there may be some compensation being used. But there is no way they are making something this bad go away.

        There are now many reports of people seeing the problems with their own eyes, including social media posts such as https://www.reddit.com/r/HoloLens/comments/ebqtg6/the_elephant_in_the_room_hl2_display/, and the drawing in the Facebook thread I cited (later taken down, but thankfully I captured it) in this article: https://www.kguttag.com/2019/12/03/hololens-2-hl2-scan-lines-making-text-hard-to-read-and-quality-issues-with-waveguides/.

        If you read through the posts, you will also note that people are reporting that the second batch of units they got is worse than the first. There are discussions by people getting pre-production units saying they were hoping Microsoft had fixed the problems.
        So if it is doing everything as Alex Kipman claims, it is not working for MANY people.

      • I’m aware that Alex’s tweets were posted after you published this article. However, I was going off the notion that since you have “updated” this article’s information and others in the past (take the article you referred to in your reply to me, “Hololens 2 (HL2): “Scan Lines” Making Text Hard to Read and Quality Issues with Waveguides,” where you updated the article after publication with the status of the Facebook post that was later deleted), you would have updated this article with this new information. I don’t think that you’re afraid per se to deal with this. I do, however, think it makes it so that the basis of what this article stands on wouldn’t hold as much weight, since monocle through-the-lens camera pictures obviously don’t do the waveguide proper justice in regards to Alex’s statement.

        I’m not trying to come across as disingenuous; I know you’re smart in this field of technology. Which is why I find it odd you’re dealing with it in the way you are. I also know this problem won’t completely disappear, hence why I stated in my reply above “obviously the colors aren’t going to be perfect,” and I don’t expect them to be. I’m also not denying that people are getting sub-par units. I’m simply stating my opinion on how your articles tend to shine a light mainly on the negative feedback rather than a healthy combination of both positive and negative. You say that your private sources say this is “typical” of what people see on the Hololens 2. Meanwhile, in the Reddit post from which you pulled the through-the-lens photos for this article, the top comment states: “We have a “metric **** tonne” of devices, this is not an issue as illustrated. Return the device for a replacement!”. There is also how you asked David above in your reply, “Let’s see some objective pictures of these “good units,”” even though you knew by the time you wrote that comment that getting a picture that depicts the waveguides properly isn’t possible.

        This is really just my two cents, and an outsider’s perspective. Maybe you can take something away from it or not. All I know is that I’m not the first outsider to voice my opinion on this particular trend I notice in your articles.

    • David,

      I have an HL2. Karl’s on the money. Everyone here wants AR to not suck. Quit fanning out about a device you don’t own, thanks.

      • Are you and Alfred actually concerned about the long term impacts of using lasers as the light source? I doubt you would have bought the device if you did. I doubt Karl is actually concerned either, and it certainly had nothing to do with an article on image quality, so it begs the question why he threw it out there. It was gratuitous and designed to elicit concern; a wish of doom. It was unseemly.

      • My blog is a bit of a stream of consciousness. I included the discussion of long term exposure to laser scanning because the article involved LBS and I had recently discussed the issue with someone else.

        I also would not write that I was concerned unless I actually thought there might be a problem. I am hoping that someone will point me to a study that confirms or rejects the idea.

        Yes, I think LBS is a bad idea for many reasons. You may not like them, but I think I bring up legitimate concerns. Do you not believe that the HL2 has 60Hz flicker and the disappearing-line artifacts and that the resolution is going to prove out to be less than claimed? So far, LBS has a track record of failure. I think the time for raster-scanning displays for a broad market ended with CRTs.

      • From their research, others including Oculus, Microsoft and Apple appear to disagree. Let’s deal with things one at a time. I’ll do it in the context of Microvision (MVIS) technology, which is essentially MEMS mirror scanning, of which you are the leading critic and which, frankly, is the primary ongoing basis of your criticism of Hololens 2, specifically MSFT’s decision to adopt MVIS technology:

        1. Waveguide issues, the leading culprit of the problems you cite in your current article. According to Robert Scoble, MSFT should have used Lumus’ reflective waveguides instead of diffractive waveguides. Here’s Lumus’ Ari Grobman agreeing: https://www.youtube.com/watch?v=MgGi2tkfUlM
        However, this is not a MVIS MEMS mirror scanning issue. MEMS mirror scanning works with reflective waveguides too. See this Microsoft patent application.
        http://www.freepatentsonline.com/y2018/0172994.html

        2. How about lasers? Maybe that is the problem. Speckle from coherent light sources, etc? Safety issue [raised, but not demonstrated]? Brightness? Better to wait, as you have said, for microLEDs. No problem, microLEDs also work with MEMS scanning mirrors. See this Oculus patent application: https://patents.google.com/patent/US20170236463A1/en

        3. Resolution? FOV? Brightness? The current MVIS MEMS Mirror scanner delivers 1440p at 120 Hz. You have argued that it is actually 1440i (interlaced) not 1440p (progressive) and therefore causes flicker for some users. That doesn’t seem to be a widespread complaint but, assuming it becomes so, can it be resolved without unduly sacrificing resolution? What about using arrays of lasers (or microOLEDs) instead with MEMS mirrors for super high resolution foveated AR displays? See the following MSFT patent application for that very thing. It uses up to 9 RGB beams with a single MEMS mirror scanner, allowing tremendous gains in FOV, brightness and resolution. Here is a breakdown of the patent from MVIS reddit, with a link to the patent application included: https://www.reddit.com/r/MVIS/comments/9lhft5/microsoft_wide_fov_ar_patent_application/

        Parenthetically, one suspects that would allow progressive rather than interlaced scanning at 120Hz, as there would be no need to stretch out the scanner for resolution purposes if there are multiple light sources.

        There seems to be a pattern here. The limitations and problems you describe are not rooted in scanning light with MEMS mirrors, but rather in other aspects of the display mechanisms; and the solutions you propose are compatible with MEMS mirror scanning. I really don’t think your beef ultimately is with MVIS and their technology, despite your robust criticism of their early picoprojectors.

        I haven’t yet addressed Apple’s AR patent applications, which also cite MEMS mirror scanning, but they’re there. I do tend to agree with you, though, that Apple doesn’t seem to create that much novel IP in core AR technology, including MEMS mirror scanning. They do seem largely to write patent applications that incorporate it, though.

      • A) Your statement is the kind a Microvision partisan would make. It is a fallacious argument that “others including Oculus, Microsoft and Apple appear to disagree.” In what way do they “disagree?” They also file patents using other technologies, so are they disagreeing with themselves? What the Microvision proponents have are random patents that mention LBS.

        Researchers have been playing with LBS and filing patents for over 25 years. The Microvision proponents having cried wolf so many times was one of the reasons I was initially dismissive of Microsoft using LBS. Hololens 2 has all the hallmarks of something that escaped the lab of researchers who thought pouring money on the problem would solve it. A few LBS products have made it to market and swiftly failed.

        1) Yes, the leading culprit of the problems shown in the pictures is the waveguide. But I also have information from multiple sources that the scanning mirror has been another big problem that has led to the delays in production and availability. If the LBS engine is dead, they won’t even bother to ship the product. The fact that they are shipping products with such defects shows that they are very constrained on supply.

        2) The reason LBS has potential safety issues is that the brightness of the entire display is concentrated at a single point. At a given instant in time, the beam is brighter than the sun. In the case of a full MicroLED array, the brightness of a given pixel is going to be literally over a million times dimmer. In the case of the Oculus patent you cited, the scanned MicroLED array is going to be somewhere in between, but still likely one to two orders of magnitude dimmer at any instant in time than a scanned laser beam of the same average brightness.

        3) You are making an unsubstantiated, and I believe provably false, claim that it supports 1440p at 120Hz. The claimed horizontal scanning rate of 56KHz absolutely does not support 1440 lines of progressive scanning; the simple math shows that they are doing 1440 scan lines at 120Hz interlaced. You are also making the false equivalence of scan lines on a non-rectangular pattern being the same as lines of pixels. If Microsoft could make enough units, they could let people evaluate them independently. I don’t have time to deal with your other flights of fancy at the moment about what somebody will do someday because you saw a patent with no demonstrable product. There is no suggestion that the HL2 is using multiple scanning lasers, which would greatly increase the complexity of an already problematic subsystem.
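
        To make the scan-line budget concrete (using the 56KHz and 120Hz figures in this thread, and assuming the fast mirror draws one line in each scan direction):

            # Scan-line budget: 56kHz bidirectional mirror vs. 1440 lines at 120Hz.
            mirror_hz     = 56_000
            lines_per_sec = 2 * mirror_hz        # ~112,000 lines/sec available
            progressive   = 1440 * 120           # 172,800 needed for 1440p120
            interlaced    = (1440 // 2) * 120    # 86,400 needed for 1440i120
            print(lines_per_sec >= progressive)  # False: 1440p120 does not fit
            print(lines_per_sec >= interlaced)   # True: 1440i120 fits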

  2. Thanks, Karl, for your informative review; it also helps with FOMO for many. Curious to see if the display improves over time compared to what else is out there. You said there may be typos due to the timely nature of your review.

    As always folks, make up your own mind in reality.

  3. Hello,

    I received two HoloLens 2 units this week, and I was shocked at the quality of the display. Especially if you look at a white hologram, it’s hard to read the writing! The rainbow effect is really strong with both headsets, and you can see it in the display itself. Unfortunately, it’s not easy to take pictures through the lenses.
    The HoloLens 2 is more comfortable, and I like the way it reacts to gestures and can track hands, but like this, I think the headsets are unusable.

    Best regards from Germany!

    • Thanks for the confirmation.

      I can believe that some of the bad effects could be reduced with compensation as Alex Kipman claims, but I seriously doubt that they can make it look uniform. Part of his explanation sounds like he is invoking magic. One could accept that Microsoft has invented software that fixes the problem if there were not so many reports, including yours, that say they see the problem with their own eyes. If Microsoft has such miraculous software, I would suggest they publish some proof, and not just give some marketing spin. Many people now are saying they have problems and their Microsoft representatives are telling them that they are in short supply and that they can’t get better units.

      Alex Kipman, as far as Hololens goes, seems to be much more of a marketing person than a technical expert (as I pointed out in https://www.kguttag.com/2019/10/10/hololens-2-video-with-microvision-easter-egg-plus-some-hololens-and-magic-leap-rumors/). I’m sure he is a very intelligent person at something, but he does not seem to understand laser beam scanning displays and is all over the place with the numbers he quotes. He is also disingenuous when he says that Microsoft invented the laser beam scanning engine when everyone in the industry knows it was developed by Microvision (I can’t understand why they maintain this fiction). And then we have Microsoft playing fast and loose with the definition of how FOV is measured (2X the area when everyone thought they meant 2X linearly).

      It would be very simple for Microsoft to solve all this. They just need to get units in the hands of people who can objectively evaluate them and publish the results. With the obvious manufacturing problems, an apparently wide distribution of the image quality being shipped, and the moderately tight control of who gets units at all, as well as of who gets the better versus worse units, we are left with very few published pictures and comments on social media. Microsoft brought this on themselves by announcing the HL2 in February 2019 and then announcing they were “shipping” in November 2019 when they were still having severe manufacturing problems.

      When I take pictures through headsets (as I did with Hololens and Magic Leap), I am aware there are issues between what the camera sees and what humans see. I try to make sure that the camera pictures are close to what I see, and if this is not possible (a classic problem is roll bars with scanning displays), I try to point out what is different.

      Then we get into the usual factors. Some people have higher standards than others. Some are biased one way or the other. People are also looking at and considering different content; problems with uniformity will show up best with a mostly solid white image.

      Part of the issue is that diffractive waveguides inherently have problems with color uniformity. Some are much better than others, but they ALL have uniformity problems. So you start with the “best” having poor color uniformity and then it becomes a question of how bad is it?

      • “One could accept that Microsoft has invented software that fixes the problem if there were not so many reports, including yours, that say they see the problem with their own eyes.”

        How would that even work? It would need to be calibrated both to the particular sample of the waveguide, as there are variances from one to another, and to the user, as the effect will change based on distance from the eye and IOD.

      • I probably should have cast a bit more doubt on it working in my response. The point was that if people are seeing the problem with their own eyes, then the claim is not true.

        There are two things that could be at work that might help, but I doubt they would fix the uniformity problems. They could, based on the pupil location, try to compensate for the variation in the color uniformity of a characterized system. In other words, they could (but I doubt they really do) characterize each display with its optics and try to pre-compensate for the color variation. I don’t think they could make it look anywhere near perfect, but they might be able to somewhat improve the color variation.
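
        A minimal sketch of what such pre-compensation could look like, assuming per-device calibration data exists (this is hypothetical; nothing here reflects Microsoft’s actual pipeline):

            import numpy as np

            def precompensate(frame: np.ndarray, gain_map: np.ndarray) -> np.ndarray:
                # frame:    HxWx3 image in linear light, values in [0, 1].
                # gain_map: HxWx3 measured color transmission of this particular
                #           waveguide at the current pupil position (assumed
                #           calibration data).
                corrected = frame / np.clip(gain_map, 0.05, 1.0)
                # Compensation only works where headroom exists; this clipping
                # is why software cannot fully hide a badly non-uniform waveguide.
                return np.clip(corrected, 0.0, 1.0)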

        In terms of both eyes factoring in, the human visual system tends to do a color averaging between the two eyes when looking at a single object. One eye will still dominate, but there is some averaging that will go on. So if the waveguides’ color uniformity problems are mirrored, or at least not additive, it will reduce the perceived color uniformity problem somewhat.

        Overall, I believe what Alex Kipman is saying is “plausible but largely untrue.” He is saying something that has some basis in fact and suggesting that it will solve problems that are bigger than the effect can address. The fact that so many people are seeing the problems with their own eyes suggests that Mr. Kipman is at least grossly exaggerating. You also get into the issue that binocular disparity is being used to cue the brain as to the stereo depth. What if there is an object in a color that is “dead” in that region of one eye? If what Mr. Kipman is saying were true, then Microsoft would be telling this to the customers that are complaining rather than relying on them to see a Tweet from Alex Kipman someplace.

    • Regarding the part about gestures and human factors: any gesture system that requires the user to hold their hands out in front of them for extended periods of time is horrible ergonomically. It’s a big reason why touch screens don’t work well on vertical computer displays. A person’s arms need to hang down with the elbows in, or the hands have to rest on something. Holding the hands out in front of you is a form of slow torture. The HL1 was absolutely terrible, so I can believe that the HL2 is much better. It is also hard to be very accurate unless the wrist can rest on something.

      For all these well-known reasons, there is a lot of work going on in detecting hand motions based on sensors in the wrist area (so people can wear a watch-like device rather than a “glove”). These would allow your hand to give input/gestures without you having it in view.

  4. Good points about the long term effects of laser light. I assume they are not using some form of coherence reduction? Also, the blue wavelength in blue LEDs is known to be harmful. I assume the same holds true for their blue lasers?
    Scanning worked well enough in the days of vacuum tube televisions, but there were no moving parts. This technology is the equivalent of the mechanical televisions from the 1920’s. An interesting laboratory experiment, but of no practical value.

    • Thanks. What damages the eye is, to say the least, very complicated. There should be no UV light with the lasers. The main problem is the energy in the light effectively “cooking” the retina and damaging it with heat. I also don’t believe that coherent light is inherently more dangerous than incoherent light once it enters the eye.

      What I am concerned about is that the safety standards and their testing, as far as I am aware, are for incidental or short-term exposure. As I wrote in the article, you won’t get lung cancer from smoking one cigarette; the problem comes from long-term use. At any instant in time, the laser beam hitting the eye is brighter than the sun, and if it were to stop moving for even a short period of time, it would cause permanent damage. They do have circuitry to detect that the beam has stopped and to shut the laser down, so hopefully it will work better than the Boeing 737 Max’s MCAS control. My bigger concern is the very high instantaneous exposure in short bursts over a long time (say 8 hours a day for a year).

      To be clear, I don’t know that long term exposure is a problem, but I have not seen any study (good or bad) for continuous exposure to the eye with lasers.

    • “Also, the blue wavelength in blue LEDs is known to be harmful.”

      No, it’s not.
      That’s pure fiction, perpetuated by snake oil salesmen.

      Give me one good citation or metastudy that says blue LED light is harmful.

      • UV light is well known to be damaging to the eye (or any organic substance), and the shorter visible wavelengths of blue are also not considered good for the human eye. There are some studies suggesting that too much visible blue light is bad for the eye
        (see: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6288536/). There is a lot of talk of it causing macular degeneration, but I’m not sure it is conclusive. But then there are others who claim that blue is not a problem (see: https://www.aao.org/eye-health/tips-prevention/should-you-be-worried-about-blue-light). It does get a bit difficult to separate the financial incentive in selling “blue blocker lenses” from the scientific data.

        Blue can also be dangerous to the eye in that it can get to high levels without you noticing it and averting your eyes. There is a similar problem with deep red light.

  5. A friend working at [French airplane company] got to try the HL2, and when I asked how it compared to the HL1, he just sent me this, saying “it summed it all up”…

    • Thanks for the confirmation.

      I have no doubt that “2nd tier” customers (as in, not the U.S. Army and a few others) are likely getting worse units, so their observations may not be of the best units. But still, many reports are surfacing that the HL2 is having some very serious problems. Not only does Microsoft have manufacturing problems, but the units they ship also have some severe image quality problems.

  6. There is a huge gap between reality and expectations in AR and modern tech in general. This blog is a step to close it.
    Thanks, Karl.

  7. Hi Karl,

    I think you’ve nailed the whole LBS issue quite well already.

    I think I have some new material you might be interested in commenting on in a new post, since it involves your beloved Magic Leap now.
    Remember the startup Kura ( https://www.kura.tech ) that wanted to sweep a single row of microLED pixels to generate a 4K, 8K or even 16K image? Well, it seems like Magic Leap has hoarded that patent as well: https://patents.google.com/patent/US10175564B2/en?oq=10175564

    I think your opinion would be invaluable regarding how practical the “rotating actuator” to sweep the LED strip image up and down would be, as well as the speed, latency, frame warping during head motion, and, more importantly, the brightness we can expect, since at every point in time only one row is displayed.

    Thanks.

      • Thanks, I was already aware of the Facebook application with scanning MicroLEDs (I dug up US 2017/0235143 after I saw the Kura announcement) and was planning on using it in my Kura article (should be out soon), but didn’t notice that there was a resultant patent in late December 2019. While the Magic Leap application is also somewhat related, I don’t think that rotating a single row of LEDs is very practical.

        You should note that Kura is planning on scanning a 2D array of LEDs. The horizontal dimension would have, say, 8K LEDs, but the vertical array would be some fraction of the vertical resolution. They would scan vertically (presumably with a mirror) such that each element in the vertical dimension would cover V-resolution/N pixels.
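
        For illustration, here is that arithmetic with hypothetical numbers (an 8K-wide strip of 270 rows, each swept over 16 vertical positions to build a 4320-line image; these are not Kura’s published figures):

            # Scanned 2D LED array: H emitters wide, N rows, V target lines.
            H, V, N = 7680, 4320, 270
            positions_per_row = V // N  # each LED row covers 16 scan positions
            emitters = H * N            # 2,073,600 emitters, the same count as
                                        # a 1920x1080 microdisplay
            print(positions_per_row, emitters)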

      • Thanks Karl,
        Looking forward to your article.

        If Kura plans to do a 2D array of LEDs, even if the Y dimension is very low-res, like 16 pixels high, I’m still not sure how much that would help with cost reduction. But I’ll leave you to make your points in the actual article.

        The only times I’ve seen anything similar done with LED strips, for a wearable display as well as for video projectors, have been the Virtual Boy, with its bulky rotating mirror and the annoying audible noise associated with it, and a Russian company using a rotating mirror-coated hexagonal prism (a rotating polygonal mirror) combined with a galvo scanner and high-power multimode lasers to do raster graphics with modified laser show projectors. With the latter, the resolution was low, and the high-RPM rotating prism was dangerous, very noisy, and probably not scalable for a head-mounted device, but the brightness was impressive. Can’t find their website anymore, but the site for the polygonal mirror is still up: https://precisionlaserscanning.com/polygon-scanners/ . I think ADLIP (All Dome Laser Image Projection) worked in a similar way.
        Kura may be using rotating polygonal mirrors instead of a rotating flat mirror. In both cases, I don’t see how they can keep it compact.

        Also note that Kura is using LetinAR’s “pin mirrors”, not a traditional waveguide: https://letinar.com/technology/
        Not entirely sure how this works, except that the mirrors are tilted to get the image from the top. Their patents aren’t exactly clear either. I tried replicating the effect, but it had the same issues as pinhole glasses: seams and distortion between pinhole views, which get progressively worse away from a “sweet spot” eye-to-mirror distance. Possibly not too noticeable in an AR demo with a bright background, but for higher PPD, darker scenes, or VR usage it would be a problem, unless I misunderstand the tech. https://youtu.be/X0cx2e7fPn0?t=7

        Thanks.

      • Sorry for double posting but I wanted to clarify what I meant when I said I don’t see how cost would be reduced.

        Let’s say they want to do an 8K by 4K image, as on their main website page, by scanning a narrower LED strip.

        If they went with 7680×4320 perceived pixels and used a 7680×16 pixel strip, they would end up with 2 million pixels. That’s as many pixels as on a 1920×1080 microOLED microdisplay, which is pretty expensive. So while I see this being cheaper than an insanely expensive 8K microOLED panel, it seems it wouldn’t be cheaper than a 1080p microOLED panel, which is already too expensive for consumer use.

        Adding more pixels vertically would require even more total pixels and a higher price, and going with fewer would get it closer to a 1D array like in the Virtual Boy, which you mentioned is not practical.

  8. Just received our HL2. Regretfully, we will have to return it, as it appears we have received one of the defective headsets. How bad is the problem? Well, it’s subjective, and regrettably my attempts at taking pictures have not accurately captured the problem. In my case, the photos I have taken make the problem look far worse than it is while wearing the headset. But the three pictures at the top of the article are in the ballpark; the second picture, showing the AR piano, is closer to the effect we are seeing, and it is not consistent, i.e., the banding is different in each eye.
    Other colleagues don’t notice it immediately until I point it out; then they can’t ‘unsee’ it. The effect is very noticeable when looking at webpages or anything with a large white area.
    It is a shame, as the comfort of use and the interactions are very impressive. BTW, I chased down the ‘3X more comfortable’ remark. It was a metric Alex Kipman explained in an interview. What was meant was that if you found wearing the HL1 comfortable for ten minutes, you’ll find the HL2 comfortable to wear for thirty minutes. I never had a chance to use the HL1, but my colleagues who have worn both mostly agree with the sentiment: it can be worn for a lot longer. Time will tell if the ‘3X’ is correct.
    So, in summary: more comfy, better interactions, but we need to get it replaced.

    • Thanks for the feedback.

      I recently got to see a Hololens 2 for myself for about 20 minutes. I was asked not to take pictures, so I didn’t. I would generally agree with your comments and add to them. One can see individual scan lines, and I noticed that lines definitely flickered/disappeared.

      Additionally, my eyes (not head) became sore after only about 5 minutes of use. I don’t know why, but I suspect it was due to flicker. Humans are extremely variable in their response to flicker. The same amount of flicker that may not be noticed by some people will cause adverse effects in others. It is likely going to take studies with many people to figure out what is causing this problem. No doubt Microsoft is aware of the issue, but I have not seen it reported anywhere yet. I did not have this problem with the Hololens 1.

      It can be tricky to get a photograph that is representative of what it looks like to the eye. It is important to get the camera in the right position and with an f-number similar to the eye. Even then, the eye is very relativistic when it comes to judging both color and brightness as the surrounding pixels influence how both colors and brightness are perceived. Additionally, both eyes contribute to what you see as a final image, although one eye is usually dominant.

      The user interface is vastly better with the Hololens 2. The “push a key” keyboard is a massive improvement over the Hololens 1. But they are still requiring users to be able to see their hands, which is terrible ergonomically. A person can’t comfortably keep their hands held up in view for very long without support. Also, an unsupported hand cannot touch things in mid-air very accurately.

      The unit is definitely better balanced, but I did not have time to evaluate the comfortability of the unit compared to the Hololens 1. I did not have eye soreness with the Hololens 1.

      • Hi Karl,

        I agree with all of your points, with the exception of the issue you have around using your hands for the UI.

        Having used the HL1 extensively, played in VR with Oculus, and now used the HL2 lots (we have 2 that we are developing on, and are in talks with MS over what is an obvious design flaw), I simply don’t agree with your aversion to using your hands.

        Your position seems to be from the POV of a desk jockey. The unnatural thing is to be sat at a desk with your arms on a keyboard, leading to back and shoulder issues. There is nothing unnatural about gesturing with your hands; just watch people in conversation, or look at the numerous jobs involving just that, from touch screens to painters and builders.

        Let’s focus on the issue at hand, which is the appalling screen.

        KR

      • It is one thing to use your hands where you can see them for short periods of time, but a person cannot sustain this as a user interface for very long. If all you need to do is the occasional input, then it works. This has been a well-known fact with computer screens since at least the early 1980s, when I learned about it while looking at why Xerox PARC (pre-Lisa and Mac) used a mouse rather than a touch screen or pen.

        It is a lesson that is being relearned with vertical touch screens. Touch screens work on phones and tablets, but not so much with a conventional laptop or desktop monitor.

      • This seems to be more of an issue with imagination than ergonomics. They used a mouse because touchscreens were hideous in the ’80s, and let’s not start on CTS and pointing devices/keyboards.
        The UI in MR can be anywhere, on any surface and in any orientation. Why would you have it at arm’s length in front of you? And in truth, any of the conventional UIs, such as a keyboard, will see very occasional use within MR; why on earth would you have a Hololens on to write an email???
        In my experience of using the devices for 7 hours a day for days on end, it is something I have not experienced and have never had a comment on. It is a creative tool, not something to fill in forms with.

  9. Karl,

    I’ve been enjoying your blog a lot and have learned a lot from it. It’s also one of the reasons I am studying optics right now.

    I was wondering if you planned on writing any posts about Kura. Their tech looks interesting, but them being such a minuscule team and having almost no public funding confuses me quite a bit. They also recently updated their site with a new HMD render that is quite a bit more bulky than their original.

    • Yes, I have been working on and off on a Kura article for about a month. It kept getting pushed back as I learned more and other events happened. It should be out soon.

      • That’s great. Really looking forward to reading it. Do you know if they have any relationship with LetinAR? The diagram on the Kura site makes the core system look quite similar to what LetinAR has shown in the past.

      • As far as I know, there is no relationship between Kura and LetinAR. Kura and LetinAR are both using “pin mirrors” to reflect light to the eye. Kura shows using TIR bounces to route the light to the pin mirrors, whereas LetinAR has a more direct path, with layers to route light to the lower mirrors when necessary (upper mirrors can “shade” lower mirrors).

      • Gotcha, thank you for that. Do you think it’s possible Kura has what they claim, or do you think any product they produce will likely have massively lower specs?

      • I did not know that, thanks for the info.

        The updated info on their site also makes clear their design is slightly different from (and maybe designed around) LetinAR’s.

        I am still unclear how a narrow strip of LEDs can make a full display without mechanical moving parts and how a pin mirror would not suffer from the same artifacts as pinhole arrays.

        So looking forward to your blog post.

      • Kura mentions some sort of hybrid scanning system on their website; I wonder if it’s more than marketing fluff.

  10. “I am also concerned about the long term effects of shining a laser beam into the eye for very long periods of time”

    Does the Magic Leap also have this same potential health issue?

    • No, it does not use laser scanning.

      I think it may be too strong to call it a health issue, but it is something that should be studied and with a more definitive answer.

  11. […] Fig. 8: left, a diagram of an augmented reality system based on a diffractive waveguide [18]; right, a typical problem of diffraction gratings: rainbow coloring [19]. […]
