For today, I’m going to show a single image as seen through the left and right displays of the Hololens 2 (HL2). I have verified that the pictures below match what I see with my own eyes reasonably well.
Human vision is “subjective”: it works on relative comparisons and will not perceive changes in intensity and color if they are gradual. The camera is “objective” and is not influenced by gradual changes in intensity and color the way the human visual system is.
The test pattern completely fills the FOV and is mostly white, which is meant to show up problems with color uniformity. Because the HL2’s color uniformity gets worse toward the outsides of the image, this pattern shows up color changes from the worst parts of the HL2’s waveguides.
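To make this concrete, below is a minimal sketch (in Python with Pillow) of a mostly white, full-field image of the kind described, with a faint gray grid. It is only a stand-in: the actual test pattern also contains numbered markers and the “Elf” photo discussed later, and the resolution and grid spacing used here are placeholders.

```python
# Minimal stand-in for a mostly white, full-field test image with a faint grid.
# The resolution and grid spacing are placeholders, not the actual pattern's values.
from PIL import Image, ImageDraw

width, height = 1440, 936                        # placeholder resolution
img = Image.new("RGB", (width, height), "white")
draw = ImageDraw.Draw(img)

grid = 29   # ~1 degree per square at the ~29 pixels-per-degree setting described later
for x in range(0, width, grid):
    draw.line([(x, 0), (x, height - 1)], fill=(200, 200, 200))
for y in range(0, height, grid):
    draw.line([(0, y), (width - 1, y)], fill=(200, 200, 200))

img.save("mostly_white_test_pattern.png")
```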
While Hololens 2 Display Evaluation (Part 1: LBS Visual Sausage Being Made) and Hololens 2 Display Evaluation (Part 2: Comparison to Hololens 1) of this series covered the laser beam scanning process, this article will be showing the color uniformity, which is more a function of the HL2’s optics and waveguides.
While a bit of a side note, I’m also going to point out and discuss a sub-pixel size diagonal texture I am seeing in the high-resolution images.
The three pictures below show the left and right displays, as well as an image in which the left and right images are merged with the Photoshop “lighter” function (on a per-pixel, per-color-channel basis). The merged image very roughly simulates what it looks like through both eyes at the same time. To overlay the pictures and take the lighter of the right and left, the left image was slightly transformed to match the right display’s picture. The camera and lens combination captures the whole vertical field but cuts off about 10% of the right side of the left display and 10% of the left side of the right display.
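For reference, a “lighter” merge of this kind is just a per-pixel, per-color-channel maximum of the two photos. Below is a minimal sketch in Python with NumPy and Pillow (not the Photoshop workflow I used); the file names are placeholders, and it assumes the left photo has already been transformed to align with, and match the size of, the right photo.

```python
# Per-pixel, per-channel "lighter" merge of the two display photos.
import numpy as np
from PIL import Image

left = np.asarray(Image.open("hl2_left_aligned.png").convert("RGB"))   # pre-aligned left photo (placeholder name)
right = np.asarray(Image.open("hl2_right.png").convert("RGB"))         # right photo (placeholder name)

# Keep whichever display is brighter at each pixel in each color channel,
# very roughly simulating how the two eyes' images combine.
merged = np.maximum(left, right)

Image.fromarray(merged).save("hl2_merged_lighter.png")
```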
The images above give a good idea of the color uniformity, or lack thereof, with the HL2, as the white background shows up the problems. In both the left and right displays, there is a fairly distinct, roughly trapezoidal area that is somewhat brighter and has better color uniformity. This area is outlined with a dashed red line in the figure below. This pattern is likely caused by the “butterflied” waveguide, as discussed in Hololens 2 is Likely Using Laser Beam Scanning Display: Bad Combined with Worse.
The following shows the same three pictures above but using the full resolution of the camera (click on the images). These images are more than 4,500 pixels wide, with about 3 by 3 camera pixels covering each pixel of the test pattern. As explained in Hololens 2 Display Evaluation (Part 2: Comparison to Hololens 1), the HL2 image was set to about 29 pixels per degree, which makes 1-pixel height in the test pattern equal one laser scan line in the center of the HL2’s display.
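For orientation, the sampling numbers above work out roughly as follows (a quick back-of-the-envelope sketch using the approximate values in the text):

```python
# Rough sampling implied by the numbers above (approximate, for orientation only).
hl2_ppd = 29                      # test-pattern pixels per degree on the HL2
camera_px_per_pattern_px = 3      # ~3x3 camera pixels cover one test-pattern pixel
camera_ppd = hl2_ppd * camera_px_per_pattern_px

print(f"One test-pattern pixel (one scan line at center) spans ~{60 / hl2_ppd:.1f} arc-minutes")
print(f"Camera sampling is roughly {camera_ppd} pixels per degree")
```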
Even though the left and right come from two different display systems, you can start to notice commonalities. For example, if you look in the area around the black circle with the “43” in it, you will see that there are cyan regions on either side caused by the red laser not turning on after going to black/off. Thus, even if you look at the image with both eyes, you see cyan where it should be white.
I have been using the “Elf” picture for many years in my test patterns because it has a great mix of colors with solid red, green, and blue objects as well as skin tones. My eye is “trained” from years of use on what this image is supposed to look like and when the colors are not right.
One thing I noticed immediately, both on this HL2 and on another HL2 I tried previously, is that the Elf picture was desaturated. There was also a sort of haze (frosted look) to the image. Below I have shown how the Hololens 2 compares to the original test pattern and even the Hololens 1.
I plan on expanding upon the poor color control on the HL2 in upcoming articles.
Something I regularly notice in the pictures is a diagonal texture. Shown below is a full-resolution section from near the center of the right display. These pictures were shot at 1/15th of a second, which is 8 field times. Because the texture is not blurred, it suggests that it is not moving from field to field. This closeup shows the level of detail the camera can detect.
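As a quick check of the exposure arithmetic, a 1/15-second exposure spanning 8 field times implies a field time of 1/120th of a second, so any texture that stays sharp across the whole exposure is not moving from field to field:

```python
# Exposure arithmetic: 1/15 s covering 8 fields implies a 1/120 s field time.
exposure_s = 1 / 15
fields_in_exposure = 8
field_time_s = exposure_s / fields_in_exposure

print(f"Field time = {field_time_s * 1000:.2f} ms ({1 / field_time_s:.0f} fields per second)")
```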
I suspect that the diagonal texture might be the pupil expander, EPE 306 shown in the Microsoft Patent 10,025,093 below.
A “pupil expander” is, in essence, a tiny rear projection screen. The expander slightly diffuses the scanned laser image to create a real image that can be seen, but in this case, a very tiny one. Because the expander/screen is so small, the textures that cause the pupil expansion have to be extremely small.
I was a little surprised to see the texture in a long exposure, as I was thinking that they might be vibrating the pupil expander to reduce laser speckle. One of the known ways to significantly reduce laser speckle is to vibrate a projection screen or pupil expander. But if the element with the lines in it were vibrating, the lines would be blurred out. There are other techniques for speckle reduction, including vibrating different layers or sending the image through an electronically driven, phase-shifting liquid crystal.
I saw a somewhat similar diagonal line “texture” in a Pioneer laser HUD based on a Microvision laser scanning projector that I took apart back in 2013.
The picture on the left shows how the Pioneer pupil expander diffuses light. It was about 23mm by 72mm, which would be huge compared to what would be in the HL2. This pupil expander had several layers sandwiched together. You may notice the slight blurring that the pupil expander causes. The job of the pupil expander is to turn each ray of light into a cone of light rays.
The next picture (right) shows the pupil expander lit from behind. There is a decidedly diagonal texture.
The final picture (below) shows an image projected through the pupil expander and how it causes diagonal lines in the image. The pupil expander in the HL2 must be much smaller and with a much finer (microscopic) texture.
You might also notice the multi-colored graininess of the image above, caused by laser speckle. I don’t see in the HL2 the kind of coarse-grain laser speckle that the Pioneer HUD exhibits. I will say from personal experience, having helped build an LCOS-based laser projector, that reducing laser speckle is extremely difficult. Speckle-reduction methods have a tendency to make the speckle smaller. I think there is a bit of haze in the HL2’s image that might (not sure) be due to the residual speckle left after their speckle reduction.
While a camera works differently than the eye, I tried to make sure the pictures in this article give a fair representation of the color uniformity issues with the HL2. Care was taken to get the best possible images by precisely positioning an Olympus Micro 4/3rds camera with a 25mm prime lens. The color uniformity problems are with the HL2 and not the camera or its setup.
Using both eyes does help some, but it does not solve all the uniformity problems. With the human eye, you can readily see the brighter and whiter trapezoidal area in the center of the screen, as well as the significant coloration elsewhere.
It is fair to say that most AR content is likely to be sparse on a transparent/black background, so the lack of color uniformity with the HL2 may not affect some applications. But in areas where the colors change dramatically, it will mean that specific colors are missing. If, for example, an application puts a red warning indication in an area of the display that does not show red, it might not be seen.
I’m not positive that the diagonal texture in the image is the pupil expander. I think it is a possible explanation. I decided to mention it because it shows up in the high-resolution images of the HL2 and because I had seen a diagonal texture in a laser scanning pupil expander with the Pioneer HUD.
You can’t just take a cell phone or a large DSLR camera and hold it up to an HL2 and get a representative picture. Compared to the HL1, there is a tiny “sweet spot” in 6 degrees of freedom where the image looks like what my eye sees. I use an Olympus Micro 4/3rds camera because the lenses are small enough to get into the sweet spot, while I can change lenses to get the right magnification.
In my setup, I have the HL2 and the camera on separate tripods. I adjust the HL2 to get the right size image in the correct location. I then position the camera to get it into the tiny “sweet spot” where the picture looks like it does to my eye, using the “preview mode” on the back of the camera and some test shots.
Most of the pictures I use are taken through the left display, which generally looks the same or better than the right display. Due to the shape of the camera and the HL2, the camera must be upside down to get to the correct spot. I made a custom rig (above left), bent out of 1-inch aluminum stock and tapped with 1/4″-20 screw threads (the standard camera tripod screw size), so I could flip the camera upside down without interference.
The pictures are shot against a blackout drape, seen in the picture above. I found it helped to put some tape on the blackout drape, outside where the image would be, so the HL2’s SLAM had some reference. Without the tape, it was hard to get the image to be virtually placed on the drape.
Thanks!!! So, the problem is more the waveguide or the projector?
The overall uniformity, I would think, is mostly a problem of the waveguide and perhaps the alignment of the projector, optics, and waveguide. I’m told, but have not been able to confirm, that there may also be issues with using lasers, which are very narrow-band light sources, with diffractive waveguides.
The lack of saturation of the colors is most likely a problem of the laser scanning projector. I have looked at “gray ramps,” a series of steps in brightness, with the HL2, and they look very poor. I plan to publish pictures of the gray ramps in an upcoming article.
Thank you, Karl. I always enjoy your analysis. One question: how did you manage to keep the display on without actually putting it on your head and having it scan your eyes?
Thanks,
I have the HL2 set up to stay on for 30 minutes (the longest I could set it) without movement. I log in/register with the headset using my eyes. I either put it on my head or stick my head into the HL2 while it is on the tripod holder. The tripod has a large “L” bracket on it with some “fingers” that hold the Hololens by the overhead strap, plus some velcro straps to the L-bracket. In this way, it is held from above by the rig. When on my office chair, I can duck under and then raise my head into the headset while the HL2 is on the tripod. I use the tripod (and moving it) to adjust most of the degrees of freedom and then use the HL2’s tilting visor to get the vertical framing right. I then use my eyes to get the image sized and aligned. I can do some fine-tuning with the preview mode of the camera.
BTW, for the HL1, there is not enough room for the duck-under. I get the image roughly correct and then use the camera’s preview mode to do the alignment.
A key factor is that I have sized the images I use to the right number of lines (based on studying test pictures). I then adjust the image so it just fits vertically, which puts it at the correct distance.
hahaha. i see what you mean. that’s some Mission Impossible approach right there. thanks!
FYI, MS support told us that a calibration step is done during production for the color uniformity at the highest brightness setting. They might be planning on improving that calibration to work at other brightness levels.
Thanks for the information. I don’t see a lot of variation in colors at the brightest settings. It falls off a lot at the lower settings (particularly the lowest 2 to 3 steps). Personally, I find three steps down from the brightest to be the most comfortable for indoor use, so I used that setting for most of my tests.