In the last article, I showed what the real world looks like through the Magic Leap One (ML1). For this article, I am going to share some pictures I took through the ML1 optics displaying test patterns.
Above left is a crop of the original test pattern scaled by 200%, compared to a picture of the same portion of the test pattern taken through the ML1 (for reference, the whole test pattern is linked here). The various features in this test pattern make it a tough but fair way to check different aspects of image quality. The single- and two-pixel-wide features are meant to test the resolution of the display. A hole was left in the larger pattern to allow an iPhone 6s Plus displaying part of the test pattern to show through as a reference. There is additional information on how the picture was shot in the appendix at the end of this article.
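For readers who want to reproduce this kind of check, below is a minimal sketch of how single- and two-pixel-wide resolution features can be generated. It is pure Python with illustrative sizes, not the actual pattern used in this article:

```python
def make_test_strip(width=128, height=32):
    """Build a tiny grayscale test strip as a 2-D list (255 = white, 0 = black).

    One-pixel features blur into gray on optics that cannot resolve a
    single display pixel; two-pixel features survive a bit longer.
    """
    img = [[255] * width for _ in range(height)]
    # One-pixel-wide vertical lines.
    for x in range(8, 40, 4):
        for y in range(4, 20):
            img[y][x] = 0
    # Two-pixel-wide vertical lines for comparison.
    for x in range(48, 80, 6):
        for y in range(4, 20):
            img[y][x] = 0
            img[y][x + 1] = 0
    # Single-pixel dots.
    for x in range(8, 40, 4):
        img[26][x] = 0
    # One-pixel-wide 45-degree line.
    for i in range(12):
        img[18 + i][88 + i] = 0
    return img

strip = make_test_strip()
```

The array could be written out with any image library; the point is only that the finest features are exactly one device pixel wide, so any softness in the photograph comes from the display chain, not the pattern.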
Most of the Magic Leap demos use colorful but smaller objects, which work as "eye candy" while also hiding the lack of color uniformity across the FOV. The test pattern uses faces with skin tones because people are more sensitive to the color of skin. It also has large solid white objects across the FOV to expose any color shifting.
I used the Helio web browser to display the images, and some of the image resolution issues could be due to the way the ML1's Helio browser scales images in 3-D space. I tried capturing the test patterns and displaying them in the ML1 gallery, and the results were considerably worse. I viewed the same test pattern on Hololens with its browser, and it is noticeably sharper than the ML1, although the Hololens is a bit "soft" as well. At some point, it would be good to go back and separate the browser scaling issues from the optics issues, but then again, this is the way the ML1 as a whole normally displays 2-D images.
I have looked at detailed content on two different ML1s, and none of it is sharp, so I think these images fairly represent the image quality of the ML1. Even if the scaling engine on the ML1 is poor, the degree of flare/glow and chromatic aberration, which are caused by the optics, suggests that the resolution of the ML1 optics is low.
I only tested the "far focus" (beyond ~36 inches) mode, as it would have been very difficult to test the near depth plane focus mode. I could sense that the near focus plane was sharper than the far focus plane, as the diagrams from the Magic Leap patent applications suggest (see right). The far focus plane's light passes through the near focus plane's exit gratings on its way to the eye, which might be part of the problem. I would have liked to have tested the near focus plane as well, but there was no way to scale the test pattern that would work, nor was there a way I knew of to keep the headset in near focus "mode."
The pictures below were taken through the ML1’s right eye optics with my annotations in red, green, and orange. You may want to click on the images to see detail. To be fair, closeup camera images will show flaws that may not be noticed by the casual observer. Generally, projected images look worse than direct view displays because of imperfections in the optics, but in the case of the ML1, the diffractive waveguides appear to limit the resolution.
While there are differences between how the human eye and a camera "see" an image, the camera gives a reasonably good representation of what the eye sees. A camera is objective/absolute, whereas the human visual system is more subjective/adaptive: it judges things like brightness and color relative to a local area, which makes the background in the picture seem darker than it does "live." The artifacts and issues shown in the photo are visible to the human eye.
Overall, the color balance is good in the center of the image. You will notice a color shift in the skin tones of the two faces in the test pattern, but it is not terrible until you get to the outer 15% of the image, where there is significant color shifting toward blue and blue-green, as can be seen in the photo.
Issues with the ML1 Image:
In order to show more detail, the picture on the left has the camera zoomed in by over 2X to give more than five camera samples per ML1 pixel (click on the picture to see it at full resolution). The part where the iPhone shows through has been copied and moved to line up with the text in the ML1's image. The iPhone's image shows what the text should look like if the ML1 could resolve it.
Text on the ML1 by any measure is noticeably soft. The ML1 is less sharp than Hololens, and less sharp than Lumus's waveguides by an even wider margin. The one-pixel-wide dots and 45-degree lines are barely visible.
I was expecting the color uniformity problems and image flare/glow based on my experiences with other diffractive waveguides. The color in the center of the FOV is reasonably good on the ML1.
But I just can't get past the soft/blurry text. I first noticed this with the text in the Dr. G's Invaders teaser (on the right), which is why I set out to get my own test pattern on the ML1. I don't know yet how much of this softness is caused by the dual focus planes, but I suspect it is a reason why the ML1 is blurrier than Hololens.
At some time in the future, I hope to be able to bypass the 3-D scaling and directly drive the display to better isolate the optical issues from any scaling issues. I would also be curious whether I could lock the device into "close focus plane mode" and test that mode independently. With the way I was driving the ML1, very soon after I took my eye away from it, it switched back into far focus plane mode (which is why I did not run a test in the near focus plane mode). If someone wants to help with this effort, please leave a note in the comments or write to firstname.lastname@example.org.
I used an Olympus OM-D E-M10 Mark III mirrorless camera. I specifically chose this camera for taking pictures through headsets due to its size and functionality. On this camera, the distance from the center of the lens to the bottom of the body is less than the distance from my eye's pupil to the side of my head, so it fits inside a rigid headset with the lens centered where my pupil would have been. In portrait orientation, it captures 3456 pixels wide by 4608 pixels tall, which is over two camera samples per pixel of the ML1's spec'ed 1280 by 960-pixel LCOS device. The camera has 5-axis image stabilization, which greatly helps with the hand-held shots I was required to take.
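The sampling claim above is easy to check. A quick sketch using the numbers from this appendix (the 2X zoom figure is the one mentioned earlier for the close-up shot):

```python
# Camera sensor width in portrait orientation (Olympus E-M10 Mark III).
cam_width = 3456
# ML1's spec'ed LCOS microdisplay width.
ml1_width = 1280

# Camera samples per ML1 pixel when the display roughly fills the frame width.
samples_per_pixel = cam_width / ml1_width
print(samples_per_pixel)  # 2.7 -> "over two camera samples per pixel"

# Zooming in by over 2X for the close-up raises this past 5 samples per pixel.
print(samples_per_pixel * 2)  # 5.4
```

Having well over two camera samples per display pixel matters because it ensures the camera, not the headset, sets the resolution limit in the photographs.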
The "far focus" of the ML1 is set to ~5 feet (~1.5 meters). I put a test pattern on this website and used the ML1's Helio browser to bring up the image. I then moved the ML1 headset back and forth until the test pattern filled the view, which occurred when the virtual image was about 4 feet away.
The picture on the right shows the setup of the iPhone when viewed from an angle. It gives you an idea of the location of the virtual image relative to the phone. This picture was taken by the camera through the ML1, and only the red annotations were added later.
From other experiments, I knew the "far focus" of the ML1 is about 5 feet. I set up an iPhone 6s Plus in a "hole" in the test pattern put there to view the phone. To have the phone in focus at the same time as the virtual image, I set the phone behind the virtual image and adjusted its location until both the phone and the ML1 image were in focus as seen by the camera. I then scaled the iPhone's display so the text was the same size as that displayed on the ML1 as seen by the camera. In this way, I could show what the text in the high-resolution test pattern should have looked like through the camera, and it verifies that the camera was capable of resolving single pixels in the test pattern.
The iPhone's brightness was set to 450 cd/m² (daytime full brightness) so that it could still be seen after being reduced by about 85% through the ML1; the net was only about 70 cd/m². I took the picture in camera RAW and then white balanced based on the white in the center of the ML1's image, which makes the iPhone's display look a bit shifted toward green. The picture was shot at 1/25th of a second to average out any field sequential effects.
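The brightness numbers above work out as follows (the ~85% light reduction is the figure from this article; the LCOS color-field rate below is a placeholder assumption, not a published spec):

```python
iphone_nits = 450.0          # iPhone 6s Plus at daytime full brightness, cd/m^2
ml1_attenuation = 0.85       # ~85% of the phone's light is lost through the ML1

net_nits = iphone_nits * (1.0 - ml1_attenuation)
print(round(net_nits))       # ~68, i.e., "only about 70 cd/m^2"

# A 1/25 s exposure spans many color fields of a field-sequential LCOS display,
# which averages out color breakup. The field rate here is only an assumption.
assumed_field_rate_hz = 360.0
fields_per_exposure = assumed_field_rate_hz / 25.0
print(fields_per_exposure)   # 14.4 fields averaged per shot
```

A shorter exposure risks catching only one or two color fields, which would show false color banding that the eye, integrating over time, would never see.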
For reference, the image on the left is a pass-through frame capture taken by the ML1 from about the same place. With pass-through, the ML1’s camera and the exposure of the test pattern can be set independently. In this image, the ML1’s camera appears to have focused on the far background which puts the iPhone out of focus, but you can get a feeling for how bright the iPhone was set.
Interestingly, I saw some different scaling artifacts in this pass-through image than I saw in the image in the camera; in particular, thin black lines on a white background tend to disappear.
The pass-through image is biased to favor white over black. Looking at the one-pixel-wide features under the "Arial 16 point" text, the black one-pixel dots and lines are all but lost, and even the two-pixel-wide ones to their left are almost gone.
I would like to thank Ron Padzensky for reviewing and making corrections to this article.