Nreal Teardown: Part 3, Pictures Through the Lens

Introduction

Part 1 and part 2 of this Nreal Teardown series discussed what was happening inside the Nreal AR headset. In part 3, we are going to look at photographs taken through Nreal's optics. I shot all the pictures against a dark background in dim lighting to show the display and optics characteristics.

The camera is sometimes going to catch things barely visible to the human eye. If you are looking through a pair of AR glasses, the starting “black” you see is whatever comes from your view of the real world. Despite the flaws pointed out below, if we are only talking about image quality in a dark room, the Nreal headset looks very good compared to other headsets.

Limitations

As discussed in the prior articles, the Nreal lacks brightness and transparency and blocks the view upward. It puts out only about 120 nits at its brightest setting, whereas the Hololens 2 has about 500 nits and Lumus Maximus targets 4,500 nits (over 30 times brighter). Nreal is only ~23% transmissive and thus is like wearing dark sunglasses, whereas the Hololens 2 is about 40% transmissive and Maximus is about 85%.

The lack of nits means it won’t work in brightly lit situations, particularly outdoors. Blocking so much light makes it impractical to use in moderate- to low-light environments. The result is a very narrow range of lighting in which Nreal and similar OLED-based birdbaths can reasonably be used.
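To give a rough feel for why that lighting window is so narrow, here is a minimal sketch using the figures above (~120-nit display, ~23% see-through). The ambient luminance values and the simple contrast formula are my own illustrative assumptions, not measurements from the headset.

```python
# Rough sketch of the usable lighting range, using the article's figures
# (~120-nit display, ~23% see-through). Ambient luminances are assumed,
# order-of-magnitude values for illustration only.

DISPLAY_NITS = 120      # Nreal peak image brightness (from this article)
TRANSMISSIVITY = 0.23   # fraction of real-world light passed to the eye

ambient_scenes = {      # assumed typical scene luminances in nits
    "dim room": 50,
    "bright office": 250,
    "overcast outdoors": 2_000,
    "sunny outdoors": 8_000,
}

for scene, ambient_nits in ambient_scenes.items():
    background = ambient_nits * TRANSMISSIVITY           # real world seen through the combiner
    contrast = (DISPLAY_NITS + background) / background  # image vs. see-through background
    print(f"{scene:18s} background ~{background:6.0f} nits, image contrast ~{contrast:4.1f}:1")
```

In a dim room the image stands out well over the darkened real world, but by overcast-outdoor levels the contrast collapses toward 1:1, while the real world itself has been cut to about a quarter of its brightness.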

Image Quality First Impression

Overall, the image quality of the Nreal is good for an AR headset. I make the “AR headset” qualification because it would not be good compared to a computer monitor, HDTV, PC, tablet, or modern smartphone. The basic birdbath with OLED microdisplays results in an overall good image.

Before I start pointing out the issues, I thought I would start with the picture without comments. The picture below is representative of what the human eye will see when wearing the Nreal. It would help if you backed up from your display until the image’s diagonal spans ~52 degrees. The picture has about 2 camera pixels for every pixel in the display, so if you blow it up on your monitor and look closely, you will see things in the picture that are not visible to the human eye. With the 17mm lens used in the full-frame pictures, the camera is sampling at ~96 pixels per degree.
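For those who want to check the sampling numbers, the arithmetic is simple (a quick sketch using only the figures quoted above):

```python
import math

# Check the sampling figures: a 1920x1080 panel over a ~52-degree diagonal
# FOV, photographed by a camera sampling at ~96 pixels per degree.

H_PIX, V_PIX = 1920, 1080
DIAG_DEG = 52

diag_pix = math.hypot(H_PIX, V_PIX)   # ~2203 pixels along the diagonal
display_ppd = diag_pix / DIAG_DEG     # ~42 pixels per degree
camera_ppd = 96                       # quoted for the 17mm full-frame shots

print(f"display: ~{display_ppd:.0f} pixels/degree")
print(f"camera samples per display pixel: ~{camera_ppd / display_ppd:.1f}")
```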

Below is the same picture where I have marked up some issues I have seen. Most of these issues are very minor and would hardly be noticed.

  • Vertical out-of-focus blur, mostly in red below any white object, and some around any white/bright parts of the image.
  • Glow from the lens – I commented on this last time; you can definitely see the lens, and the more bright content there is, the more you see it. It is not terrible, but it is visible with this design.
  • Slight “pincushion”/curve of the bottom of the image.
  • Slight blue cast/shift on the nose side (right in the picture) and a slight yellow cast on the temple side. An ordinary person can see this, but the camera makes it more obvious.
  • A slight narrowing of the image on the nose side (right in the picture) toward the top of the picture. I put in a dotted blue reference line for comparison. Once again, not a major problem.
  • Some blur and double images in the far corners.

The test pattern comprises (mostly) repeating rectangular “targets,” each with a number in a circle. The first digit gives the row and the second digit gives the column of the repeating target; the numbers help identify where on the whole image a crop was taken from.
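As a trivial restatement of that numbering scheme (the helper function is mine, not part of the test pattern):

```python
# Hypothetical helper: the first digit of a target's circled number is its
# row, the second digit is its column in the repeating test pattern.

def decode_target(label: str) -> tuple[int, int]:
    """Return (row, column) for a two-digit target label such as '23'."""
    return int(label[0]), int(label[1])

print(decode_target("23"))  # -> (2, 3): second row, third column of targets
```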

Below I have put together crops of 9 targets from the full image above so you can compare them next to each other. Among other things, this makes the color shift on the right side more apparent.

Cropped Corners and Middle/Center Rectangles

To blow things up more (a smaller crop of the same image to give a bigger thumbnail), I have cropped just the small sets of one (1), two (2), and three (3) pixel-wide lines.

An out-of-focus glow and some double imaging above the letters and lines become more visible. It joins the more obvious red glow below the letters. This glowing causes a loss in contrast/sharpness (sharpness is contrast at an edge).
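To make “sharpness is contrast at an edge” concrete, here is a small sketch with made-up intensity profiles showing how glow that spills onto the dark side of an edge lowers the measurable (Michelson) contrast:

```python
# Sketch: glow that lifts the dark side of an edge reduces edge contrast.
# The intensity profiles are invented for illustration.

def michelson_contrast(profile):
    lo, hi = min(profile), max(profile)
    return (hi - lo) / (hi + lo)

clean_edge = [0.02, 0.02, 0.02, 0.95, 0.95, 0.95]    # dark-to-bright step
glow = [0.10, 0.15, 0.20, 0.00, 0.00, 0.00]          # glow spilling onto the dark side
glowing_edge = [c + g for c, g in zip(clean_edge, glow)]

print(f"clean edge contrast: {michelson_contrast(clean_edge):.2f}")
print(f"with glow:           {michelson_contrast(glowing_edge):.2f}")
```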

Sony OLED Color Sub-Pixel Arrangement

Another thing you may notice in the close-up crops above is a slight ripple in the edges of the lines. The camera is catching the “screen door effect” of the Sony Micro-OLED. A sharp-eyed person who concentrates might discern this ripple or a slight loss of effective resolution. It is at the margin of typical vision and not a significant problem.

The Sony ECX335 1080p Micro-OLED has red, green, and blue hexagonal subpixels arranged in an overlapping triad, like the shadow mask of old CRT TVs. Somewhat ironically, Sony’s Trinitron CRT pioneered CRTs with RGB subpixels side by side. The triad organization results in closer-to-round emitting structures rather than long, skinny rectangles. The triad structure is more efficient and easier to manufacture.

Below I used a longer focal length lens to magnify the pixels another ~2.5x optically to see them better. In this image (click on it to see more detail), you can see the individual pixels. The camera is “blind” to the color subpixels due to the size of its own color sensor pattern. The camera’s Bayer filter pattern colors are overlaid on the Sony pixels to scale for both the 17mm lens (used for full-frame pictures) and the 42mm lens used for the close-ups.
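A quick back-of-the-envelope check of those magnification figures (rough estimates from the quoted numbers, not measurements):

```python
# The 42/17 focal-length ratio gives the "~2.5x" extra optical magnification.
# Combining it with the ~2.3 camera pixels per OLED pixel of the full-frame
# shots estimates the sampling for the close-ups.

FULL_FRAME_FOCAL = 17            # mm, lens used for the full-frame pictures
CLOSEUP_FOCAL = 42               # mm, lens used for the pixel close-ups
SAMPLES_PER_PIXEL_17MM = 96 / 42.4   # camera ppd / display ppd, ~2.3

extra_mag = CLOSEUP_FOCAL / FULL_FRAME_FOCAL
print(f"extra optical magnification: ~{extra_mag:.1f}x")
print(f"camera pixels per OLED pixel at 42mm: ~{extra_mag * SAMPLES_PER_PIXEL_17MM:.1f}")
```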

I have included some comments on the issues with “spatial color” displays, such as the Sony Micro-OLED, along with some alternative technologies, in the appendix.

White Background Image Compared to Hololens 2

A mostly white background makes the slight color shift from yellow on the left to blue on the right (for the left eye), the pincushion effect on the bottom, and the tilt on the right side a little more apparent.

To keep it in perspective, below is how poorly the Hololens 2 does on the same type of test pattern. The Hololens 2 has one of the worst images I have ever seen in an AR headset.

Nreal vs Hololens 2 Close up

Below, I have taken a crop of a full-frame image of both the Nreal (top) and Hololens 2 (bottom) to see the difference in resolution. Both pictures were taken with the identical camera and lens, so the magnifications are as close to equal as I could make them.

The Nreal, with a 1080p display and a 52-degree diagonal, has about 42 pixels per degree. Hololens 2 is nowhere close to delivering on Microsoft’s claimed 47 pixels per degree. As far as I know, this blog is the only place that has called Hololens out for this verifiably false specification, and I have never seen Microsoft defend it.
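As a rough, hedged check, assuming (purely for scale) a ~52-degree diagonal like the Nreal's rather than any measured Hololens 2 figure, here is what delivering 47 pixels per degree would require:

```python
import math

# Hedged sanity check on pixels-per-degree claims. The 52-degree diagonal
# is Nreal's FOV, used only for scale -- it is NOT a measured Hololens 2 spec.

def diagonal_pixels_needed(ppd: float, diag_deg: float) -> float:
    return ppd * diag_deg

nreal_ppd = math.hypot(1920, 1080) / 52      # ~42 ppd, as stated above
print(f"Nreal: ~{nreal_ppd:.0f} ppd")
print(f"47 ppd over a 52-degree diagonal needs ~{diagonal_pixels_needed(47, 52):.0f} "
      f"diagonal pixels (a 1080p panel has ~{math.hypot(1920, 1080):.0f})")
```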

Tracking Image From the OLED to the Eye

The following set of photographs shows how the image changes as it progresses through the optics. To understand the pictures, it is helpful to once again look at the block diagram explained in parts 1 and 2 and copied below.

View of OLED with Polarizer and Glass Lens

For the full-frame picture below, I removed the beam splitter (see the two pictures on the right), and the camera is aligned with the OLED with the lens and pre-polarizer.

I should note that the lens for the full-frame images was a 17mm Micro-4/3rds lens that allowed the camera’s sensor plane to get close enough to the Nreal’s optics to work. That lens cannot focus close enough (neither can the eye) without the optical power (magnification and change of focus) of the curved Mangin mirror. For the more direct view of the OLED, I used an Olympus 60mm Macro lens that could focus close enough to give sharp images. After the Nreal’s ~13mm lens, the focus and geometric shape vary radially, so I focused the lens on the center of the image and used a moderate f-number to help bring the corners more into focus.

Direct View of OLED with Polarizer and Beam splitter

The most obvious thing you will see is that the ~13mm wide-angle lens has significantly barrel-distorted the image. The image in the corners is smaller, and if you look at the full-size image in detail, you will also see multi-pixel-wide chroma distortion (color separation) toward the outer parts of the image (close-up comparison later).
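For readers who want a feel for what barrel distortion and the color separation mean numerically, here is a minimal sketch using the common single-coefficient radial model; the coefficients are invented for illustration and are not fitted to the Nreal’s lens.

```python
# Simple radial model: r_distorted = r * (1 + k1 * r^2). A negative k1 pulls
# the corners inward (barrel distortion); a slightly different k1 per color
# channel mimics the lateral chroma (color separation) seen toward the edges.
# Coefficients are illustrative only.

def radial_distort(x: float, y: float, k1: float) -> tuple[float, float]:
    r2 = x * x + y * y                      # normalized coordinates, corner ~ (1.0, 0.56)
    return x * (1 + k1 * r2), y * (1 + k1 * r2)

corner = (1.0, 0.56)                        # ~16:9 image corner in normalized units
for color, k1 in [("red", -0.112), ("green", -0.110), ("blue", -0.108)]:
    xd, yd = radial_distort(*corner, k1)
    print(f"{color:5s}: corner ({corner[0]:.2f}, {corner[1]:.2f}) -> ({xd:.3f}, {yd:.3f})")
```

The corners end up pulled toward the center, and the small per-color differences leave the red, green, and blue edges slightly misaligned, which is what shows up as multi-pixel color fringing in the outer parts of the image.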

Front View with Polarizer Removed

Next, I removed the front polarizer and the glasses’ lens cover to see the image after the beam splitter (view with and without the polarizer on the right). The light is going through the Mangin mirror, which might distort it very slightly.

The most obvious problem is that we have now picked up the reddish glow below everything white in the image. Since the effect is “linear” and not radial, the culprit is the beam splitter I dissected in part 2. The Nreal beam splitter has three layers of film on top of a glass substrate.

View of Image through Curved Mirror with Front Polarizer Removed

Tracing Through The Optics Corner and Center

Below I have cropped the corner and center from three views of the image as it progresses through the optics. The top row shows the corner starting with the OLED plus pre-polarizer and glass lens, then the view through the beam splitter and partial mirror (with the front polarizer and lens cover removed), and, on the right, the output to the eye.

Upper Left Corner and Center of Image as the Image Moves Through the Optics

Looking across the first row, you can see how the curved mirror is correcting the geometric and chroma distortion caused by the glass lens on the OLED. Looking at the second row, you can see how it starts with no glow below the white circle, the glow starts after the reflection off of the beam splitter, and then the glow gets a little worse after the second pass through the beam splitter.

Comparing the second-row crop after the reflective pass to the final image (lower right), you can also see a white glow above the circle and an overall lowering of contrast. Some of the contrast loss might be in the Mangin mirror and its coating, but the beam splitter is the more likely cause.

Conclusion

The Nreal headset has among the best image quality of the glasses-form-factor AR headsets. It seems to have cut a few corners. In particular, the beam splitter component could be better. Also, by going to two lenses, they could have kept the lens(es) from being seen. There are also brighter Micro-OLEDs from Sony and eMagin (and maybe others) available, but they will cost more.

In the end, I would expect similar performance from similar birdbath designs such as Lenovo’s AR, AM Glass, and Qualcomm’s reference design. Some might use brighter displays for a brighter image, but probably not bright enough for outdoor use, where you want thousands of nits. Like Nreal, I would expect these designs to block about 75% of the real-world light and totally block the view out much above the eye line.

At the same time, Hololens 2 is showing that you can sell an AR headset to the “enterprise market” with terrible image quality, blocking 60% of the real-world light, and with about 500 nits (peak area, as the image is so non-uniform), in a fairly bulky “cut-open helmet.” What the Hololens 2 got right are things like eye relief so you can wear glasses, reasonable comfort with straps over the head, and being wireless.

Appendix: A Little on Spatial Color Versus Alternatives

I just want to touch on the issues with using spatial color in AR here, as it could easily be a large article in its own right.

“Spatial color” (spatially separate color sub-pixels) is used in every direct-view display, from watches to computer monitors to large VR and AR headsets. But it is rarely used in smaller, glasses-like AR headsets and AR headsets requiring high brightness. It becomes very difficult to support the extremely small pixel sizes required by small AR headsets using spatial color.

The size issue with spatial color is that you need to fit three color sub-pixels in the space of a single pixel. As the headset becomes smaller, everything tends to need to shrink, including the pixels. At the same time, these very small pixels need to produce a large amount of light.

The average pixel pitch of the Sony Micro-OLED is ~8 microns, or about 64 square microns in area. The Compound Photonics field-sequential LCOS device used in Lumus’s Maximus has a ~3-micron pitch and ~9 square microns in area, or about 1/7th the size. With FSC, a single mirror is used to generate all 3 colors by quickly sequencing red, green, and blue LEDs or lasers. Texas Instruments DLP FSC microdisplays are also still being used in AR, most notably in the newly announced Snap Spectacles development kit. Thin waveguides commonly use DLP and LCOS displays.
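The pixel-area arithmetic above is simple enough to check directly:

```python
# Pixel-area check: an ~8-micron pitch gives ~64 square microns per pixel,
# while a ~3-micron pitch gives ~9 square microns, roughly 1/7th the area.

sony_pitch_um = 8.0      # Sony Micro-OLED (spatial color, 3 sub-pixels per pixel)
cp_lcos_pitch_um = 3.0   # Compound Photonics field-sequential LCOS

sony_area = sony_pitch_um ** 2
lcos_area = cp_lcos_pitch_um ** 2
print(f"Sony OLED pixel area: ~{sony_area:.0f} um^2")
print(f"FSC LCOS pixel area:  ~{lcos_area:.0f} um^2 (~1/{sony_area / lcos_area:.0f}th the size)")
```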

Micro-LEDs (inorganic) have the issue of needing to integrate different LED structures to create spatial color. Jade Bird Display uses an X-Cube with 3 separate chips, but this has both size and optical issues. Ostendo has demonstrated a stacking approach to Micro-LED called the Quantum Photonic Imager (QPI), so the various colors are stacked rather than spatial.

With laser beam scanning, red, green, and blue lasers are combined and then scanned; there are no “pixels” per se. In addition to Hololens 2, the Laser Scanning Alliance (LaSAR Alliance) was formed recently to promote and mutually support the development of laser scanning displays, with an emphasis on AR. Laser beam scanning is being used both with waveguides (Hololens 2 and Dispelix) and with holographic mirrors (for example, North Focals).

I like to make the point that different optical designs will work better or worse with different display technologies. Micro-OLEDs, for all practical purposes, don’t work with thin waveguides and tend to be used with larger, simpler optics such as the birdbath. It is not clear that MicroLEDs (inorganic) will work well (efficiently) with waveguides, although they have been demonstrated with both Vuzix and Waveoptics waveguides. Thus, most waveguides (diffractive and reflective) still use LCOS and DLP microdisplays, which can extract a lot of light from smaller, high-resolution displays.
