Ari Grobman, CEO, and Aviv Frommer, Executive VP R&D of Lumus, stopped by on their way to see several companies in the U.S. with their latest Lumus Maximus prototype (hereafter Maximus). And better yet, they said I could take high-resolution pictures through the lens.
The Lumus Maximus prototype has impressive overall image quality, including the field of view (FOV), resolution, color uniformity, and brightness (over 3,000 nits!), in a glasses-like form factor.
As a prototype, there are issues that Lumus knows about and plans to fix in the final product. The Maximus prototype only has the displays in the glasses, with no batteries, processing, cameras, or SLAM. External cables drive the glasses with video and power. It is, after all, a prototype.
The Lumus Maximus uses a Compound Photonics (CP) 2048 by 2048 pixel LCOS microdisplay. While I have heard some very good things about CP’s LCOS, I hadn’t seen it until the Maximus. While this article will concentrate on the Maximus glasses, I have to say that CP’s devices should change the perception of LCOS. It has high contrast with great color, small (3-micron) pixels for a small device and display engine, and high reflectivity for good efficiency. On top of all these capabilities, it supports a very high field sequence rate to prevent colors from breaking up with head motion.
Some may remember Lumus’s first demonstration of the “Maximus” concept back at CES 2017. Lumus first pioneered 1-D expanding waveguides in the year 2000, but 1-D expanding waveguides require a bigger optical engine. 2-D expansion, in combination with CP’s small LCOS devices, greatly reduces the size of the optical engine.
Lumus points out that the 50-degree diagonal FOV with a square aspect ratio is just the starting point for their 2-D expanders. They can scale the waveguide both up and down to support other aspect ratios and FOVs.
There is a lot of “wow factor” when you first put it on. The image is particularly large in the vertical direction for a thin waveguide-based display, with a square (1:1) aspect ratio rather than the more common HDTV-like 16:9 or the 3:2 of Hololens (more on the aspect ratio later).
You will notice that the image has some pincushion effect. This is a prototype using simpler-to-make spherical optics. The production version will have aspherical optics to correct this distortion and other optical issues. Lumus currently has software to remove the distortion digitally, but I asked to see it without the correction to better judge the optics’ capability.
Below is a picture taken directly through the Maximus optics showing the whole FOV of the Maximus.
The color and brightness uniformity of the image, while not perfect, is vastly better than any other waveguide-type optics I have seen. To understand how much better it is than the current state of the art, you need to compare it to other devices.
Below are three pictures taken with the same camera and lens so you can tell the relative FOV and resolution. On the left, the Maximus is displaying 2K by 2K. In the middle is the Hololens 2 displaying 1280×854. On the right is a 1K by 1K pattern digitally pre-scaled to 2K by 2K and displayed on the Maximus. You will need to click on the image to see the differences in detail (the combined three pictures are ~11 thousand pixels wide).
Below I used a longer focal length lens (42mm) to optically magnify the center of both the Maximus and HL2 by about 2.5X. The 1, 2, and 3 pixel-wide lines on the Maximus were further magnified another 2x compared to the Hololens 2 (still at 1x). These are the “best case” areas I could find of both the Maximus and HL2 displays. The Maximus can show single-pixel wide lines at 60 pixels per degree, where the Hololens 2 is failing at even 30 pixels per degree. The Maximus appears to have about 4X the horizontal and vertical angular resolution, at least in the center of the image.
Below is a comparison of the Maximus displaying a high-resolution image on a white background compared to a similar but much lower resolution image on the Hololens 2. These were shot with the same camera and lens and fairly show how they look in person.
Shown below is the upper right corner of the white-on-black test pattern, showing both the distortion and some double image. As stated earlier, the Maximus prototype is using simpler spherical optics, whereas the final product will use aspherical optics to correct these issues.
Also, the camera is capturing issues you can’t see with the naked eye. For example, I doubt almost anyone would notice the double images in the corners, as A) they are usually in the corner of your vision, where acuity is much lower, and B) they are hard to see even with your central vision, as they are so small.
Images on a dark background will have somewhat better contrast than those on a white/light background, as light backgrounds inject more total light into the waveguide, some of which scatters. You will see this issue with every near-eye display’s optics; Maximus does better than other waveguide-based optics. Shown below are center crops from black-on-white and white-on-black test patterns.
Getting a completely apples-to-apples comparison of efficiency numbers is next to impossible. Not only is efficiency hard to measure, but there are also dozens of variables that could affect the comparison, including the field of view and eye box size. Still, it is widely acknowledged that Lumus waveguides are much more efficient than diffractive waveguides.
Lumus, in their AR/VR/MR 2021 conference presentation, said they expect Maximus will achieve 650 nits/lumen. The closest comparable specification I could find is the WaveOptics Oden waveguide with a similar 56° FOV, rated at 50 nits/lumen, pegging Maximus at over 10x as efficient. As stated above, this may not be a totally fair comparison, but it is the best I could find and matches what I hear from others in the industry.
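The arithmetic behind the comparison is simple; a quick sketch using the nits/lumen figures from the presentations (the comparison itself is only approximate, as FOV and eye box differ between designs):

```python
# Rough nits-per-lumen comparison using the figures quoted in the text.
maximus_npl = 650   # nits per lumen, Lumus's stated expectation for Maximus
oden_npl = 50       # nits per lumen, WaveOptics Oden spec (similar ~56 deg FOV)

ratio = maximus_npl / oden_npl
print(f"Efficiency ratio: {ratio:.0f}x")   # 13x, i.e. "over 10x"

# Implied illuminator output needed to reach 3,000 nits to the eye:
lumens_needed = 3000 / maximus_npl
print(f"LED lumens for 3,000 nits: {lumens_needed:.1f}")  # ~4.6 lumens
```

The small number of lumens required is what lets Maximus hit 3,000+ nits from a single small LED PCB.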
The efficiency advantage is a major factor in headset design. Efficiency translates into battery power consumption and, more important but less obvious, heat dissipation. While casual followers of AR seem to worry about battery life, those designing AR headsets are often more worried about heat management without fans or large heat sinks. Heat tends to build up in small AR headsets.
When looking at the display engine and optics, it is important to take a holistic view and not focus on the size of any one element. You also have to consider the display’s performance, including resolution, image quality, and brightness.
I took the display engine and optics from Lumus’s website. I compared it to a teardown picture for the Hololens 2 that I wrote about in Hololens 2 Display Evaluation and adjusted them to the same scale. The Maximus’s display engine appears to be about 1/4th the volume of the Hololens 2 despite being much higher resolution and about 6 times the brightness.
Aviv Frommer’s presentation (behind SPIE paywall) at the 2021 AR/VR/MR conference shows more detail about the engine. I have included another view from the Lumus website and a picture of the waveguide from the Schott news release about their making of the Maximus waveguide.
It is fairly impressive that the Maximus can display over 3,000 nits through the waveguide with a small set of LEDs on a single PCB. They use a “light pipe integrator” rod to homogenize (uniformly mix) the red, green, and blue light from a single LED PCB. It appears that they have a birdbath-like optics structure. An “injection prism” is built into the waveguide structure to inject the light at the right angle to support TIR (Total Internal Reflection) in the waveguide.
The light pipe homogenizes light from the LEDs. Other optics (not shown) will collimate and shape the light. A polarizing beam splitter will polarize the light (there may also be a pre-polarizer) and direct it toward the field sequential color Compound Photonics 2K by 2K LCOS microdisplay. The LCOS device will selectively change the polarization of the light for each pixel to control the brightness. The “properly” polarized light will then pass through the beam splitter to a curved mirror to collimate the image. There is likely a quarter waveplate (not shown) that will cause the light reflected off the mirror to reflect off the beam splitter to the output. There are some other optical “tricks” that Lumus uses, so the above is an outline of the more obvious structures.
CP has taken field sequential LCOS to a whole new level. I have designed LCOS devices in the past, so I have some direct experience. Forget the low contrast and color breakup reputation sometimes associated with the LCOS of Hololens 1 and Google Glass (see below).
CP also has a 1920 by 1080 device with a 0.26″ diagonal in the same technology. CP could, if volume warranted, make other resolutions.
CP’s very high color field sequencing rate helps prevent the color breakup seen with, say, the Hololens 1. They support up to a 240Hz frame rate to reduce the “electron to photon” latency and also support a “GPU mode” that could further reduce this latency.
CP’s LCOS uses a very high contrast VAN-type liquid crystal (LC). Usually, VAN has slower switching speeds than the more common TN LC, but CP has figured out how to coax high speeds, and high color field rates, out of high contrast VAN LC.
CP has a very small 3-micron pixel while maintaining high reflectivity and very good contrast.
Shown on the right is a table taken from the Compound Photonics website. They also support up to a 1440Hz field sequence rate to prevent color breakup. I believe the Hololens 1’s LCOS was four to eight times slower. With the Maximus, you had to shake your head pretty quickly to see even a minor breakup. Compound Photonics supports 240Hz frame rates and direct GPU update modes to reduce the “electron to photon” latency.
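Why the field rate matters can be seen with a little arithmetic: during a head turn, the eye sweeps some angle between successive color fields, and that sweep becomes the visible color fringe. A minimal sketch, where the 100°/s head speed and the 60 PPD figure are illustrative assumptions (the ~8x-slower comparison rate stands in for older LCOS like the Hololens 1’s):

```python
# How far the image sweeps across the retina between color fields
# during a head turn, in display pixels.
head_speed_dps = 100.0   # degrees per second (assumed brisk head turn)
ppd = 60.0               # Maximus pixels per degree

for field_rate_hz in (1440.0, 180.0):   # CP's rate vs. a ~8x slower device
    sweep_deg = head_speed_dps / field_rate_hz
    print(f"{field_rate_hz:.0f} Hz fields: color offset ~ "
          f"{sweep_deg * ppd:.1f} pixels")
```

At 1440Hz the offset is only about 4 pixels, versus roughly 33 pixels at the slower rate, which is why breakup on the Maximus is hard to provoke.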
The use of CP’s LCOS with Lumus’s more efficient waveguide technology has helped the Maximus achieve a class-leading brightness with over 3,000 nits today, and they expect to have over 4,000 nits soon.
On the left is a picture I took of Aviv wearing the glasses displaying about 200 to 300 nits indoors. Notice how you can see his eyes due to the high transmissivity (~85%) of the waveguides and no external darkening lenses. The two insets of his eyes are from the same picture. You can even see the image he is looking at reflected off his cornea.
Seeing the person’s eyes is an important social aspect. People naturally want to see a person’s eyes (the eyes are the proverbial “windows into the soul”).
Below is a comparison of the Maximus to the Hololens 2 for the view of the eyes in similar lighting conditions (there was lighting from the front in both cases). Note how the eyes are easily visible with the Maximus, where you can barely see the eyes with the HL2.
The degree of transparency means that the glasses are not severely darkening the real world. The light to the eye from the world is reduced by ~15% on the Maximus, whereas the HL2 blocks about 60%, or about 4 times as much, of the real-world light. The HL2 is not the worst offender; most other headsets, particularly birdbath-based ones such as Nreal, block more than 70% of the light. Most rooms are lit assuming people are not wearing dark sunglasses, so this is a big plus for the Maximus.
Additionally, there is much less blocking of peripheral vision around the eyes, particularly upward. There is some slight blocking in the upper corner near the temple due to the display engine, but overall, it is a very open design.
I have an older Lumus 720p (DK-52), WaveOptics diffractive Waveguide, and the Nreal product sold in Korea by LG (hint – I have torn the LG unit down to see what is inside). I took the following 4 pictures back to back with the camera and a diffused white backlight at identical settings to give you an idea of the transmissivity of four sets of waveguides. Lumus is ~85%, WaveOptics specs >70%, Hololens 2 is 40%, and Nreal ~25% transmissive.
It should be noted that both the Hololens 2 and the Nreal have light-blocking in addition to the display optics. Part of the reason for additional blocking of the real-world light is to reduce the forward light projection.
Most waveguides have an issue with light projecting forward, giving the wearer a cyber appearance. The HL2 (below right) is famous for this problem. About the same amount of light projects forward as projects to the eye on the HL2, and over a fairly wide angle, producing the glowing-eyes effect. The front projection of the HL2 would appear worse without the HL2’s darkened front shield.
The Maximus has some front projection, but it is about an order of magnitude less than the HL2. Lumus says the current prototype front projects about 5% of the light, and with improved AR coatings on the LOE (the “slats” in Lumus’s waveguide), they expect the front projection to be reduced to just 1%.
When I first put on the Maximus, one thing that struck me was that light seemed to nearly fill the whole area of the waveguide/glasses. I didn’t capture this effect myself, but it turns out Lumus’s Maximus Introduction video has a camera moving in on a dismounted Maximus waveguide that shows the effect. I captured a 3-frame sequence about 23 seconds into the video (see below). From far away, the image looks “choppy,” but it comes together into the image as the camera moves to roughly where the eye would be. Note how much of the waveguide (outlined in green dots) the image fills; this is the “eye box.”
Lumus showed me a demo of the same bird (but a different frame in the bird video) that I photographed (below), showing the final image’s high contrast, detail, and saturated colors.
If you look at the Maximus glasses, you may notice that they already have fairly tall lens openings compared to typical glasses. They need to support the 35.4° vertical FOV with a big enough eye box to support eye movement, different people’s interpupillary distances (IPDs) and eye locations, and shifting glasses placement relative to the eyes.
Below is a simplified view of the FOV and eye box and of how the FOV “projects” onto a waveguide. The FOV is two angles (horizontal and vertical) that form a rectangular projection that intersects the plane of the waveguide. The eye box is measured in millimeters at the eye and determines how much the eye can move relative to the display (accounting for both eye and glasses movement). Factoring together the eye box size and the FOV (eye box + FOV) forms a larger rectangular projection. The waveguide’s light exit area has to be at least as big as the eye box + FOV projection at the location of the waveguide, or the image will be cut off when the eye moves relative to the waveguide.
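The required exit area can be estimated with simple geometry: the eye box plus the FOV’s spread over the eye-relief distance. A minimal sketch, where the 35.4° vertical FOV is from the text but the 10mm eye box and 18mm eye relief are assumptions for typical glasses:

```python
import math

def exit_aperture_mm(eyebox_mm, fov_deg, eye_relief_mm):
    """Minimum waveguide exit size along one axis: the eye box plus the
    FOV's angular spread over the eye-relief distance (simple geometric
    model, ignoring pantoscopic tilt and wrap angle)."""
    spread = 2.0 * eye_relief_mm * math.tan(math.radians(fov_deg / 2.0))
    return eyebox_mm + spread

# Illustrative numbers: ~35.4 deg vertical FOV, assumed 10 mm eye box
# and 18 mm eye relief.
print(f"{exit_aperture_mm(10.0, 35.4, 18.0):.1f} mm")  # ~21.5 mm tall
```

Even with these modest assumptions, the vertical exit area comes out larger than a typical eyeglass lens opening, which is why bigger FOVs force bigger glasses.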
The Maximus has a 2-D pupil expander that effectively takes a tiny input image and replicates/expands it to fill the whole eye box in two stages. The first stage expands the pupil horizontally, and the second stage expands it vertically. The net result is the eye box roughly drawn on the glasses (below right). I have also drawn (roughly) the projection of the FOV and Eye box on the waveguide.
By the time you allow for a practical amount of eye movements relative to the waveguide, the projection of the eye box at the waveguide pretty much fills most of Maximus’s waveguide. Thus, if you want a much bigger FOV allowing for a reasonable/practical eye box, you would require bigger glasses at the eye relief of typical glasses. This seems to be a point lost on people expecting enormous FOVs from a glasses form factor.
While not shown in the simple diagram above, the Maximus supports pantoscopic tilt (in the vertical direction) and wrap angle, and the eye box is not a simple rectangle.
All waveguides take in light focused at infinity, which causes the output to be focused at infinity as well. This issue is discussed in Bernard Kress’s excellent reference book “Optical Architectures for Augmented-, Virtual-, and Mixed-Reality Headsets” (I highly recommend getting the PDF/digital version). The Maximus prototype’s output is likewise focused at infinity.
Typically, it is preferred to have the focus at 2 meters for “general use.” I will avoid the very deep discussion of vergence-accommodation conflict (VAC). Here is a link to an article I wrote about VAC in 2016 concerning Magic Leap, but you can find hundreds of articles elsewhere on the subject. It is a very active topic in AR.
Hololens 1 and 2 both use a dual-lens arrangement (see figure above). A -1/2 diopter lens is built into the plastic shield that protects the waveguide. This first lens moves the focus from infinity to about 2 meters. A second lens, glued near the waveguide, essentially pre-corrects the view of the real world so that the real-world focus is not moved.
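The dual-lens trick is just thin-lens diopter arithmetic; a minimal sketch of the idea (the exact placement of the compensating lens in the HL2 stack is simplified here):

```python
# Thin-lens diopter arithmetic behind the dual-lens focus trick.
# A virtual image at infinity has zero vergence; a -0.5 diopter lens
# gives it -0.5 D, i.e. an apparent distance of 1/0.5 = 2 meters.
display_lens_d = -0.5
apparent_distance_m = 1.0 / abs(display_lens_d)
print(f"Virtual image moves to {apparent_distance_m:.0f} m")  # 2 m

# For real-world light, a matching +0.5 D lens on the world side of the
# waveguide cancels the shift, so real-world focus is unchanged.
world_lens_d = +0.5
print(f"Net power for real-world light: {display_lens_d + world_lens_d} D")  # 0.0 D
```

The same arithmetic applies to any fixed-focus waveguide, which is why Lumus or its customers could reuse the trick directly.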
Lumus and/or its customers could use the same dual-lens trick as Hololens to move the focus. Another option for Lumus would be to move the focus with a dynamic liquid crystal lens like Lumus’ partner Deep Optics produces. I wrote a little about Deep Optics back in 2018.
It has also been announced that Lumus is working with Luxexcel to develop 3-D printed optics that would encase a Maximus waveguide. Below is a slide from Luxexcel’s presentation at the 2021 AR/VR/MR conference (behind the SPIE paywall). With the Luxexcel technology, it would be possible to change the focus of the virtual image and make prescription corrections at the same time.
Many, including Lumus, find that a more square FOV is better for augmented reality as it is a better match to human vision. We have already seen Hololens 2 change to a 3:2 aspect ratio from the HDTV-like 16:9 aspect ratio of the Hololens 1.
As Bernard Kress, Partner Optical Architect at Microsoft Hololens, has pointed out many times, including in his SPIE paper Digital optical elements and technologies (EDO19): applications to AR/VR/MR (image from that paper below), there is a “fixed foveated region” in the center 40-50° of the FOV where the eye tends to stay and sees with the highest resolution. It is also the region where the two eyes have binocular overlap and sense depth.
Even if the final image is more square, it will be highly desirable to adjust the IPD electronically rather than have custom waveguides for each IPD or, worse yet, a bulky and unreliable mechanical adjustment found on most VR headsets. Allowing for electronic IPD adjustment essentially means having a wider image that can be cropped on either end based on the user’s IPD.
I sense that while a square image in some ways technically matches the eye better, people naturally seem to want wider images. Thus I think the final images will tend toward the 3:2 or 4:3 aspect ratio plus some additional width for IPD adjustment.
This article made several comparisons between the Maximus and the Hololens 2 (HL2). The HL2 is the best-known AR headset and uses 2-D pupil-expanding diffractive waveguides with laser beam scanning displays. The HL2 is widely regarded as the most successful (by some measures) AR headset to date.
Microsoft has spent billions of dollars on Hololens, and only part of that went into the displays. Still, it looks like a large part of it went into trying to perfect the waveguides and the laser scanning engine. Even if they wanted to keep their proprietary waveguide (which they originally got from Nokia), the whole laser scanning engine was a huge step backward in image quality (the “grass must have seemed greener”). They should have used a better LCOS device in the HL2. The HL1 used one of the worst available at the time of its design, and by the time of the HL2’s design, there were much better LCOS devices available.
The Maximus uses Lumus’s new 2-D expanding reflective waveguide with LCOS displays. So it is a good opportunity to compare and contrast the various technologies. I have a lot of experience with the HL2, with many articles about it. Additionally, I have ready access to an HL2 for making comparison pictures.
Hololens 2 is a complete product with SLAM (Simultaneous Localization And Mapping) cameras. It has a shield to protect the waveguides, built-in processing, batteries, diopter adjustment to 2 meters, and a complete computer system with wireless communication, plus a bunch more. As products go from just displays to complete products, they spiral up in size. I wrote about this in Starts with Ray-Ban®, Ends Up Like Hololens. As the Maximus technology finds its way into products, it might bear more of a resemblance to the HL2 than to Ray-Ban® glasses.
On paper, HL2 and Maximus displays have similar specs. HL2 claims (falsely – see here and here) to have 47 pixels per degree (PPD), where Maximus claims to have 60 PPD (it comes close through much of the image, as the pictures above demonstrate). HL2 has about a 52° diagonal image in a 3:2 aspect ratio, where Maximus has ~50° in a 1:1 aspect ratio. The Maximus has over 3,000 nits of brightness (and plans on more than 4,000 in the final product) compared to the HL2, with about 500 nits in parts of the image. The Maximus blocks only about 20% of the real-world light, where the HL2 blocks about 60%.
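The claimed 60 PPD is easy to sanity-check from the other stated specs; a quick back-of-envelope sketch:

```python
import math

# Pixels-per-degree check from the stated Maximus specs:
# 2048 x 2048 pixels over a ~50 deg diagonal at a 1:1 aspect ratio,
# so each axis spans ~50/sqrt(2) degrees.
pixels = 2048
axis_fov_deg = 50.0 / math.sqrt(2.0)      # ~35.4 deg per axis
ppd = pixels / axis_fov_deg
print(f"Maximus: ~{ppd:.0f} pixels per degree")  # ~58, close to the claimed 60
```

The same formula applied to the HL2’s resolution and FOV is how the inflated 47 PPD claim was caught in the earlier articles linked above.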
Just as a display, Maximus blows away Hololens 2, with big advantages including resolution, brightness, efficiency, transparency, and far less front projection.
The industry “knock” on Lumus is whether they can make their waveguides affordably. Lumus thinks their new relationship with Schott Glass is a serious manufacturing breakthrough for them. Schott, which also makes optical glass for diffractive waveguides, showed at the SPIE AR/VR/MR conference that reflective (aka Lumus) waveguides can use lower-index (and thus likely less expensive) glass technology.
As Lumus CEO Ari Grobman put it, “Schott would not be working with Lumus if they didn’t think they could make it in high volume.” My old semiconductor experience suggests to me that reflective waveguides could be at least as cost-effective as diffractive waveguides if they had a fraction of the manufacturing development money spent on the Hololens 1 and 2 diffractive waveguides.
I aligned the Olympus D5 mk.3 camera to take the best pictures possible in the confined area of the glasses. When showing the full FOV with a 17mm lens, the camera has only about 1.7 camera pixels per display pixel, which is a little less than what is necessary to fully resolve the finest detail in the Maximus. Each full-FOV image is about 3.5K by 3.5K pixels (after cropping the 4:3 aspect ratio of the camera).
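The 1.7 figure falls out of the image dimensions; a small sketch of the sampling check:

```python
# Camera sampling of the display: camera pixels per display pixel when
# shooting the full FOV (numbers from the text).
camera_px = 3500        # ~3.5K usable pixels across the cropped image
display_px = 2048       # Maximus resolution per axis

ratio = camera_px / display_px
print(f"~{ratio:.1f} camera pixels per display pixel")  # ~1.7

# The Nyquist criterion wants >= 2 samples per display pixel to fully
# resolve single-pixel detail, hence the follow-up 42mm (2.5x) lens shots.
print("Fully resolved at 17mm?", ratio >= 2.0)   # False
```

This is why the full-FOV shots slightly understate the Maximus’s fine detail, while the 42mm shots do not.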
I followed up with some zoomed-in images of the center with a second lens at 42mm focal length (about a 2.5x magnification) to see single pixels. Even with the 17mm lens images, if you view the full-size images on a large computer monitor, you will see defects you don’t see with the naked eye. The Maximus displays about 60 pixels per degree (1 arcminute per pixel), at least 1.5x more pixels per degree than most other AR headsets.
Lumus had already taken this blog’s test pattern for 1920 by 1080 pixels and replicated the sub-elements to build a 2048 by 2048 test pattern image to demonstrate the resolution and Field of View (FOV) of the Maximus.