Exclusive: Lumus Maximus 2K x 2K Per Eye, >3000 Nits, 50° FOV with Through-the-Optics Pictures

Introduction

Ari Grobman, CEO, and Aviv Frommer, Executive VP R&D of Lumus, stopped by on their way to see several companies in the U.S. with their latest Lumus Maximus prototype (hereafter Maximus). And better yet, they said I could take high-resolution pictures through the lens.

The Lumus Maximus prototype has impressive overall image quality, including the field of view (FOV), resolution, color uniformity, and brightness (over 3,000 nits!), in a glasses-like form factor.

As a prototype, there are issues that Lumus knows about and plans to fix in the final product. The Maximus prototype only has the displays in the glasses, with no battery, processing, cameras, or SLAM. External cables drive the glasses with video and power. It is, after all, a prototype.

The Lumus Maximus uses a Compound Photonics (CP) 2048 by 2048 pixel LCOS microdisplay. While I have heard some very good things about CP’s LCOS, I hadn’t seen it until the Maximus. While this article will be concentrating on the Maximus glasses, I have to say that CP’s devices should change the perception of LCOS. They have high contrast with great color, small (3-micron) pixels that enable a small device and display engine, and high reflectivity for good efficiency. On top of all these capabilities, they support a very high field sequence rate to prevent colors from breaking up with head motion.

First 2D Reflective Expanders

Some may remember Lumus’s first demonstration of the “Maximus” concept back at CES 2017. Lumus first pioneered 1-D expanding waveguides in the year 2000, but 1-D expanding waveguides require a bigger optical engine. The 2-D expansion, in combination with CP’s small LCOS devices, greatly reduces the size of the optical engine.

Lumus points out that the 50-degree diagonal FOV with a square aspect ratio is just the starting point for their 2-D expanders. They can scale the waveguide both up and down to support other aspect ratios and FOVs.

The Wow of 2K by 2K (2048 by 2048) with over 3,000 nits

There is a lot of “wow factor” when you first put it on. The image is particularly large in the vertical direction for a thin waveguide-based display, with a square (1:1) aspect ratio rather than the more common HDTV-like 16:9 or the 3:2 of Hololens (more on the aspect ratio later).

You will notice that the image has some pincushion effect. This is a prototype using simpler-to-make spherical optics. The production version will have aspherical optics to correct this distortion and other optical issues. Lumus currently has software to remove the distortion digitally, but I asked to see it without the correction to better judge the optics’ capability.

Below is a picture taken directly through the Maximus optics showing the whole FOV of the Maximus.

Lumus Maximus 2K x 2K image with 50° diagonal (~36° by ~36°) FOV (click on image)

The color and brightness uniformity of the image, while not perfect, is vastly better than any other waveguide-type optics I have seen. To understand how much better it is than the current state of the art, you need to compare it to other devices.

Below are three pictures taken with the same camera and lens so you can tell the relative FOV and resolution. On the left, the Maximus is displaying 2K by 2K. In the middle is the Hololens 2 displaying 1280×854. On the right is a 1K by 1K pattern digitally pre-scaled to 2K by 2K and displayed on the Maximus. You will need to click on the image to see the differences in detail (the combined three pictures are ~11 thousand pixels wide).


Maximus 2K by 2K, Hololens 2 (1280 by 854), and Maximus Scaling 1K by 1K with same lens (click on image)

Below I used a longer focal length lens (42mm) to optically magnify the center of both the Maximus and HL2 by about 2.5X. The 1, 2, and 3 pixel-wide lines on the Maximus were further magnified another 2x compared to the Hololens 2 (still at 1x). These are the “best case” areas I could find of both the Maximus and HL2 displays. The Maximus can show single-pixel wide lines at 60 pixels per degree, where the Hololens 2 is failing at even 30 pixels per degree. The Maximus appears to have about 4X the horizontal and vertical angular resolution, at least in the center of the image.

Below is a comparison of the Maximus displaying a high-resolution image on a white background compared to a similar but much lower resolution image on the Hololens 2. These were shot with the same camera and lens and fairly show how they look in person.

Lumus Maximus Compared to Hololens 2 with White Backgrounds (click on image to see full size image)

Spherical Prototype Optics and Distortion in the Corners

Shown below is the upper right corner of the white-on-black test pattern, showing both the distortion and some double image. As stated earlier, the Maximus prototype is using simpler spherical optics, whereas the final product will use aspherical optics to correct these issues.

Also, the camera is capturing issues you can’t see with the naked eye. For example, I doubt almost anyone would notice the double images in the corners as A) they are usually in the corner of your vision, where acuity is much lower, and B) they are hard to see even with your central vision as they are so small.

White on Black vs Black on White

Images on a dark background will have somewhat better contrast than those on a white/light background, because a light background injects more total light into the waveguide, and some of that light scatters. You will see this issue with every near-eye display’s optics; the Maximus does better than other waveguide-based optics. Shown below are center crops from black-on-white and white-on-black test patterns.

Lumus Maximus may be ~10X more efficient than Diffractive Waveguides

Getting a completely apples-to-apples comparison of efficiency numbers is next to impossible. Not only is efficiency hard to measure, but there are also dozens of variables that could affect the comparison, including the field of view and eye box size. Still, it is widely acknowledged that Lumus waveguides are much more efficient than diffractive waveguides.

Lumus, in their AR/VR/MR 2021 conference presentation, said they expect Maximus will achieve 650 nits/lumen. The closest comparable specification I could find is the WaveOptics Oden waveguide, which has a similar 56° FOV and is rated at 50 nits/lumen, pegging Maximus at over 10x as efficient. As stated above, this may not be a totally fair comparison, but it is the best I could find and matches what I hear from others in the industry.
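
To put those two numbers in perspective, here is a back-of-envelope calculation using only the two published nits/lumen figures. It ignores the eye box and FOV differences noted above, so treat it as a rough sketch rather than a rigorous comparison:

```python
# Rough efficiency comparison from the published specs above.
maximus_nits_per_lumen = 650  # Lumus AR/VR/MR 2021 figure
oden_nits_per_lumen = 50      # WaveOptics Oden spec (similar 56-degree FOV)

print(maximus_nits_per_lumen / oden_nits_per_lumen)  # 13.0x ratio

# Illumination lumens needed to deliver 3,000 nits to the eye:
print(3000 / maximus_nits_per_lumen)  # ~4.6 lumens for Maximus
print(3000 / oden_nits_per_lumen)     # 60 lumens for the diffractive waveguide
```

The second half of the calculation shows why efficiency matters so much: for the same brightness to the eye, the less efficient waveguide has to dispose of more than ten times the LED light (and heat) somewhere in the headset.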

The efficiency advantage is a major factor in headset design. Efficiency translates into battery power consumption, and more important but less obvious, heat dissipation. While casual followers of AR seem to worry about battery life, those designing AR headsets are often more worried about heat management without fans or large heat sinks. Heat tends to build up in small AR headsets.

Display Engine Size of Maximus vs. Hololens 2

When looking at the display engine and optics, it is important to take a holistic view and not focus on the size of any one element. You also have to consider the display’s performance, including resolution, image quality, and brightness.

I took the display engine and optics from Lumus’s website. I compared it to a teardown picture for the Hololens 2 that I wrote about in Hololens 2 Display Evaluation and adjusted them to the same scale. The Maximus’s display engine appears to be about 1/4th the volume of the Hololens 2 despite being much higher resolution and about 6 times the brightness.

Aviv Frommer’s presentation (behind SPIE paywall) at the 2021 AR/VR/MR conference shows more detail about the engine. I have included another view from the Lumus website and a picture of the waveguide from the Schott news release about their making of the Maximus waveguide.

It is fairly impressive that the Maximus can display over 3,000 nits through the waveguide with a small set of LEDs on a single PCB. They use a “light pipe integrator” rod to homogenize (uniformly mix) the red, green, and blue light from a single LED PCB. It appears that they have a birdbath-like optics structure. An “injection prism” is built into the waveguide structure to inject the light at the right angle to support TIR (Total Internal Reflection) in the waveguide.

The light pipe homogenizes light from the LEDs. Other optics (not shown) will collimate and shape the light. A polarizing beam splitter will polarize the light (there may also be a pre-polarizer) and direct it toward the field sequential color Compound Photonics 2K by 2K LCOS microdisplay. The LCOS device will selectively change the polarization of the light for each pixel to control the brightness. The “properly” polarized light will then pass through the beam splitter to a curved mirror to collimate the image. There is likely a quarter waveplate (not shown) that will cause the light reflected off the mirror to reflect off the beam splitter to the output. There are some other optical “tricks” that Lumus uses, so the above is an outline of the more obvious structures.

Impressive Compound Photonics (CP) 2048 by 2048 LCOS Microdisplay

CP has taken field sequential LCOS to a whole new level. I have designed LCOS devices in the past, so I have some direct experience. Forget the low contrast and color breakup reputation sometimes associated with the LCOS of Hololens 1 and Google Glass (see below).

CP also has a 1920 by 1080 device with a 0.26″ diagonal in the same technology. CP could, if volume warranted, make other resolutions.

2013 Google Glass
Hololens 1 Color Breakup Example

CP’s very high color field sequencing rate helps prevent the color breakup seen with, say, the Hololens 1. They support up to a 240Hz frame rate to reduce the “electron to photon” latency and also support a “GPU mode” that could further reduce this latency.

CP’s LCOS uses a very high contrast VAN-type liquid crystal (LC). Usually, VAN has slower switching speeds than the more common TN LC, but CP has figured out how to coax high speeds, and high color field rates, out of high contrast VAN LC.

CP has a very small 3-micron pixel while maintaining high reflectivity and very good contrast.

Shown on the right is a table taken from the Compound Photonics website. They support up to a 1440Hz field sequence rate to prevent color breakup; I believe the Hololens 1’s LCOS was four to eight times slower. With the Maximus, you had to shake your head pretty quickly to see even a minor breakup.

The use of CP’s LCOS with Lumus’s more efficient waveguide technology has helped the Maximus achieve a class-leading brightness with over 3,000 nits today, and they expect to have over 4,000 nits soon.

The View In and Out

On the left is a picture I took of Aviv wearing the glasses displaying about 200 to 300 nits indoors. Notice how you can see his eyes due to the high transmissivity (~85%) of the waveguides and no external darkening lenses. The two insets of his eyes are from the same picture. You can even see the image he is looking at reflected off his cornea.

Seeing the person’s eyes is an important social aspect. People naturally want to see a person’s eyes (the proverbial “windows into the soul”).

Below is a comparison of the Maximus to the Hololens 2 for the view of the eyes in similar lighting conditions (there was lighting from the front in both cases). Note how the eyes are easily visible with the Maximus, where you can barely see the eyes with the HL2.

The degree of transparency means that the glasses are not severely darkening the real world. The light to the eye from the world is reduced by ~15% on the Maximus, whereas the HL2 blocks about 60% of the real-world light, about 4 times as much as the Maximus. The HL2 is not the worst offender; most other headsets, particularly the birdbath-based ones such as Nreal, block more than 70% of the light. Most rooms are lit assuming people will not be wearing dark sunglasses, so this is a big plus for the Maximus.
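
Note that “blocks 4 times as much light” and “lets through 4 times as much light” are different measures. A quick calculation with the percentages above shows both framings:

```python
# Two ways to frame the transparency numbers above.
maximus_blocked = 0.15  # ~85% transmissive
hl2_blocked = 0.60      # ~40% transmissive

print(hl2_blocked / maximus_blocked)              # 4.0x as much light blocked by HL2
print((1 - maximus_blocked) / (1 - hl2_blocked))  # ~2.1x as much light reaches the eye with Maximus
```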

Additionally, there is much less blocking of peripheral vision around the eyes, particularly upward. There is some slight blocking in the upper corner near the temple due to the display engine, but overall, it is a very open design.

I have an older Lumus 720p (DK-52), a WaveOptics diffractive waveguide, and the Nreal product sold in Korea by LG (hint – I have torn the LG unit down to see what is inside). I took the following 4 pictures back to back with the camera and a diffused white backlight at identical settings to give you an idea of the transmissivity of the four sets of waveguides. Lumus is ~85%, WaveOptics specs >70%, Hololens 2 is 40%, and Nreal is ~25% transmissive.

It should be noted that both the Hololens 2 and the Nreal have light-blocking in addition to the display optics. Part of the reason for additional blocking of the real-world light is to reduce the forward light projection.

Front Projection Light

Most waveguides have an issue with light projecting forward, giving the wearer a cyber appearance. The HL2 (below right) is famous for this problem. About the same amount of light projects forward as projects to the eye on the HL2, and over a fairly wide angle, you get the glowing eyes effect. The front projection of the HL2 would appear worse without its light-darkening front shield.

The Maximus has some front projection, but it is about an order of magnitude less than the HL2. Lumus says the current prototype front projects about 5% of the light, and with improved AR coatings on the LOE (the “slats” in Lumus’s waveguide), they expect the front projection to be reduced to just 1%.

Eye Box and FOV – Physics Reality Check

When I first put on the Maximus, one thing that struck me was that light seemed to nearly fill the whole area of the waveguide/glasses. I didn’t capture this effect myself, but it turns out Lumus’s Maximus Introduction video has a camera moving in on a dismounted Maximus waveguide that shows the effect. I captured a 3-frame sequence about 23 seconds into the video (see below). From far away, the image looks “choppy,” but it comes together into the image as the camera moves to roughly where the eye would be. Note how much of the waveguide (outlined in green dots) the image fills; this is the “eye box.”

Lumus showed me a demo of the same bird (but a different frame in the bird video) that I photographed (below), showing the final image’s high contrast, detail, and saturated colors.

If you look at the Maximus glasses, you may notice that they already have fairly tall lens openings compared to typical glasses. They need to support the 35.4° vertical FOV with a big enough eye box to support eye movement, different people’s interpupillary distances (IPDs) and eye locations, and shifting glasses placement relative to the eyes.

Below is a simplified view of the FOV and eye box and of how the FOV “projects” onto a waveguide. The FOV is two angles (horizontal and vertical) that form a rectangular projection that intersects the plane of the waveguide. The eye box is measured in millimeters at the eye and determines how much the eye can move relative to the display (accounting for both eye and glasses movement). Factoring together the eye box size and the FOV (eye box + FOV) forms a larger rectangular projection. The waveguide’s light exit area has to be at least as big as the eye box + FOV projection at the location of the waveguide, or the image will be cut off when the eye moves relative to the waveguide.
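
The geometry reduces to a simple formula per axis: the required exit aperture is the eye box plus the FOV projected over the eye relief. Here is a minimal sketch; the 10mm eye box and 18mm eye relief are my illustrative assumptions, not Lumus specifications:

```python
import math

def required_exit_aperture_mm(eyebox_mm, fov_deg, eye_relief_mm):
    """Minimum waveguide exit aperture (one axis) so the image is not
    cut off as the eye moves anywhere within the eye box."""
    return eyebox_mm + 2 * eye_relief_mm * math.tan(math.radians(fov_deg / 2))

# Illustrative numbers: ~36-degree per-axis FOV, assumed 10mm eye box,
# assumed 18mm eye relief (typical of glasses).
print(required_exit_aperture_mm(10, 36, 18))  # ~21.7mm per axis
```

Even with these modest assumptions, the exit area approaches the size of a typical eyeglass lens, which is why bigger FOVs at glasses-like eye relief demand bigger glasses.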

The Maximus has a 2-D pupil expander that effectively takes a tiny input image and replicates/expands it to fill the whole eye box in two stages. The first stage expands the pupil horizontally, and the second stage expands it vertically. The net result is the eye box roughly drawn on the glasses (below right). I have also drawn (roughly) the projection of the FOV and Eye box on the waveguide.

By the time you allow for a practical amount of eye movement relative to the waveguide, the projection of the eye box at the waveguide pretty much fills most of Maximus’s waveguide. Thus, if you want a much bigger FOV while allowing for a reasonable/practical eye box, you would require bigger glasses at the eye relief of typical glasses. This seems to be a point lost on people expecting enormous FOVs from a glasses form factor.

While not shown in the simple diagram above, the Maximus supports pantoscopic tilt (in the vertical direction) and wrap angle, and the eye box is not a simple rectangle.

Lenses and Diopter Adjustment

All waveguides input light that is focused at infinity, and this causes the output to be focused at infinity. This issue is discussed in Bernard Kress’s excellent reference book “Optical Architectures for Augmented-, Virtual-, and Mixed-Reality Headsets” (I highly recommend getting the PDF/digital version). The Maximus prototype output is focused at infinity as well.

Typically, it is preferred to have the focus at 2 meters for “general use.” I will avoid the very deep discussion of vergence-accommodation-conflict (VAC). Here is a link to an article I wrote about VAC in 2016 concerning Magic Leap, but you can find hundreds of articles elsewhere on the subject. It is a very active topic in AR.

Hololens 1 and 2 both use a dual-lens arrangement (see figure above). A -1/2 diopter lens is built into the plastic shield that protects the waveguide. This first lens moves the focus from infinity to about 2 meters. A second lens, glued near the waveguide, essentially pre-corrects the view of the real world, so the real-world focus is not moved.
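
The diopter arithmetic behind that choice is straightforward (a diopter is just the reciprocal of the focal distance in meters):

```python
# Why a -1/2 diopter lens lands the virtual image at 2 meters:
# apparent distance = 1 / |diopters| for an infinity-focused input.
lens_power_diopters = -0.5
print(1 / abs(lens_power_diopters))  # 2.0 meters apparent focus distance
```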

Lumus and/or its customers could use the same dual-lens trick as Hololens to move the focus. Another option for Lumus would be to move the focus with a dynamic liquid crystal lens like Lumus’ partner Deep Optics produces. I wrote a little about Deep Optics back in 2018.

It has also been announced that Lumus is working with Luxexcel to develop 3-D printed optics that would encase a Maximus waveguide. Below is a slide from Luxexcel’s presentation at the 2021 AR/VR/MR conference (behind the SPIE paywall). With the Luxexcel technology, it would be possible to change the focus of the virtual image and do prescription corrections at the same time.

Why a Square 50° FOV?

Many, including Lumus, find that a more square FOV is better for Augmented Reality as it is a better match to human vision. We have already seen Hololens 2 change to a 3:2 aspect ratio from the HDTV-like 16 by 9 aspect ratio on the Hololens 1.

As Bernard Kress, Partner Optical Architect at Microsoft Hololens, has pointed out many times, including in his SPIE paper Digital optical elements and technologies (EDO19): applications to AR/VR/MR (image from that paper below), there is a “Fixed Foveated Region” in the center 40-50° of the FOV where the eye will tend to stay and sees with the highest resolution. It is also the region where the two eyes have binocular overlap and sense depth.

Even if the final image is more square, it will be highly desirable to adjust the IPD electronically rather than have custom waveguides for each IPD or, worse yet, the kind of bulky and unreliable mechanical adjustment found on most VR headsets. Allowing for electronic IPD adjustment essentially means having a wider image that can be cropped on either end based on the user’s IPD.
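
Conceptually, the electronic adjustment is just a per-eye crop window sliding across a wider panel. Below is a minimal sketch of the idea; all of the names and numbers (panel width, nominal IPD, pixels-per-millimeter mapping) are hypothetical illustrations, not anything Lumus has described:

```python
def crop_for_ipd(panel_w_px, image_w_px, user_ipd_mm, nominal_ipd_mm, px_per_mm):
    """Slide a per-eye crop window across a wider panel so the image
    center lines up with the user's eye (one eye; mirror for the other)."""
    shift_px = round((user_ipd_mm - nominal_ipd_mm) / 2 * px_per_mm)
    margin = (panel_w_px - image_w_px) // 2  # spare pixels on each side
    left = max(0, min(2 * margin, margin + shift_px))
    return left, left + image_w_px

# Hypothetical: 2048-wide panel, 1920-wide image, user IPD 66mm vs 63mm nominal
print(crop_for_ipd(2048, 1920, 66, 63, 10))  # -> (79, 1999)
```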

I sense that while a square image in some ways technically matches the eye better, people naturally seem to want wider images. Thus I think the final images will tend toward the 3:2 or 4:3 aspect ratio plus some additional width for IPD adjustment.

Conclusion: Microsoft spent billions of dollars on Hololens to look so much worse than Maximus.

This article made several comparisons between the Maximus and the Hololens 2 (HL2). The HL2 is the best-known AR headset and uses 2-D pupil-expanding diffractive waveguides with laser beam scanning displays. The HL2 is widely regarded as the most successful (by some measures) AR headset to date.

Microsoft has spent billions of dollars on Hololens, and only part of that went into the displays. Still, it looks like they spent billions of dollars just trying to perfect the waveguides and the laser scanning engine. Even if they wanted to keep their proprietary waveguide (which they originally got from Nokia), the whole laser scanning engine was a huge step backward in image quality (the “grass must have seemed greener”). They should have used a better LCOS device in the HL2. The HL1 used one of the worst available at the time of its design, and by the time of the HL2 design, there were much better LCOS devices available.

The Maximus uses Lumus’s new 2-D expanding reflective waveguide with LCOS displays. So it is a good opportunity to compare and contrast the various technologies. I have a lot of experience with the HL2, with many articles about it. Additionally, I have ready access to an HL2 for making comparison pictures.

Hololens 2 is a complete product with SLAM (Simultaneous Localization And Mapping) cameras. It has a shield to protect the waveguides, built-in processing, batteries, diopter adjustment to 2 meters, and a complete computer system with wireless communication, plus a bunch more. As products go from just displays to complete products, they spiral up in size. I wrote about this in Starts with Ray-Ban®, Ends Up Like Hololens. As the Maximus technology finds its way into products, it might end up bearing more resemblance to the HL2 than to Ray-Ban® glasses.

On paper, the HL2 and Maximus displays have similar specs. HL2 claims (falsely – see here and here) to have 47 pixels per degree (PPD), where Maximus claims to have 60 PPD (it comes close through much of the image, as the pictures above demonstrate). HL2 has about a 52° diagonal image in a 3:2 aspect ratio, where Maximus has ~50° in a 1:1 aspect ratio. The Maximus has over 3,000 nits of brightness (and plans for more than 4,000 in the final product) compared to the HL2, with about 500 nits in parts of the image. The Maximus blocks only about 15% of the real-world light, where the HL2 blocks about 60%.
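
You can sanity-check the PPD claims from the FOV and pixel counts alone. The sketch below computes a simple per-axis average (it ignores distortion, which varies the PPD across the image):

```python
import math

def ppd(pixels, fov_deg):
    """Average pixels per degree along one axis (ignores distortion)."""
    return pixels / fov_deg

# Maximus: 50-degree diagonal at 1:1 -> ~35.4 degrees per axis
maximus_fov = 50 / math.sqrt(2)
print(ppd(2048, maximus_fov))  # ~57.9 PPD, consistent with the ~60 PPD claim

# HL2: 52-degree diagonal at 3:2 -> ~43.3 degrees horizontal
hl2_fov_h = 52 * 3 / math.sqrt(3**2 + 2**2)
print(ppd(1280, hl2_fov_h))    # ~29.6 PPD, well short of the claimed 47
```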

Just as a display, Maximus blows away Hololens 2. Listing some of Maximus’s big advantages:

  • Over 6 times brighter
  • About 4 times more transparent
  • About 10X more display light efficient
  • Smaller optical engine and smaller waveguide
  • 10 to 16 times (H times V) the resolution of the HL2
  • Much less forward light projection
  • Vastly better color and brightness uniformity
  • Better color quality
  • Better contrast except on an almost black image (less scatter in the waveguide)
  • Likely much lower power consumption

The industry “knock” on Lumus is whether they can make their waveguides affordably. Lumus thinks their new relationship with Schott Glass is a serious breakthrough for them in terms of manufacturing. Schott, which also makes optical glass for diffractive waveguides, showed at the SPIE AR/VR/MR conference that reflective (aka Lumus) waveguides can use lower-index (which should be less expensive) glass technology.

As Lumus CEO Ari Grobman put it, “Schott would not be working with Lumus if they didn’t think they could make it in high volume.” My old semiconductor experience suggests to me that reflective waveguides could be at least as cost-effective as diffractive waveguides if they had a fraction of the manufacturing development money that was spent on the Hololens 1 and 2 diffractive waveguides.

Appendix: Some Picture Taking Details

I aligned the Olympus D5 mk.3 camera to take the best pictures possible in the confined area of the glasses. When showing the full FOV with a 17mm lens, the camera has only 1.7 camera pixels per pixel in the display, which is a little less than what is necessary to fully resolve the finest detail in the Maximus. Each full-FOV image is about 3.5K by 3.5K pixels (after cropping the 4:3 aspect ratio of the camera).
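
The 1.7 figure is just the camera pixels across the image divided by the display pixels; fully resolving single-pixel-wide lines wants roughly 2 camera pixels per display pixel (the Nyquist criterion):

```python
# Camera sampling of the display with the 17mm lens.
camera_px_across_image = 3500  # ~3.5K after cropping to the square image
display_px = 2048

samples = camera_px_across_image / display_px
print(samples)       # ~1.71 camera pixels per display pixel
print(samples >= 2)  # False: slightly under-sampled for 1-pixel-wide lines
```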

I followed up with some zoomed-in images of the center with a second lens at 42mm focal length (about a 2.5x magnification) to see single pixels. Even with the 17mm lens images, if you view the full-size images on a large computer monitor, you will see defects you don’t see with the naked eye. The Maximus displays about 60 pixels per degree (1 arcminute per pixel), at least 1.5x more pixels per degree than most other AR headsets.

Lumus had already taken this blog’s test pattern for 1920 by 1080 pixels and replicated the sub-elements to build a 2048 by 2048 test pattern image to demonstrate the resolution and Field of View (FOV) of the Maximus.


43 Comments

  1. Hello Karl,

    you compare new and still not available technology with technology developed and introduced to the market more than two years ago. I also doubt that Microsoft spent billions on the Hololens 2 display. Microsoft paid only approx. $14 million to Microvision for the development of the projection engines. You should correct that. Maybe the complete Hololens architecture, including software etc., cost much more to develop. But not the display, even with additional Microsoft development costs.

    But more important: I see only 2D pictures. Is this new headset a true AR device (so true AR) or only a smart glasses device with 2D pictures?

    Best regards,

    • As I pointed out in the article, there were much better displays when the HL2 design went down. HL1 used about the worst LCOS available at the time; there were at least 3 or 4 better LCOS devices. Yep, Microsoft only pre-paid something like $14M to Microvision (of which Microvision apparently only burned off $1.5M from the prepayment – at this rate, it will take more than 5 years before Microvision gets “new” money from Hololens). Microsoft then spent hundreds of millions trying to get the laser scanner, its optics, and the waveguide to work together. Why do you think they decided to pay a royalty and take it over themselves? The net result was one of the worst displays for its time that anyone has ever sold.

      I also have heard that while, theoretically, the lasers were good for use with diffractive waveguides, Microsoft ran into massive problems and required very critical assembly. Thus all the junk that they shipped at the beginning.

      Certainly, the Maximus has 3-D, and they had videos they could show, but you can’t take pictures of it. Besides, 3-D is no big deal in terms of displaying with a binocular headset. Maximus is not a complete product, and as I wrote, by the time someone adds SLAM, Processing, and a Battery, it will likely end up looking more like Hololens than glasses, but the image quality would blow away HL2. There are strong rumors that Microsoft may be moving back to LCOS at some point (we will see). It does help explain why Microvision has become a “Lidar Company” (another crowded field where it is not clear they have anything that great) and barely seems to talk about displays anymore.

      • Hello Karl,

        thank you very much for your explanation. What I wanted to say is that I think going from 2D to 3D is maybe not so easy, so good 2D quality does not always also mean good 3D quality.

        One correction: Microsoft paid $14 million for the development of the Hololens 2 projection engines to Microvision. In addition, on top of that, Microsoft also paid $10 million as a prepayment for components. And it pays additional revenues as well.

        I think Microvision is now focusing on automotive Lidar because AR is not – and I think it will not be for years – a mass market, so it cannot generate relevant revenues and profits as Lidar likely can. As you also noted, it will take time until the prepayment is exhausted.

        Microvision has announced the Lidar specifications. E.g., the resolution is 10 times higher (in points per second) and twice as high (in points per square degree) as the resolution of the upcoming Lidar module Iris from the market leader Luminar. I wonder why everyone questions the Microvision specs and never the competitors’. I think that is not fair. Luminar Iris is a belt-driven, mechanical-optical, very huge device. It is so big that Luminar itself suggests making the roofs of cars bulky to have space for it. The Microvision Lidar is not much bigger than a smartphone, even with the better specifications, and is solid-state, so no moving parts (except the MEMS), no belt drive, etc.

        https://www.globenewswire.com/en/news-release/2021/04/28/2218643/0/en/MicroVision-Announces-Completion-of-its-Long-Range-Lidar-Sensor-A-Sample-Hardware-and-Development-Platform.html

        https://www.luminartech.com/products/

        Best regards

      • Microvision appears to continue to be a company that is run to manipulate the stock price. They have done a great job in manipulating their stock price.

        Thanks for the correction. Still, last year they only reported $1.5 million of the $10M advance being burned off. They continue to lose about $2M/month.

        I don’t know Lidar, but I am told by someone who does that Microvision is playing games with their Lidar numbers. And that while Microvision can make a big deal about them to the uneducated, they are meaningless. Based on Microvision’s track record of lying about display specs, this would seem to be par for the course with them. Somebody needs to start a blog on Lidar that does some critical analysis.

      • Hello Karl,

        I do not think that Microvision manipulated the stock price. At least not up. In the last ten years, the stock price went down almost continuously from $20 to $0.20. Only after the turnaround and the new products did it go up, starting last year. That is because they now have a real mass-market product with Lidar. Smartphone projectors and AR are a nonexistent or very low volume market. No company makes money with AR. Only Microsoft, after the army contract. So, the share price is still below the $20 per share of ten years ago.

        If you claim it then you must claim that Microvision manipulated the stock price downwards. They allowed short selling of the stock in the past during offerings. That was not okay. But that was against the shareholders.

        I wonder why nobody questions the specifications from Luminar, Velodyne, Ouster, Blickfeld, Argo, and others. Not in the press and not in their conference calls. Luminar claims that they have the best Lidar but never released the resolution in points per second. They even hide the specifications. Same for the Velodyne H800. Argo also made a huge announcement but without any specifications.

        And I think Microvision could not lie about specifications in SEC filings.

        Even if the Microvision Lidar has only half of the announced resolution of 10.8 million points per second, at 5 million points per second it would still be better than the modules of all known competitors and five times higher than the next-generation Luminar Iris Lidar module.

        The resolution is also not implausible. Intel claims a similar resolution of 20 million points per second with a MEMS Lidar for indoor use, but only for a few meters of range, not 200 meters.

        So, I think you are not fair. You cannot rate it until the product has been tested.

        And with all the criticism – I think you must admit that of all the MEMS projector companies, including STM and Bosch, Microvision still has the best solution with the highest resolution. So, why should Microvision not also have the best MEMS-based Lidar?

        I also remember a blog post by you years ago where you claimed that sensing with MEMS is not possible because of shadows etc. But a CES video shows that the MEMS sensing of the interactive projector was working really well.

        So, I will wait to see who is right. You or the engineers at Microvision. We will see during the next days, weeks, or months. Not later. Orders must come in for the Lidar module in that time, or a buyout. If not, you are right. If they come, I am.

        Best regards,

        Chris

        Their very low stock price reflected their total lack of credibility after years of having “expectations” that were never met. By SEC standards, they don’t “lie” as they put the word “expect” in front of every statement. They just had 20+ years of ridiculous expectations. Rain or shine, Microvision has lost money. They close a “big” new deal (à la Sony, Celluon, and others), sell a bunch of stock, and then lose more money (rinse and repeat for 20+ years).

        With Lidar, it looks like they are able to take on a fresh set of “suckers” with money.

        Almost no one checks the specs from anyone. There are no marketing police. Microsoft has lied about the resolution of the HL2, and it can be easily verified that they lied, and I am the only one I know of that has called them out on it. Most just repeat whatever the company says as fact.

        From what I understand, Microvision just made up a spec, and it does not really make sense in the context of Lidar. Let’s see someone who knows about Lidar independently test the Lidars.

        Yeah, how did that touch screen projector work out? Where are all the units going into restaurants or whatever? Where is the product? Also, the shadow thing is real. You have to touch “in the right way.” It will “work” for, say, a piano, but not a QWERTY keyboard.

        I shouldn’t complain; I have made a lot of money on Microvision stock (and have never shorted it or any other stock, but have sold most of my shares). I just rode the management’s “expectations.”

  2. Just noting that in your January 15, 2021 article – KGOnTech Video Presentations On Augmented Reality – your chart estimated reflective waveguides such as Lumus at “2 to 3X better than diffractive waveguide.” In this article, you estimate “Lumus Maximus may be ~10X more efficient than Diffractive Waveguides.”

    Could you clarify if the >3,000 CP LCOS nits is input or output? If output, what do you estimate is the input?

    Also, the contrast ratio from the CP LCOS chart states 2100:1 (depending on optical system), so I’m assuming this means output. For comparison, Microsoft Advanced Optics GM Zulfi Alam stated the HL2 has a “fundamental” 2500:1 output at about 5:30 in this video –

    • Good catch. It is tough to get good comparison numbers on efficiency. This was the first time I was able to get the data for waveguides with a similar FOV. Lumus is also saying that they have come up with some new tricks that greatly improve efficiency.

      The nits are measured at the output of the waveguide to the eye and not from the LCOS. Nits are somewhat meaningless off the LCOS as you would have to characterize the optics as well. Thus you see the nits (out) versus the lumens (in) spec, as in nits/lumen.

      I looked at the video you pointed to. They were comparing the HL1, with a contrast of at best a few hundred to one, to a laser scanner, which in theory is nearly infinite. Plus, it was a dumb simulated image and nothing actual. From what I could see, the CP contrast was more than adequate as I could see no picture frame. What Microsoft does not talk about in their video is “system contrast” or “ANSI contrast,” which applies when you have a mix of image content and black/clear areas. Hololens 1 and 2 have a lot of scattered light in their waveguides that kills the real contrast. If you are only putting up black, HL2 will win, but start putting up anything on the screen and the contrast drops dramatically with the HL2. Open up the side-by-side White on Black image (link). Look at the center rectangle with the 23 in the circle and then look at the “1024 x 1024” within rectangle #34 and blow them up 100%. These two pictures were exposed as closely as I could get them (it is hard with the HL2 because the uniformity is so poor). Notice how much darker it is around the circle and the number within the circle — the Maximus has more than 10x the “system contrast” of the HL2. Almost all of this is down to the Lumus optics, but the Compound Photonics is not limiting them.

  3. Thank you for the review Karl!
    The images indeed look great

    Two questions:
    1. What would you say is the weight of the glasses?
    2. Were you able to capture images with the external scene in the background (the same way you did in the past with ML and HL)?

    • Thanks,

      1. I didn’t weigh them as it was only a prototype without a battery or processor and cables coming out of the back. As I wrote in the article, I suspect that anyone doing anything close to a 2K by 2K display with a 50-degree FOV with things like SLAM is going to end up looking more like Hololens than Ray-Bans.
      2. In the case of HL1, HL2, and Magic Leap, I had the units in my possession where I could do setups over many days. In the case of Maximus, I had the unit for a few hours. It takes a lot of time to get the best shots possible. Afterward, I was thinking, “I should have shot this or that,” but you might be surprised how fast the time flies. I really wanted to nail the pictures of the display itself, so there were several loops of “shoot, download to the computer, and adjust.” I am familiar with the older Lumus 720p single-expansion waveguides. They are much better in terms of the view of the real world through the lens and capture less off-angle light than diffractive waveguides, but they can capture some if it comes at the louvers from a specific angle. As for how the images combine with the real world, that is a pretty simple matter of the ratio of the light of the display to the light in the real world (less any blocking of it by the lenses).

  4. man that thing is awesome …it’s already pretty excellent for a consumer device compared to everything else …but if it was 50×50 fov it would be pure gold

    • Please understand that it is a prototype without any SLAM, cameras, processors, battery, etc. By the time you load it up with those things, it will start looking more like a small version of Hololens, maybe more like the Nreal “Enterprise Edition” (for a picture, see: https://venturebeat.com/2021/02/22/nreal-unveils-enterprise-edition-of-mixed-reality-glasses/) but with a much thinner front section.

      If you want a 50 x 50 FOV, you are going to need VERY tall glasses, more of a ski mask size, in the vertical direction to support the FOV with some eye box. I would expect the FOV to go wider but not a lot taller. Lumus says that, with design options, they can trade some eye box for more FOV. So maybe they can get to 40 degrees vertically or a bit more. They have more room to grow wider. But once again, you come up against the size of glasses eventually. A lot of this comes down to what you are trying to do.

      • the nreal enterprise edition is a bit weird …it’s still wired to a processing unit right …so they don’t put any of the really bulky warm component in the glasses it just has a different more secure head mounting which could be way simpler by connecting the temples of the consumer version with something ….I’m fine with the wire so a slimmer nreal light consumer version with better display sounds awesome

  5. Karl,
    Will you attend John Fan’s third and final webinar on future of AR VR devices at noon today?

  6. While in the future there might be better displays, the US army is now using Microvision projectors in the Microsoft Hololens 2 IVAS AR devices as the Microvision CEO confirmed today in the annual shareholder meeting:

    “Another competitive advantage comes from being able to scale our cost-effective, solid-state beam steering system for automotive use. As I mentioned in April, we launched our fifth-generation MEMS to a 200-millimeter wafer size with our MEMS fab partner. By using our proven technologies and components, rather than those that are exotic and require significant investments from our competitors. All this is built upon the high reliability of our technology that has allowed our April 2017 partner to address consumer, commercial and military markets with our technology.”

    https://microvision.gcs-web.com/static-files/e148b034-388a-487e-92ea-7dbafbbaefee

  7. Very informative and thoughtful article, as always.

    I am not sure if this may be considered an academic question, but I often find the use of the terms waveguide and lightguide confusing. Intuitively, I would say lightguide = reflective (e.g., Lumus mirrors) and waveguide = diffractive (e.g., HL or WaveOptics gratings). You clearly use the terms reflective/diffractive waveguide, which I find a very good (because unequivocal) approach, but I still have the impression that the optical concept should determine the terminology.

    In your opinion, is there an agreed understanding of this (by the experts) or does it remain a bit ambiguous without the additional clarification?

    • Technically, neither Lumus’s nor the diffractive “waveguides” are truly waveguides. I guess they both are more like light guides using TIR. In the common vernacular, “waveguide” is used for a thin structure where there are “many” TIR bounces. It is the TIR part of the guide that causes them to be called “guides,” not the structures that make the light enter and exit. While the physics is different from a classical waveguide, which is why it is not a true waveguide, it serves the same purpose of (nearly) lossless propagation. When the structure is thicker and there are only a few TIR bounces, they tend to be called free-form optics or some other name.

      While not technically correct, the term “waveguide” was firmly entrenched long before I started covering the subject. It serves the purpose of describing a thin structure with a very high number of TIR reflections. We need a word for structures with their characteristics. This does not bother me so much as calling 3-D binocular stereo a “hologram,” where the purpose is to make it sound grander. Also, using the term “holograms” for binocular stereo confuses it with real holograms. The term was used by Microsoft to deliberately confuse.

  8. Kopin Webinar part 3 spoke a lot about optics, but more specifically pancake optics. In addition, he spoke of FB Oculus often too. If you google FB and pancake optics, you’ll find that’s what FB is using. But let’s not lose sight that Kopin has the trademark on pancake optics.

    They must have some agreement to come?!

  9. Could wireless streaming of content retain some of its sunglasses look?
    It would only need a few additional cameras and the rest could be computed remotely…
    Is this something those companies think a lot about?

  10. Hi Karl, would you be able to recommend a “development kit” by any vendor which is available to individual developers?
    I’ve tried to get any Lumus DK but they seem to be focused only on OEM partners/developers.
    Would love to get a recent/up-to-date device for a side project I’m working on.

    • Smaller companies tend to be more restrictive with their SDKs. In spite of the SDKs being expensive to purchase, the company’s support burden for them usually exceeds the price that is paid. Thus the tendency to only support potentially large companies. Companies planning on selling their products in higher volumes are more likely to have a more open-to-anyone support policy to get developers involved.

      I think you can still get the Nreal SDK and of course, there is Hololens 2. I don’t know how good the support network is for Nreal. Hololens has a large support network.

  11. […] I want to note that while the forward projection is very good for a diffractive waveguide, Lumus expects to have only 1% forward projection with their Maximus reflective-base waveguide as discussed in this blog’s Exclusive Article on the Maximus. […]
