In my last article, I discussed my thoughts on Tilt-5, which I saw for the first time at AWE. For this article, I’m grouping ST Microelectronics (ST-Micro), Oqmented, and Dispelix, as they are all involved in laser beam scanning (LBS) displays for AR.
I received comments on LinkedIn that last week’s article, AWE 2021 Part 1 – Tilt-5 Was Magical, was strangely positive. I analyze technology for how it works for its intended purpose and not on an absolute scale; the question for me is whether it will work for its intended application. Still, Tilt-5 has qualities that so many, including myself, find magical. Based on several other articles and podcasts I have seen/heard since (the Voices of VR interview at AWE with Jeri Ellsworth is particularly good), Tilt-5 was a hit at AWE. As I tried to point out in the article, Tilt-5 is not great for general-purpose AR, if there is such a thing, but it is magical in the context of the applications they are targeting.
The other technology getting a lot of attention at AWE was Lynx AR, which I will cover in a future article. My current plan is to release articles on AWE every few days, and I also plan on finishing an article I started a while ago on Avegant’s new small LCOS optical engine.
[Spoiler] For those who think I am overly critical, this article will be back to your regular program. Specifically, Oqmented’s Lissajous laser scanning seems to me to be a poor way to produce an image, but it might be good for 3-D space recognition (LiDAR and other methods).
ST Micro leads the newly formed Laser Scanning Alliance (LaSAR), of which Dispelix is a member (but Oqmented is not). LaSAR appears to be a loose alliance of companies promoting and cooperating on laser scanning for augmented reality glasses. LaSAR currently has three very large companies, ST Micro (MEMS mirror manufacturer), Applied Materials (waveguide manufacturing), and OSRAM (laser manufacturer), and two small companies, Dispelix (waveguide design) and Mega1 (laser scanning optics).
Based on my 44 years in the industry, I have not seen a lot of success with loose alliances, and the few that have succeeded were often competitors trying to set standards. I should also note that Oqmented, which makes a laser scanning device, is not listed as a member of LaSAR (at least it was not at the time of AWE 2021), although they have a manufacturing and marketing agreement with ST Micro.
I will “pick on” Oqmented first because they are the most interesting and different from most LBS displays. They were very nice to me at their booth, so this is nothing personal, but I don’t believe that Oqmented’s technology will ever be good for making AR displays.
Most laser beam scanning (LBS) displays to date have used a raster-scanning-like approach. The horizontal scan is typically a much faster sinusoidal scan in the kilohertz range (typically 5kHz to 54kHz depending on the resolution), paired with a slower, somewhat linear vertical scan with a driven fast return. I discussed the Microvision scanning process as far back as 2012 (in Cynic’s Guide to CES — Measuring Resolution) and its many problems, as well as the Hololens 2’s more complex variation of it, in Hololens 2 Display Evaluation (Part 1: LBS Visual Sausage Being Made).
Oqmented uses a Lissajous scanning process where both the horizontal and vertical scanning directions are driven sinusoidally. A Lissajous pattern is what you get when you drive the X and Y of an oscilloscope, or a light shining on a biaxial mirror, with two sinusoids. A short explanation of Lissajous patterns with an interactive simulation is given on a web page by Data Genetics (left).
The advantage of Lissajous LBS is that the vertical scanning can be more than an order of magnitude faster, on the order of kilohertz, versus the typical raster-type LBS vertical scan on the order of 60 to 120 Hz.
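To make the two-sinusoid drive concrete, here is a minimal sketch of the beam position for a biaxial mirror. The frequencies and phase are illustrative values I picked, not any vendor’s real drive parameters:

```python
import math

def lissajous_point(t, fx=31_000.0, fy=29_000.0, phase=math.pi / 2):
    """Beam position at time t for a mirror driven sinusoidally on both axes.

    fx, fy: horizontal/vertical scan frequencies in Hz (illustrative only;
    real mirrors are resonant at frequencies set by their physical design).
    Returns (x, y), each in the range [-1, 1].
    """
    x = math.sin(2 * math.pi * fx * t)
    y = math.sin(2 * math.pi * fy * t + phase)
    return x, y

# Trace a short segment of the scan path.
points = [lissajous_point(n * 1e-7) for n in range(1000)]
```

Plotting `points` X-versus-Y reproduces the familiar crisscrossing Lissajous figure; the frequency ratio and phase determine how the lobes interleave.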
In the case of Oqmented, they try to crank up the horizontal and vertical scanning frequencies in a phase relationship so the scan covers the whole image. Below are pictures of Oqmented’s front projector taken at 1/500th of a second and then at 1/60th of a second to capture the whole image.
This is a horrible way to generate a high-resolution display image (but it might be a good way to scan for something like LiDAR – more on this use later). As you can see, even in the 1/60th of a second picture above, you can still see the effects of the Lissajous scanning. Worse yet, when you look at the projection live, your eyes, with their saccadic motion, will capture distracting flashes of individual scans.
The problem with using Lissajous patterns to create a display is that the pseudo-random scanning cannot produce a uniform image. Imagine painting a room with a small paintbrush using random strokes in different directions. Some parts of the image are scanned multiple times, making them brighter, while other parts are rarely, if ever, scanned, making them darker. No amount of compensation/correction will correct for these problems.
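The brightness non-uniformity shows up even in a toy simulation: bin the scan positions into a coarse grid over one frame time and count how often each cell is visited. The frequencies here are illustrative, not Oqmented’s:

```python
import math
from collections import Counter

def dwell_histogram(fx=3_100.0, fy=2_900.0, steps=200_000, grid=16, frame=1 / 60):
    """Count visits to each cell of a grid by a Lissajous scan in one frame.

    Illustrative frequencies only. The sinusoids dwell longest near the
    edges of their travel, so edge cells get scanned far more than others.
    """
    hits = Counter()
    for n in range(steps):
        t = frame * n / steps
        x = math.sin(2 * math.pi * fx * t)
        y = math.sin(2 * math.pi * fy * t + math.pi / 2)
        ix = min(int((x + 1) / 2 * grid), grid - 1)
        iy = min(int((y + 1) / 2 * grid), grid - 1)
        hits[(ix, iy)] += 1
    return hits

h = dwell_histogram()
print(min(h.values()), max(h.values()))  # uneven counts = uneven brightness
```

The spread between the least- and most-visited cells is the paintbrush problem in numbers: equal laser power per visit would make some regions much brighter than others.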
Beyond the fundamental problem of uniformity, the mirrors simply can’t scan fast enough, by over an order of magnitude, to fill in all the “holes.” The mirror scanning frequencies are resonant frequencies set by the physical design of the mirror. To make a mirror go faster, they have to make it much smaller, but then it becomes too small for the laser beam diameter and extremely difficult to manufacture.
Below is another picture of a whole scene taken at 1/60th of a second. It was blurry in the corners (I missed asking why). The projector was driven by hardware hidden under the projector (right).
This drive board for the front projector was capable of a wide range of color shades. Noticeably, the small boards in the AR glasses demo appeared to support much less color variation, and the demo image was mostly on-off with few shades in between.
The process of converting a normal image into a Lissajous scan is complicated. First, there is the issue that the original rectilinear image must be mapped onto the constantly varying Lissajous scan. The scan moves at a variable velocity based on two different sine wave functions, and the variation in speed has to be compensated for with the drive control. All of this rescaling and compensating takes away from the resolution and color fidelity. I suspect this is why the smaller AR glasses demo has a much smaller board that only supports low-resolution figures with a few different colors.
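A hypothetical sketch of the kind of mapping involved (my illustration, not Oqmented’s actual pipeline): for each instant of the scan, look up the nearest rectilinear source pixel, and compute the beam speed from the derivatives of the two sinusoids, which the drive would have to compensate for:

```python
import math

def sample_image(img, w, h, t, fx=21_000.0, fy=19_000.0):
    """Return (source pixel value, beam speed) for scan time t.

    img: row-major list of w*h pixel intensities. Illustrative frequencies
    only. A real drive would scale laser power by the beam speed so that
    fast-moving (briefly lit) spots are not dimmer than slow-moving ones.
    """
    x = math.sin(2 * math.pi * fx * t)
    y = math.sin(2 * math.pi * fy * t + math.pi / 2)
    # Nearest rectilinear pixel for this scan position.
    px = min(int((x + 1) / 2 * w), w - 1)
    py = min(int((y + 1) / 2 * h), h - 1)
    # Beam speed from the derivatives of the two sinusoids.
    vx = 2 * math.pi * fx * math.cos(2 * math.pi * fx * t)
    vy = 2 * math.pi * fy * math.cos(2 * math.pi * fy * t + math.pi / 2)
    speed = math.hypot(vx, vy)
    return img[py * w + px], speed

value, speed = sample_image([0.5] * 64, 8, 8, 0.0)
```

Note that both the resampling (nearest-pixel lookup) and the speed compensation quantize and rescale the original image, which is where resolution and color fidelity get lost.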
The picture set (right) encapsulates the monocular AR glasses demo, using a Dispelix waveguide, shown in the Oqmented booth. On the right, you can see a top view of the glasses, views from both sides, and a view through the glasses with the camera exposure set for the real world’s light.
You may notice that the projector in the right temple of the glasses is rather large, but it is not yet a serious concern. This is only a prototype and Oqmented has a path to reduce the volume of the projector. What concerns me are more fundamental issues with the image quality. What is the point of making it smaller if the image quality is still less than that of a smartwatch?
Below is a typical demo image through the same glasses with the camera exposure set based on the virtual image. It is low resolution, about what I would expect from a 320 by 240 pixel display, and has very few colors. The colors that are there are not well controlled. Much of the varying color issue is likely due to the Dispelix diffractive waveguide; some of the color control issues are likely due to control issues with the lasers.
The shutter speed was only 1/30th of a second, but you can still see the crosshatch pattern from the Lissajous scanning. Looking carefully at the picture, you can see a dotted texture in the image (see the enlarged inset in the upper left below). I did not get a reasonable explanation of this dot pattern. The Oqmented people said it might be the waveguide, but it looks like blank spaces in the laser driving process (essentially “pixels” in the Lissajous scan process).
Below is a photo gallery with two more typical images captured through the lens by my camera (click to see larger images), one taken at 1/60th of a second and the other at 1/30th of a second. As with the previous image, they lack color depth and show crosshatching from the Lissajous scanning.
The other AR LBS prototype headset at AWE 2021 was by Dispelix in the LaSAR Alliance booth section. While Oqmented also used Dispelix waveguides, Dispelix’s own demo used an ST-Microelectronics LBS engine with more conventional raster-type scanning. The resolution was better than Oqmented’s glasses demo, and it did not have the diagonal lines through the images. Still, the image quality and resolution are less than those of a typical smartwatch.
Below are pictures of the glasses being worn (note the forward projection from the waveguide) and a picture I took through the lens, with an enlargement. The glasses are sleeker than the Oqmented prototype, but I think that is more due to the state of prototype development than something permanent. Both glasses use Dispelix waveguides.
Below are some stills from a video by ST Micro that they showed in a presentation at SPIE AR/VR/MR 2021. These video clips convey the level of information that is supported. I see this as a very contrived use case that very few consumers would accept.
The Oqmented and Dispelix prototypes use the Dispelix waveguide to perform pupil replication to generate a large eye box (like Hololens 2). This is in contrast to North’s Focals, Intel’s Vaunt, and Bosch’s laser AR glasses (see: North’s Focals Laser Beam Scanning AR Glasses – “Color Intel Vaunt”), which scan the lasers directly into the eye after bouncing them off a holographic mirror in the glasses.
Some optics are required to couple the laser scanning into a waveguide, making the display engine optics bigger than in direct laser scanning designs. The waveguide designs are also no longer “Maxwellian” or focus-free; they will focus at infinity unless other optics are added on both sides of the waveguide (see: Single Waveguide with Front and Back “Lens Assemblies”).
Direct laser scanning has serious practical issues, the most obvious being a tiny eye box, literally the size of the user’s pupil. If the user’s eye or glasses move by less than their pupil’s size, the image misses the eye. The near-zero eye box means that the glasses must be precisely aligned (North Focals required custom glasses for each user), and even then, the image is hard to see. Eyeway Vision plans to direct-scan into the eye using a secondary mirror to steer the image into the eye, but this is a very complex design problem (see: EyeWay Vision Part 1: Foveated Laser Scanning Display).
The Oqmented and Dispelix glasses show much smaller engines than the Hololens 2 (see: Hololens 2 Display Evaluation (Part 4: LBS Optics)). But then again, they are much lower resolution with much smaller fields of view (FOV). They are also monocular prototypes with just the displays and optics; power/battery, processing, and image generation come via a cable.
Dispelix, best known for its diffractive waveguides used with LCOS or DLP, is particularly interesting in light of Snap buying out WaveOptics. In 2018, Apple bought holographic waveguide maker Akonia. Hololens 1 and 2 waveguides were derived from technology Microsoft bought from Nokia. Vuzix started with a Nokia technology license, but they now claim their own diffractive waveguide technology.
Dispelix seems to be very similar to WaveOptics in terms of technology and capability. Except for Dispelix and Digilens, most of the current diffractive waveguide products use internal/acquired designs (e.g., Hololens and Vuzix). Lumus is another independent waveguide company; it uses reflective rather than diffractive waveguide technology, which has better image quality and is much more efficient and brighter (see Lumus Maximus 2K x 2K Per Eye, >3000 Nits, 50° FOV with Through-the-Optics Pictures).
It is not a captive market, as several Chinese companies are now making diffractive waveguides, and there are even some “clones” of Lumus’s reflective-type waveguides.
While I think Oqmented’s Lissajous scanning is a terrible way to make a display, it could be a better way to support various forms of 3-D sensing. The most obvious use of scanning lasers is LiDAR, which correlates the detected reflection with the X and Y location of the scan, and uses the time-of-flight for the Z-distance (light travels about 1 foot (0.3m) per nanosecond). Both Oqmented and ST Micro are well aware of the use of laser scanning for 3-D sensing. ST Micro had a LiDAR demo in the LaSAR booth (right).
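The time-of-flight arithmetic itself is simple; a quick sketch (the factor of two accounts for the round trip of the pulse):

```python
C = 299_792_458.0  # speed of light in m/s (~0.3 m, about 1 foot, per nanosecond)

def tof_distance_m(round_trip_ns):
    """Z-distance in meters from a round-trip time-of-flight in nanoseconds."""
    return C * (round_trip_ns * 1e-9) / 2

# A return pulse arriving 20 ns after emission puts the target ~3 m away.
distance = tof_distance_m(20.0)
```

The hard parts of LiDAR are elsewhere: detecting a weak return in ambient light and knowing precisely where the mirror was pointing at the moment of each pulse.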
Microvision pivoted to retarget their raster-like LBS technology for use in LiDAR. The raster-like process developed for display applications has weaknesses compared to Lissajous scanning when used for LiDAR. (The figures below were taken from Microvision patent application 2019/0331774 and a 2018 Microvision presentation.)
While the pseudo-randomness of the scan is a liability in generating an image for humans to view, it is an asset when it comes to LiDAR and similar methods of 3-D scanning the real world. The virtues of Lissajous scanning for 3-D space recognition (LiDAR and non-time-of-flight methods) were explained to me by a private company I met with in the Valley (name withheld at their request) working in 3-D sensing.
The Lissajous scanning process quickly gets at least a few data points from the whole area with just a few scans. Thus, in an extremely short period, it has covered the whole area of the scan at low resolution. This can be a major advantage in detecting moving objects sooner. It can also help resolve “temporal aliasing” that can plague raster-like processes, caused by objects moving in one of the scanning directions or by rotating objects (the “wagon wheel” effect).
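A toy comparison illustrates the early-coverage point: count what fraction of a coarse grid each scan type has touched in its first millisecond. The frequencies are illustrative values of mine, not real hardware parameters:

```python
import math

def covered_cells(xy_fn, duration, steps=20_000, grid=8):
    """Fraction of grid cells visited by a scan path within `duration` seconds."""
    cells = set()
    for n in range(steps):
        t = duration * n / steps
        x, y = xy_fn(t)
        cells.add((min(int((x + 1) / 2 * grid), grid - 1),
                   min(int((y + 1) / 2 * grid), grid - 1)))
    return len(cells) / (grid * grid)

def lissajous(t):  # both axes sinusoidal, in the kilohertz range
    return (math.sin(2 * math.pi * 21_000 * t),
            math.sin(2 * math.pi * 19_000 * t + math.pi / 2))

def raster(t):  # fast sinusoidal horizontal, slow 60 Hz linear vertical
    return (math.sin(2 * math.pi * 21_000 * t),
            2 * ((60 * t) % 1.0) - 1)

liss_frac = covered_cells(lissajous, 1e-3)
rast_frac = covered_cells(raster, 1e-3)
```

In the first millisecond, the raster scan is still confined to a band at the top of the field, while the Lissajous scan has sparsely touched most of it, which is exactly the early-warning property that matters for detecting moving objects.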
Another significant advantage (as pointed out by the same unnamed company) is that the pseudo-randomness of Lissajous scanning is not blind to fine straight objects that might “hide” between the scan lines of raster-like scanning.
I understand that some may see my views on these laser scanning AR displays as harsh, but while some of the companies may be young, the technology behind them is the result of over 25 years and billions of dollars of development. Their images look terrible for this day and age, and there is no reason to believe that they will improve dramatically with several more years of work.
There is still a lot of work to turn them from very rough prototypes into something resembling a product. I am giving them a pass on being able to reduce the size of the engines and drive boards. But we are still talking about pitifully low-resolution displays.
There are fundamental physics problems with moving the mirror(s) to steer the laser beam that many, if not most, greatly under-appreciate. No less than Microsoft, with the Hololens 2, spent hundreds of millions of dollars and years trying to perfect Microvision’s laser scanning. They ended up with a terrible, relatively low-resolution image (see my articles: Hololens 2 Display Evaluation (Part 1: LBS Visual Sausage Being Made) and Hololens 2 Display Evaluation (Part 2: Comparison to Hololens 1)).
What is the market for LBS displays in augmented reality when the display resolution and overall image quality are much less than those of a smartwatch? Displays that won’t work in many lighting conditions and require you to wear special glasses with, at best, expensive prescriptions if you need vision correction. The market narrows to people who need very little information and don’t want to look down at their wrist. These devices need to get past being a novelty and find a real purpose.
I also understand the arguments for an “all-day wearable” and have written about the field of view obsession. But no one is clamoring for expensive all-day-wearable glasses that have resolution and image quality worse than a smartwatch. I would like to see these companies focus their energy on areas where they might be more productive in the long run. Think of it as “tough love.”