AWE 2021 Part 2: Laser Scanning – Oqmented, Dispelix, and ST Micro

Introduction

In my last article, I discussed my thoughts on Tilt-5, which I saw for the first time at AWE. For this article, I’m grouping ST Microelectronics (ST Micro), Oqmented, and Dispelix, as they are all involved in laser beam scanning (LBS) displays for AR.

I received comments on LinkedIn that last week’s article, AWE 2021 Part 1 – Tilt-5 Was Magical, was strangely positive. I analyze technology for how it works for its intended purpose and not on an absolute scale. The question for me is whether it will work for its intended application. Still, there is also something about Tilt-5 that has qualities that so many, including myself, find magical. Judging by several other articles and podcasts I have seen/heard since (the Voices of VR interview at AWE with Jeri Ellsworth is particularly good), Tilt-5 was a hit at AWE. As I tried to point out in the article, Tilt-5 is not great for general-purpose AR, if there is such a thing, but it is magical in the context of the applications they are targeting.

The other technology getting a lot of attention at AWE was Lynx AR, which I will cover in a future article. My current plan is to release AWE articles every few days, and I also plan on finishing an article I started a while ago on Avegant’s new small LCOS optical engine.

[Spoiler] For those who think I am overly critical, this article will be back to your regular programming. Specifically, Oqmented’s Lissajous laser scanning seems to me to be a poor way to produce an image but might be good for 3-D space recognition (LiDAR and other methods).

LaSAR Laser Scanning Alliance

From ST Micro’s SPIE AR/VR/MR 2021 Presentation (condensed)

ST Micro leads the newly formed Laser Scanning Alliance (LaSAR), of which Dispelix is a member (but Oqmented is not). LaSAR appears to be a loose alliance of companies promoting and cooperating on laser scanning for augmented reality glasses. LaSAR currently has three very large companies, ST Micro (MEMS mirror manufacturer), Applied Materials (waveguide manufacturing), and OSRAM (laser manufacturer), and two small companies, Dispelix (waveguide design) and Mega1 (laser scanning optics).

Based on my 44 years in the industry, I have not seen a lot of success with loose alliances, and the few that have succeeded were often competitors trying to set standards. I should also note that Oqmented, which makes a laser scanning device, is not listed as a member of LaSAR (at least it was not at the time of AWE 2021), although they have a manufacturing and marketing agreement with ST Micro.

Oqmented

I will “pick on” Oqmented first because they are the most interesting and the most different from other LBS displays. They were very nice to me at their booth, so this is nothing personal, but I don’t believe that Oqmented’s technology will ever be good for making AR displays.

Lissajous Scanning

Most laser beam scanning (LBS) displays to date have used a raster-scanning-like approach. The horizontal scanning is typically a much faster sinusoidal scan in the kilohertz range (typically 5kHz to 54kHz, depending on the resolution), combined with a slower, somewhat linear vertical scan with a fast return. I discussed the Microvision scanning process as far back as 2012 (in Cynic’s Guide to CES – Measuring Resolution), along with its many problems, and the Hololens 2’s more complex variation of it in Hololens 2 Display Evaluation (Part 1: LBS Visual Sausage Being Made).

Lissajous patterns from the Data Genetics web page

Oqmented uses a Lissajous scanning process where both the horizontal and vertical scanning directions are driven sinusoidally. A Lissajous pattern is what you get when you drive the X and Y of an oscilloscope, or a light beam reflecting off a biaxial mirror, with two sinusoids. A short explanation of Lissajous patterns with an interactive simulation is given on a web page by Data Genetics (left).
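To make the two scan classes concrete, below is a minimal Python/NumPy sketch (my own toy model; the frequencies are illustrative, not Oqmented’s or anyone’s actual values) that generates one frame of a raster-like trajectory versus a Lissajous trajectory for a biaxial mirror:

```python
import numpy as np

t = np.linspace(0.0, 1.0 / 60.0, 200_000)  # one 60 Hz frame of mirror motion

# Raster-like LBS: fast sinusoidal horizontal scan plus a slow,
# roughly linear vertical sweep with a fast return at the end of the frame.
f_h = 20_000.0                      # horizontal resonant frequency (illustrative)
x_raster = np.sin(2 * np.pi * f_h * t)
y_raster = 2.0 * (60.0 * t) - 1.0   # linear -1..+1 vertical sweep at 60 Hz

# Lissajous LBS: BOTH axes are driven sinusoidally near mirror resonance,
# so the vertical rate can be in the kHz range instead of 60-120 Hz.
f_x, f_y = 20_000.0, 1_400.0        # illustrative kHz-range frequencies
x_liss = np.sin(2 * np.pi * f_x * t)
y_liss = np.sin(2 * np.pi * f_y * t)
```

The frequency ratio and relative phase of the two sinusoids determine the pattern; when the ratio is slightly detuned from a simple fraction, the figure precesses and gradually sweeps across the whole frame.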

The advantage of Lissajous LBS is that the vertical scanning can be more than an order of magnitude faster, on the order of kilohertz, versus the typical raster-type LBS vertical scanning on the order of 60 to 120 Hz.

Lissajous is a Horrible Way to Produce a Display Image

In the case of Oqmented, they try and crank up the horizontal and vertical scanning frequencies in a phase relationship so the scan covers the whole image. Below are pictures of Oqmented’s front projector taken at 1/500th of a second and then at 1/60th of a second to capture the whole image.

This is a horrible way to generate a high-resolution display image (but it might be a good way to scan for something like LiDAR – more on this use later). As you can see, even in the 1/60th of a second picture above, you can still see the effects of the Lissajous scanning. Worse yet, when you look at the projection live, your eyes, with their saccadic motion, will capture distracting flashes of individual scans.

The problem with using Lissajous patterns to create a display is that the pseudo-random scanning it uses struggles to create a uniform image. Imagine painting a room with a small paintbrush using random strokes in different directions. Some parts of the image are scanned multiple times, making them brighter, while other parts are rarely, if ever, scanned, making them darker. No amount of compensation/correction will correct for these problems.
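As a rough way to see the “paintbrush” problem in numbers, here is a small simulation (again my own toy model with made-up frequencies, not Oqmented’s parameters) that bins one frame of a Lissajous trajectory into a pixel grid and counts how often each pixel is visited; the holes and hot spots are the non-uniformity:

```python
import numpy as np

W, H = 640, 480                    # nominal display resolution (assumed)
f_x, f_y = 20_000.0, 1_400.0       # illustrative mirror frequencies
t = np.linspace(0.0, 1.0 / 60.0, 2_000_000)   # one 60 Hz frame

# Map the +/-1 sinusoidal mirror angles onto pixel coordinates.
px = ((np.sin(2 * np.pi * f_x * t) + 1) / 2 * (W - 1)).astype(int)
py = ((np.sin(2 * np.pi * f_y * t) + 1) / 2 * (H - 1)).astype(int)

# Count how many times the beam lands on each pixel during the frame.
hits = np.zeros((H, W), dtype=int)
np.add.at(hits, (py, px), 1)

print("fraction of pixels never visited:", np.mean(hits == 0))
print("visits per pixel, min/max:", hits.min(), hits.max())
```

Pixels near the edges, where the sinusoids slow down, pile up visits (bright), while a large fraction of interior pixels are never touched within the frame at all (dark holes).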

Beyond the fundamental problem of uniformity, the mirrors simply can’t scan fast enough, by over an order of magnitude, to fill in all the “holes.” The mirror scanning frequencies are resonant, set by the physical design of the mirror. To make the mirror go faster, it has to be made much smaller, but then it is too small for the laser beam diameter and extremely difficult to manufacture.

Below is another picture of a whole scene taken at 1/60th of a second. It was blurry in the corners (I missed asking why). The projector was driven by hardware hidden beneath it (right).

This drive board for the front projector was capable of a wide range of color shades. Noticeably, the small boards in the AR glasses demo appeared to support much less color variation, and the demo image was mostly on-off with few shades in between.

The process of converting a normal image into a Lissajous scan is complicated. First, there is the issue that the original rectilinear image must be mapped onto the constantly varying Lissajous scan. The scan moves at a variable velocity based on two different sine wave functions, and the variation in speed has to be compensated for by the drive control. All of this rescaling and compensating takes away from the resolution and color fidelity. I suspect this is why the smaller AR glasses demo has a much smaller board that only supports low-resolution figures with a few different colors.
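For illustration, here is a hedged sketch of the kind of resampling the drive electronics must do (my simplification, not Oqmented’s actual pipeline): at each instant, look up the source pixel under the beam and scale the laser power by the beam’s instantaneous speed, since the slow-moving ends of each sinusoid would otherwise be over-exposed:

```python
import numpy as np

def lissajous_drive(image, f_x, f_y, frame=1.0 / 60.0, n=1_000_000):
    """Toy model: resample a rectilinear image into per-sample laser
    intensities along a Lissajous scan (all parameters illustrative)."""
    H, W = image.shape
    t = np.linspace(0.0, frame, n)
    x = np.sin(2 * np.pi * f_x * t)   # mirror angles, -1..+1
    y = np.sin(2 * np.pi * f_y * t)

    # Nearest-neighbor lookup of the source pixel under the beam.
    px = ((x + 1) / 2 * (W - 1)).astype(int)
    py = ((y + 1) / 2 * (H - 1)).astype(int)
    samples = image[py, px].astype(float)

    # Beam speed is proportional to the derivatives of the two sinusoids:
    # slow near the scan edges (sine peaks), fast through the middle.
    # (Constant factors cancel in the normalization below.)
    vx = f_x * np.cos(2 * np.pi * f_x * t)
    vy = f_y * np.cos(2 * np.pi * f_y * t)
    speed = np.hypot(vx, vy)

    # Modulate laser power by speed so slow regions aren't over-bright.
    return samples * (speed / speed.max())
```

Every step here (nearest-neighbor sampling, power-versus-speed correction) discards resolution or dynamic range, which is consistent with the drop in color fidelity on the small-board glasses demo.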

Oqmented AR Glasses with Dispelix Waveguide Demo

The picture set (right) encapsulates the monocular AR glasses demo using a Dispelix waveguide shown in the Oqmented booth. On the right, you can see a top view of the glasses, views from both sides, and a view through the glasses with the camera exposure set based on the real world’s light.

You may notice that the projector in the right temple of the glasses is rather large, but it is not yet a serious concern. This is only a prototype, and Oqmented has a path to reduce the volume of the projector. What concerns me are more fundamental issues with the image quality. What is the point of making it smaller if the image quality is still less than that of a smartwatch?

Below is a typical demo image through the same glasses with the camera exposure set based on the virtual image. It is low resolution, about what I would expect from a 320-by-240-pixel display, and has very few colors. The colors that are there are not well controlled. Much of the varying color issue is likely due to the Dispelix diffractive waveguide. Some of the color control issues are likely due to control issues with the lasers.

The shutter speed was only 1/30th of a second, but you can still see the crosshatch pattern from the Lissajous scanning. Looking carefully at the picture, you can see a dotted texture to the image (see the enlarged inset in the upper left below). I did not get a reasonable explanation of this dot pattern. The Oqmented people said it might be the waveguide, but it looks like blank spaces in the laser driving process (essentially “pixels” in the Lissajous scan process).

Below is a photo gallery with two more typical images captured through the lens by my camera (click to see larger images), one taken at 1/60th of a second and the other at 1/30th of a second. As with the previous image, they lack color depth and show crosshatching from the Lissajous scanning.

Dispelix in LaSAR Alliance Booth

The other AR LBS prototype headset at AWE 2021 was by Dispelix in the LaSAR Alliance booth section. While Oqmented also used Dispelix waveguides, Dispelix’s own demo used an ST Micro LBS engine with more conventional raster-type scanning. The resolution was better than Oqmented’s glasses demo and did not have the diagonal lines through the images. Still, the image quality and resolution are less than a typical smartwatch’s.

Below are shown the glasses being worn (with the forward projection from the waveguide) and a picture I took through the lens, with an enlargement. The glasses are sleeker than the Oqmented prototype, but I think that is more due to the state of prototype development than something permanent. Both glasses use Dispelix waveguides.

Below are some stills from a video by ST Micro that they showed in a presentation at SPIE AR/VR/MR 2021. These video clips convey the level of information that is supported. I see this as a very contrived use case that very few consumers would accept.

Laser Scanning Into a Waveguide with Pupil Replication

The Oqmented and Dispelix prototypes use the Dispelix waveguide to perform pupil replication to generate a large eye box (like Hololens 2). This is in contrast to North’s Focals, Intel’s Vaunt, and Bosch’s laser AR glasses (see: North’s Focals Laser Beam Scanning AR Glasses – “Color Intel Vaunt”), which scan the lasers directly into the eye after bouncing them off a holographic mirror in the glasses.

Some optics are required to couple the laser scanning into a waveguide, making the display engine optics bigger than direct laser scanning designs. The waveguide designs are also no longer “Maxwellian” or focus-free, and they will focus at infinity unless other optics are added on both sides of the waveguide (see: Single Waveguide with Front and Back “Lens Assemblies”).

Direct laser scanning has serious practical issues, the most obvious being a tiny eye box, literally the size of the user’s pupil. If the user’s eye or glasses move by less than their pupil’s size, the image misses the eye. The near-zero eye box means that the glasses must be precisely aligned (North’s Focals required custom glasses for each user), and even then, the image is hard to see. Eyeway Vision plans to direct-scan into the eye using a secondary mirror to steer the image into the eye, but this is a very complex design problem (see: EyeWay Vision Part 1: Foveated Laser Scanning Display).

Display Engine Sizes

The Oqmented and Dispelix glasses have much smaller engines than the Hololens 2 (see: Hololens 2 Display Evaluation (Part 4: LBS Optics)). But then again, they are much lower resolution with much smaller fields of view (FOV). They are also monocular prototypes with just displays and optics; power/battery, processing, and image generation come via a cable.

Waveguide Business Speculation

Dispelix, best known for its diffractive waveguides used with LCOS or DLP, is particularly interesting in light of Snap buying out WaveOptics. In 2018, Apple bought holographic waveguide maker Akonia. The Hololens 1 and 2 waveguides were derived from technology Microsoft bought from Nokia. Vuzix started with a Nokia technology license, but they now claim their own diffractive waveguide technology.

Dispelix seems to be very similar to WaveOptics in terms of technology and capability. Except for Dispelix and Digilens, most of the current diffractive waveguide products use internal or acquired designs (e.g., Hololens and Vuzix). Lumus is another independent waveguide company; it uses reflective rather than diffractive waveguide technology, which has better image quality and is much more efficient and brighter (see Lumus Maximus 2K x 2K Per Eye, >3000 Nits, 50° FOV with Through-the-Optics Pictures).

It is also not a captive market, as several Chinese companies are now making diffractive waveguides, and there are even some “clones” of Lumus’s reflective-type waveguides.

LiDAR and other 3-D Sensing with Lasers

While I think Oqmented’s Lissajous scanning is a terrible way to make a display, it could be a better way to support various forms of 3-D sensing. The most obvious use of scanning lasers is LiDAR, which correlates the detected reflection of the scanning process with the X and Y location, and the time-of-flight with the Z-distance (light travels about 1 foot (0.3m) per nanosecond). Both Oqmented and ST Micro are well aware of the use of laser scanning for 3-D sensing. ST Micro had a LiDAR demo in the LaSAR booth (right).
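The time-of-flight arithmetic itself is simple; a minimal sketch:

```python
C = 299_792_458.0  # speed of light in m/s (~0.3 m, about 1 foot, per nanosecond)

def tof_distance_m(round_trip_ns: float) -> float:
    """Z-distance from a round-trip time: the light travels out AND back,
    so the one-way distance is half the round-trip path."""
    return C * (round_trip_ns * 1e-9) / 2.0

# A return pulse detected 20 ns after firing puts the target ~3 m away.
print(tof_distance_m(20.0))  # ~2.998 meters
```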

Quick Background on Microvision Raster-Like LBS Lidar

Microvision pivoted to retarget their raster-like LBS technology for use in LiDAR. The raster-like process developed for display applications has weaknesses compared to Lissajous scanning when used for LiDAR. The figures below were taken from Microvision patent application 2019/0331774 and a 2018 Microvision presentation.

Oqmented and 3-D Sensing with Lissajous Scanning

While the pseudo-randomness of the scan is a liability in generating an image for humans to view, it is an asset when it comes to LiDAR and similar methods of 3-D scanning the real world. The virtues of Lissajous scanning for 3-D space recognition (LiDAR and non-time-of-flight methods) were explained to me by a private company I met with in the Valley (name withheld at their request) working in 3-D sensing.

The Lissajous scanning process quickly gets at least a few data points from the whole area with just a few scans. Thus, in an extremely short period, it has covered the whole area of the scan at low resolution. This can be a major advantage in detecting moving objects sooner. It can also help resolve the “temporal aliasing” that can plague raster-like processes, caused by objects moving in one of the scanning directions and by rotating objects (the “wagon wheel” effect).
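A quick way to check this intuition (again my own toy model with illustrative frequencies) is to compare how much of a coarse grid each scan type has touched early in a frame:

```python
import numpy as np

def coverage(xs, ys, grid=16):
    """Fraction of a coarse grid's cells visited by the scan so far."""
    gx = ((xs + 1) / 2 * (grid - 1)).round().astype(int)
    gy = ((ys + 1) / 2 * (grid - 1)).round().astype(int)
    return len(set(zip(gx.tolist(), gy.tolist()))) / grid ** 2

t = np.linspace(0.0, 1.0 / 60.0, 1_000_000)   # one 60 Hz frame
k = len(t) // 16                               # first ~1 ms of the frame

# Lissajous: both axes sinusoidal in the kHz range (illustrative values).
x_l, y_l = np.sin(2 * np.pi * 20_000 * t), np.sin(2 * np.pi * 1_400 * t)
# Raster-like: fast sinusoidal horizontal, one linear vertical sweep per frame.
x_r, y_r = np.sin(2 * np.pi * 20_000 * t), 2.0 * (60.0 * t) - 1.0

print("Lissajous coverage after ~1 ms:", coverage(x_l[:k], y_l[:k]))
print("Raster coverage after ~1 ms:  ", coverage(x_r[:k], y_r[:k]))
```

In this toy model, the Lissajous scan has touched nearly the whole coarse grid within the first millisecond, while the raster scan has only painted its first strip of rows; that is the “whole area at low resolution, quickly” advantage in miniature.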

Another significant advantage (as pointed out by the same unnamed company) is that the pseudo-randomness of Lissajous scanning means it is not blind to fine straight objects that might “hide” between the scan lines of a raster-like scan.

Conclusions – What Is The Point of Low-Resolution, Poor Image Quality AR?

I understand that some may see my views on these laser scanning AR displays as harsh, but while some of the companies may be young, the technology behind them is the result of over 25 years and billions of dollars of development. Their images look terrible for this day and age, and there is no reason to believe that they will improve dramatically with several more years of work.

There is still a lot of work to turn them from very rough prototypes into something resembling a product. I am giving them a pass on being able to reduce the size of the engines and drive boards. But we are still talking about pitifully low-resolution displays.

From: HL2 Display Evaluation

There are fundamental physics problems with moving the mirror(s) to steer the laser beam that many, if not most, greatly under-appreciate. No less than Microsoft, on the Hololens 2, spent hundreds of millions of dollars and years trying to perfect Microvision’s laser scanning. They ended up with a terrible, relatively low-resolution image (see my articles: Hololens 2 Display Evaluation (Part 1: LBS Visual Sausage Being Made) and Hololens 2 Display Evaluation (Part 2: Comparison to Hololens 1)).

What is the market for LBS AR displays when the display resolution and overall image quality are much less than a smartwatch’s? These are devices that won’t work in many lighting conditions and require you to wear special glasses with, at best, expensive prescriptions if you need vision correction. The market is narrowed to people who need very little information and don’t want to look down at their wrist. These devices need to get past just being a novelty and find a real purpose.

I also understand the arguments for an “all-day wearable” and have written about the field-of-view obsession. But no one is clamoring for expensive all-day-wearable glasses with resolution and image quality worse than a smartwatch’s. I would like to see these companies focus their energy on areas where they might be more productive in the long run. Think of it as “tough love.”

Karl Guttag

5 Comments

  1. While the image quality of LBS is poor, the use with a diffractive waveguide and the resulting “eye glow” makes the combo a poor candidate for a consumer product and “could be a life or death sentence for a defense helmet.”

    Bernard Kress had this to say regarding “eye glow” as a “social comfort” issue –

    Bernard Kress – Tooz Interview

    @53:50

    “And one other aspect that you might not have mentioned (re Tooz) is the absence of eye glow, or reduction of eye glow, which is a social comfort issue with many of the existing diffractive waveguide solutions there (i.e., HL2 w/MVIS), which is OK for enterprise, you know – could be a life or death sentence for a defense helmet.

    But really for the consumer it’s a social comfort issue and having no light coming out the wrong direction which would occlude your eyes and you know prevent eye contact – this is great.”

    https://players.brightcove.net/689254975001/SyeYVVul4l_default/index.html?videoId=6275091420001

    Bernard Kress SPIE –

    “Eye glow, as I said, is important for social comfort. Eye glow is, as you remember, the field extracted in the wrong direction, towards the world. So an image comes out in the wrong direction. It can be quite strong for surface relief grating waveguides, medium for holographic waveguides, and weak to none for reflective waveguides.”

    https://www.spiedigitallibrary.org/conference-proceedings-of-spie/11876/1187602/Waveguide-combiner-technologies-enabling-small-form-factor-mixed-reality-headset/10.1117/12.2614771.full?SSO=1

    Bernard Kress “eye glow” chart from SPIE presentation – https://imgur.com/EoteSu0

    Given that the diffractive waveguide’s “eye glow” presents a “social comfort” issue, and combined with the low image quality of LBS, it seems to me it would lead to a dead end for a consumer product. Why do you think so much money is being invested?

  2. AWE is a North American AR technology showcase, which has the goal of demonstrating to the industry how each technology path has been doing in the past year.

    One question here: as we all know, AR has three main technology paths the industry has been exploring in the last decade: waveguide, birdbath, and freeform combiners. I wonder, will you describe each main technical path’s latest progress, including its latest status and existing challenges? By organizing information by technical path and its latest art, it would be easier for readers to obtain an up-to-date panoramic view of AR technology.

    i.e., is the freeform combiner dead already, or is it quietly being developed under the surface? Karl, will you describe what you saw at AWE on freeform’s latest status as well?

    Thanks

  3. Hi Karl, great analysis as per usual. I am wondering if you have any thoughts on the biggest possible scams in the AR industry right now? Kura comes to mind – so many years of hyping yet no public demos I am aware of, no actual images of the device, and renders showing what look like pin mirrors but also reflective waveguides. All this with incredible-looking specifications.

    • Kura was doing some real things, but I think they got way out over their skis, so to speak, with the scanning MicroLEDs.

      In the world of AR “scams”, the $3.5 billion (over 2.5 times Theranos’s $1.3B) poured into Magic Leap has to be right at the top.

      The other contender would have to be Microvision. They have lost about $600M, plus about $90M of US government money from an “earmark” pushed through by a US Senator they later put on their board. Microvision went from a low of about 20 cents a share to a high of about $28 per share and a market cap over $3B in less than a year, based on hyping their LiDAR, which people who know LiDAR tell me is nothing special. Yet no profitable deals have been announced, and the company continues to lose about $1M per month, and some months much more. The product of Microvision seems to be selling stock.

