CES & AR/VR/MR 2024 (Part 4: Non-Display Devices, Holograms & Light Fields)

DisplayWeek and AWE

I will be at SID DisplayWeek in May and AWE in June. If you want to meet with me at either event, please email meet@kgontech.com. I usually spend most of my time on the exhibition floor where I can see the technology.

AWE has moved to Long Beach, CA, south of LA, from its prior venue in Santa Clara, and it is about one month later than last year. Last year at AWE, I presented Optical Versus Passthrough Mixed Reality, available on YouTube. This presentation was in anticipation of the Apple Vision Pro.

At AWE, I will be on the PANEL: Current State and Future Direction of AR Glasses on Wednesday, June 19, from 11:30 AM to 12:25 PM with the following panelists:

  • Jason McDowall – The AR Show (Moderator)
  • Jeri Ellsworth – Tilt Five
  • Adi Robertson – The Verge
  • Edward Tang – Avegant
  • Karl M Guttag – KGOnTech

There is an AWE speaker discount code – SPKR24D – which provides a 20% discount, and it can be combined with Early Bird pricing (which ends May 9th, 2024). You can register for AWE here.

Introduction

This article will discuss non-display device companies followed by companies developing Holographic and Light Field displays. It is part 4 of my combined CES and AR/VR/MR 2024 coverage of over 50 Mixed Reality companies.

As discussed in Mixed Reality at CES and the AR/VR/MR 2024 Video (Part 1 – Headset Companies), Jason McDowall of The AR Show recorded more than four hours of video on the 50 companies. In editing the videos, I decided to add more information on the companies in written form.

Outline of the Video and Additional Information

The video Non-Display, Hologram, and Light Field Companies is about 33 minutes long. The times in blue on the left of each subsection link to the section of the YouTube video discussing that company.

00:00 Headset Non-Display Components

00:14 FlexEnable (non-polarized dimming and electronic variable focus)

I wrote about FlexEnable in 2023’s CES & AR/VR/MR Pt. 4 – FlexEnable’s Dimming, Electronic Lenses, & Curved LCDs, and they were once again at this year’s show, so see that article for more detail. FlexEnable makes passive and active (TFT transistor) liquid crystal devices on clear Triacetyl Cellulose (TAC) plastic. Unlike most other plastics, TAC can be biaxially curved to match semi-spherically curved lenses while remaining non-birefringent (i.e., not affecting the polarization of light). FlexEnable can make thin, curved, non-polarized electrochromic dimming devices, electronic variable-focus elements, and curved LCDs.

Controlling light blocking can be necessary for using optical AR/MR outdoors. Grass will reflect about 2,500-3,000 nits in full sunlight, and white concrete about 10,000 nits. With electrochromic dimming, the amount of light the display needs to output to be seen clearly can be greatly reduced. At the same time, you don’t want too much light blocked in the transmissive state when you are in shadows or indoors, so you can still see the real world clearly.

Most electronic dimming devices, including those used in the Magic Leap 2 (see Magic Leap 2 (Pt. 3): Soft Edge Occlusion, a Solution for Investors and Not Users), use polarization-based LC devices. The problem with using a polarization-based dimmer with optical AR/MR is that real-world light must first be polarized, which typically blocks about 60% of the real-world light. FlexEnable uses a Guest-Host type of LC that does not require the incoming light to be polarized. They can tune the material and cell gap based on the need for transparency versus light blocking, but a typical cell would transmit ~70% (block 30%) in the transparent state and 20% in the blocking state.
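As a rough back-of-the-envelope sketch (the transmission values are the typical ones quoted above; the 1.5:1 contrast target is my own illustrative assumption, not a FlexEnable or industry specification), here is how dimming reduces the display brightness needed to stand out against a sunlit background:

```python
# Rough sketch: virtual-image brightness needed to stand out against the real world
# seen through the dimmer. The 1.5:1 contrast target is an illustrative assumption.

def background_nits(ambient_nits, dimmer_transmission):
    """Real-world luminance reaching the eye through the dimmer."""
    return ambient_nits * dimmer_transmission

def required_display_nits(ambient_nits, dimmer_transmission, contrast=1.5):
    """Virtual-image luminance (at the eye) needed for the chosen contrast over the background."""
    return contrast * background_nits(ambient_nits, dimmer_transmission)

# Grass in full sunlight (~3,000 nits) with the guest-host dimmer clear (~70%) vs. dark (~20%)
print(required_display_nits(3000, 0.70))  # ~3,150 nits needed
print(required_display_nits(3000, 0.20))  # ~900 nits needed
```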

FlexEnable also makes LC-based electronically variable focus/diopter Pancharatnam–Berry Phase lenses. These are one approach for companies addressing Vergence Accommodation Conflict (VAC). At least in theory, they could be used as part of a “push-pull” lens setup with see-through waveguides, where the push lens varies the focus of the virtual image, and the pull lens works equally but oppositely so the focus of the real world is not changed.
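A minimal thin-lens sketch of the push-pull idea (my own illustration of the principle, not FlexEnable’s design; the values are arbitrary):

```python
# Minimal thin-lens sketch of a "push-pull" variable-focus pair (illustrative only).

def push_power_for_virtual_distance(distance_m):
    """Negative lens power (diopters) that makes a collimated virtual image appear at distance_m."""
    return -1.0 / distance_m

def pull_power(push_power):
    """The world-side lens cancels the push lens so real-world focus is unchanged."""
    return -push_power

push = push_power_for_virtual_distance(2.0)  # -0.5 D moves the virtual image from infinity to 2 m
pull = pull_power(push)                      # +0.5 D on the world side
assert abs(push + pull) < 1e-9               # net power for real-world light is zero
```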

The lenses can be made thinner, switch faster, and support larger diopter changes by using a Fresnel structure, as Meta did in their Cambria prototype (see: Meta (aka Facebook) Cambria Electrically Controllable LC Lens for VAC?). However, this adds Fresnel artifacts from all those layers of Fresnel-like lenses. These artifacts are likely why Meta returned to showing electromechanically moving lenses at Siggraph 2023 in their Butterscotch prototype (see: In Optical AR, the Virtual Content Augments the Real World – With PtMR, the Real World Augments the Virtual Content).

03:27 Cambridge Mechatronics (CML – shape alloy lens movement)

This blog discussed Cambridge Mechatronics (CML) in detail in Cambridge Mechatronics and poLight Optics Micromovement (CES/PW Pt. 6). The basic concept (please read the linked article for more detail) is that CML uses shape memory alloy (SMA) wires and passes a small amount of current through them to heat them, changing their length against spring tension. Because the wire’s resistance is a function of its length (figures below left), a feedback loop circuit can precisely control movement. The result is an extremely small but precise “motor” for moving things like lenses. Today, many CML designs are used for image stabilization and focusing in cell phones and similar devices, but they are not limited to this application. CML’s business model involves designing custom “motors” according to customer requirements.
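A conceptual sketch of the resistance-feedback idea (my simplified illustration, not CML’s actual controller; the constants and the proportional loop are made up for the example):

```python
# Conceptual sketch of closed-loop SMA control using wire resistance as position feedback.
# Constants and the proportional controller are illustrative assumptions, not CML's design.

OHMS_PER_MM = 0.5  # hypothetical resistance per unit length of the SMA wire

def estimated_length_mm(measured_ohms):
    """Estimate the wire length from its measured resistance."""
    return measured_ohms / OHMS_PER_MM

def drive_adjustment(target_length_mm, measured_ohms, gain=0.8):
    """Proportional step: adjust heating current based on the length error.
    The sign of the correction depends on whether heating shortens or lengthens the wire."""
    error = target_length_mm - estimated_length_mm(measured_ohms)
    return gain * error
```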

In CML’s AR/VR/MR 2024 presentation, Shape memory alloy actuators to address vergence-accommodation conflict in XR (note: behind the SPIE paywall), CML presented their relatively new Zero Hold Power (ZHP) technology for use in variable-focus applications in XR. With ZHP, CML can move a lens and then remove all power once it is stationary.

05:40 Wearable Devices’ Mudra Wrist Neural Gesture and Spatial Detection

Wearable Devices’ Mudra Band detects nerve activity in the wrist to determine gestures. This information is combined with inertial information from an inertial measurement unit (IMU) to obtain relative 3-D spatial control. It’s all packaged in an Apple Watch wristband form factor.

Multiple other companies are working on neural/electromyography (EMG) sensing to detect wrist motion. Most famously, Meta has been publicizing its lab’s prototype EMG wrist controller (see Meta/Facebook’s 2021 article, which Zuckerberg has talked about on multiple occasions, including here in 2022). Meta acquired its EMG technology by purchasing CTRL-labs in 2019; CTRL-labs, in turn, had bought EMG patents from North/Thalmic Labs earlier in 2019 (the same North/Thalmic Labs that also developed laser beam scanning glasses and was later bought by Google).

I received a short demo of the Mudra Band at AR/VR/MR 2024. While it did work, I can’t say from a short demo how well it would work with full-time use, due to my lack of experience with the device and the inability to try it with the applications I use. I would be curious to compare it with the Apple Vision Pro’s camera-based gestures. The obvious advantage of this device over camera-based gestures is that your hands don’t have to be visible.

One thing I want to point out about gesture-based control is that it can still be problematic in applications requiring “hands-free.” You can’t make gestures if your hands are doing something else. Many MR applications require hands-free operation.

Wisear (Neural/EMG control – not in the video)

Wisear takes a different approach to neural/EMG. Instead of measuring wrist signals, it uses an earbud with electrodes that measure neural activity near the ear. From the neural activity, it can sense various eye, face, and mouth gestures, thus providing a hands-free interface.

I’ve met with Wisear at AWE 2023 and CES 2024. Their current prototype (below left) does work, but it takes a while to learn how to use it. In the prototype demo, the user must make exaggerated motions for the detection to work. It seems better suited for “button”-like selection than fine mouse-type control. Just a thought, but it would be interesting to see it used as the selection method with eye tracking. Wisear says they are nearing production of their “earphones” product (below right), which integrates their sensing technology into fairly normal-looking earbuds.

Wisear recently announced a partnership with optical AR glasses maker ThirdEye Gen. At CES 2023, Wisear announced a partnership with Digilens for their ARGO headset.

09:30 Afference (haptic feedback)

At CES 2024, I tried an Afference haptic device (below left), which uses electrical stimulation to give haptic feedback. The company’s goal is to have a relatively lightweight device. For comparison, HaptX, also at CES, showed a haptic feedback device (below right) based on pneumatics pressurizing hundreds of actuators in their gloves. The pneumatic actuation requires pumps and valves housed in a large box/backpack, and the gloves are rather bulky and expensive (the latest price I could find was $15,000 plus a $500/month support fee).

Afference is developing a technology that will be much less bulky and expensive while still providing haptic feedback. Unfortunately, there was a long line for HaptX, and as this blog primarily focuses on displays and optics, I didn’t have the time to try it. However, from what I have read, the feedback is reasonably good.

Unfortunately, my experience with the Afference haptic device was not so good. It could be that I didn’t have time for training or that the technology is still immature, as they said they have a lot of development work left. But mostly, I felt mild electric shocks, similar to the old trick of testing a 9V battery with your tongue; not painful, but uncomfortable.

11:38 (True) Holograms and Light Fields

At CES, I met with two companies developing true holographic displays. I emphasize “true” because almost everything called a “hologram” is not. Most are Pepper’s Ghost effects (reflections off glass or clear plastic). Some are just marketing hype; the worst offender was Microsoft’s Hololens, which does not produce or use holograms anywhere. However, Microsoft’s size and power caused others to adopt the term “hologram” for binocular stereo images.

The term “Light Field” has a similar problem, being used as marketing hype for products that do not produce a light field. At AR/VR/MR 2024, I met with two companies producing (true) head-worn Light Field displays.

While it may seem pedantic to want these terms used correctly, the misuse becomes a confusing mess when someone actually does use them correctly. It also detracts from companies with devices that truly produce holograms or light fields. Microsoft showed a (true) holographic display at Siggraph 2017, and I wanted to ask them what they now call a Hololens (fibbing) “hologram.”

12:03 Swave Photonics (Hologram Display)

Unlike most others who are developing hologram software using existing spatial light modulator technologies (most commonly phase-mode LCOS), Swave is developing a new display technology called HXR.

HXR uses IMEC’s technology for permanent-storage phase-change memory bits (PRAM, PCM, and other acronyms), which IMEC started developing in 2007 (the basic technology goes back as far as 1970 with Intel). Many companies have tried different formulations to make PRAM a high-density, non-volatile storage device to compete with either DRAM or Flash.

From what I understand (and loosely put), IMEC chose to focus on magnetic RAM (MRAM), a competing technology with PRAM, for its future non-volatile memory technology. Then, it spun out Swave with the phase change technology.

The phase-change bit enables Swave to build 250-nanometer elements, less than half the wavelength of visible light and more than 8 times smaller linearly (64 times smaller in area) than the smallest typical LCOS pixels of 2 to 3 microns. These very small elements will be necessary to support a high-resolution holographic display in a reasonably sized device.
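As a rough illustration of why the element pitch matters, here is the standard grating-equation estimate of steering angle (my own wavelength and pitch numbers for illustration; these are not Swave’s published figures):

```python
# Rough grating-equation estimate of how far a phase element of a given pitch can
# steer light. Wavelength and pitches are illustrative, not Swave's published numbers.
import math

def max_steering_angle_deg(pitch_nm, wavelength_nm=532):
    """First-order diffraction limit: sin(theta_max) = wavelength / (2 * pitch)."""
    s = wavelength_nm / (2 * pitch_nm)
    return math.degrees(math.asin(min(s, 1.0)))

print(max_steering_angle_deg(250))   # sub-wavelength pitch: essentially the full hemisphere (90 deg)
print(max_steering_angle_deg(3000))  # ~3-micron LCOS pixel: only about 5 degrees
```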

Below is a diagram of the element structure and how it is programmed (below left). Swave just released a micro-photograph showing their <300nm holographic control elements (below right).

Swave is already demonstrating small working holographic devices. Swave plans to use a holographic reflective combiner in their headset/glasses, as discussed in CES and AR/VR/MR Part 2 under Fourier Metasurfaces. CREAL (to be discussed later) also uses a holographic reflective combiner.

The people at Swave are unquestionably extremely intelligent. However, besides the many issues with holographic displays, including image quality and processing power requirements, the big question is how they can perfect and develop a somewhat unique technology.

Some of my background in dealing with semiconductor FABs

In my past life at Texas Instruments (1977 to 1998), I saw how difficult it was to implement and perfect even minor changes in fabs. I then worked with LCOS from 1998 to 2011. LCOS used identical CMOS fabrication except for the top metal, which must be thin (thinner aluminum is shinier) and have a “contact dimple fill.” But these “little changes” caused problems that could take over a year to perfect, and many FABs would not touch them. While Swave says its technology is “CMOS,” the cell does not look like standard CMOS, is much further from standard CMOS than LCOS is, and likely requires special phase-change materials (FABs hate new materials that could introduce unknown contamination). All these factors could relegate Swave to “development/R&D fabs,” which don’t have the best yields because they lack the latest, most expensive equipment that gets amortized over high volumes, and because they allow tweaking of processes and new chemicals that cause problems.

When the Foveon camera sensor came out in 2002, I tried to explain the issue of process compatibility to people on the DPReview forums. I explained that conventional Bayer (or similar) CMOS sensors blow away Foveon in light sensitivity, resolution, and cost. Let’s just say the DPReview people who wanted to believe in the Foveon sensor’s technical superiority (which, overall, it did not have) didn’t believe me, and it led to a “lively” discussion. George Gilder must have been looking at those Foveon discussions when he wrote his book, The Silicon Eye (about Foveon). The clip above right is taken from page 270 of The Silicon Eye, published in 2005. As most people today have never heard of Foveon, except for early-2000s camera buffs, I think I called it correctly. PetaPixel, in 2022, wrote an excellent retrospective, Foveon: The Clever Image Sensor That Has Failed to Catch On, which goes into the technical reasons why the Foveon sensor failed, but they missed my point about process compatibility with existing CMOS FABs.

The Foveon story is even more applicable regarding MicroLED and the issues with implementing color. Foveon used a single “stacked” pixel rather than the Bayer color filter arrangement (still the most common approach today). Foveon’s stacked diode sensor used the device’s silicon itself as a (poor) color filter.

In addition to the complexity of the stacked diode process, the Foveon cell had to fit nine transistors (three per diode) and their wiring, limiting the diodes’ size and blocking light. This also meant the Foveon pixel ended up being bigger than a Bayer “pixel” (Bayer sensor companies call each color element a “pixel,” which is at best a half-truth since each element only carries luminance information for a single color, but that is a whole other debate). Perhaps the worst problem was that silicon’s filtering did not discriminate well between colors and blocked much of the light (see the PetaPixel article). These issues combined to give Foveon pixels more noise at a given ISO than Bayer sensors of the same technology generation. Ultimately, with the high volumes of color-filter CMOS sensors, Bayer sensor makers could keep reducing pixel size and improving per-pixel sensitivity, making both smaller sensors for cell phones and larger sensors for big cameras at a lower cost than Foveon (later bought by Sigma).

14:21 VividQ (Hologram Display)

VividQ at AR/VR/MR 2019

VividQ is another holographic company. I first saw them giving a demonstration at AR/VR/MR 2019 and, years later, met with them at their office in Cambridge in 2023. While VividQ is making (true) programmable holograms like Swave, just about everything they are doing at the physical level is completely different from Swave.

VividQ uses conventional amplitude-modulated LCOS. It currently uses JVC’s 4K device, but before that, it used Texas Instruments’ DLP. My understanding of programmable holograms is very limited, but before VividQ, I had only ever seen programmable holograms implemented with phase-mode LCOS, so using amplitude-based LCOS surprised me.

The next surprise is that VividQ can display holograms via a diffractive waveguide (by Dispelix). I didn’t know before that the holographic nature of the image would “survive” the various gratings in the waveguide. I knew that with Hololens 2, the Maxwellian (focus-free) effect of laser scanning did not “survive” the diffractive waveguide.

The third thing that surprised me is that VividQ demonstrated a working full-color holographic image with only a single white LED as the light source.

VividQ, Swave, and CREAL are absolutely mind-blowing in their own ways, but . . .

The boffins at VividQ are doing things I thought might be impossible. Like Swave (above) and CREAL (next), VividQ has incredibly smart people attacking exceedingly difficult and complex problems. The technical intelligence of the people is off the chart. But, as is said, “If you ask an elephant to fly, you can’t complain if it does not stay up very long.” And what they can demonstrate is ONLY impressive IF you understand how they did it.

Swave could contend that VividQ will be limited in resolution by using comparatively large LCOS control elements. But then, Swave has the huge issue of developing a completely new display technology. There are good and bad points to both approaches.

True Light Field Methods

A light field consists of a series of subfields, each with light from a different angle. Over the years, I have seen different techniques for generating a (true) light field. Roughly put, where a hologram can provide continuous depth information to the eye, light fields present a finite number of subfields/light angles. Both holograms and light fields address vergence-accommodation conflict (VAC), something almost every headset today ignores. I consider holograms to work in the frequency/wavefront domain, whereas light fields work in the spatial and light-angle domains.
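Loosely formalized (my own notation, not taken from any of these companies), the distinction can be written as:

```latex
% Light field: radiance sampled over position and viewing angle (a finite set of subfields)
L(x, y, \theta_x, \theta_y)

% Hologram: a complex wavefront at the display plane; the phase term encodes continuous depth
U(x, y) = A(x, y)\, e^{i\,\phi(x, y)}
```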

The high-end light-field displays employ many projectors and display devices (with hundreds of millions or billions of pixels) and special screens or waveguide-like structures that direct light from each projector at a given angle. This method is expensive and typically used to generate a large image that multiple people can see. A massive number of pixels, divided into many subfields, is required to have a large enough “sweet spot” where the effect works with head movement in X, Y, and Z.

Another method is to put microlenses over each pixel to direct the light at a selected angle, called a “spatial light field.” Practical implementations of spatial light field displays trade resolution for depth information, and they make further trade-offs between the size of the sweet spot and visual image artifacts. Perhaps the most famous head-worn light field display was a research prototype by Nvidia in 2013 (shown below). As seen in the Nvidia prototype (below), it had 15 by 8 (120) subfields, reducing the resolution by 120x. While the Nvidia prototype nicely groups each subfield’s pixels, others have used microlenses over each pixel such that each adjacent pixel is in a different subfield.
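A quick sketch of that resolution cost, using the Nvidia subfield counts quoted above (the panel resolution is my illustrative assumption):

```python
# Quick sketch of the resolution cost of a spatial (microlens) light field.
# The 15 x 8 subfield count is from the Nvidia 2013 prototype discussed above;
# the 1920 x 1080 panel resolution is an illustrative assumption.

def effective_resolution(native_w, native_h, views_x, views_y):
    """Each angular view only gets native/(views_x * views_y) of the panel's pixels."""
    return native_w // views_x, native_h // views_y

print(effective_resolution(1920, 1080, 15, 8))  # -> (128, 135) pixels per view
```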

CREAL developed a time-sequential light field approach, presenting the various subfields to the eye sequentially. This has the advantage over spatial light fields of maintaining the display’s full resolution, but it requires a fast-switching display device. CREAL initially used DLP for the display engine but switched to its own LCOS design. The reason given for switching to LCOS was to get direct device control with CREAL’s unique algorithms. Both DLP and LCOS use field-sequential color, and initially, CREAL had to give up color depth and gamut. CREAL has since been developing methods to gain back color depth while supporting multiple color fields.
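To see why a fast-switching display is needed, here is a back-of-the-envelope field-rate budget (the subfield and frame-rate numbers are my illustrative assumptions, not CREAL’s specifications):

```python
# Back-of-the-envelope field-rate budget for a time-sequential light field:
# every subfield of every color has to be displayed within one frame time.
# All numbers are illustrative assumptions, not CREAL's specifications.

def required_field_rate_hz(subfields, color_fields, frame_rate_hz):
    """Display switching rate needed to show every subfield in every color each frame."""
    return subfields * color_fields * frame_rate_hz

print(required_field_rate_hz(subfields=6, color_fields=3, frame_rate_hz=60))  # 1,080 fields/s
```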

17:40 CREAL (Light Field Glasses)

This blog first covered CREAL in 2019, and I saw their early prototype privately at CES 2018.

CREAL’s White Paper explains how it works and hints at how it can reduce the processing load associated with light fields and retain color depth. To understand how CREAL generates the light fields at a lower level, you should read its 2019 patent application. With its set of figures, this application is one of the best explanations of a complex subject I have ever seen in a patent application.

In short (read the two references above for more details), CREAL uses a series of small LEDs with optics to collimate light at different angles. The various LEDs are turned on to illuminate a spatial light modulator (e.g., DLP or LCOS) to create a subfield. The various subfields are presented to the eye sequentially to generate the whole light field.

The advantage of the CREAL time-sequential method is that it maintains all the resolution of the display device while generating the light field. The issue becomes whether they can do it while retaining color depth/gamut and not having field sequential artifacts. Below is a series of pictures taken by CREAL through their optics, demonstrating how what is seen depends on where the camera (as a stand-in for the user’s eye) focuses.

CREAL has continued to progress toward reducing its massive hardware to a glasses-like form factor. Their current large head-worn prototype (below left) uses curved mirrors tilted in a “butterfly” configuration to redirect light projected from the temples to the eye. For their next prototype, CREAL is developing a holographic reflective combiner that will provide the same optical function in a flat form factor (below right).

CREAL expects a smaller form factor in their eventual product (below).

25:28 PetaRay (Spatial Light Field Glasses)

While I am sure PetaRay is creating a (true) light field, I’m unsure how they do it. PetaRay’s US patent application 2023/0176393 says they are using “time division multiplexing” via a series of coded apertures [to select various subfields] to maintain the full resolution of the display, which sounds similar to what CREAL is doing. Still, the process is not well described in the patent (unlike the CREAL patent mentioned earlier). I saw PetaRay at AWE 2023 (below) but didn’t spend much time with them (hard to do when there are so many companies). They have a booth at AWE 2024, and I will try to learn more about how it works.

The light field’s ability to present information at different focus distances was demonstrated (below left). A flying dragon could be seen flying back and forth from the cardboard castle and shooting a fireball to show how the focus changed. For the dragon demo, they used birdbath optics, but for their second system in a dental application, they used a waveguide, which showed that the light field was maintained even with a waveguide.

The obvious question is whether CREAL or PetaRay is better or further along. The short answer is, “I don’t know.” I don’t have enough objective information. Currently, CREAL is much bigger, but they claim they will soon have a form factor similar to PetaRay’s. Based on the images I have seen, I suspect CREAL has higher resolution, but I don’t have any numbers. Then, there is the issue of the required processing, which can’t be determined from short canned demos where I don’t get to see the processing behind the images. Currently, both companies are still showing “colorful demos” with cartoon-like coloring, so it is impossible to judge image quality.

24:30 Holoeye (Phase LCOS Holograms)

Holoeye has been developing and supporting phase mode LCOS for generating holographic images for decades (Holoeye was founded in 1999). Holoeye was at the SPIE Photonics West main exhibition floor, demonstrating several phase mode holograms.

(Note: In the recording, Holoeye was in between CREAL and PetaRay, but in writing the article, I decided to put CREAL and PetaRay next to each other.)

26:49 Jason and Karl Discuss Light Field and Hologram Displays

This article provides much more detail on each company covered in the video, but at the end of the video, Jason and I spent about 7 minutes discussing hologram and light field displays. You will have to watch that part of the video to hear what we said.

Next Time: Jason and I Discuss the Apple Vision Pro

Next time, Jason and I will discuss the Apple Vision Pro for about 50 minutes. This discussion will largely summarize prior articles and give some “verbal color.”
