Cogni Trax & Why Hard Edge Occlusion Is Still Impossible (Behind the Magic Trick)

Introduction

As I wrote in 2012’s Cynics Guide to CES—Glossary of Terms, when you see a demo at a conference, “sometimes you are seeing a ‘magic show’ that has little relationship to real-world use.” I saw the Cogni Trax hard edge occlusion demo last week at SID Display Week 2024, and it epitomized the concept of a “magic show.” I have been aware of Cogni Trax for at least three years (and commented about the concept on Reddit), and I discovered they quoted me (I think a bit out of context) on their website (more on this later in the Appendix).

Cogni Trax has reportedly raised $7.1 million in 3 funding rounds over the last ~7 years, an investment I plan to show is unwarranted. I contacted Cogni Trax’s CEO, Sajjad Khan (a former Apple optical designer on the Apple Vision Pro), who was very generous in answering questions despite knowing my skepticism about the concept.

Soft- Versus Hard-Edge Occlusion

Soft Edge Occlusion

In many ways, this article follows up on my 2021 Magic Leap 2 (Pt. 3): Soft Edge Occlusion, a Solution for Investors and Not Users, which detailed why putting an LCD in front of glass results in very “soft” occlusion.

Nobody will notice a pixel-sized (angularly) dot on a person’s glasses. If such dots were noticeable, every dust particle on a person’s glasses would be noticeable and distracting. That is because a dot only a few millimeters from the eye is highly out of focus, and light rays from the real world will go around the dot before they are focused by the eye’s lens. That pixel-sized dot will insignificantly dim several thousand pixels in the virtual image. As discussed in the Magic Leap soft occlusion article, each of the Magic Leap 2’s dimming pixels covers ~2,100 pixels (angularly) in the virtual image and has a dimming effect on hundreds of thousands of pixels.
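
To get a feel for the scale of the blur, here is a back-of-the-envelope sketch. The pupil size, occluder distance, and pixels-per-degree values are my illustrative assumptions, not the Magic Leap 2’s actual specifications:

```python
import math

# Back-of-the-envelope blur of a near-eye occluder.
# All numbers below are illustrative assumptions, NOT Magic Leap 2 specs.
pupil_mm = 4.0      # assumed eye pupil diameter
occluder_mm = 20.0  # assumed distance from the pupil to the dimming layer
ppd = 45.0          # assumed virtual-image pixels per degree

# For a distant point, light fills the whole pupil, so an occluder this
# close is blurred over roughly the angle the pupil subtends from it.
blur_deg = math.degrees(2 * math.atan((pupil_mm / 2) / occluder_mm))

# Virtual-image pixels across the blur circle, and inside its area.
blur_px_across = blur_deg * ppd
blur_px_area = math.pi * (blur_px_across / 2) ** 2

print(f"blur ~{blur_deg:.1f} degrees, ~{blur_px_area:,.0f} pixels dimmed")
```

With these assumed numbers, a single dimming pixel’s influence spreads over roughly 11 degrees and a couple hundred thousand virtual-image pixels, the same order of magnitude as the figures above.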

Hard Edge Occlusion (Optical and Camera Passthrough)

“Hard Edge Occlusion” means precise, pixel-by-pixel light blocking. With camera passthrough AR (such as the Apple Vision Pro), hard edge occlusion is trivial; one or more camera pixels are replaced by one or more pixels of the virtual image. Even though masking pixels is trivial with camera passthrough, there is still a non-trivial problem in getting the hard edge masking perfectly aligned to the real world. With passthrough mixed reality, the passthrough camera, with its autofocus, has brought the real world into focus so it can be precisely masked.

With optical mixed reality hard edge occlusion, the real world must also be brought into focus before it can be precisely masked. Rather than going to a camera, the real world’s light goes to a reflective masking spatial light modulator (SLM), typically LCOS, before being combined optically with the virtual image.

In Hard Edge (Pixel) Occlusion—Everyone Forgets About Focus, I discuss the University of Arizona’s optical solution for hard edge occlusion. Their solution has a set of optics that focuses the real world onto an SLM for masking. A polarizing beam-splitter cube then combines the masked result (after a change in polarization via two passes through a quarter waveplate, not shown) with the image from a micro-display. While the patent mentions using a polarizing beam splitter to combine the images, it fails to show or mention the quarter waveplate needed between the SLM and beam splitter for the design to work. One of the inventors, Hong Hua, is a University of Arizona professor and was a consultant to Magic Leap, and the patent was licensed to Magic Leap.

Other than being big and bulky, the optical problems with the University of Arizona hard edge occlusion design include:

  • It only works to hard edge occlude at a distance set by the focusing optics.
  • The real world is “flattened” to be at the same focus as the virtual world.
  • The polarizing beam splitter dims the real world by at least 50% (2x). Additionally, a polarized display device (like a typical LCD monitor or phone screen) viewed through the optics will be at least partially blocked, by an amount that varies with its orientation relative to the optics.
  • As the eye moves, the real world will move differently than it would if viewed directly. You are looking at the real world through two sets of optics with a much longer light path.

While Cogni Trax uses the same masking principle, its design is configured differently and is much smaller and lighter. Both devices block a lot of light. Cogni Trax’s design blocks about 77% of the real world’s light, and they claim their next generation will block 50%. Note that this is likely on top of any other light losses in the optical system.
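
Light losses multiply through an optical chain. A minimal light-budget sketch, assuming a 50% loss at an input polarizer and an assumed transmission for the remaining optics chosen to match the ~77% figure:

```python
# Multiplicative light-budget sketch. The ~77% blocked figure is Cogni
# Trax's claim; how it splits between stages is my assumption.
polarizer_T = 0.50  # unpolarized real-world light through the input polarizer
other_T = 0.46      # assumed combined transmission of PBS, SLM, relay optics

total_T = polarizer_T * other_T
blocked = 1 - total_T
print(f"transmitted {total_T:.0%}, blocked {blocked:.0%}")  # 23% / 77%

# Any further element multiplies in. E.g., a Gen-2 50% block combined
# with an assumed 85% combiner transmission:
print(f"with combiner: {0.50 * 0.85:.2%} transmitted")
```

The point of the second line is that a claimed 50% block is a floor, not a total: every additional element in the system multiplies the transmission down further.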

Cogni Trax SID Display Week 2024 Demo

On the surface, the Cogni Trax demo makes it look like the concept works. The demo had a smartphone camera looking through the Cogni Trax optical device. If you look carefully, you will see that they block light from four areas of the real world (see the arrow in the inset picture below): a Nike swoosh on top of the shoe, a QR code, and the Coke in the bottle (with moving bubbles), plus they partially darken the wall to the right to create a shadow of the bottle.

They don’t have a microdisplay with a virtual image; thus, they can only block or darken the real world, not replace anything. Since you are looking at the image on a cell phone rather than with your own eyes, you have no sense of the loss of depth or the parallax issues.

When I took the picture above, I was not planning on writing an article and missed capturing the whole setup. Fortunately, Robert Scoble put out a video on X that showed most of the rig used to align the masking to the real world. The rig supports aligning the camera and Cogni Trax device with six degrees of freedom. The demo will only work if all the objects in the scene are in precise locations relative to the camera/device. This is the epitome of a canned demo.

One could hand-wave that developing the SLAM, eye tracking, and 3-D scaling technology to eliminate the need for the rig is a “small matter of hardware and software” (to put it lightly). However, the rig is not the biggest hidden trick in these demos; it is the basic optical concept and its limitations. The “device” shown (lower-right inset) is only the LCOS device and part of the optics.

Cogni Trax Gen 1 Optics – How It Works

Below is a figure from Cogni Trax’s patent that will be used to diagram the light path. I have added some colorization to help you follow the diagram. The dashed-line parts of the patent figure for combining the virtual image are not implemented in Cogni Trax’s current design.

The view of the real world follows a fairly torturous path:

  • First, it goes through a polarizer, where at least 50% of the light is lost (in theory, this polarizer is redundant given the polarizing beam splitter that follows, but it is likely used to reduce ghosting).
  • It then bounces off the polarizing beam splitter and through a focusing element that brings the real world into focus on an LCOS SLM.
  • The LCOS device changes the polarization of anything NOT masked so that, on the return trip through the focusing element, the light will pass through the polarizing beam splitter.
  • The light then passes through the “relay optics,” then a quarter waveplate (QWP), off a mirror, and back through the QWP and relay optics. The two passes through the relay optics have to undo everything the two passes through the focusing element did to the light.
  • The two passes through the QWP rotate the polarization so that the light bounces off the beam splitter and is directed at the eye via a cleanup polarizer.
  • Optionally, as shown, the light can be combined with a virtual image from a microdisplay.
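
The polarization bookkeeping of the double pass through the QWP can be sanity-checked with a small Jones-calculus sketch. This is textbook polarization math, not anything from Cogni Trax’s patent, and it ignores the mirror’s coordinate-frame flip for simplicity:

```python
import numpy as np

def qwp(theta):
    """Jones matrix of a quarter-wave plate, fast axis at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    # pi/2 retardation between fast and slow axes, in the plate's frame
    ret = np.array([[np.exp(-1j * np.pi / 4), 0],
                    [0, np.exp(1j * np.pi / 4)]])
    return rot @ ret @ rot.T

# Horizontally polarized light off the beam splitter...
h_in = np.array([1.0, 0.0])
# ...goes out and back through a QWP with its fast axis at 45 degrees.
out = qwp(np.pi / 4) @ qwp(np.pi / 4) @ h_in

print(np.round(np.abs(out) ** 2, 6))  # intensity: all in the vertical component
```

The horizontally polarized input emerges vertically polarized (a net 90-degree rotation), which is what lets the polarizing beam splitter redirect the returning light toward the eye.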

I find it hard to believe that real-world light can go through all of that and emerge as if nothing but the polarization losses had happened to it.

Cogni Trax provided a set of diagrams showing the light path of what they call “Alpha Pix.” I edited several of their diagrams together and added some annotations in red. As stated earlier, the current prototype does not have a microdisplay for providing a virtual image. If the virtual display device were implemented, its optics and combiner would be on top of everything else shown.

I don’t see this as a practical solution to hard-edge occlusion. While much less bulky than the University of Arizona design, it still requires polarizing the incoming light and sending it through a torturous path that will further degrade and distort the real-world light. And this is before they deal with adding a virtual image. There is still the issue that the hard edge occlusion only works if everything being occluded is at approximately the same focus distance. If the virtual display were implemented, it would seem that the virtual image would need to be at approximately the same focus distance for it to be occluded correctly. Then there is all the hardware and software required to keep the virtual and real worlds aligned with the eye. Even if the software and eye tracking were excellent, there will still be lag with any rapid head movement.

Cogni Trax Waveguide Design / Gen 2

Cogni Trax’s website and video discuss a “waveguide” solution for Gen 2. I found a patent (with excerpts at right and below) from Cogni Trax for a waveguide approach to hard-edge occlusion that appears to agree with the diagrams in the video and on the website. I have outlined the path of the real world (in green) and the virtual image (in red).

Rather than using polarization, this method uses time-sequential modulation via a single Texas Instruments DLP/DMD. The DLP is used during part of the time to block/pass light from the real world and during the rest of the time as the virtual image display. I have included Figure 1(a), which gives the overall light path; Figures 1(c) and 1(d), which show the time multiplexing; Figure 6(a), with a front view of the design; and Figures 10(a) and 10(b), which show side views of the waveguide with the real-world and virtual light paths, respectively.

Other than not being polarized, the light follows an even more torturous path that includes a “fixed DMD” to correct for the micro-tilts imparted by the time-multiplexed displaying-and-masking DMD. In addition to all the problems I had with the Gen 1 design, I find putting the relatively small mirror (120 in Figure 1a) in the middle of the view very problematic, as the view above or below the mirror will look very different from the view in the mirror with all its additional optics. While this approach can theoretically give more light throughput and does not require polarizing the real world, it can only do so by keeping the virtual display times short, which will mean more potential field-sequential color breakup and lower color bit depth from the DLP.
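
The time-multiplexing trade-off can be roughed out with simple arithmetic. The duty cycle and bit-depth numbers below are my illustrative assumptions, not figures from Cogni Trax or the patent:

```python
import math

# Time-multiplexing trade-offs for a single DMD shared between passing
# the real world and displaying the virtual image. All numbers are my
# illustrative assumptions, not figures from Cogni Trax or the patent.
frame_display_duty = 0.25   # assumed fraction of each frame spent displaying

# Real-world throughput is capped by the time the DMD passes the world.
see_through_T = 1 - frame_display_duty

# Virtual-image brightness scales with its share of the frame time.
display_brightness = frame_display_duty

# DLP gray levels come from binary-weighted bit-plane times, so halving
# the display time costs roughly one bit per color, all else being equal.
full_time_bits = 8          # assumed bits/color if the DMD displayed full-time
bits = full_time_bits - math.log2(1 / frame_display_duty)

print(see_through_T, display_brightness, bits)  # 0.75 0.25 6.0
```

In other words, every gain in see-through transmission comes directly out of the virtual image’s brightness and gray-scale budget: the two uses of the DMD are in direct competition for frame time.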

Overall, I see Cogni Trax’s “waveguide” design as trading one set of problems for another set of probably worse image problems.

Conclusion

Perhaps my calling hard-edge occlusion a “Holy Grail” did not fully convey its impossibility. The more I have learned about and examined this problem and its proposed solutions, the more impossible it seems. Yes, someone can craft a demo that works in a tightly controlled setup where everything occluded is at about the same distance, but it is a magic show.

The Cogni Trax demo is not a particularly good magic show, as it uses a massive 6-axis control rig to position a camera rather than letting the user put on a headset. Furthermore, the demo does not support a virtual display.

Cogni Trax’s promise of a future “waveguide” design appears to me to be at least as fundamentally flawed. According to the publicly available records, Cogni Trax has been trying to solve this problem for 7 years, and a highly contrived setup is the best they have demonstrated, at least publicly. This is more of a university lab project than something that should be developed commercially.

Based on his history with Apple and Texas Instruments, the CEO, Sajjad Khan, is capable, but I can’t understand why he is pursuing this fool’s errand. I don’t understand why over $7M has been invested, other than people blindly investing in former Apple designers without proper technical due diligence. I understand that high-risk, high-reward concepts can be worth some investment, but in my opinion, this does not fall into that category.

Appendix – Quoting Out of Context

Cogni Trax has quoted me in the video on their website as saying, “The Holy Grail of AR Displays.” What is not made clear is that A) I was referring to hard edge occlusion (and not Cogni Trax), and B) I went on to say, “But it is likely impossible to solve for anything more than special cases of a single distance (flat) real world with optics.” The audio from me in the Cogni Trax video, which is rather garbled, comes from the March 30, 2021, AR Show episode “Karl Guttag (KGOnTech) on Mapping AR Displays to Suitable Optics (Part 2)” at ~48:55 (the occlusion issue is only briefly discussed).

Below, I have cited (with new highlighting in yellow) the section discussing hard edge occlusion from my November 20, 2019, blog post, which is where Cogni Trax got my “Holy Grail” quote. That section discusses the University of Arizona design. The same article discussed using a transmissive LCD for soft edge occlusion about 3 years before Magic Leap announced the Magic Leap 2 with such a method in July 2022.

Hard Edge (Pixel) Occlusion – Everyone Forgets About Focus

“Hard Edge Occlusion” is the concept of being able to block the real world with sharply defined edges, preferably to the pixel level. It is one of the “Holy Grails” of optical AR. Not having hard edge occlusion is why optical AR images are translucent. Hard Edge Occlusion is likely impossible to solve optically for all practical purposes. The critical thing most “solutions” miss (including US 20190324274) is that the mask itself must be in focus for it to sharply block light. Also, to properly block the real world, the focusing effect required depends on the distance of everything in the real world (i.e., it is infinitely complex).

The most common hard edge occlusion idea suggested is to put a transmissive LCD screen in the glasses to form “opacity pixels,” but this does not work. The fundamental problem is that the screen is so close to the eye that the light-blocking elements are out of focus. An individual opacity pixel will have a little darkening effect, with most of the light from a real-world point in space going around it and into the eye. A large group of opacity pixels will darken as a blurry blob.

Hard edge occlusion is trivial to do with pass-through AR by essentially substituting pixels. But it is likely impossible to solve for anything more than special cases of a single distance (flat) real world with optics. The difficulty of supporting even the flat-world special case is demonstrated by some researchers at the University of Arizona, now assigned to Magic Leap (the PDF at this link can be downloaded for free) shown below. Note all the optics required to bring the real world into focus onto “SLM2” (in the patent 9,547,174 figure) so it can mask the real world and solve the case for everything being masked being at roughly the same distance. None of this is even hinted at in the Apple application.

I also referred to hard edge occlusion as one of the “Holy Grails” of AR in a comment to a Magic Leap article in 2018 citing the ASU design and discussing some of the issues. Below is the comment, with added highlighting in yellow.

One of the “Holy Grails” of AR, is what is known as “hard edge occlusion” where you block light in-focus with the image. This is trivial to do with pass-through AR and next to impossible to do realistically with see-through optics. You can do special cases if all the real world is nearly flat. This is shown by some researchers at the University of Arizona with technology that is Licensed to Magic Leap (the PDF at this link can be downloaded for free: https://www.osapublishing.org/oe/abstract.cfm?uri=oe-25-24-30539#Abstract). What you see is a lot of bulky optics just to support a real world with the depth of a bookshelf (essentially everything in the real world is nearly flat).

FM: Magic Leap One – Instant Analysis in the Comment Section by Karl Guttag (KarlG) JANUARY 3, 2018 / 8:59 AM

Karl Guttag

6 Comments

  1. I have been following your articles. My understanding is that “perfect” occlusion requires both focusing and eye tracking. Both are difficult to implement accurately at the pixel scale, and very difficult to implement in real life.

    • It gets back to what you are trying to occlude and the goals. A big problem is that you can only bring one distance into focus, plus a range that will be “acceptably” in focus. You then need optics to invert the focusing to put the world back with its range of depth. Otherwise, the world will be flattened in depth like it is with passthrough AR. There is also the issue of the two eyes seeing slightly different images.

      If you just want something virtual floating in front with no relationship to the real world, then eye tracking may not be required. Eye tracking and SLAM will come in if you want the object to look like it is “in” the real world and not hovering in front of you.

      Getting everything to line up in terms of the mask and virtual image, while maintaining the depth of focus of the real world and masking at a single distance, is a Herculean task. It becomes infinitely complicated when there are multiple depths that need to be blocked.

  2. hey karl,
    It seems that achieving hard edge occlusions for optical AR is nearly impossible, as you mentioned. What do you think would be the solution? Would simply increasing the nits be sufficient, or is there another approach? Adaptive Displays?
    Thank you

    • I would take out the word “nearly” in “nearly impossible,” if we are talking about the general case of blocking things at more than one depth at a time.

      I think the best that can be done is to use either whole area or “soft edge occlusion” with a non-polarizing dimming (as polarizing dimming blocks too much light and has serious problems with LCD displays in the real world). FlexEnable and others have such technology. I still think you want to have more than 1,000 nits for the virtual display so you don’t have to dim too much.

      If you want colors to stand out and be distinct, then you want about an 8:1 contrast ratio. Grass has about 3,000 nits, sand about 5,000 and white concrete 7,000 to 10,000 nits on a sunny day. You don’t want to blast 30K to 80K nits at the eye just to overcome the background. At the same time, you don’t want to dim the world so much that you can’t see in shadows. So I think a compromise will be to dim by about 80% in sunlight (like dark sunglasses). With some soft edge occlusion (potentially “smart” by using cameras), the amount the virtual image has to put out can be reduced to save power.

  3. Hi Karl,
    This is Sajjad Khan, the Founder of Cogni Trax, the startup whose technology you are criticizing. I would like an opportunity to respond to your criticisms point wise with diagrams and proofs so that your readers can make better informed decisions. Kindly let me know. Thanks
    Sajjad

    • Sure, let’s discuss the best way to handle this. It might be best if you put something on your website that I will then reference.
