Disparity Correction Has Become Important All of a Sudden – Plus some on Binocular Color Difference (Snap & JBD)

Introduction

Through the years, it seems the biggest unsolved vision human factors problem has been the vergence-accommodation conflict (VAC), which has been discussed in many articles on this blog. The Magic Leap One attempted to partially address the problem before Magic Leap abandoned that approach in favor of better image quality and other factors on the Magic Leap Two (see: What About Vergence Accommodation?).

Then, with Orion, Meta started talking about "disparity correction" (though it is not supported on Meta's Orion AR glasses) and filing patents on it. Below are some figures from Meta's patent application.

In a further study of Orion's waveguide, I discovered that Magic Leap's 2022 AR/VR/MR presentation on the Magic Leap Two discusses what Meta calls "disparity correction" (see: Magic Leap Two Disparity Correction), though Magic Leap described it more generically as optical alignment correction.

Etymology of the Term “Disparity”

Meta was the first I saw to use the term "disparity" for the optical misalignment caused by the glasses frame flexing, but I can't find evidence that this usage is universal. A Berkeley researcher on the Visual Human Factors panel at AR/VR/MR 2025, who has had some research funded by Meta, also used the term disparity, but I don't know who influenced whom. Avegant also calls it "disparity," possibly due to Meta's influence in the field. Then again, Magic Leap's presentation nearly three years earlier didn't call it "disparity."

Looking beyond Meta's possible influence to documents published before Orion was announced, the term "disparity" in the context of AR/VR agrees with Wikipedia's broader definition, which covers the human visual system's use of the difference in view between the two eyes to sense depth:

Binocular disparity refers to the difference in image location of an object seen by the left and right eyes, resulting from the eyes’ horizontal separation (parallax). The mind uses binocular disparity to extract depth information from the two-dimensional retinal images in stereopsis. In computer vision, binocular disparity refers to the difference in coordinates of similar features within two stereo images.

Figure 1: The full black circle is the point of fixation. The blue object lies nearer to the observer and therefore has a "near" disparity dn. Objects lying farther away (green) correspondingly have a "far" disparity df. Binocular disparity is the angle between two lines of projection: one is the real projection from the object to the actual point of projection; the other is the imaginary projection running through the nodal point of the fixation point.
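To make the geometry concrete, here is a minimal sketch in Python (my own illustration, not taken from any of the cited sources) that computes the disparity angle of an object nearer or farther than the fixation point, given the interpupillary distance (IPD) and the two distances:

```python
import math

def vergence_deg(ipd_mm, distance_mm):
    """Full vergence angle between the two eyes' lines of sight when
    fixating a point straight ahead at the given distance."""
    return math.degrees(2 * math.atan((ipd_mm / 2) / distance_mm))

def disparity_deg(ipd_mm, fixation_mm, object_mm):
    """Binocular disparity of an object relative to the fixation point.
    Positive = "near" disparity (object closer than fixation)."""
    return vergence_deg(ipd_mm, object_mm) - vergence_deg(ipd_mm, fixation_mm)

# Example with assumed numbers: 63 mm IPD, fixating at 1 m.
print(disparity_deg(63, 1000, 800))   # object at 0.8 m: ~ +0.90 deg ("near")
print(disparity_deg(63, 1000, 1500))  # object at 1.5 m: ~ -1.20 deg ("far")
```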

The 2020 paper “Optimizing Depth Perception in Virtual and Augmented Reality through Gaze-contingent Stereo Rendering” by Brooke Krajancich, Petr Kellnhofer, and Gordon Wetzstein discusses some of the issues and causes of “disparity distortion” with no mention I could find of glasses twisting. They discuss rendering errors and issues with pupil alignment as causes. Perhaps of additional interest is that two of the authors, Kellnhofer and Wetzstein, in addition to being university professors, worked for the MicroLED company Raxium, which Google later (2022) acquired.

I’m not trying to split hairs, but I want to note that “disparity” has a broader meaning and that there are many forms of disparity distortion (or error) other than from frame twisting. It could be that others have used “disparity” in the context of frame twisting, but Meta was the first company I have seen to use it to mean the disparity distortion due to frame twisting.

Human Physical Factors Causing Disparity Issues

In addition to issues with the flex of the headset, most people have some degree of facial asymmetry: the two eyes are rarely positioned symmetrically about the nose or at exactly the same height. This is on top of shifts in the glasses as they are worn. Because the glasses are so close to the eyes, small differences have a multiplicative effect relative to viewing real-world objects that are far away.
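As a rough back-of-the-envelope sketch of that multiplicative effect (the focal length and comfort budget below are assumptions for illustration, not measured values):

```python
import math

# Assumed-for-illustration numbers: a projector/optic with ~20 mm
# effective focal length, and an often-cited vertical-disparity comfort
# budget on the order of a few arcminutes.
EFFECTIVE_FOCAL_MM = 20.0
COMFORT_BUDGET_ARCMIN = 4.0

def angular_shift_arcmin(offset_mm, focal_mm=EFFECTIVE_FOCAL_MM):
    """Angular image shift caused by a small mechanical offset between
    the display and its optics (small-angle approximation)."""
    return math.degrees(offset_mm / focal_mm) * 60

shift = angular_shift_arcmin(0.05)  # a 50-micron flex or assembly shift
print(f"{shift:.1f} arcmin vs. ~{COMFORT_BUDGET_ARCMIN} arcmin budget")
# -> 8.6 arcmin, already over the assumed few-arcminute budget
```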

I don’t know the answer, but it would be interesting to understand all the factors and how significant the frame twisting is relative to these other factors.

AR/VR/MR 2025 Human Vision Science Panel Discusses “Disparity”

The panel discussion Human Vision Science and the Visual Experience in AR/VR was moderated by Bjorn Vlaskamp (Google AR), with panelists Laurie M. Wilcox (York Univ.), Alexandra Boehm (Google AR), Jorge Otero-Millan (Univ. of California, Berkeley), T. Scott Murdison (Meta), and Emily Cooper (Univ. of California, Berkeley).

Emily Cooper (far left in the picture above) stated in her introduction that she has had research funded by Meta (including the research papers The contribution of image minification to discomfort experienced in wearable optics and How small changes to one eye's retinal image can transform the perceived shape of a very familiar object). I only point this out because A) there may be a link between her work and Meta's use of the term disparity, and B) she has some interesting papers and insights on human visual factors.

I found this panel interesting for its discussion of human visual factors. These researchers discussed many different challenges for AR that can adversely affect visual comfort. Below are some of the introductory slides for the panel.

The much-discussed vergence-accommodation conflict (VAC) is shown (above right). They also discussed disparity problems (perhaps in the broader sense rather than Meta Orion's frame-twist sense) and some of the pros and cons of monocular versus binocular displays with respect to disparity.

Headsets without a small enough IPD

One of the panelists brought up the fact that women typically have a smaller IPD than men and that some headsets don't support a small enough IPD adjustment. The point was made that if the IPD of the headset's display is larger than the IPD of the user's eyes, the eyes must diverge when looking at virtual objects that are (virtually) far away, which the eyes and vision system typically can't do at all, or can only do with pain even in short use.
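Here is a hedged numeric sketch of that divergence point (all values are assumptions for illustration): a virtual object rendered "at infinity" sits on each display's optical axis, but if the virtual image is at a finite optical distance and the user's eyes are closer together than the display axes, each eye must rotate outward to fixate it.

```python
import math

def divergence_arcmin(user_ipd_mm, display_ipd_mm, image_dist_mm):
    """Total outward eye rotation (arcminutes, both eyes combined)
    needed to fixate a virtual object rendered at infinity -- i.e., on
    each display's optical axis -- when the virtual image sits at a
    finite optical distance and the display IPD exceeds the user IPD."""
    per_eye_offset_mm = (display_ipd_mm - user_ipd_mm) / 2
    per_eye_deg = math.degrees(math.atan(per_eye_offset_mm / image_dist_mm))
    return 2 * per_eye_deg * 60

# Assumed numbers: 58 mm user IPD, 65 mm minimum display IPD,
# virtual image at 2 m optical distance.
print(divergence_arcmin(58, 65, 2000))  # ~12 arcmin of divergence
```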

Having the display IPD smaller than the user's will cause issues with apparent distance, but unless it is severe, it won't cause the same level of discomfort. From a design perspective, one can appreciate that supporting a smaller IPD can be more difficult, particularly with VR headsets, where the displays and optics being moved can physically run into each other at small IPDs, especially with larger FoVs.

Meta Presentation Includes Disparity

Jason Hartlove of Meta Reality Labs gave a presentation on some of the key issues that Meta is trying to address in augmented reality. These issues included resolution, dynamic range, color gamut, frame rate, latency, VAC, FoV, degrees of freedom, and binocular vertical disparity correction (for frame twist). The images on the right are taken from the slide on vertical disparity correction.

Disparity due to frame twist is only one factor

I assume that Meta plans to correct the disparity caused by frame twist by somehow manipulating the images sent to the two eyes. I'm assuming this method would also want to include some form of eye tracking, to correct based on the location of the person's eyes (which may also be asymmetrical) and on how the glasses are being worn. It would seem that fully correcting for all the factors could take a significant amount of both physical hardware (sensors) and processing (hardware and power).
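As a minimal sketch of the kind of image manipulation I assume is involved (my illustration, not Meta's published method): once sensors estimate the binocular vertical misalignment, the renderer can pre-shift one eye's image in the opposite direction.

```python
import numpy as np

def correct_vertical_disparity(img, misalign_arcmin, deg_per_pixel):
    """Pre-shift one eye's image vertically to cancel a measured
    binocular vertical misalignment. Nearest-pixel version; a real
    system would resample with sub-pixel precision and crop or pad
    the edge rather than wrap as np.roll does."""
    shift_px = int(round((misalign_arcmin / 60.0) / deg_per_pixel))
    return np.roll(img, -shift_px, axis=0)  # shift opposite to the error

# Example with assumed numbers: ~1.5 arcmin/pixel display and 6 arcmin
# of measured frame-twist misalignment -> shift the image by 4 pixels.
img = np.zeros((1080, 1920, 3), dtype=np.uint8)
corrected = correct_vertical_disparity(img, 6.0, 1.5 / 60.0)
```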

Avegant Presentation Includes Disparity

Ed Tang, CEO of Avegant, has presented their progress on small LCOS display projectors for AR for several years. This year, they were talking more about shaping their projectors to better fit into the temples of glasses (right).

But at the end of the presentation, Avegant showed a slide on a micro-gimbal technology to correct binocular alignment/disparity. This is not as far a stretch as it might seem for a company known today for optics: the founders of Avegant, including Ed Tang, studied micromachine (MEMS) technology at the University of Michigan.

poLight and Cambridge Mechatronics Technology for Disparity Correction

In 2023, I wrote about Cambridge Mechatronics and poLight Optics Micromovement (CES/PW Pt. 6), and Avegant’s disparity correction design made me think of both of these companies, which have miniature technology for moving/adjusting optics. Both of these companies have technology used in high-volume smartphone focusing and optical image stabilization, but it is conceivable that their respective technologies could be applied to disparity correction.

poLight (left) uses piezoelectric actuators to change the shape of an optical element to redirect light.

Cambridge Mechatronics uses shape memory alloy (SMA) wires to move small optical structures very precisely (below).

I mentioned to people from both companies that I was seeing considerable discussion of disparity correction at AR/VR/MR, and both indicated that they were looking into it.

Binocular Color Difference/Disparity

The previous discussion of "disparity" concerned geometric distortion and its impact on depth perception. There is also color disparity between the two eyes. Many humans, myself included, see slightly different colors with each eye. The paper Stereo Disparity Improves Color Constancy discusses how, in the real world, lighting and other factors cause each eye to see the same object as having slightly different colors, and how this may factor into human perception of the lighting, surface shape, and surface texture of an object.

Another issue I believe the Human Vision Science Panel commented on is the difference in color between the right and left eyes, which is particularly common with diffractive waveguides. I recently wrote about Jade Bird Display's MicroLED Compensation and discussed the binocular fusion and rivalry caused by color differences between binocular displays. Below is an image from that study.

I met with Snap at CES 2025, and we discussed why Snap rotated the waveguides from WaveOptics's typical orientation, with the light entrance on top (Snap bought WaveOptics in 2021), to the Spectacles 4 and 5 orientation, with the light entering from the left and right sides. Snap said it was to give better color fusion (the combined, binocularly perceived color). The pictures below show the WaveOptics projecting-down orientation (upper) and the Snap Spectacles 5 left- and right-entrance orientation (lower).

The light output from diffractive waveguides varies in color and brightness from the entrance grating to the far side of the waveguide. By rotating the two eyes' waveguides 180 degrees relative to each other, the color variation runs in opposite directions for the two eyes, so an averaging effect occurs when the views fuse.

Below are pictures I took through the Snap Spectacles 5's left and right waveguides, along with a 50/50 average of the left and right images that I generated in Photoshop to roughly simulate the "fusion effect." The improvement is most notable in the faces. I should note that the Spectacles 5 in this test did not perform "digital correction" of the color (as was done in my JBD study). Also, note that the Spectacles 5 has a 46° FoV versus the JBD study's ~30°, and the Spectacles 5 has ~4.58 times as many pixels (using LCOS versus JBD's MicroLED).
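For anyone who wants to reproduce the simulation, here is roughly what my Photoshop step does, as a small sketch (the file names are placeholders):

```python
from PIL import Image

# Placeholder file names for the through-the-lens photos of each eye.
left = Image.open("spectacles5_left.png").convert("RGB")
right = Image.open("spectacles5_right.png").convert("RGB")

# A 50/50 blend approximates the binocularly "fused" color that the two
# oppositely oriented waveguides average to (images must be same size).
fused = Image.blend(left, right, alpha=0.5)
fused.save("spectacles5_fused_simulated.png")
```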

Conclusions

Many, but not all, of the problems these researchers have studied involve prolonged use. There is likely a level of imperfection that can be tolerated for "data snacking" but becomes intolerable when used continuously for long periods. I suspect this discussion of disparity will affect the plethora of AI/AR glasses products coming to market in 2025 (as discussed in the AR/VR/MR 2025 AI Glasses Panel, with more coverage of AR/VR/MR and CES on this subject to come).

I often say that VAC is one of those issues that everyone likes to talk about, but nobody has come up with a practical solution (some, such as Creal, are still developing solutions). Magic Leap, in its 2022 AR/VR/MR presentation (Magic Leap 2 at SPIE AR/VR/MR 2022 – wow, it has been 3 years), said that while VAC was an important issue, other factors were more important to visual comfort, which is why it did not carry the Magic Leap One's dual focus distances over to the Magic Leap Two. Meta has shown many different approaches to VAC through the years, all of which are impractical for optical see-through (OST) designs.

We will have to wait and see whether disparity is a "topic du jour" and whether companies follow through on their solutions for disparity correction (due to frame twist or other causes).

Karl Guttag

14 Comments

  1. I'd like to hereby nominate KG to head the entire xR industry "Dpt. of Quality Control" so we can all see straight going forward, and avoid as many 'optigrab' moments as possible. Seriously, folks: send Karl your pre-release hardware and LET HIM COOK.

    • Thanks. I think many companies find me too "analytical" and would rather release evaluation headsets to people who will say good things about them.

    • I have not seen Bosch's laser scanning glasses since CES 2021 (see my 2021 article if you have not seen it). While they may have improved the engine in the last 4 years, the fundamental problems with laser scanning will remain. Looking at the pictures of the Bosch projector on their website, it looks pretty big and bulky for what is likely a very low-resolution display with a small FoV.

      Regardless of any improvements in the laser scanning, there are fundamental problems with scanning lasers directly into the eye via a holographic mirror. The biggest problem is that it is "Maxwellian," which results in a very small eye box, meaning the glasses will have to be custom fit for any chance of seeing the image, and even then the image will be hard to find (one of the big problems with North Focals). Also, being Maxwellian, any "floaters" (which all people over 30 have but usually don't notice in "normal" light) or eyelashes will cause shadows in the image.

  2. Is there any difference between diffractive solutions and geometric waveguides with regard to the issue of disparity?

    • Geometric, or what many call "reflective," waveguides generally have vastly better color uniformity, so they wouldn't have the color disparity/difference. I would think they would still be subject to the frame-twist and eye-location disparity issues.

  3. Are there any diffractive waveguide companies that stand out from the rest of the bunch? And if so, how do they stand out?

    • It is very hard to compare diffractive waveguides, and there are many factors to consider. These factors include FoV, resolution, light efficiency and brightness (for a given FoV), color uniformity, rainbow capture, front projection (eye glow), weight (glass vs. plastic), number of layers (three (R, G, B), two (e.g., RB and BG), or one), and cost. There is such a large matrix of factors, and each company is better at some and worse at others, that it is impossible to compare them objectively. They are also a moving target, as some companies keep improving while others don't seem to improve much from year to year. It's also hard to compare them because most are not yet in volume production; the samples shown at conferences could be "cherry-picked."

  4. Thank you for an interesting article, Karl!

    Since you yourself highlight Polight: Based on everything you’ve seen of AR/VR/XR/MR solutions lately, and also from what you’ve heard through dialogue with what I perceive to be a large number of tech companies: Do you have any belief that we could see a shift towards more autofocus/see-what-I-see solutions in the near future, such as for example Magic Leap 2?

