After my last article, Meta (aka Facebook) Cambria Electrically Controllable LC Lens for VAC?, many people pointed me to Half Dome 3 (HD3) and, in particular, to a video by Doug Lanman, Director of Display Systems Research at Facebook Reality Labs: EI 2020 Plenary: Quality Screen Time: Leveraging Computational Displays for Spatial Computing. Lanman's video does an excellent job of laying out many of the issues with trying to address vergence-accommodation conflict and has a section on Half Dome 3 starting 33 minutes into the video.
Additionally, Ben Lang from RoadToVR pointed me to his July 2020 and September 2019 articles on HD3. I attended Lanman's AR/VR/MR 2020 talk that Ben referenced in the July 2020 article. Ben's 2019 article linked to a video on HD3 by Michael Abrash, Chief Scientist, Reality Labs, Meta, with the same video sequence. Several people also pointed out that many had expected Quest 2 to be HD3.
HD3 certainly appears to be a physical instantiation of the patent applications I referred to last time. In his AR/VR/MR presentation in July 2020, Lanman stated that HD3 is "almost ready for primetime." So the question becomes, "will Cambria have Half Dome 3 LC lenses for vergence-accommodation?"
This article is going to touch lightly (with references for more detail) on multiple interrelated concepts.
I will then wrap up with a summary of the pros and cons for HD3 being the prototype for Cambria.
Meta’s Reality Labs is the latest in a series of name changes for the group that used to be known as Facebook’s Reality Labs, and before that, Oculus Research. Rather than jumping back and forth between the various names for the same entity, I plan to use Meta synonymously with Facebook and Oculus (at least most of the time).
Every big and many small companies working in VR and AR are trying to address VAC. VAC occurs when the binocular disparity, which causes the eyes to verge, disagrees with the distance at which the eyes are focusing (accommodation). The effects vary from person to person, but they can lead to the eye strain, nausea, and headaches commonly associated with VR and 3-D movies.
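To make the conflict concrete, here is a minimal sketch of the two distances involved; the 63 mm interpupillary distance and the 0.5 m / 2.0 m example distances are illustrative assumptions, not measurements from any particular headset:

```python
import math

IPD_M = 0.063  # assumed typical interpupillary distance (meters)

def vergence_angle_deg(distance_m: float) -> float:
    """Angle between the two eyes' lines of sight for a point at distance_m."""
    return math.degrees(2 * math.atan((IPD_M / 2) / distance_m))

# Stereo disparity places a virtual object at 0.5 m (what vergence demands),
# while the headset's fixed focal plane sits at 2.0 m (what accommodation gets).
vergence_m, focal_plane_m = 0.5, 2.0
mismatch_d = abs(1 / vergence_m - 1 / focal_plane_m)  # diopters = 1 / meters

print(f"vergence angle at {vergence_m} m: {vergence_angle_deg(vergence_m):.1f} deg")
print(f"vergence-accommodation mismatch: {mismatch_d:.2f} D")
```

The 1.5-diopter mismatch in this example is generally considered well outside the range most people can tolerate comfortably for long sessions.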
Approaches to addressing VAC include electromechanically and electrostatically moving optics, fluid-filled flexible lenses, electrostatic mirrors, multiple waveguides (ex. Magic Leap One), and multiple selectable LC screens (ex. Lightspace 3D).
Lanman’s EI 2020 Plenary video goes through some subtler issues with good information about various VR human visual factor issues.
Lanman's EI 2020 presentation discusses depth perception and uses a graph (below) adapted from Perceiving Layout and Knowing Distances by Cutting and Vishton (1995).
Occlusion is one of the other very hard problems in AR, while at the same time having a major effect on depth perception. The graph (above left) shows that occlusion has the biggest effect on depth perception for both near and far objects: if one object appears to be in front of another, your vision tells you the blocked object is farther away (and this is the source of many optical illusions).
Aerial perspective occurs when large, far-away objects, such as mountains in the distance, look hazy due to the atmosphere. Motion parallax is how things appear to move when you move your head: close things shift more than far-away things relative to each other.
Having motion parallax behave correctly is also a big issue for passthrough AR (VR with a camera for the real world). If the camera is off-center or too far from the eye, the camera's view will shift from what the eye normally sees. The difference can cause motion sickness, coordination problems, and other issues. Lynx claims that one of the reasons for their somewhat radical optical design (right) was to reduce the camera-to-eye distance to reduce this problem. Pancake optics would give a similarly short eye-to-camera distance. Quoting from Steve Mann: My "Augmented" Life – IEEE Spectrum, Mar. 1, 2013, on the importance of camera-to-eye alignment with passthrough AR (with my bold emphasis):
My concern comes from direct experience. The very first wearable computer system I put together showed me real-time video on a helmet-mounted display. The camera was situated close to one eye, but it didn’t have quite the same viewpoint. The slight misalignment seemed unimportant at the time, but it produced some strange and unpleasant results. And those troubling effects persisted long after I took the gear off. That’s because my brain had adjusted to an unnatural view, so it took a while to readjust to normal vision.
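To put rough numbers on the offset problem Mann describes, here is a small sketch; the 70 mm camera-to-eye offset is a made-up placeholder rather than a measured value for any headset:

```python
import math

CAMERA_OFFSET_M = 0.07  # hypothetical camera-to-eye displacement (placeholder)

def passthrough_error_deg(object_dist_m: float) -> float:
    """Angular difference between where the passthrough camera and the eye
    see the same object, for a camera displaced CAMERA_OFFSET_M from the eye."""
    return math.degrees(math.atan(CAMERA_OFFSET_M / object_dist_m))

for z in (0.3, 1.0, 3.0):
    print(f"object at {z} m: ~{passthrough_error_deg(z):.1f} deg of view error")
```

The error grows rapidly inside arm's length, which is consistent with Lynx's emphasis on keeping the cameras as close as possible to the eyes' optical paths.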
Lanman points out that much of VR (and AR) takes place within the "personal space" distance of less than 10 meters. If you look at the graphs above, you will see that binocular disparity, motion parallax, convergence (vergence), and accommodation become the most important depth cues at these distances, while they are not much of a factor at longer distances. In particular, Lanman singles out accommodation as the biggest unsolved problem for VR on the list of depth cues.
Lanman has a nice chart taken from an earlier paper, Focal Surface Displays, which he co-authored at Siggraph 2017. The chart does a great job of summarizing the pros and cons of various accommodation-supporting displays. HD3 is classed as a "varifocal" display. With a varifocal display, the eyes are tracked, and then variable-focus optics change the focus of the image to agree with the eyes' vergence. The focus is changed for the whole display, making everything in focus, near and far. The problem is that things that should be out of focus must be rendered that way. While blurring may seem simple, it turns out that humans can detect when things are blurred unnaturally, as briefly discussed in Lanman's video and in papers by Meta Research's DeepFocus team. DeepFocus is Meta's name for software that accurately simulates retinal blur when rendering.
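As a sketch of the varifocal idea (the function and numbers below are illustrative; the real DeepFocus work uses a trained neural network rather than this simple thin-lens approximation), the optics are driven to focus the entire display at the tracked gaze depth, and content at every other depth must be blurred in software:

```python
def varifocal_frame(gaze_depth_m: float, pixel_depths_m: list[float],
                    pupil_mm: float = 4.0) -> list[float]:
    """Return the angular blur (milliradians) to render for each pixel.

    The LC lens stack is driven so the whole display focuses at the gaze
    depth; defocus blur ~= pupil aperture (mm) x defocus error (diopters).
    """
    focus_d = 1.0 / gaze_depth_m  # optical power commanded to the lens stack
    return [pupil_mm * abs(1.0 / d - focus_d) for d in pixel_depths_m]

# Looking at an object 0.5 m away: near content stays sharp, while the
# 4 m background needs roughly 7 mrad of synthetic blur to look natural.
print(varifocal_frame(0.5, [0.5, 1.0, 4.0]))  # -> [0.0, 4.0, 7.0]
```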
Meta has been working on Half Dome and related projects since at least as far back as 2016. One thing you have to say about Meta research: their "prototypes" look better than many finished VR and AR products 😁.
Half Dome 1 and 2 used motors to move optics, whereas HD3 uses multiple liquid crystal lenses. As shown in the GIF sequence on the right, from a September 2019 Oculus Blog and used in videos about HD3, the various LC lenses have binary-weighted focus effects. With six on-off lenses as shown, they can get 64 discrete diopter/focus settings.
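A minimal sketch of the binary weighting follows; the 0.05-diopter step is a made-up placeholder, as Meta has not published the per-lens powers. Because thin-lens powers in diopters simply add, six on/off elements worth 1, 2, 4, 8, 16, and 32 steps yield 2^6 = 64 levels:

```python
BASE_STEP_D = 0.05  # hypothetical diopter value of the least-significant lens

def lens_states(target_diopters: float, n_lenses: int = 6) -> list[bool]:
    """Quantize a requested focus change to the nearest achievable level and
    return the on/off state for each binary-weighted LC lens element."""
    level = round(target_diopters / BASE_STEP_D)
    level = max(0, min(2 ** n_lenses - 1, level))
    return [bool((level >> i) & 1) for i in range(n_lenses)]

def stack_power(states: list[bool]) -> float:
    """Thin lenses in contact: optical powers add; lens i is worth 2**i steps."""
    return sum((2 ** i) * BASE_STEP_D for i, on in enumerate(states) if on)

assert abs(stack_power(lens_states(1.30)) - 1.30) <= BASE_STEP_D / 2
```

The quantization error is at most half the least-significant step, which is one way to frame the question of whether 64 levels are enough.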
The big questions then become: are 64 focus levels enough, how much image degradation will occur from the many LC lenses, and is it manufacturable?
HD3 appears to be related to Facebook’s patent application 2020/0348528 discussed in my previous blog article Meta (aka Facebook) Cambria Electrically Controllable LC Lens for VAC?
Fresnel structures are notorious for causing diffraction due to the discontinuity at each step in the Fresnel pattern. There is also the issue of how well the LC can be controlled at each step, as any deviation from ideal will cause distortion. Structures such as wires and (mostly) transparent conductors have some impact on the light as well. Considering the Fresnel lens structure and the use of LC, one has to wonder about the image quality of an individual lens, let alone a stack of them.
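One back-of-the-envelope way to frame the diffraction concern (my framing, not an analysis from Meta's papers) is to treat the repeating facets as a grating of pitch $\Lambda$, which sends light into unwanted orders at

$$\sin\theta_m = \frac{m\lambda}{\Lambda}, \qquad m = \pm 1, \pm 2, \ldots$$

For example, green light ($\lambda \approx 532$ nm) through facets on a 1 mm pitch puts the first order about 0.5 milliradian off the intended ray, and the finer the facets, the larger the deviation angle.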
More subtle, for those who have not worked with LC, is that an LC's effect on polarized light depends on the angle of the light rays. LC structures generally work best when all the light rays are normal to the surface; their effect on the light varies as the angle varies. In the case of these lens structures, the light rays are bent at various angles depending on the power of the lenses, so rays will be traveling in somewhat arbitrary directions relative to the surface.
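This angle dependence follows from the standard uniaxial index ellipsoid: a ray whose wave normal makes angle $\theta$ with the LC optic axis sees an extraordinary index $n_e(\theta)$ given by

$$\frac{1}{n_e^2(\theta)} = \frac{\cos^2\theta}{n_o^2} + \frac{\sin^2\theta}{n_e^2}, \qquad \Delta n(\theta) = n_e(\theta) - n_o,$$

so the effective birefringence, and with it the phase profile the lens imparts, falls toward zero as rays tilt into alignment with the director. Since a lens by definition bends different rays through different angles, no single LC drive state is exactly right for all of them.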
Patent figures are much simplified, but a June 2021 SID ICDT journal paper by Meta shows an actual device with details about its structure and performance (selected figures below); this was likely part of the HD3 effort. Looking at Figure 6 (below right), we can see that the image degrades considerably from the center to the edges of the image.
The image degradation shown in Figure 6 above is for a single lens. We then need to consider the effect when several of these lenses are stacked. The publicly available videos of HD3 are only low-resolution camera shots of a short sequence, making it impossible to judge the overall image quality.
John Carmack, Consulting CTO of Oculus (is it still Oculus?), gave (or at least his avatar did) a Live Q&A in Horizon Beta at Connect 2021. In answer to a question about varifocal, Carmack seems more pessimistic than Lanman. Quoting from the transcript, with some minor cleanup to make it easier to read and with my bold highlighting:
26:40 [Question] How important are varifocal displays for the future of VR?
[Carmack] So varifocal is something that we've demonstrated with some prototype displays, and again, my personal opinions do not necessarily match those of lots of other people here.
While a perfect varifocal system obviously makes the system better, there are two aspects to this: the dollar cost, and the volume cost, weight cost, thermal cost, and processing cost that you have to spend on this. But then there's also the question that imperfect varifocal may not be that much better. There's value when it's done right.
But what’s the shape of the approximation to that? Because it’s possible that mediocre quality varifocal, either because you’ve only got a limited number of adjustment ranges or because the determination of what you’re focusing on is imprecise, could wind up not having most of that value or even dip negative.
If the ability to eye track onto something to determine the distance you want the focal plane to work on, if that’s not really accurate or fast responding, you could be looking at something, and it gets blurrier than if you never had anything at all on it.
So I don’t think we’re at a point right now where we can say that we have a perfect line of sight on really doing it to a guaranteed high value, let alone net value range. There are a lot of problems going from we had systems that worked well on someone in the lab, and then they spread out trying broader ranges of people. . . .
The other thing about varifocal is that I would make the point that, for a lot of the things like dealing with screens, if we just put our screens where the focal plane is, then for those use cases varifocal could only be a net negative. Hence, it's only for use cases outside of that where it has potential positives, and it still may not wind up being a true positive or a net positive.
Carmack seems to be tamping down expectations and hints at technical disagreement between various people at Oculus/Meta concerning varifocal. He makes fairly clear that eye tracking is not yet good enough. He is also concerned about whether any discrete varifocal technology will work well enough.
HD3 uses pancake (folded) optics combined with variable Fresnel LC lenses. Meta put out a paper at SPIE Photonics Europe, Viewing optics for immersive near-eye displays: pupil swim/size and weight/stray light, that includes comparisons between pancake and Fresnel optics (Lanman is one of 16 authors of the 18-page paper).
The set of figures below left shows the diffractive effects of Fresnel lenses. Table 1 below shows the poor contrast of pancake optics. Table 2 gives a high-level summary of the pros and cons of smooth refractive optics versus Fresnel and pancake optics.
Figure 12 from the same paper shows various ghosts caused by pancake optics.
As discussed in Meta (aka Facebook) Cambria Electrically Controllable LC Lens for VAC? and shown below, the display light passes through various surfaces, some of them from two directions. Any light scattered from these surfaces will then be magnified (with a change of focus) and translated by any subsequent optics in the light path, causing multiple ghosts of different sizes, locations, and focuses, as shown in Figure 12 above.
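A rough way to see why the ghost count multiplies: every pair of partially reflective surfaces in the path can generate a double-bounce ghost with its own intensity, magnification, and focus. The surface list and reflectances below are made-up placeholders, not measured values from Meta's optics:

```python
from itertools import combinations

# Hypothetical per-surface stray reflectances (placeholders, not measured data).
surfaces = {"display": 0.04, "LC lens stack": 0.02, "quarter-wave film": 0.005,
            "half mirror": 0.50, "reflective polarizer": 0.02}

# Each unordered pair of surfaces contributes one first-order (double-bounce)
# ghost whose relative intensity scales with the product of the reflectances.
for (a, ra), (b, rb) in combinations(surfaces.items(), 2):
    print(f"ghost {a} <-> {b}: relative intensity ~{ra * rb:.4f}")
```

With just five surfaces, that is already ten distinct ghost paths, which is consistent with the profusion of ghosts in Figure 12.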
With HD3 having both Fresnel LC lenses and pancake optics, one would expect to see a combination of the problems of both types of optics. I should note that these problems may not be readily apparent in well-crafted demo videos.
The Pros for Cambria Being Varifocal:
The Cons for Cambria Being Varifocal:
On balance, it does not seem that varifocal/HD3 is ready for a higher-end product in the next few years. Meta did state and show that Cambria would be based on pancake optics, but as Meta's recent paper shows, pancake optics make compromises in contrast and cause ghosts that are inconsistent with a higher-end product.
There can be a wide gap between "working" and meeting customer expectations for image quality and cost. It seems, particularly in AR and VR, that a technology can be good enough to impress people but still be far from being a product. Solving what seems like the "last 10%" to make a technology product-worthy may take more than 100 times the effort.
At some point, Meta needs to "land" some of their massive advanced R&D into products. Quest 1 and 2 seem to be known more for Meta's ability to sell at or below cost to gain market share than for pushing the technology forward. Meta's current Ray-Ban smart glasses showed how much of an AR display could (or, in this case, could not) fit in a pair of Ray-Bans.
US20200174255A1 – Optical Systems with Multi-Layer Holographic Combiners
Hi, Mr. Karl Guttag
I've been a big fan of your blog. I wonder what you think about this holographic combiner design from Apple, published in July 2020.
They are basically using a transmission HOE to replicate light at multiple output angles and then a reflection HOE to focus those rays to create an eyebox.
The design looks really cool, at least on paper. I would love to know what you think about it.
Thanks.
Hi Karl, this is a great analysis article, as always. To me, it seems more likely that Meta is using a geometric phase lens (or Pancharatnam-Berry phase lens), which could have better diffraction performance than a Fresnel LC lens. It has two focal depths with high selectivity on polarization, so switchable focal depth can possibly be achieved by switching the incident polarization. Can you comment on this technology as well?
You make a very good point. Doug Lanman said that they were using Pancharatnam-Berry phase lenses at this point in this video: https://youtu.be/NSf-kZ5OV5A?t=735. In this video, Lanman mentioned the work of Afsoon Jamali, who seems to be associated with the Fresnel work I cited: https://youtu.be/LQwMAl9bGNY?t=2073, but that video too goes on to cite Pancharatnam-Berry phase lenses. Digging some more, it turns out that Afsoon Jamali was also contributing to Facebook/Meta's Pancharatnam-Berry phase work.
Thanks, but you just made more work for me as I think you are correct. :-).
Hi Karl, thanks for your input on hd3.
What about ImagineOptix (doing business with Valve and probably Apple) and their liquid crystal polymer optics, which claim sub-millimeter thickness with no, or at least very low, chromatic aberration (monochromatic for now, I think, with RGB in progress)? They also claim to have the brightest polarizers.
Have you followed what this company is doing, and if you have, do you think it deserves as much interest as it seems to attract? It seems it would solve many of Meta's issues.
>Given the many layers, and a polarization loss per layer, I feel like transmission losses would make this tech untenable for untethered systems.
>Also agree with John Carmack, gaze and fixation location aren’t solved to the degree needed to make this work.
>The Cutting Vishton paper doesn’t talk about which combination of cues actually is the best, feels like Doug oversold a plot.
> AFAICT, FB never presented anything new with GPL arch. Finite tunability of LC cells is a bitch, and nobody has really solved it. I would bet on Apple here, given their LC expertise.
>Their deepfocus stuff is mostly academic if they don’t know how to design a SOC or pipelined compute to make that work. And from what I hear, their original SoC hires quit to go back to wherever they came from (Apple?).
All this to say, agree w your conclusions! But it really depends on what FB execs want to sell as a “quality product”. Also pretty confident they are just going to reuse Apple’s definition given how many engineers they have tried to hire from them.