Update June 14, 2023 PM: It turns out that Apple’s news release states, “This technological breakthrough, combined with custom catadioptric lenses that enable incredible sharpness and clarity . . . ” Catadioptric means a combination of refractive and reflective optical elements. This means that they are not “purely refractive” as I first guessed (wrongly). They could be pancake or some variation of pancake optics. Apple recently bought Limbak, an optics design company known for catadioptric designs including those used in Lynx. They also had what they called “super pancake” designs. Assuming Apple is using a pancake design, then the light and power output of the OLEDs will need to be about 10X higher.
UPDATE June 14, 2023 AM: The information on the battery, as posted by Twitter user Kosutami, turned out to be a hoax/fake. The battery shown was that of a Meta Quest 2 Elite, as shown in a Reddit post of a teardown of the Quest 2 Elite. I still think the battery capacity of the Apple Vision Pro is in the 35 to 50Wh range based on the size of the AVP’s battery pack. I want to thank reader Xuelei Zhang for pointing out the error. I have red-lined and X’d out the incorrect information in the original article. Additionally, based on the battery’s size, Charger Labs estimates that the Apple Vision Pro could be in the 74Wh range, but I think this is likely too high based on my own comparison.
I shot a picture with a Meta Quest Pro (as a stand-in) to judge size and perspective against Apple’s picture of the battery pack. In the picture is a known 37Wh battery pack. This battery pack is in a plastic case with two USB-A and one USB-micro connectors, which are not on the Apple battery pack (there are likely some other differences internally).
I tried to get the picture with a similar setup and perspective, but this is all very approximate to get a rough idea of the battery size. The Apple battery pack looks a little thinner, less wide, and longer than the 37Wh “known” battery pack. The net volume appears to be similar. Thus I would judge the Apple battery to be between about 35Wh and 50Wh.
I’ve been watching and reading the many reviews by those invited to try (typically for about 30 minutes) the Apple Vision Pro (AVP). Unfortunately, I saw very little technical analysis and very few reviewers with deep knowledge of the issues of virtual and augmented reality. At the least, they didn’t mention what seemed to me to be obvious issues and questions. Many of those I saw were either fans or people grateful to be selected for an early look at the AVP who wanted (or needed) to be invited back by Apple.
Unfortunately, I didn’t see a lot of “critical thinking” or understanding of the technical issues, as opposed to having “blown minds.” Specifically, while many discussed the issue of the uncanny valley with the face capture and EyeSight display, no one even mentioned the issues of variable focusing and Vergence Accommodation Conflict (VAC). The only places I have seen it mentioned are the Reddit AR/VR/MR and Y-Combinator forums. On June 4th, Brad Lynch reported on Twitter that Meta would present their “VR headset with a retinal resolution varifocal display” paper at Siggraph 2023.
As I mentioned in my AWE 2023 presentation video (and full slide set here), I was doubtful, based on what was rumored, that Apple would address VAC. Like many others, Apple appears to have ignored this well-known and well-documented human mechanical and visual problem with VR/MR. As I have said many times, “If all it took were money and smart people, it would be here already. Apple, Meta, etc., can’t buy different physics,” and I should add, “they are also stuck with humans as they exist, with their highly complex and varied visual systems.”
Treat the above as a “teaser” for some of what I will discuss in Part 2. Before discussing the problems I see with the Apple Vision Pro and its prospective applications in Part 2, this part will discuss what the AVP got right over the Meta Quest Pro (MQP).
I know many Apple researchers and executives read this blog; if you have the goods, how about arranging for someone that understands the technology and human factor issues to evaluate the AVP?
I want to highlight three publications that brought up some good issues and dug at least a little below the surface. SadlyItsBradley had an hour-and-49-minute live stream discussing many issues, particularly the display hardware and the applications relative to VR (the host, Brad Lynch, primarily follows VR). The Verge Podcast had a pre-WWDC discussion (which included some Meta Quest 3) and a post-WWDC discussion that brought up issues with the presented applications. I particularly recommend listening to Adi Robertson’s comments in the “pre” podcast; she is hilarious in her take. Finally, I found Snazzy Labs’ 13-minute explanation about the applications put into words some of the problems with the applications Apple showed; in short, there was nothing new that had not failed before, and not just because the hardware was not good enough.
Apple’s AVP has shown up Meta’s MQP in just about everyone’s opinion. The Meta Quest Pro is considered expensive, with many features poorly executed. The MQP cost less than half as much at introduction (less than 1/3rd after the price drop) but is a bridge to nowhere. The MQP perhaps would better be called the Quest 2.5 (i.e., halfway to the Quest 3). Discussed below are specific hardware differences between the AVP and MQP.
I will be critical of many of Apple’s AVP decisions, but I think all the comments I have seen about the price being too high completely miss the point. The price is temporal and can be reduced with volume. Apple or Meta must prove that a highly useful MR passthrough headset can be made at any price. I’m certainly not convinced yet, based on what I have seen, that the AVP will succeed in proving the future of passthrough MR, but the MQP has shown that halfway measures fail.
The people commenting on the AVP’s price have been spoiled by looking at mature rather than new technology. Take, as just one example, the original retail price of the Apple II computer: US$1,298 (equivalent to $6,268 in 2022) with 4KB of RAM and US$2,638 (equivalent to $12,739 in 2022) with the maximum 48KB of RAM (source: Wikipedia). As another example, I bought my first video tape recorder in 1979 for about $1,000, which is more than $4,400 adjusted for inflation, and a blank 1.5-hour tape was about $10 (~$44 in 2023 dollars). The problem is not price but whether the AVP is something people will use regularly.
The Meta Quest Pro (MQP) looks like a half-baked effort compared to the AVP. The MQP’s passthrough mode is comically bad, as shown in Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough. Apple’s AVP passthrough will not be “perfect” (more on that in Part 2), but Apple didn’t make something with so many obvious problems.
The MQP used two IR cameras with a single high-resolution color camera in the middle to try and synthesize a “virtual camera” for each eye with 3-D depth perception. The article above shows that the MQP’s method resulted in a low-resolution and very distorted view. The AVP has a high-resolution camera per eye, with more depth-sensing cameras/sensors and much more processing to create virtual camera-per-eye views.
I should add that there are no reports I have seen on how accurately the AVP creates 3-D views of the real world, but by all reports, the AVP’s passthrough is vastly better than that of the MQP. A hint that all is not well with the AVP’s passthrough is that the forward main cameras are poorly positioned (to be discussed in Part 2).
The next issue is that if you target “business applications” and computer monitor replacement, you need at least 40 pixels per degree (ppd), preferably more. The MQP has only about 20 pixels per degree, meaning much less readable text can fit in a given area. Because the fonts are bigger, the eyes must move further to read the same amount of text, thus slowing down reading speed. The FOV of the AVP has been estimated to be about the same as the MQP, but the AVP has more than 2X the horizontal and vertical pixels, resulting in about 40 ppd.
A note on measuring Pixels per Degree: Typically, VR headset measurements of FOV include the binocular overlap from both eyes. When it comes to measuring “pixels per degree,” the measurement is based on the total visible pixels divided by the FOV in the same direction for a single eye. The single-eye FOV is often not specified, and there may be pixels that are cut off based on the optics and the eye location. Additionally, the measurement has a degree of variability based on the amount of eye relief assumed.
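To make the note above concrete, below is a minimal sketch of the single-eye pixels-per-degree arithmetic. The per-eye resolution and FOV numbers are rough, illustrative figures (not confirmed AVP or MQP specifications), and the calculation ignores lens distortion and eye-relief effects.

```python
def pixels_per_degree(pixels_one_eye: int, fov_deg_one_eye: float) -> float:
    """Visible pixels for one eye divided by that same eye's FOV along the same axis."""
    return pixels_one_eye / fov_deg_one_eye

# Illustrative, unconfirmed numbers:
# AVP-like: ~3,660 horizontal pixels per eye over a ~90-degree single-eye FOV
print(round(pixels_per_degree(3660, 90), 1))  # ~40.7 ppd
# MQP-like: ~1,800 horizontal pixels per eye over a similar ~90-degree FOV
print(round(pixels_per_degree(1800, 90), 1))  # ~20.0 ppd
```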
Having at least 40 pixels per degree is “necessary but not sufficient” for supporting business applications. I believe that other visual human factors will make the AVP unsuitable for business applications beyond “emergency” situations and what I call the “Ice Bucket Challenges,” where someone wears a headset for a week or a month to “prove” it could be done and then goes back to a computer monitor/laptop. I have not seen any study (having looked for many years), and Apple presented none, that suggests the long-term use of virtual desktops is good for humans (if you know of one, please let me know).
Ironically, in the watchOS video, only a few minutes before the AVP announcement, Apple discussed (linked in WWDC 2023 video) how they implemented features in watchOS to encourage people to go outside and stop looking at screens, as it may be a cause of myopia. I’m not the only one to catch this seeming contradiction in messaging.
The AVP’s Micro-OLED should give better black levels/contrast than the MQP’s LCD with a mini-LED locally dimmable backlight. Local dimming is problematic and dependent on scene content. While the mini-LEDs are more efficient in producing light, much of that light is lost when going through the LCD, and typically only about 3% to 6% of the backlight makes it through the LCD.
While Apple claims to be making the Micro-OLED CMOS “backplane,” by all reports, Sony is applying the OLEDs and performing the Micro-OLED assembly. Sony has long been the leader in Micro-OLEDs used in camera viewfinders and birdbath AR headsets, including Xreal (formerly Nreal — see Nreal Teardown: Part 2, Detailed Look Inside).
The color sub-pixel arrangement in the WWDC videos shows a decidedly smaller light emission area, with more black space between pixels, than the older Sony ECX335 (shown with pixels roughly to scale above). This suggests that Apple didn’t need to push the light output (see optics efficiency in the next section) and that the smaller emitters support more efficient light collection (semi-collimation) with the micro-lens arrays (MLAs) reportedly used on top of the AVP’s Micro-OLED.
John Carmack, former Meta Consulting CTO, gave some of the limitations and issues with MQP’s Local Dimming feature in his unscripted talk after the MQP’s introduction (excerpts from his discussion):
21:10 Quest Pro has a whole lot of back lights, a full grid of them, so we can kind of strobe them off in rows or columns as we scan things out, which lets us sort of get the ability of chasing a rolling shutter like we have on some other things, which should give us some extra latency. But unfortunately, some other choices in this display architecture cost us some latency, so we didn’t wind up really getting a win with that.
But one of the exciting possible things that you can do with this is do local dimming, where if you know that an area of the screen has nothing but black in it, you could literally turn off the bits of the backlight there. . . .
Now, it’s not enabled by default because to do this, we have to kind of scan over the screens and that costs us some time, and we don’t have a lot of extra time here. But a layer can choose to enable this extra local dimming. . . .
And if you’ve got an environment like I’m in right now, there’s literally no complete, maybe a little bit on one of those surfaces over there that’s a complete black. On most systems, most scenes, it doesn’t wind up actually benefiting you. . . .
There’s still limits where you’re not going to get, on an OLED, you can do super bright stars on a completely black sky. With local dimming, you can’t do that because if you’ve got a max value star in a min value black sky, it’s still gotta pick something and stretch the pixels around it. . . . We do have this one flag that we can set up for layer optimization.
John Carmack Meta Connect 2022 Unscripted Talk
Update June 14, 2023 PM: It turns out that Apple’s news release states, “This technological breakthrough, combined with custom catadioptric lenses that enable incredible sharpness and clarity . . . ” Catadioptric means a combination of refractive and reflective optical elements. This means that they are not “purely refractive” as I first guessed (wrongly). They could be pancake or some variation of pancake optics. Apple recently bought Limbak, an optics design company known for catadioptric designs, including those used in Lynx. They also had what they called “super pancake” designs. Assuming Apple is using a pancake design, then the power output of the OLEDs will need to be about 10X higher.
Apple appears to have used a 3-element aspherical optic rather than the pancake optics used in the MQP and many other new VR designs. See this blog’s article Meta (aka Facebook) Cambria Electrically Controllable LC Lens for VAC?, which discusses the efficiency issues with pancake optics. Pancake optics are particularly inefficient with Micro-OLED displays, as used in the AVP, because they require the unpolarized OLED light to be polarized for the optics to work. This polarization typically loses about 55% of the light (45% transmissive). Then there is a 50% loss on the transmissive pass and another 50% loss on the reflection off the 50/50 semi-mirror in the pancake optics, which, when combined with the polarization loss, results in only about 11% of the OLED’s light making it through the pancake optics. It should be noted that the MQP currently uses LCDs that output polarized light, so it doesn’t suffer the polarization loss with pancake optics but still has the 50/50 semi-mirror losses.
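As a rough check of the loss chain described above, here is a minimal sketch of the throughput arithmetic for pancake optics, using the approximate percentages quoted in this section (ballpark figures, not measured values).

```python
def pancake_throughput(polarizer_transmission: float) -> float:
    """Fraction of display light surviving pancake optics: input polarizer (if the
    display is unpolarized) x ~50% transmissive pass x ~50% reflective pass of the
    50/50 semi-mirror. Other surface and coating losses are ignored."""
    return polarizer_transmission * 0.5 * 0.5

oled_pancake = pancake_throughput(0.45)  # unpolarized OLED: ~45% survives polarization
lcd_pancake = pancake_throughput(1.0)    # LCD already outputs polarized light

print(f"Micro-OLED + pancake: ~{oled_pancake:.1%}")  # ~11%
print(f"LCD + pancake:        ~{lcd_pancake:.1%}")   # ~25%, before the LCD's own ~3-6% backlight throughput
```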
The AVP uses four hand-tracking cameras, with the two extra cameras supporting the tracking of hands at about waist level. Holding your hand up to be tracked has been a major ergonomic complaint of mine since I first tried the HoloLens 1. Anyone who knows anything about ergonomics knows that humans are not designed to hold their hands up for long periods. Apple seems to be the first company to address this issue. Additionally, by all reports, the hand tracking is very accurate and likely much better than the MQP’s.
According to all reports, the AVP’s eye tracking is exceptionally good and accurate. Part of the reason for this better eye tracking is likely better algorithms and processing. On the hardware side, it is interesting that the AVP’s IR illuminators and cameras go through the eyepiece optics. In contrast, on the Meta Quest Pro, the IR illuminators and cameras are closer to the eye on a ring outside the optics. The result is that the AVP cameras have a more straight-on look at the eyes. {Brad Lynch of SadlyItsBradley pointed out the difference in IR illuminator and camera location between the AVP and MQP in an offline discussion.}
As many others have pointed out, the AVP uses a computer-level CPU+GPU (M2) and a custom-designed R1 “vision processor,” whereas the MQP uses high-end smartphone processors. Apple has pressed its advantage in hardware design over Meta or anyone else.
The AVP (below left) has two squirrel-cage fans situated between the M2 and R1 processor chips and the optics. The AVP appears to have about a 37 Watt-hour battery (see the next section) to support the two-hour rated battery life, which suggests that the AVP “typically” consumes about 18.5 Watts. This is consistent with people noticing very warm/hot air coming out of the top vent holes. The MQP (below right) has a similar dual-fan cooling arrangement. The MQP has a 20.58 Watt-hour battery, which Meta rates as lasting 2-3 hours.
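For reference, the back-of-the-envelope power estimate above is simply battery energy divided by rated runtime. A minimal sketch, using the battery capacities and runtimes discussed in this article (estimates, not official specifications):

```python
def average_power_watts(battery_wh: float, rated_hours: float) -> float:
    """Average draw implied by draining battery_wh Watt-hours over rated_hours hours
    (ignores cable/conversion losses and any operation beyond the rated runtime)."""
    return battery_wh / rated_hours

print(average_power_watts(37.0, 2.0))   # AVP estimate: ~18.5 W (~37 Wh over 2 hours)
print(average_power_watts(20.58, 2.0))  # MQP at the 2-hour end of its rating: ~10.3 W
print(average_power_watts(20.58, 3.0))  # MQP at the 3-hour end of its rating: ~6.9 W
```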
Because the AVP uses a Micro-OLED and a much more efficient optical design, I would expect the AVP’s OLEDs to consume less than 1W per eye and much less when not viewing mostly white content. I, therefore, suspect that much of the power in the AVP is going to the M2 and R1 processing. In the case of Meta’s MQP, I suspect that a much higher percentage of the system power goes to pushing light through the inefficient optical architecture.
It should be noted that the AVP displays about 3.3 times the pixels, has more and higher-resolution cameras, and supports much higher resolution passthrough. Thus the AVP is moving massively more data, which also consumes power. So while it looks like the AVP consumes about double the power, the power “per pixel” is about 1/3rd less than the MQP’s and probably much less when considering all factors. Considering that the AVP’s processing appears much more advanced, this demonstrates Apple’s processing efficiency.
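Spelling out the “per pixel” comparison above with the rough numbers used in this article (about 2x the power for about 3.3x the pixels; illustrative estimates, not measurements):

```python
avp_watts, mqp_watts = 18.5, 10.0  # estimated "typical" draws from the battery discussion
pixel_ratio = 3.3                  # AVP displays roughly 3.3x the pixels of the MQP

relative_per_pixel = (avp_watts / mqp_watts) / pixel_ratio
print(f"AVP power per pixel is ~{relative_per_pixel:.0%} of the MQP's")  # ~56%, i.e., roughly 40%+ less
```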
CORRECTION (June 14, 2023): Based on information from reader Xuelei Zhang, I was able to confirm that the widely reported tweet of the so-called Apple Vision Pro battery was a hoax and that what was shown is the battery used in a Meta Quest 2 Elite. You can see in the picture on the right how the part number is the same and there is the metal slug with the hole, just like the supposed AVP battery. I still think, based on its size, that the battery pack is similar to a 37Wh battery or perhaps larger. In an article published today, Charger Labs estimates that the Apple Vision Pro could be in the 74Wh range, which is certainly possible but appears to me to be too big. It looks to me like the battery is between 35Wh and 50Wh.
Based on the available information, I would peg the battery to be in the 35 to 50Wh range and thus the “typical” power consumption of the AVP to be in the 17.5W to 25W range, or about two times the Meta Quest Pro’s ~10W.
Numerous articles and videos, which I think are erroneous, report that the AVP has a 4789mAh/18.3Wh battery. Going back to the source of those reports, a Tweet by Kosutami, it appears that the word “dual” was missed. Looking at the original follow-up Tweets, the report is clear that two cells are folded about a metal slug and, when added together, would total 36.6Wh. Additionally, comparing the AVP’s battery to scale with the headset, it appears to be about the same size as a 37Wh battery I own, which is what I was estimating before I saw Kosutami’s tweet.
Importantly, if the AVP’s battery capacity is doubled, as I think is correct, then the estimated power consumption of the AVP is about double what others have reported, or about 18.5 Watts.
The MQP battery was identified by iFixit (above left) to have two cells that combine to form a 20.58Wh battery pack, or just over half that of the AVP.
With both the MQP and AVP claiming similar battery life (big caveat, as both are talking “typical use”), it suggests the AVP is consuming about double the power.
Based on my quick analysis of the optics and displays, I think the AVP’s displays consume less than 1W per eye, or less than 2W total. This suggests that the bulk of the ~18W is used by the two processors (M2, R1), data/memory movement (often ignored), the many cameras, and the IR illuminators.
In Part 2 of this series, I plan to discuss the many user problems I see with the AVP’s battery pack.
This blog does not seriously follow audio technology, but by all accounts, the AVP’s audio hardware and spatial sound processing capability will be far superior to that of the MQP.
In many ways, the AVP can be seen as the “Meta Quest Pro done much better.” If you are doing more of a “flagship/Pro product,” it better be a flagship. The AVP is 3.5 times the current price of the MQP and about seven times that of the Meta Quest 3, but that is largely irrelevant in the long run. The key to the future is whether anyone can prove that the “vision” for passthrough VR at any price is workable for a large user base. I can see significant niche applications for the AVP (support for people with low vision is just one, although the display resolution is overkill for this use). But as I will discuss next time, there are giant holes in the applications presented.
If the MQP or AVP solved the problems they purport to solve, the price would not be the major stumbling block. As Apple claimed in the WWDC 2023 video, the feature set of the AVP would be a bargain for many people. Time and volume will cure the cost issues. My problem (a teaser for Part 2) is that neither will be able to fulfill the vision they paint, and it is not a matter of a few thousand dollars or a few more years of development.
Great piece Karl, look forward to part II. Also, would you be interested in an interview on ‘All Things 3D’?
Excellent analysis.
Thank you for the analysis beyond the hype.
Regarding the VAC: I wonder how relevant this is to use cases like “spatial computing” where you mostly focus on objects that are at medium distances? I suppose you would not have a virtual screen close to your eyes.
Since virtual screens and lenses approximate 1.5m or so distances ever since the DK1 (perhaps with pancake lenses this range is different, not sure), I imagine that the comfort when replacing desktop monitors and home televisions will not cause the same VAC fatigue as when playing games or looking around in a 3D space.
I understand that they will probably target about 1.5 to 2m like everyone else. But then what do you do about objects, like your hands, that are supposed to be close? How do you transition between things that are near and far?
I’m also not sure what happens if you make a virtual monitor where your optical focus point is far away. Does it also make it harder to mentally focus (they used to say so, but I have not seen a recent study)? There is a whole class of things that “just don’t work right.” Humans do a lot of things with their eyes, heads, and hands without deliberately thinking about it.
Overarching everything to me is that while we have had consumer TV glasses and even many monitor glasses over the last 30 years, I never have seen anyone wearing them. I’m really suspicious about the use on airplanes. Resolution is not a problem for TV/Movies, yet people would prefer to watch a small image on a smartphone.
Dear Karl, Part NO. 345-00684-A is the battery of the Quest 2, not the Apple Vision Pro; I have disassembled many Quest 2s. The source of the news is the personal Twitter of @Kosutami, and it is fake news. The original pictures of the battery parts are from a flea market app named Xianyu, which is operated by the Alibaba Company.
Thanks, but with one slight addition/correction. The battery pack Part NO. 345-00684-A is for the “Quest 2 Elite” and not the normal Quest 2. I confirmed this with the Reddit post: https://www.reddit.com/r/OculusQuest/comments/ksan42/oculus_quest_2_elite_battery_strap_disassemble/
I was at first thrown because the iFixit teardown shows the regular Meta Quest 2 to have battery model 345-00550-A with a rated capacity of 14 Wh (see https://www.ifixit.com/Guide/Oculus+Quest+2+Disassembly/139759 and https://guide-images.cdn.ifixit.com/igi/UDyNvbwxKGrHS2d2.huge). But in looking further (particularly since the battery numbers were so similar), I was able to find the Reddit post which confirms your information, that the Kosutami post is a fake.
I will be updating the article with this new information.
Thanks Karl for another thorough analysis!
A question on your “Note on measuring Pixels per Degree”: shouldn’t it be the other way around? That is, the number of visible pixels divided by the FOV for a single eye?
Oops, Thanks. I have corrected the definition.
Thanks for your articles, always very detailed and interesting.
On the myopia subject, at least the focal distance of the Vision Pro may be less stressful for the eyes than the 30-60cm away watch/computer screens.
Don’t know how they tuned it given their envisioned use cases, maybe 1.5-2m like other VR headsets.
I guess it’s still not good to look at that all day anyway, better take breaks.
This is correct. What matters for vision is the focus distance, not the screen distance. VR headsets with infinite focus distance (precisely because of the lack of varifocal) are less stressful for the eyes.
The weight/front heaviness was also called out by reviewers with marks on the forehead. I guess the design teams wanted metal for a premium feel. I don’t think that was a good idea, especially in combination with the default headstrap. Getting a headstrap in the style of PlayStation VR one was my first Quest 2 update. If they target office work, weight over longer periods becomes an issue. For office work, I would expect a free-floating design like Quest Pro and Hololens 2 to be much better since both headsets also allow for having peripheral vision. (You can do that on Quest 2 by removing the facial interface and using the pass-through mode; suddenly even Quest 2 feels like an AR headset.) Both the weight and peripheral vision issues could be fixed to some extent by offering custom headstraps (halo design) and open-face interfaces.
These are interesting advancements, likely highly useful for fully digital content interactions and experiences, yet they do not eliminate skepticism about the user’s ability to confidently interact with the physical world using passthru video, particularly at a variety of ever-changing depths. Considering the nature of visual attention and physical interaction, with constant and dramatic shifts between near, far, and mid distances on a second-to-second basis, and that the biological perceptual sensorium adapts to these conditions in concert with no latency, there is a lot to keep up with in simulating seeing and interacting with the real world! The situational awareness of dynamic environmental happenings at variable focal distances may prove challenging with passthru video for confident and accurate interaction with physical objects: tools, touching, lifting, reaching, etc. Will Part 2 about user experiences provide insights about technical and biological factors for physical-world interactions like targeting and opening doors while moving through space, fitting an eyeglasses screw, or pouring a glass of orange juice? Your work is very much appreciated!
“I have not seen any study (having looked for many years), and Apple presented none that suggests the long-term use of virtual desktops is good for humans (if you know of one, please let me know).”
We also have not had 40ppd 90Hz displays. I think that the use case shown for virtual desktops, where screens are ~1m away with high resolution and hence less strain on the focus, will make VAC unnoticeable for most of the population, but I guess we will see in 2024.
Karl, I’m not sure if you are aware, but both UploadVR and RoadToVR did hands on time with the AVP and I would consider both to be knowledgeable of VR hardware and critical reviewers (though Apple’s short, on-the-rails demo didn’t really give much time for exploration or comparison):
https://www.roadtovr.com/apple-vision-pro-xr-hands-on-preview/
Great Article Karl! Big Fan! But isn’t it Vergence Accommodation Conflict, instead of Vegence Accommodation Conflict (VAC)? Or did I miss something here?
If you watch the reveal trailer carefully it looks to me like they are using some software tricks to simulate focal depth. I don’t know how well it actually works but with that many pixels it might be quite captivating. There is definitely some changing blur as the guy pulls his messages screen towards him and as (I assume) he focuses on the messages after being focused on the pass through background it looks like the background feed zooms out a tiny bit to simulate the change in focus to the near field.
Keeping the hands down in your lap for most of the interactions means less shifting between short and medium depths… the biggest challenge would be that virtual keyboard, but if they can track your eyes as well as everyone says, I suspect the blur/motion fakery would simulate shifting focal depth between keyboard and virtual screen better than has been done to date.
They have people using macs through the pass through feed and looking at their apple watch through the feed… and those will be the most challenging interactions, but is it possible with the lidar and eye tracking combined that they could process that feed in real-time and blur the far away things if you are looking at your watch on your wrist?
I guess I’m asking – do you think we will ever get to a time where we’ll have so many pixels to play with that we could simulate variable focal depth without any lenses?
Apple: “The processing unit is designed to determine based on the distance of the point of regard a region which is to be shown in focus in the rendered virtual image, wherein the processing unit is further designed to render the virtual images accordingly to simulate the depth of focus for the whole image which a human eye would observe if it were seeing a real object at the same 3D coordinates as the point of regard in a real scene. By calculating a focusing distance by vergence (by the eye tracker) it is possible to realistically simulate focusing accommodation by the user. Further, a depth of focus simulation is possible which follows where the user is actually looking in the virtual scene, instead of pre-defined focusing distance, thus simulating a user’s own focus accommodation.”
US20160267708A1
Thanks for the reference.
Quoting from the patent, “By calculating a focusing distance by vergence (by the eye tracker) it is possible to realistically simulate focusing accommodation by the user. Further, a depth of focus simulation is possible which follows where the user is actually looking in the virtual scene, instead of pre-defined focusing distance, thus simulating a user’s own focus accommodation.” What I don’t see in this statement (I only looked through it quickly), or elsewhere in the patent, is how they get the focus correct for where the user is looking. You can simulate being out of focus, but you can’t simulate being in focus.
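For readers wondering how a “focusing distance by vergence” can be derived from eye tracking, here is a minimal geometric sketch (my own illustration of the general idea, not Apple’s method): given the interpupillary distance, the convergence angle between the two gaze rays implies a fixation distance.

```python
import math

def vergence_distance_m(ipd_m: float, convergence_angle_deg: float) -> float:
    """Fixation distance from the interpupillary distance and the total convergence
    angle between the two eyes' gaze rays: distance = (IPD/2) / tan(angle/2)."""
    half_angle_rad = math.radians(convergence_angle_deg) / 2.0
    return (ipd_m / 2.0) / math.tan(half_angle_rad)

# Illustrative numbers only: a 63 mm IPD with a ~2.4-degree convergence angle
# implies a fixation distance of roughly 1.5 m.
print(round(vergence_distance_m(0.063, 2.4), 2))
```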
Software can in theory “defocus” to give some depth cue, but it can’t make things that are optically out of focus appear in focus. So, at least in theory, if one had enough pixels and variable physical/optical focusing, they might be able to give a feeling of depth. But they need the variable focus to address the VAC issue.
It is tricky (and I got caught on the 3-lenses in the video) to read too much into what they show as “simulated” views. These are, in essence, artist interpretations.
There are displays that are “Maxwellian” and don’t require focusing (such as laser scanning and very high f-number optics like pinholes) but have massive other issues, such as a tiny eyebox (see: https://kguttag.com/2021/07/13/exclusive-eyeway-vision-part-1-foveated-laser-scanning-display/#maxwellian). But with “Newtonian” displays/optics, focus is a characteristic of the angles of the light, and that requires variable focus optics. BTW, Gordon Wetzstein of Stanford has said in a presentation that, from the perspective of VAC comfort, a Maxwellian display is almost, but not quite, as good as variable focus.
Have you looked at what my boys are doing in Latvia with AR and 3D. If not you need to.
Great article. Though I have some nit picks.
1) Watts (W) is a unit of power NOT Watts-per-hour (W/h).
Power is the rate of energy flow. It’s measured in units of energy per unit of time. A Watt is one Joule-per-second (1 W = 1 J/s). However, few people measure energy in Joules; Watt-hours (Wh) are far more common. 1 Watt-hour = 1 Watt * 1 hour = 1 Joule-per-second * 3600 seconds = 3600 Joules.
Alternatively, if a device drains a 36 Wh battery in 2 hours, that means it consumes 36 Wh/2h = 18 Watt-hours-per-hour (NOT Watts-per-hour). The hour unit in Wh/h is clearly redundant, so you can simplify to Watts.
2) You can’t assume that the light used to illuminate an LCD pixel is polarized from the start. The light has to be generated by a back-light which is typically unpolarized, then it passes through the LCD which polarizes the backlight incurring polarization losses.
I agree with your first point about Watts being a unit of power.
I think you are quibbling on the second point. For an LCD to work, the light is always polarized at some point. Whether the initial polarization is considered part of the display or the illumination is quibbling. The LC on its own won’t polarize the light. Many illumination systems will have polarization with polarization recycling built into them. The key point is that with pancake optics, the light out of the LCD is already polarized, whereas with an OLED the light from the display is unpolarized and must be polarized to work with pancake optics.