Spreadsheet “Breaks” The Apple Vision Pro’s (AVP) Eye-Tracking/Foveation & the First Through-the-optics Pictures

Introduction

Today’s article presents just some early findings on the Apple Vision Pro (AVP). I’m working on many things related to the AVP, and it will take me a while to prepare them all for publishing. Among other things, I am trying to capture “through the optics” pictures of the AVP, and that effort is revealing interesting information about how the AVP works; the second test pattern I tried also “broke” the AVP’s foveated rendering.

Having done several searches, I have not seen any “through-the-optics” pictures of the AVP yet. They may be out there, but I haven’t found them. So, I thought I would put up a couple of my test pictures to be (hopefully) the first to publish a picture through the AVP’s optics.

Eye-Tracking-Based Display Rendering: Maybe “Too Smart for Its Own Good”

The AVP is sometimes “too smart for its own good,” resulting in bad visual artifacts. In addition to being used for selection, the AVP’s eye tracking varies the rendering resolution and corrects color issues (aberrations) in the optics by pre-processing the image. This makes the AVP tricky to photograph because a camera lens looks different to the eye-tracking system than a human eye does.

Today, I threw together some spreadsheets to check my ability to take pictures through the AVP optics. I started with two large Excel spreadsheets displayed using the AVP’s native Excel app. One spreadsheet used black text on a white background; with it, the AVP appeared to make the text and lines look “bolder/thicker” than they should, but it didn’t act that crazy. The AVP seems to be “enhancing” (not always what you want) the spreadsheet’s readability.

But then I tried inverting everything with white text and lines on a black background, and the display started scintillating in a square box that followed the eye tracking. Fortunately, the AVP’s recording captured the effect in the video below.

I want to emphasize that it is not just the camera or the AVP’s video capture that shows the problem with the foveated rendering; I see it with my own eyes. I have provided the spreadsheets below so anyone with an AVP can verify my findings. I have only tested this with Excel running on the AVP. The effect is most easily seen if you go into “View” in Excel and reduce the zoom with the “-” magnifying glass three or four times to make the text and boxes smaller.

My First Through-the-Optics Picture Experiments

With its eye-tracking-based rendering, the AVP will be tricky to capture through the optics. The tracking behaves differently with different cameras and lenses. When setting up the camera, I can see the AVP changing colors, sometimes resulting in pictures that are colored differently than what my eye sees.

It seems pretty clear that the AVP is using “foveated,” variable-resolution rendering even on still subjects like a spreadsheet. This re-rendering, driven by the eyes and by changes in the 3-D space locking (aka SLAM), is what caused the artifacts seen with the white text and lines on the black spreadsheet.

Furthermore, the resolution of the displays is definitely lower than the eye’s resolution, as you can easily see the anti-aliasing “twisted rope” rippling effect if you look at the white-on-black spreadsheet. It is the highest-rendered-resolution (“foveated”) part of the image that scintillates. I discussed this issue in Apple Vision Pro (Part 5A) – Why Monitor Replacement is Ridiculous, Apple Vision Pro (Part 5B) – More on Monitor Replacement is Ridiculous, and Apple Vision Pro (Part 5C) – More on Monitor Replacement is Ridiculous.

I should point out that if not for the foveation, the whole image would scintillate. Still, the foveated rendering makes things worse: it creates a visible square at the boundary between the foveated area and the lower-resolution region, and it changes the resolution and thickness of the text and lines across that boundary. I would argue that a more graceful degradation would be to render the whole image the same way (rendering a whole spreadsheet this way is not a processing limitation), letting the whole image scintillate rather than having boundaries where the scintillation and boldness visibly change. The key point is that the AVP’s display, while much better than almost all other VR/MR headsets, is not, as Apple puts it, “retinal resolution” (or beyond what the eye can see).
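To make the boundary effect concrete, below is a minimal NumPy sketch of my own (an illustration with made-up numbers, not Apple’s algorithm). The same white-on-black grid is resampled two different ways inside and outside a hypothetical square “foveated” region; the difference in brightness and in how the pattern shifts with sub-pixel phase is what makes the square’s edge visible.

```python
# Minimal sketch (my own illustration with assumed numbers, NOT Apple's algorithm).
import numpy as np

SRC, DST = 1024, 640          # source raster finer than the "display" raster

# White 1-pixel grid lines every 4 source pixels on a black background.
src = np.zeros((SRC, SRC))
src[::4, :] = 1.0
src[:, ::4] = 1.0

def point_sample(image, out_size, phase=0.0):
    """Nearest-neighbor resample: keeps contrast but aliases as `phase` shifts."""
    n = image.shape[0]
    idx = np.clip((np.arange(out_size) * n / out_size + phase).astype(int), 0, n - 1)
    return image[np.ix_(idx, idx)]

def area_average(image, out_size):
    """Crude box-filter resample: preserves mean brightness but softens the lines."""
    n = image.shape[0]
    idx = (np.arange(out_size) * n / out_size).astype(int)
    k = int(np.ceil(n / out_size))
    out = np.zeros((out_size, out_size))
    for dy in range(k):
        for dx in range(k):
            out += image[np.ix_(np.clip(idx + dy, 0, n - 1), np.clip(idx + dx, 0, n - 1))]
    return out / (k * k)

display = area_average(src, DST)            # "peripheral" rendering everywhere
fovea = point_sample(src, DST, phase=0.3)   # contrast-keeping "foveated" rendering
y0, y1 = DST // 3, 2 * DST // 3             # hypothetical foveated square
display[y0:y1, y0:y1] = fovea[y0:y1, y0:y1]

print("mean brightness outside square:", round(display[:y0, :].mean(), 3))
print("mean brightness inside square :", round(display[y0:y1, y0:y1].mean(), 3))
# The two regions end up with different brightness and texture, so the square's
# edge is visible even in peripheral vision; changing `phase` (eye/head motion)
# changes which source lines land on which display pixels -- the scintillation.
```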

Anyway, for the record, below are a couple of through-the-optics test pictures. The first was taken with a Canon R5 camera with a 28mm lens and “pixel shifting” to give a 400-megapixel image. Click on the crop of a very small portion of the center of that picture below to see it in full resolution.

Crop of a very small portion of the original image to show the full detail

The second image below was taken with an Olympus D mark III (Micro Four-Thirds camera) with a 17mm lens. It does not have the resolution of the R5, but the AVP’s eye tracking behaves better with this lens. This camera has a 24mp sensor, and then I used its pixel-shifting feature to capture the image at about 80 megapixels. The whole image (click to see at full resolution) is included below.

If you scroll around the full-resolution image, you can make out the pixel grid through most of the image, yet the text becomes blurrier much more quickly. Preliminarily, this seems to suggest foveated rendering. I have not had time to check yet, but I suspect the resolution falloff coincides with the squares in the white-on-black spreadsheet.

Very Small crop from the image above to show the detail

Conclusion

Designers have to be careful when it comes to applying technology. Sometimes, the same smarts that make one thing work will make others behave poorly.

The biggest part of the problem could be a bug in the AVP software or the Excel port. I’m not saying it is the end of the world, even if it is never improved. There is probably a way to “tone down” the foveated rendering to reduce this problem, but I don’t think there is any way to eliminate it, given the display resolution. At the same time, only the second test pattern I tried caused it to “break/escape.” Since it happens so readily, this problem will likely show up elsewhere. Fundamentally, it comes down to the display not having a resolution as good as human vision.

Karl Guttag

42 Comments

  1. It looks to me like the problem here is not the concept of foveated rendering but simply a bug in the implementation. The downsampling algorithm is not preserving overall brightness in this high contrast situation, making it easy to perceive the boundary even outside your fovea. It should be possible to correct the downsampling algorithm to fix this.

    • Thanks for the comment. I generally agree that the foveated region appears not to preserve brightness, which is part of the problem, but not all of it.

      The way I see it, the fundamental problem is aliasing. When you under-sample a high frequency (= high resolution), you get low-frequency (= low-resolution) effects. You can’t solve it; you have to pick from less-than-perfect options. Anything that reduces the scintillation will lower the resolution. What I think is happening is that you have two different scintillation/aliasing effects. The foveated region is scintillating much more than the non-foveated region, and the extra aliasing makes it more visible. They could blur everything to reduce the scintillation, but then everything would be blurry.

      I think they need a foveation algorithm that is smarter and has more “scope,” so it does not do the foveation in cases of high-resolution, mostly-still content. We can see more sophisticated anti-moiré (anti-aliasing) in modern high-end camera software.

      On a related subject, I would love to see how the eye saccades when presented with this type of scintillating image. I wonder if it is jumping all over the place, which would render foveated rendering ineffective or worse.

      • I don’t quite agree that it is a fundamental issue, at least as phrased. Of course, the display does not have enough resolution to be literally imperceptible to a human eye, but the world is not perfect, and that doesn’t mean we cannot improve on it. Given a particular resolution, there are smart and dumb ways to render an image, and it seems like, at least in the example presented above, you are exposing some particularly dumb parts of it. The scintillation is specifically due to bad (or a lack of) anti-aliasing. Yes, anti-aliasing will result in a blurrier image, but a blurrier yet stable image without aliasing/high frequencies is perceived much better by a human (the resolution is high enough that we aren’t talking about a complete blurry mess like a VHS tape).

        Video games have been dealing with this for a long time, and there are still improvements being made to how anti-aliasing is done. Just because we don’t have a perfect display that gets above the Nyquist limit does not mean we cannot improve the rendering to be significantly better.

      • The aliasing problem is “fundamental” for displaying high-resolution, high-contrast images. This is the case with computer-generated content like spreadsheets, charts, typical presentations, and text (i.e., “office products”). You can design the graphics in a modern video game with smooth shading and the like such that you don’t have anywhere close to the high-contrast edges of a spreadsheet.

        With something like a spreadsheet, you are definitely between a rock and a hard place when it comes to aliasing. The default Excel spreadsheet has a grid of 1-pixel-wide lines. But then what is a “1-pixel-wide line” when rendering in 3-D space? It could be anything based on distance. Technically, an “ideal sharp line” has infinite odd harmonics (a square wave in one dimension). Thus, you need infinite resolution to render them perfectly. If you use any significant antialiasing on them, they will disappear; if you don’t, they will scintillate/alias.

        Interestingly, the Apple Vision Pro uses a different, “more aggressive” eye-tracking-based scaling/resampling approach when rendering Excel directly than when rendering a bitmapped snapshot of the same spreadsheet. I think in the “native” case, it may be rendering into 3-D directly, whereas I have heard that in the bitmap case it renders into a high-resolution space and then transforms that high-resolution flat image into 3-D.

        With a bitmapped copy of the spreadsheet, I have noticed with a set of parallel black and white lines that it takes more than 2 AVP pixels per line before they will start rendering the correct number of (still blurry) lines. Anyway, there will be lots of fun cases to test.
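        To illustrate the “pixels per line” point, here is a rough 1-D toy model (a simple box-filter resampler I made up, not the AVP’s actual pipeline). It counts how many distinct bright lines survive when a set of one-source-pixel-wide white lines is resampled onto a display raster at various display-pixels-per-line ratios:

        ```python
        # Toy model with assumed numbers, not the AVP's resampler.
        import numpy as np

        def rendered_line_count(lines=20, spacing=4, display_px_per_line=1.5, threshold=0.5):
            """Source: `lines` white lines, one source pixel wide, every `spacing` pixels.
            Display: `display_px_per_line * lines` pixels box-averaging the same span."""
            src_len = lines * spacing
            src = np.zeros(src_len)
            src[::spacing] = 1.0
            dst_len = int(round(display_px_per_line * lines))
            edges = np.linspace(0, src_len, dst_len + 1)
            dst = np.array([src[int(a):max(int(a) + 1, int(b))].mean()
                            for a, b in zip(edges[:-1], edges[1:])])
            bright = dst > threshold * dst.max()
            # Each run of consecutive bright display pixels reads as one visible line.
            return int(np.sum(bright[1:] & ~bright[:-1]) + bright[0])

        for ratio in (1.0, 1.5, 2.0, 2.5, 3.0):
            count = rendered_line_count(display_px_per_line=ratio)
            print(f"{ratio:.1f} display px per line -> {count} of 20 lines rendered")
        # Below ~2 display pixels per line, adjacent lines merge or drop out and the
        # count comes up short; at 2x and above, all 20 (blurry) lines survive.
        ```

        In this idealized, perfectly aligned model, the count is already correct at exactly 2 display pixels per line; a real system with arbitrary sub-pixel alignment and 3-D warping needs extra margin, which fits the “more than 2 AVP pixels per line” behavior above.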

    • I just did a little experiment. I took a PNG (lossless) snapshot of the spreadsheet, saved it at two resolutions (4K and 1080p) on my Test Pattern Page (https://kguttag.com/test/), and the results behaved VERY differently from the “native” Excel. What I see is “normal” aliasing everywhere; there is no “foveated square” with different scintillation and brightness. Preliminarily, it suggests that something “special” is going on when Excel renders the spreadsheet natively versus when it is a “picture” of what should be the exact same content. Apple is doing something algorithmically with the way the text and lines are drawn when the spreadsheet is rendered “natively.”

      This is a form of puzzle where you try to figure out what they are doing from the way the images behave. I form hypotheses and then try to construct images to prove or disprove them.

      • Oh, interesting – using an image (and a video of a scrolling spreadsheet) to test instead was what first came to mind when I saw your YouTube video; glad you tried that.
        I also saw some flashing when people were looking at various appliance LED displays or outdoor signs. It definitely has all the regular camera constraints – looking forward to more of your findings!

        Btw, the F-35 helmet never got down to the NVG latency the customer was looking for, so I am curious how, in your opinion, Apple did here comparatively. Did they beat the defense contractors in terms of latency? The public conclusion was that the helmet was not fast enough for landing an airplane, etc., so NVGs are mechanically attached to that helmet instead. With the Apple Vision Pro there are people driving their cars, which is bonkers, yet I am curious just how good the latency and motion blur really are on this thing. Did Apple, with its special chip, do any better than others?

      • Thanks for the information.

        Driving your car with the AVP is absolutely dangerous for many reasons, including latency. I’m afraid it is only a matter of time before someone gets killed. It is also very dangerous to walk around with the AVP, as you lose all your peripheral vision, which warns you when things are approaching from the sides (like moving cars, or stationary obstacles while you are walking). Many a person has driven drunk and made it home, until one day they didn’t, killing themselves and/or someone else.

    • I’m sure there are whole swaths of applications for which this will not be a problem.

      As I often say, still images are often the hardest as your eye gets to concentrate on details and any movement is clearly an error. As the Verge Podcast put it, the Apple Vision Pro is magical until it is not.

  2. It looks like the through-the-lens images have shifted colors as “pre-emphasis” to correct for lens distortion. So I would assume they are not representative of what one would see through the lens?

    >Furthermore, the resolution of the displays is definitely lower than the eye’s resolution, as you can easily see the anti-aliasing “twisted rope” rippling effect if you look at the white-on-black spreadsheet.

    To be fair, the only thing that is certain is that you are trying to display content with a higher spatial frequency than the display can support. They seem to use local filters in the foveated region that try to retain contrast rather than average, as pointed out by james above. The result of this filtering can be seen regardless of whether the display has higher or lower resolution than the eye (as is obvious from the screenshots).

    • >It looks like the through-the-lens images have shifted colors as “pre-emphasis” to correct for lens distortion. So I would assume they are not representative of what one would see through the lens?

      The video I captured did a reasonable job of presenting the flashing effect I was seeing. It was better in color and resolution because it didn’t go through the optics, which degrade the image.

      >To be fair, the only thing that is certain is that you are trying to display content with a higher spatial frequency than the display can support. They seem to use local filters in the foveated region that try to retain contrast rather than average, as pointed out by james above. The result of this filtering can be seen regardless of whether the display has higher or lower resolution than the eye (as is obvious from the screenshots).

      I made a mistake in the statement in the article you cited (and I will be making a correction on the blog). Because they are resampling a high-contrast image, which by definition has infinite harmonics, there will always be some error, and some content will always trip up the algorithm. They are dealing well with single-pixel lines, but multiple lines seem to throw it off. What I should have said is that they need at least two times the “base frequency” (pixel resolution) of the source image to make a reasonably faithful representation and avoid the worst effects of aliasing. I discussed this issue in https://kguttag.com/2023/08/05/apple-vision-pro-part-5a-why-monitor-replacement-is-ridiculous/

      Further tests I have made since writing the article show that the AVP behaves very differently when rendering the spreadsheet “natively” in Excel versus displaying a bitmapped capture of what should be the same image. This indicates that the AVP is applying different algorithms when rendering the spreadsheet than when it simply displays an image.

      • It makes sense to me that the resampling algorithm for static content like a picture and dynamic content like an app could be different. Static content can be resampled once at high quality, while app content resampling must be done every frame in case the content changes, so it is performance critical and may use shortcuts that are not strictly correct.

      • The reason I didn’t get what the problem was when you had posted just the video was that I thought the 1080p30 video was not representative of what you see behind the optics, and I wasn’t thinking that the scintillating effects would be reduced at lower resolutions.

        But now you’re telling us that the 1080p30 video preserves more detail than looking at the 11 megapixel screens. Why is that? Reviews have been saying that the resolution is good and if it’s not as good as those videos it seems pretty bad.

        I probably don’t understand the Nyquist–Shannon sampling theorem properly. Does it say that a display with twice the resolution of the eye in each axis is enough?

      • The spotlight effect comes from the foveated region’s large square shape. You can see this square moving around as I moved my head in the recording. Part of the problem is that, for some reason, they don’t keep constant brightness in the foveated region. It looks like, perhaps to make the text more readable, they have made the text seem bolder WHEN rendering natively in Excel. I have saved bitmapped images of the same spreadsheet, and while you can still notice the foveated region, it is much less noticeable.

        I’m not saying that a 1080p video preserves more detail; it is that the effect is so big that you can see it even at lower resolution.

        The Nyquist/sampling issue boils down to every pixel in the source image straddling at least 4 pixels on the display. There are different tradeoffs between softening the image and causing aliasing artifacts. Then, when the artifacts change as you move your head (or, with foveated rendering, your eyes), all the edges change/wriggle, causing effects that are bigger than a pixel. I tried to explain the problem here: https://kguttag.com/2023/08/05/apple-vision-pro-part-5a-why-monitor-replacement-is-ridiculous/#rendering-a-dot. Generally, you are left with several bad choices.

        Nyquist theory says that to prevent aliasing, you need to sample at 2 times the highest frequency component of the source. The problem is that a line in the frequency domain is a square wave (in the direction perpendicular to the line), which means it has infinite odd harmonics, and thus it is theoretically impossible to avoid aliasing entirely when rendering it. Spreadsheets on PCs with “normal” 2-D displays “cheat” and use grid stretching to make the text and lines behave well relative to pixel boundaries; otherwise, you would have some thin lines and some thicker lines between rows. But grid stretching falls apart when rendering into 3-D space. If you put up a series of 1-pixel-wide horizontal lines, the AVP will either end up blurring them together or dropping some lines, EVEN IF the AVP’s display has 2x or more the resolution of the original image.
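        As a toy 1-D illustration of the “wriggle” (my own sketch, not the AVP’s resampler): even with an ideal linear (anti-aliased) filter, the same one-source-pixel-wide line spreads across the display pixels differently depending on its sub-pixel phase, and that phase changes every time the head (or the foveated region) moves slightly.

        ```python
        # Toy sketch: how a 1-source-pixel-wide line lands on display pixels (assumed
        # 2x display resolution, so the line is 2 display pixels wide) as its
        # sub-pixel phase shifts, e.g., from small head movements.
        import numpy as np

        def render_line(phase, line_width_display_px=2.0, n_display_px=12):
            """Splat one line onto display pixels by area coverage (ideal linear filter)."""
            start, end = 4.0 + phase, 4.0 + phase + line_width_display_px
            coverage = np.zeros(n_display_px)
            for i in range(n_display_px):
                coverage[i] = max(0.0, min(end, i + 1) - max(start, i))  # overlap with pixel i
            return coverage

        for phase in (0.0, 0.25, 0.5, 0.75):
            print(f"phase {phase:.2f}:", np.round(render_line(phase), 2))
        # phase 0.00 lights two pixels fully; phase 0.50 lights one full and two half
        # pixels. Every frame the phase differs, so the edge pixels flicker between
        # these states -- the wriggle/scintillation at high-contrast edges.
        ```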

      • >Further tests I have made since writing the article show that the AVP behaves very differently when rendering the spreadsheet “natively” in Excel versus displaying a bitmapped capture of what should be the same image. This indicates that the AVP is applying different algorithms when rendering the spreadsheet than when it simply displays an image.

        That seems reasonable. It is very easy to resample an image because you are basically resampling from a fixed and known frequency.

        I assume the problems with the worksheet arise from trying to rasterize vectors (lines, font). You either have to have many subsamples per pixel (there are limits to this) or have a rasterization algorithm that is aware of all the additional image warping that is going on later. If you try to fit more than one line into a pixel, things get complicated.

        I found this, but did not dig through the patent application so far:
        https://www.patentlyapple.com/2023/06/a-new-apple-patent-describes-perspective-correct-vector-graphics-with-foveated-rendering-for-vision-pro-iphone-more.html

  3. There are some physical games you can play to explore not only FOVR, but also some chromatic and distortion compensation tricks the AVP uses.
    Enable the eye gaze cursor in Accessibility and triple click the crown dial and you will see a dot that follows your eye tracking.
    Next, you want to squint your eyes so they are barely open (creating a ‘pinhole’ to view through and making it hard for the sensors to track your gaze); this spoofs the eye tracking and locks it into a fixed forward mode. Now, rapidly and lightly flutter your eyelids while looking around, and you will see the extent of the FOVR directly around the fixed tracking spot. You will also see chromatic distortions pop in and out, as well as barrel and pincushion distortions, as the system attempts to correct for what you are doing by spoofing the eye tracking.
    You can learn a lot about what is going on, even if you cannot externally record.

    • Thanks so much for the help. I will give these tricks a try.

      I’m expecting to find some camera methods that can be applied to get the cameras to be more “faithful” to what the eye is seeing. My goal is always to provide images that are representative of what the eye is seeing.

      • Yes, I met with them last week. They have an interesting camera developed for work on eye-tracking cameras.

        It appears that Apple is doing image pre-correction for the optics based on eye tracking, in addition to foveated rendering.

    • The small cell-phone-sized camera sensors have a very large depth of focus, but not so large that they could stay in focus both far away and near, such as at your hands. So I think they have to have some focusing, just like a cell phone camera.

  4. Karl, you’re almost certainly already aware, but iFixit gave you a shout-out in their latest video: https://www.youtube.com/watch?v=wt22M5nWJ4Q (at 3:43), and linked to this site in the description.

    iFixit measured the per-panel resolution as 3660×3200, and estimated the average PPD (not central) as 34, based on an estimated horizontal FOV of 100º. But I’m not sure how they came up with that number, as the most naive calculation (3660/100) yields 37.

    Digging deeper, I’m pretty sure the 100º horizontal FOV estimate is for both eyes, so we really need the per eye FOV. If the binocular overlap is similar to Bigscreen Beyond (~80º) then the per eye horizontal FOV would be 90, which would give a PPD of 41. If the overlap is similar to Varjo Aero (~70º) then the per eye horizontal FOV would be 85, which would give an average PPD of 43. See columns M & N (PPD calculations C & D) in this spreadsheet: https://www.reddit.com/r/virtualreality/comments/18sfi3i/ppdfocused_table_of_various_headmounted_displays/
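    • For the record, here is the PPD arithmetic from the comment above as a tiny script (my own sketch; the 3660-pixel width is iFixit’s figure, and the FOV and overlap values are the commenter’s assumptions, not measurements):

      ```python
      def avg_ppd(px_horizontal, binocular_fov_deg, overlap_deg):
          """Average pixels per degree, assuming per-eye FOV = (binocular FOV + overlap) / 2."""
          per_eye_fov_deg = (binocular_fov_deg + overlap_deg) / 2
          return px_horizontal / per_eye_fov_deg

      print(round(3660 / 100))              # naive both-eyes figure        -> 37
      print(round(avg_ppd(3660, 100, 80)))  # Bigscreen-Beyond-like overlap -> 41
      print(round(avg_ppd(3660, 100, 70)))  # Varjo-Aero-like overlap       -> 43
      ```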

  5. Hi Karl,
    Just tried it, and noticed something that I’d like to ask you about:

    It felt like there were two separate systems processing the stereo views, which sometimes results in stereo disparity errors (almost as if they are using different math or camera spacing).

    One system seems to be for pass-through, and one for rendered content (like windows, spatial photos, etc.); the two are then combined and fed to the OLED displays, sometimes erroneously, with stereo disparity errors.

    I noticed this when looking at nearby furniture (a cabinet) and then dragging a window (like the Photos gallery) near the furniture. If I only looked at the window, it felt correct; if I only looked at the furniture (a real object seen via pass-through), it felt correct. However, when I tried to focus on both in the same area (say, a 1-square-meter box), it looked like Escher art. It did not work well, and I could not focus on both; I had to close one eye for it to look OK.

    My hypothesis is that they are using one high-speed system (possibly a separate physical die) to compute the pass-through images for each eye, and a second to render VR content, and then they merge the two for each OLED; the math for the stereo disparity computation is not consistent between the two systems, or there is some kind of bug/input error.

    Wondering if you have experienced something similar.

    Thanks
    Emerson Segura

    • I haven’t tried the AVP, so just speculating, but maybe not that complicated, just something related to vergence-accommodation conflict (VAC) (https://en.wikipedia.org/wiki/Vergence-accommodation_conflict)?

      Both objects exist on the same accommodation (eye lens focus) plane, as the AVP (and everything else available worth using) has a fixed focal distance. However, they will presumably have different vergence (binocular convergence) “distances”, unless you have the virtual window at just the right virtual distance, in which case what you’re describing shouldn’t be happening, and my theory is wrong. That would create a double-image when looking at both at once.

      But maybe you’re describing something more complex, which could be explained another way — the external cameras don’t line up with your eyes. Even if they moved on the same internal rails used to adjust IPD, they’d still only line up with your eyes if you were looking straight ahead (or at some predetermined fixed angle) — unless they had some huge, highly curved lens perhaps (speculating wildly here). Anyhoo, the external world has to be remapped/distorted to compensate, and it’s never perfect, as is much more evident using the Quest 3. Perhaps what you’re seeing is due to the limitations of this process?

      Or it’s more like what you said!

  6. On my AVP I noticed some interesting artefacts.

    1. On pass-through, lateral movements (moving the body left to right along a plane and allowing the head to follow) produced far less blurring than rotational head movements, be they left to right or up and down. When those rotational movements occur, smearing or blur happens. I believe this to be more than a screen-persistence issue.

    2. While more minor, I *believe* I see the same on both rendered text and Mac virtual mirrored text. I am unclear whether it is purely screen persistence – I think not, as rapidly moving the content while keeping my head still doesn’t seem to cause the issue (or causes it to a much smaller extent).

    Thanks for doing these articles. I thought I was going mad when I said the virtual monitor looked terrible – I am mystified by the people who think it looks better than real (I am starting to think those folks have minor uncorrected vision issues affecting them).

  7. Why are these devices typically made to appear several meters in front of you? I’m nearsighted, and having to use corrective lenses when a real monitor doesn’t require them is really annoying.

    • It is a combination of factors. Primarily, at about 2 meters (~6 feet), a normal person’s eye focusing becomes mostly “relaxed” and in their “far vision.”

      Another issue is vergence-accommodation conflict (https://kguttag.com/2023/06/16/apple-vision-pro-part-2-hardware-issues/#VAC), or having the eyes’ focus disagree with where the eyes verge. Most things in VR are meant to seem either on a wall or farther away, so the eyes’ focus distance and where they verge (for the 3-D depth effect) tend to agree.

      As you focus closer, the muscles must squeeze the lens more. Two meters works pretty well for most things from about 1 meter to infinity. Depth of focus, and people’s ability to deal with it, is somewhat logarithmic, so you need ever more focus-depth steps as you get inside 2 meters. The “focusing error” can be plotted for a given number of focusing distances (link to an example by Magic Leap: https://i0.wp.com/kguttag.com/wp-content/uploads/2020/09/a8ef6-ml-948-composite-003.png?ssl=1 from https://kguttag.com/2018/01/03/magic-leap-2017-display-technical-update-part-1/). As you can see from the graph, once you get to 2 meters, the percentage error is not that bad for distances past 2 meters.
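      Here is the same point in rough diopter form (a quick sketch with the 2-meter plane as an assumed round number): the accommodation error between a fixed focal plane and content at distance d is |1/d_plane - 1/d| diopters, which stays small for everything beyond about 2 meters but grows quickly up close.

      ```python
      FOCAL_PLANE_M = 2.0  # assumed fixed focal distance of the headset optics

      def accommodation_error_diopters(object_distance_m):
          """Difference, in diopters, between the fixed focal plane and the object."""
          return abs(1.0 / FOCAL_PLANE_M - 1.0 / object_distance_m)

      for d in (0.5, 1.0, 2.0, 4.0, 10.0, float("inf")):
          print(f"content at {d} m -> error {accommodation_error_diopters(d):.2f} D")
      # 0.5 m is 1.5 D away from the 2 m plane (very noticeable), while everything
      # from 2 m out to infinity stays within 0.5 D of it.
      ```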

      • So does this mean that if all you want to do is project a 2-D display (e.g., Xreal glasses) in lieu of a monitor, VAC does not apply? Can’t you make those glasses use a 0.5m distance similar to a regular monitor?

      • Typically, they want VR and AR glasses to be in the “relaxed focus” of the eye at about 2 to 2.5 meters. If they made the glasses focus at 0.5 meters, the eye’s focusing muscles would be working all the time. Additionally, everything in the background/real world would be out of focus. Two meters is sort of the jack-of-all-trades focus distance.

        If the left and right eye see the same view without a 3-D effect, then the “proper” focus distance would be at infinity, once again arguing for the longer focus distance. You don’t have the VAC problem with a monitor because the vergence and the focus agree.

  8. I’ve heard from Bradley (SadlyItsBradly channel) that you/iFixit can’t find a lot of information about the camera sensors. It so happens that I was looking into the firmware (it’s publicly available, but you have to look for it), and there were some references to the sensors used (e.g., SENSORIMX572_1_PIXEL_2656x2272_96, with a resolution corresponding extremely closely to 6MP). I don’t know if this information is still relevant, but feel free to reach out if it is!

  9. Thank you for the informative content! Will you be updating the monitor-replacement assumptions with hands-on data soon? The techniques used for PC displays are vastly different between the Quest and the Vision Pro.

  10. I appreciate that Mr Guttag is merely using spreadsheets as an example to expose some limitations of the AVP and thus infer its workings. Therefore, my post here is tangential:
    I would have assumed a joy of using a spreadsheet on a VR device is that a worksheet wouldn’t be constrained to an arbitrary 16:10 monitor-like window. (Indeed, early adopters of dual monitors were people who needed to see worksheets with lots of columns at a glance.) Not only that, but I’m sure there are people for whom the ability to arrange multiple interrelated worksheets in a virtual space would be a great aid to visualising a system or process.
    I look forward to seeing what productivity applications developers create to take advantage of AVP-like systems (whilst mitigating their weaknesses).
    As author Sir Terry Pratchett replied, after being asked why he worked across six monitors:
    “Because I don’t have room on my desk for eight”

    • I use spreadsheets because they are easy to make any size, with different text sizes. They also let me test “native/direct” Excel image generation versus saving bitmaps of the same image (and the AVP behaves very differently between the two). Spreadsheets were also the second “killer app,” after word processing, for early personal computers (e.g., VisiCalc on the Apple II – https://en.wikipedia.org/wiki/VisiCalc).

      The problem with the AVP is that the text needs to be about 1.3x to 1.5x bigger (I will be working to narrow this range) to be readable. So while you get a bigger virtual monitor, you lose information density; some of what you gain in screen area is clawed back in lower information density (see the quick arithmetic after this reply). It also means you can clearly see less at one time. Human vision likes to stay within about 30 degrees horizontally (this varies with individuals, so it is a typical range) with its saccades. So a bigger monitor with lower resolution means your head has to turn a lot more, and you see less information at once.

      The AVP automatically makes EVERYTHING bigger, including the text. You can sometimes scale it back. It gives me good control over a replicated MacBook screen to make it the size I want, but much less so with its native windows. To make a native window smaller in the FOV, I make it as small as I can and then have to back away until it is the size I want.
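      The quick arithmetic mentioned above (using my rough 1.3x to 1.5x estimate, which I still need to pin down): if text has to be scaled up linearly by a factor s to stay readable, the information visible in the same angular area drops by roughly 1/s².

      ```python
      # Back-of-the-envelope using my rough 1.3x-1.5x scaling estimate.
      for s in (1.3, 1.4, 1.5):
          print(f"text {s}x bigger -> ~{1 / s**2:.0%} of the original information density")
      # i.e., roughly 44% to 59% of what the same screen area shows on a conventional monitor.
      ```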

  11. Hi Karl:
    Thanks for sharing your insight. Would you comment on the motion blur in the AVP? It seems to be the biggest complaint, and I haven’t found any serious article that explains it.

    • Thanks. I hope to get to it, but it will likely be a while; I have a lot of “static testing” information to publish first. I know Brad Lynch (SadlyItsBradley on YouTube) has found it wanting in terms of motion. Brad’s biggest issue is that the display’s on-time duty cycle is too long. With backlit LCDs, it is relatively easy to drive the LEDs hard for a shorter period of time and then have a blanking period, which helps break up blurring, but this is not so easy with Micro-OLEDs.
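      As a rough illustration of why the duty cycle matters (illustrative numbers, not measurements of the AVP): the perceived smear in display pixels is approximately the head’s angular velocity times the pixel persistence times the pixels per degree, using the ~34 PPD estimate cited in an earlier comment.

      ```python
      def smear_pixels(head_deg_per_sec, persistence_ms, ppd=34):
          """Approximate motion smear, in display pixels, during one lit period."""
          return head_deg_per_sec * (persistence_ms / 1000.0) * ppd

      for persistence_ms in (2, 5, 11):  # short strobe vs. most/all of a ~90 Hz frame
          print(f"{persistence_ms} ms persistence at a 30 deg/s head turn -> "
                f"~{smear_pixels(30, persistence_ms):.0f} px of smear")
      ```

      The longer the panel stays lit each frame, the more pixels the image smears across for the same head motion, which is why a short-persistence strobe (easy with an LCD backlight, hard with Micro-OLED) helps.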
