Apple Vision Pro Displays the Same Image Differently Depending on the Application

Introduction

My last article, Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3, drew many comments (on my blog and elsewhere) claiming I must be wrong. While I had checked it many times before releasing the previous article, I decided to go back and check again. This time, I used various applications on the AVP to show the same image with a side-by-side comparison. Surprisingly, displaying an image stored on the AVP from a folder gave me very different results than other methods/apps.

Setting up each comparison takes a lot of time (the AVP makes it extremely difficult), so I will show the two most different cases: the Folders application directly opening a downloaded image and the same file with the MacBook Display mirroring. Spoiler Alert: the MQ3 looks better than either AVP method (I’m not picking sides, but calling it as I see it).

As with the previous article, I will start with an overall view of the image and then use a series of full-resolution crops to zoom in on specific details.

AVP Image Processing — Making Things Bolder

In Apple Vision Pro’s (AVP) Image Quality Issues – First Impressions, I wrote, “The AVP also likes to try to improve contrast and will oversize the edges of small things like text, which makes everything look like it was printed in BOLD.” At the time, I was referring to directly rendering applications for text, including word processing, web pages, and Excel spreadsheets. Based on more testing, this statement also appears to be true for most image display cases where the image includes high-contrast fine detail. As this article will demonstrate, the AVP appears to be actively trying to improve the contrast/boldness of images.

Side By Side Windows of the Same Image Using Different Applications

Below is a through-the-optics picture of the AVP showing the White-on-Black 1920×1080 source image from this blog’s Test Pattern Page (click on the image for the full-size picture; a closer crop will be shown next). The pattern on the left shows the PNG file displayed directly from the Folder application, and the right-hand side shows the same image opened on a MacBook Pro 14″ M3 Pro that is mirrored to an AVP window. Since two windows are shown side by side, the camera cuts off the far left of the left window and the far right of the right window. I tried to make the size of each image match the setup in Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3 as closely as practical.

The first thing to note is that all the text and line features are lighter/thinner in the folder view (left window) than in the MacBook mirroring (right window). Interestingly, the brightness of the large white circles with the sub-pattern number is almost identical between the windows. Only the fine-line features, including text, appear dim when displayed from the local Folder and bold/bright via MacBook mirroring.

Closer Crops Showing Details

Below are closer crops of test pattern subpatterns 26 and 36 from the AVP Folder view and patterns 21 and 31 from the MacBook’s AVP mirroring, moved next to each other. Compare the 2-pixel line spacing in the red rectangles between the Folder (left) and MacBook mirroring (right) views.

The line width and spacing in the Folder view appear roughly equal, which is correct. However, in the MacBook mirroring, the white lines are much wider, while the black space between them is much thinner (and pretty light gray). The AVP’s processing of the MacBook-mirrored image, while less “accurate,” makes it stand out better, and the text is more readable (reminder: the text in this case is “bitmapped” and not rendered from a display list).

The lack of space between the two-pixel-wide lines in the MacBook’s AVP mirror caused me to investigate the sharpness of the AVP, as I could barely see the separation between the two-pixel-wide lines in this test pattern. In terms of the classic resolution measurement, the Modulation Transfer Function (MTF), the AVP had an MTF of much less than 20% (i.e., very poor).
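To illustrate what an MTF figure like this means, below is a small sketch of the Michelson-contrast calculation over a line-pair intensity profile. The pixel values are hypothetical examples made to mimic the photos (a clean line pair versus one whose black gap fills in to light gray), not measured data from the article:

```python
# Illustrative MTF (Michelson contrast) calculation for a line-pair pattern.
# Intensity values are hypothetical 8-bit levels, not measurements.

def mtf(profile):
    """Modulation of a 1-D intensity profile across alternating lines:
    (Imax - Imin) / (Imax + Imin). 1.0 = perfect contrast, 0 = unresolved."""
    i_max, i_min = max(profile), min(profile)
    return (i_max - i_min) / (i_max + i_min)

# A well-resolved 2-pixel line pair: white ~230, black ~20
sharp = [230, 230, 20, 20, 230, 230, 20, 20]
# A blurred pair where the "black" gap fills in toward gray, as on the AVP
blurred = [200, 210, 160, 150, 200, 210, 160, 150]

print(f"sharp MTF:   {mtf(sharp):.2f}")    # high modulation
print(f"blurred MTF: {mtf(blurred):.2f}")  # under 20%
```

When the gap between white lines only drops to a light gray rather than black, the numerator of the modulation shrinks while the denominator grows, which is why the measured MTF collapses.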

I’m not fond of either image: the Folder image is too dim, and the Mirror image is too bold/bright. In theory, resampling/rescaling should maintain the same average brightness. As will be seen later, the Meta Quest 3 does much better.
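The brightness-preservation point can be sketched with a hypothetical 1-D example (this is not the AVP’s actual resampler): a plain box-filter downsample keeps the average brightness of the source, while any contrast-“boldening” step that pushes values toward white raises it.

```python
# Sketch: area-averaged (box filter) rescaling preserves mean brightness;
# "boldening" toward white does not. Hypothetical 1-D pixel data.

def downsample_box(pixels, factor):
    """Average each consecutive group of `factor` pixels."""
    return [sum(pixels[i:i + factor]) / factor
            for i in range(0, len(pixels), factor)]

src = [255, 0] * 8                 # alternating 1-pixel white/black lines
out = downsample_box(src, 2)       # 2:1 downsample -> uniform gray

print(sum(src) / len(src))         # 127.5
print(sum(out) / len(out))         # 127.5: average brightness preserved

# A contrast-"enhancing" step that scales values up toward white instead:
bold = [min(255, v * 1.6) for v in out]
print(sum(bold) / len(bold))       # 204.0: brighter than the source average
```

Single-pixel lines beyond the Nyquist limit necessarily become gray when averaged; the choice is whether that gray keeps the source’s average brightness (accurate but dim-looking) or is pushed brighter (bolder but inaccurate).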

Another thing to notice about the two-pixel-wide lines at the top (cells 26 and 21) and bottom (cells 36 and 31) is that they come out slightly differently each time they are rendered. How the lines are rendered is a function of how they align with the display in 3-D space.

Extra Close Crop

Below is an even closer crop of the same image showing more details. Interestingly, the number “3” slightly darkens the top of the 3-pixel-wide line just below it, suggesting a sharpening halo. This closeup also reveals that the image produced from the Folder is losing small features, such as many of the dots in the letter “i” in Arial.

Revisiting the Comparison to the Meta Quest 3

The image below is from last time when the Apple Vision Pro (left – using the mirroring of the MacBook) was compared to a direct view monitor (center) and the Meta Quest 3 (right). The AVP image is softer/blurrier than the Meta Quest 3.

Meta Quest 3 is Still Better than the AVP in Terms of Sharpness and Scaling

The comparison below shows the AVP displaying from the Folder (left), the AVP mirroring the MacBook (center), and the Meta Quest 3 (MQ3—right). The MQ3 has much better scaling behavior and sharpness than either AVP method. The MQ3’s brightness of fine details is much better: neither too bright nor too dim. The two-pixel-wide lines are distinct and sharp with even black-and-white spacing, and the text is more readable.

Checking Other AVP Image Display Applications

I should note that of all the various ways I have tried to display an image on the AVP, displaying a downloaded image from a folder results in the thinnest (and dimmest) fine details. Displaying an image via MacBook mirroring makes fine details the boldest/thickest, although other methods, such as displaying the image via the Safari browser app on the AVP, produce a similar but slightly less bold image than the MacBook mirroring.

I also tried various resolution settings with the MacBook mirroring. Provided the resolution was 1920×1080 or above and the test pattern image was displayed at the same size, I could see no substantial difference between the settings (the results were identical or so similar that I could not see a difference).

The camera setup is very laborious due to the lack of precision in eye and hand tracking on the AVP and the complexity of doing it while the headset is in a tripod rig. Therefore, I didn’t take the time to produce through-the-optics photographs of each case that differed only slightly.

Conclusion and Comments

Yes, you can simulate multiple “big” monitors with a VR headset, but they are very low in (angular) resolution by today’s standards when objectively measured. The test pattern used in this article is a 1920 by 1080-pixel image, and both the AVP and MQ3 fail miserably to display the set of single-pixel lines. The AVP fails to display even the two-pixel-wide lines properly.

After the rectangular image is inset inside the oval sweet spot of the optics, you get about a 45-degree image, equivalent to looking at a 28″ monitor from about 29″ away. My 28″ monitor has 3840 by 2160 pixels, whereas the AVP and MQ3 can’t faithfully display even a 1280 by 720-pixel image. Any 3-D resampling will lose about half or more of the resolution (good old Nyquist sampling theory—see also Apple Vision Pro (Part 5A)—Why Monitor Replacement is Ridiculous).
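The monitor-equivalence figures above can be checked with a little trigonometry. This sketch assumes a 16:9 monitor (an assumption not stated in the article) and reproduces the roughly 45-degree field of view, plus the angular resolution that results if a 1920-pixel-wide pattern fills that view:

```python
# Checking the geometry: a 28" (diagonal) 16:9 monitor viewed from 29"
# subtends about 45 degrees horizontally.
import math

def diag_to_width(diag_in, aspect=(16, 9)):
    """Horizontal width of a monitor given its diagonal and aspect ratio."""
    w, h = aspect
    return diag_in * w / math.hypot(w, h)

def hfov_deg(width_in, distance_in):
    """Horizontal field of view in degrees for a flat screen."""
    return math.degrees(2 * math.atan((width_in / 2) / distance_in))

width = diag_to_width(28)                 # ~24.4 inches wide
fov = hfov_deg(width, 29)
print(f"{fov:.1f} degrees")               # ~45 degrees, per the article

# Angular resolution if 1920 horizontal pixels fill that view:
print(f"{1920 / fov:.0f} pixels per degree")
```

At roughly 42 pixels per degree for a perfectly displayed 1920-wide image, any loss from 3-D resampling pushes the effective angular resolution well below what a desktop monitor delivers at the same apparent size.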

Restudying the AVP’s display and taking higher-resolution pictures only reinforces my original conclusions. The AVP’s optics are a little soft, and the AVP’s processing is trying hard to compensate.

Next Time

While the AVP’s algorithms could be better, I think they are limited by having to compensate for the soft optics. As I am fond of saying, “When smart people do something that seems wrong, it is usually because they thought the alternative was worse.” In this case, I think Apple decided that making small objects bolder would make things like text more readable when seen through the optics.

This blog has been a resource to many writers of large and small publications and YouTube creators. Sometimes, they even give credit. In their “Apple Vision Pro—A PC Guy’s Perspective,” Linus Tech Tips showed several pages from my blog and were nice enough to make the web address big enough to read.

The same Linus Tech Tips video also included humorous simulations of the AVP environment with people carrying large-screen monitors. At one point (shown below), they show a person wearing a respirator mask (to “simulate” the headset) surrounded by three very large monitors/TVs. They show how the user has to move their head around to see everything. What they don’t discuss is that those monitors’ angular resolution is fairly low, which will cause even more eye and head movement.

Appendix: Some notes on the technical aspect of taking the picture

You will see a little bit of the Folder’s window peeking through between and behind the left and right windows (the picture took a long time to set up, so I just left that window behind). I made both images the same size and close to the same size as last time. Last time, I used a 16mm lens to capture almost the entire horizontal FOV; this time, I used a 28mm lens to get about 1.75x magnification (28/16 = 1.75) of the area of interest, at the cost of cropping off the outer part of the image.

The camera has 8192 pixels horizontally across the 65.5-degree horizontal angle of view of the 28mm lens. This nets about 2.8 camera pixels per AVP display pixel both horizontally and vertically, or roughly 8 camera pixels per display pixel in area.
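For reference, the sampling arithmetic works out as follows (simply reproducing the numbers stated above):

```python
# Camera sampling density with the 28mm lens, and how many camera pixels
# land on each AVP display pixel (figures from the article).
cam_pixels_h = 8192          # camera horizontal resolution
cam_hfov_deg = 65.5          # horizontal angle of view of the 28mm lens

cam_ppd = cam_pixels_h / cam_hfov_deg
print(f"camera: {cam_ppd:.0f} pixels per degree")       # ~125 ppd

linear = 2.8                 # camera pixels per AVP display pixel (linear)
print(f"~{linear * linear:.0f} camera pixels per display pixel (area)")
```

Having roughly 8 camera pixels per display pixel is what makes it possible to photograph individual line pairs in the test pattern without the camera itself becoming the resolution bottleneck.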

For this picture, I increased the shutter speed from 1/15th to 1/100th of a second and increased the ISO by the same factor to keep the exposure similar. The AVP is specified to refresh (and render frames) at between 90 and 100 Hz. The slower (1/15th) shutter speed averages the brightness of several frames together but allows the AVP’s processing to change the image content several times within the exposure. While the headset and camera are on tripods, the AVP can still change its processing between frame renderings due to micro-movements or how the foveated rendering responds to the view through the lens. The 1/100th shutter speed ensures that only a single frame rendering is captured. Still, the brightness varies from picture to picture because the camera is unsynchronized with the display, so I shot several pictures and chose one of the brighter ones.
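The exposure bookkeeping behind the shutter change works out as follows. The base ISO of 100 here is a hypothetical illustration; the article does not state the actual ISO values used:

```python
# Going from 1/15s to 1/100s cuts the light reaching the sensor by
# 100/15 ~= 6.7x (about 2.7 stops), so ISO must rise by the same factor
# to keep the exposure similar.
import math

old_shutter, new_shutter = 1 / 15, 1 / 100
factor = old_shutter / new_shutter          # ratio of light gathered
stops = math.log2(factor)

print(f"{factor:.2f}x less light, {stops:.2f} stops")

base_iso = 100                              # hypothetical starting ISO
print(f"equivalent ISO: ~{base_iso * factor:.0f}")
```

The trade-off is the usual one: the shorter exposure freezes a single rendered frame (at the cost of sensor noise from the higher ISO), while the longer exposure averages several frames together.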

For the comparison to the MQ3 above, I used a 16mm lens shot of the MQ3 at 1/15th of a second and f/8 and compared it with my newer 28mm lens shot of the AVP at 1/100th and f/8, where I scaled down the 28mm picture by about 57% (16/28) to match the same size; I did not change pictures in the middle of the comparison. Below is a comparison of a 16mm photo versus a 28mm photo scaled down by 57%, showing that there is little difference in the apparent sharpness of the resultant image. However, using the 28mm picture in the extreme closeups elsewhere allows me to show more detail.

Karl Guttag

6 Comments

  1. This product really feels like a gen 1 product. It’s overpriced for what it has to offer and it’s full of downsides here and there (weight, size, comfort, extensions, battery life, optics fov and glare, display algorithms, os limitations, etc…)
    I feel like it won’t be before AV(P?) 3 that I would consider it.

  2. Hi Karl,

    While I understand your experiments were targeting image clarity, did you experience any ghosting that seems to be common with a pancake lens?

  3. Hi Karl,

    Interesting to see the difference in color/contrast between the different projection modes, as well as its impact on image clarity. At a brief look, the ‘native’ downloaded image seems dimmer and warmer in color/contrast, more attuned to the display and optic quality. Is that likely a conscious choice in how the system generates its images? Since the optics are purposefully (allegedly) defocused, high-brightness/high-contrast images seem to cause significant blur and glow in adjacent pixels, whereas warmer and lower-contrast images appear more legible but harder to read.

  4. Great article as always.

    Any observations on sharpness in the 1.1 OS update? Multiple folks are saying it has improved sharpness (but not the blur caused by head motion).

    • I’ve downloaded and installed 1.1. I will likely reshoot and see if the camera can detect any measurable difference on my test pattern. Visually, I didn’t see any dramatic difference. It does appear that the AVP uses different algorithms for different applications and conditions, so it is possible that any improvement will affect some applications more than others.
