In this article, I want to discuss something that often seems overlooked in AR field of view (FOV) discussions, namely the view of the real world. For some headsets, the view of the real world appears to be almost an afterthought. This article follows up on my blog entry “FOV Obsession.” In the process of writing FOV Obsession, I found the SPIE paper by Bernard Kress that deals with the real-world view. Figure 11 of that paper compares the view of the real world through a few VR and AR solutions.
For much of this article, I am going to be referencing figures in Kress’s paper, which I recommend reading. Kress is very generous in supporting the AR industry as a whole. Still, Kress works for Microsoft on Hololens, and his paper does have a bit of a “Hololens bias” (more on that later).
Paraphrasing a biblical expression, “For what profits a man if he gains the whole world but loses his own soul,” what does AR gain (over VR) if it blocks out the real world? Ideally, AR should provide an unfettered view of the real world. After all, this is what differentiates AR from VR.
I’m particularly concerned with how some headsets seriously impair peripheral vision. Peripheral vision, while inferior in resolution, detects objects that may pose a danger and keeps people from running into things.
Figure 11 seems mostly aimed at comparing Hololens 2 (HL2) to the Magic Leap One (ML1). I have included similar side-by-side views of the two headsets below. Not only does the HL2 support a much better view out, but it also has significantly more eye relief, which further improves the downward view.
The blocking of the real world is something I faulted the ML1 for in “Magic Leap One – FOV and Tunnel Vision,” and also in “Magic Leap Review Part 1 – The Terrible View Through Diffraction Gratings.” In that post, I took pictures through a 3-D printed model of the ML1 that showed the view through both eyes (right).
Using both eyes, the view is not even all-or-nothing as Kress’s figure 11 suggests; rather, there is a region of binocular overlap where you end up with a double image, with one eye seeing the frame while the other eye can see through. I also mapped the view through the ML1 onto the view through the eye (right).
Kress’s figure 11 treats the real-world view as more of an “all or nothing” proposition, but it does make an important point: we need to be concerned about how much light is blocked, in addition to distortions and artifacts.
The ML1, in the area around the display, blocks about 85% of the light (transmits only 15%), and with its 6 layers of diffractive waveguides, produces a lot of distortion and artifacts. The pictures (left) taken through the ML1 and HL1 are from my blog article “Magic Leap Review Part 1 – The Terrible View Through Diffraction Gratings.” Note that the exposure was different for the HL1, as it lets through more than twice as much light as the ML1.
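To put those transmission numbers in photographic terms, here is a quick back-of-the-envelope sketch in Python. The ~15% ML1 figure is from above; the HL1 value is only an assumption inferred from “more than twice the light”:

```python
import math

# Assumed see-through transmission values (ML1 from the article;
# HL1 inferred from "more than twice the light" -- rough estimates only).
ml1_transmission = 0.15   # ML1 passes ~15% of real-world light
hl1_transmission = 0.35   # HL1 passes roughly 2.3x as much (assumption)

# Fraction of real-world light each headset blocks
print(f"ML1 blocks {1 - ml1_transmission:.0%} of incoming light")
print(f"HL1 blocks {1 - hl1_transmission:.0%} of incoming light")

# Difference expressed in photographic stops (factors of two),
# which is why the camera exposures had to differ between the shots.
stops = math.log2(hl1_transmission / ml1_transmission)
print(f"HL1 view is about {stops:.1f} stops brighter than the ML1 view")
```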
One thing that caught my eye, having mapped the ML1’s view onto human vision, was that Kress’s figure 11 did not show the view below the glasses. The view out below is not that good, and you see something different in each eye, but at least the user can see their feet.
Kress’s figure also shows his “Fixed Foveated Region,” which he draws somewhat arbitrarily at 50 degrees (see the “Human Vision and the ‘Fixed Foveated Region’” section of FOV Obsession). Unfortunately, the rectangle showing the ML1’s display FOV was drawn smaller, and thus inside the circle, relative to the way it was drawn for the Hololens 2, where it touches the circle.
The UploadVR figure (left) compares the HL1 and HL2 versus the ML1 based on the specs from Microsoft and Magic Leap. Be advised that manufacturers are not consistent in how they measure FOV, and it varies with how the glasses are worn, the shape of various people’s heads, and other variables.
Sometimes there are many things that “you just know” from years of working in the industry. One of these is that human vision is biased toward looking down. As the saying goes, “snakes and tigers were more of a threat to our ancient ancestors than birds.” But then, looking at human factors diagrams such as figure 8 from Kress (below left), he has a line labeled “relaxed line of sight” at 15 degrees. First of all, the angle of the line in the figure is wrong, but more important is the question of what is relaxed: the head, the eyes, or both?
Many older human factors studies have similar diagrams. I included (above right) a figure from the 1993 book “The Measure of Man and Woman” by Tilley. Most of these studies are for human factors in workplaces where the display is a fixed computer monitor. What both Kress’s and Tilley’s figures do is conflate head movement with eye movement. Tilley’s relaxed-line-of-sight line assumes that the head is tilted 15 degrees, which is typical for a person sitting.
The eyes are not tilting 15 degrees down in addition to the head’s 15-degree tilt. With an HMD, when the head tilts down, so does the display attached to it. In this case, it looks to me like Kress simply copied from old human factors studies without pointing out the implications for HMDs.
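A minimal sketch of this frame-of-reference point (the variable names and sign convention are mine, not Kress’s or Tilley’s): the gaze angle in the world frame is head tilt plus eye rotation within the head, but an HMD’s display moves with the head, so only the eye-in-head angle matters for display placement.

```python
# Gaze angle in the world frame = head tilt + eye rotation within the head.
# Angles in degrees; downward is positive. Illustrative values only.

def world_gaze(head_tilt_deg: float, eye_in_head_deg: float) -> float:
    """Combine head tilt and eye-in-head rotation into a world-frame gaze angle."""
    return head_tilt_deg + eye_in_head_deg

head_tilt = 15.0              # Tilley's typical seated head tilt
relaxed_line_of_sight = 15.0  # Tilley's "relaxed" gaze angle, world frame

# The eyes do NOT add another 15 degrees; the head tilt supplies it all.
eye_in_head = relaxed_line_of_sight - head_tilt
print(f"eye rotation within the head: {eye_in_head:.0f} deg")  # 0 deg
print(f"world-frame gaze angle:       {world_gaze(head_tilt, eye_in_head):.0f} deg")

# An HMD display is fixed in the head frame, so it should sit near
# eye_in_head = 0, not another 15 degrees below the head's tilt.
```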
Kress gives an unconstrained range of 15 degrees up and 20 degrees down, for a total of 35 degrees, within which the eyes will move without much effort, often unconsciously. But as noted in my FOV Obsession article, referencing the Haynes 2017 Ph.D. dissertation, even 30 degrees of constant movement will tire the eyes over long periods. Thus, a user interface that forces the user to constantly move their eyes more than 20 degrees from the center will tend to cause discomfort.
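Those ranges suggest a simple design-rule check. The sketch below just encodes the numbers quoted above; the function name, band labels, and exact cut-offs are mine, built from the rough figures attributed to Kress and Haynes:

```python
# Classify a UI element's vertical offset from the resting gaze, using the
# ranges quoted in the article: ~15 deg up / 20 deg down is "free" eye
# movement, while sustained excursions toward 30 deg risk eye fatigue.

def eye_comfort(vertical_angle_deg: float) -> str:
    """Positive angles are up, negative angles are down (degrees)."""
    if -20.0 <= vertical_angle_deg <= 15.0:
        return "unconstrained (little conscious effort)"
    if abs(vertical_angle_deg) <= 30.0:
        return "reachable, but fatiguing if used constantly"
    return "avoid: better served by head movement"

for angle in (10, -18, 25, -35):
    print(f"{angle:+4d} deg: {eye_comfort(angle)}")
```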
Tilley’s figure above gives a 15-degree downward tilt as the “No Fatigue” angle for the head when sitting. The neck muscles, in general, will become sore if the head is tilted up for a long period of time. And as said above, eye motion is also biased toward looking down. Just about every ergonomic guide for computer monitors tells users to keep the top of the monitor at or below eye level.
The ML1 blocks most of the person’s peripheral vision in every direction, whereas the HL2 only blocks the upward view. The upward view is not as critical because the physiology of the forehead and eyelids tends to constrain it. When something is much above the horizontal, people usually tilt their heads rather than move their eyes.
It seems that in all the technical difficulty of building wearable see-through displays, designers are losing sight (both figuratively and literally) of what they are trying to do with “Augmented Reality.” If they reach the point where the real-world view is significantly compromised and it becomes unsafe to wear the headset except in a very controlled room, then they are not so much “augmenting” as replacing the real world. If all they want is to keep some of the real world for, say, a game application, then it would seem they would be better off with pass-through (camera) AR.
As discussed in FOV Obsession, a human has only a tiny area of high-resolution vision at any instant in time. The human eye typically makes 3 to 5 rapid movements per second, called saccades, which take between 20 and 200 milliseconds to complete (depending, in part, on the distance), each followed by a fixation during which the eye does not move. While the eye is moving, vision is essentially blocked out. The human visual system assembles what people perceive as a single high-resolution image from a series of mixed-resolution fixation snapshots. The figure below shows a simulation of what the eye sees in a single fixation.
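To give a feel for how sharp a single fixation really is away from its center, here is a small sketch using a common vision-science approximation: relative acuity falling off roughly as 1/(1 + e/E2). The E2 constant of about 2 degrees is a textbook value, not a number from Kress’s paper:

```python
# Rough model of why only a fixation's center is sharp: relative visual
# acuity is often approximated as 1 / (1 + e / E2) at eccentricity e.
# E2 ~ 2 degrees is a common textbook value; the exact constant varies
# across the literature, so treat this as illustrative only.

E2_DEG = 2.0  # eccentricity at which acuity falls to half its foveal value

def relative_acuity(eccentricity_deg: float) -> float:
    """Approximate acuity relative to the fovea at a given eccentricity."""
    return 1.0 / (1.0 + eccentricity_deg / E2_DEG)

for e in (0, 2, 5, 10, 20, 50):
    print(f"{e:3d} deg from fixation: {relative_acuity(e):6.1%} of foveal acuity")
```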
I want to thank Ron Padzensky for reviewing and making corrections to this article.
Thanks Karl for the article – as usual spot on.
While it is certainly unwanted by the user, there are unfortunately technical reasons why companies tend to create AR glasses that block most of the peripheral vision.
First, it is challenging to make small, robust, rigid AR glasses that have the form factor of regular corrective glasses or sunglasses. Making them bigger and adding more supportive structure simply helps a lot.
The other, less obvious but more problematic, reason is the issue that many waveguides today have with light coming in from behind: it can create severe rainbow reflections (as shown in one of your ML1 images), and the most obvious solution is probably just blocking as much light as possible – leading to designs like the ML1.
– Daniel
Yes, making AR glasses is a hard problem. This is also why I am skeptical that “even Apple” is going to have a set of lightweight and fashionable glasses in 2020, as some seem to think.
Regarding the rainbow colors in the Magic Leap pictures, these artifacts are caused by light sources that are in view of the user. The same diffraction grating that turns the light toward the eye within the waveguide will also capture light from about 40 degrees above the waveguide. The rainbow of colors is due to the fact that the amount a diffraction grating bends light is wavelength-dependent.
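The grating equation makes the wavelength dependence concrete. Here is a small sketch; the ~450 nm grating pitch is just a plausible value for a visible-light waveguide grating, not a published Magic Leap number:

```python
import math

# First-order diffraction from the grating equation:
#   sin(theta_out) = sin(theta_in) + m * (wavelength / pitch)
# The ~450 nm pitch below is an assumed, plausible value for a
# visible-light waveguide grating, not a Magic Leap spec.

PITCH_NM = 450.0  # grating period (assumption)
ORDER = 1         # first diffraction order

def diffraction_angle(wavelength_nm: float, incidence_deg: float) -> float:
    """Exit angle in degrees, or NaN if the order is evanescent."""
    s = math.sin(math.radians(incidence_deg)) + ORDER * wavelength_nm / PITCH_NM
    return math.degrees(math.asin(s)) if abs(s) <= 1.0 else float("nan")

# Light arriving ~40 degrees above the waveguide normal (negative in this
# sign convention) is bent by a wavelength-dependent amount, smearing a
# white light source into a rainbow.
for name, wl in (("blue", 450.0), ("green", 530.0), ("red", 630.0)):
    print(f"{name:5s} ({wl:.0f} nm): exits at {diffraction_angle(wl, -40.0):5.1f} deg")
```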
Karl,
Thank you for your work on this site. Based on my struggle to find a viable option, an overlooked application of AR in the flight-simulation world is to block out reality outside the cockpit but allow full vision of it within. Pilots need better optics on their heads-down displays than what a pass-through camera (for MR goggles) or a re-rendered view inside VR goggles can provide.
I’m struggling to find viable options for this application that would make expensive domes and projectors less critical for realistic flight-simulation setups. Do you have any suggestions based on your research? The cost for simulators that provide 360-degree visuals is substantial, so there should be some market for an AR solution on this front.
One that we have used is the ST-50 from NVis, which is several years old now and has only a 50-degree FOV:
https://www.nvisinc.com/product.html
Respectfully,
Greg
Thanks, but I am not clear on the problem you are trying to solve and all the issues you want to address. I understand you are looking for a headset to be used in flight simulation, but it would be helpful if you could be more specific about how today’s headsets fall short.
In other words, what is the mix of virtual and physical? For example, are you wanting the instruments in the cockpit to be physically there or virtually seen? What about the flight controls?
The mix we’d want is for the OTW (out-the-window) view, HUD, and HMD to be virtual, and everything in the cockpit to be real. A live video feed of the cockpit (MR) is not sufficient, since the optics and cameras will lose quality compared with looking at the instruments directly, and pilots won’t give up any image quality there.
We’ve found this set of AR goggles that we will look into. There are very few reviews or data on them on the web:
https://www.saphotonics.com/vision-products/augmented-reality/sa-147-s/#overview4cd6-9bfc
SA Photonics might be a possibility. The image quality is OK, but the lenses are a bit thick.
Have you looked at Elbit’s Skylens? It was designed as a wearable HUD for commercial airlines. It is primarily green-only, but they say there are provisions to support color: https://elbitsystems.com/media/Skylens_Aviation_Award_2016.pdf
Thank you for the suggestion.
For a cockpit simulator, what negative do you see in the thickness of the lenses? The resolution looks good and hopefully renders usable visuals of terrain and external aircraft.
When you look through a thick piece of glass, there is some distortion unless you are looking dead straight on, which is never the case in real life with your eyes moving. You get a bit of a fishbowl effect.
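To see why the effect grows as the eyes scan off-axis, here is a sketch of the standard flat-slab lateral-displacement formula. The 6 mm thickness and n = 1.5 index are illustrative assumptions, not SA Photonics specs:

```python
import math

# Lateral ray displacement through a flat glass slab:
#   d = t * sin(i) * (1 - cos(i) / sqrt(n^2 - sin^2(i)))
# The displacement grows nonlinearly with viewing angle, so the image
# shifts differently as the eye scans around -- the "fishbowl" feel.
# Thickness and index are assumptions for illustration only.

N_GLASS = 1.5   # typical refractive index of glass (assumed)
THICK_MM = 6.0  # lens thickness in millimeters (assumed)

def lateral_shift_mm(incidence_deg: float) -> float:
    """Sideways displacement of a ray passing through the slab, in mm."""
    i = math.radians(incidence_deg)
    return THICK_MM * math.sin(i) * (
        1.0 - math.cos(i) / math.sqrt(N_GLASS**2 - math.sin(i)**2)
    )

for angle in (0, 10, 20, 30, 40):
    print(f"{angle:2d} deg off-axis: {lateral_shift_mm(angle):4.2f} mm shift")
```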