EyeWay Vision Part 3: Analysis


This article is Part 3 on EyeWay Vision’s (EyeWay) foveated laser scanning display. It follows Part 1, which discussed the technology, and Part 2, which showed through-the-optics images. It goes through my thoughts on why I find EyeWay’s technology so interesting and what I see as the challenges.

As I discussed in Part 1, foveated displays are being studied by both some of the largest companies (e.g., Apple and Facebook) and startups. To many, foveated displays offer major theoretical advantages that support the ultimate goals of AR.

This blog is followed worldwide, and I had two readers point me to a couple of very relevant articles. Before I get into my analysis, I want to point these articles out and provide links for those who want to go into more detail.

Added Information Related to EyeWay

Study On EyeWay’s Earlier Version of Eye Tracking

Kenneth Holmqvist, a professor at Lund University in Sweden, alerted me to a study he helped develop, with a paper and video presentation, that evaluated the earlier version of EyeWay Vision’s eye-tracking technology. This study gives some clues as to how EyeWay’s tracking is more accurate than prior methods.

Saccadic Eye Motion – Does Vision Blank?

When looking at a scene (left), each quick eye movement creates motion streaks (right) on the retina that we don’t consciously perceive – Martin Rolfs

In the first article on EyeWay Vision, I discussed the concept of “saccadic masking,” which suggests that vision is fully blanked out during eye movements. Subhash Sadhu, a Graduate Research Assistant at MIT Media Lab, alerted me to a recent article titled, “We thought our eyes turned off when moving quickly, but that is wrong.” The article leads to the full paper Intrasaccadic motion streaks jump-start gaze correction by Richard Schweitzer and Martin Rolfs at the Humboldt University of Berlin in Germany. Two quotes from the New Scientist article summarize the study and a potential issue.

“The paper suggests that during eye movements, what is left of motion streaks (the traces left in our visual system by fast-moving objects) helps perception, whereas it is a disturbance when the eyes are steady,” says Paola Binda at the University of Pisa in Italy. “This point would need direct testing, of course, but it is an intriguing one.”

“The only potential criticism I can see is that the results were obtained with stimuli ingeniously designed to investigate these effects, but it is not clear whether any of this occurs in natural vision – as the authors admit,” says Karl Gegenfurtner at Justus Liebig University Giessen in Germany.


Human vision is incredibly complex and is studied indirectly. Simply put, and as suggested by Karl Gegenfurtner above, researchers show various optical illusions to human subjects and try to tease out what the visual system is doing based on the subjects’ responses.

The issue of intrasaccadic perception is also discussed by Ed Tang of Avegant in his AR/VR/MR video. Tang’s 19-minute video is very informative on human vision with respect to near-eye foveated displays, and I would recommend watching it if you are interested in foveated displays.

The more complex question is whether saccadic motion streaks will make EyeWay’s Foveated Display harder or easier to accomplish. All display devices create an illusion that is very different from how the eye sees the real world.

The question becomes whether the illusion is good enough that a person does not notice the difference. We won’t know the answer to this question without a working foveated display.

What Makes EyeWay So Interesting

EyeWay is trying for “the Holy Grail” of AR: a headset that is small, light, and very energy efficient, with a wide FOV and high brightness and contrast, that eliminates vergence accommodation conflict (VAC) while greatly reducing front projection and light illuminating the eye. While their prototype is still far from that goal, they demonstrate intriguing technologies that could be used in whole or as part of another system.

Retinal Eye Tracking

EyeWay’s developments in tracking the retina have broad application beyond foveated displays. Most eye tracking today is based on tracking the pupil or glints off the cornea, but these methods are much less accurate. Before ever talking to EyeWay, I was convinced that if foveated displays could be made to work, it would require tracking the retina.

Foveated Displays = Power, Processing, and Data Efficiency

A foveated display can greatly reduce the amount of data to process and the power that goes with it. See, for example, my 2017 discussion of Nvidia’s and Microsoft’s work on foveated rendering as part of my coverage of Varjo. It also means that there is no need to generate a large eye box. EyeWay takes this a step further by only generating light that goes into the pupil (see pictures below). Thus, only a very small fraction of the light ever needs to be generated.

Comparison of EyeWay illuminating just the pupil vs. a conventional method illuminating a large eye box

With conventional displays, processing grows as a function of FOV and resolution; more pixels equal more processing. Foveated displays enable foveated rendering, where processing is concentrated in the small area the eye can see with good acuity. For large-FOV displays, the processing and power required are, at least theoretically, an order of magnitude smaller, and they grow only modestly as the FOV increases.
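The order-of-magnitude claim above can be sketched with a simple pixel-budget comparison. The numbers below are my own illustrative assumptions (FOV, pixels-per-degree, foveal-region size), not figures from EyeWay:

```python
# Rough, illustrative pixel-budget comparison: a conventional display
# rendering a wide FOV entirely at foveal acuity vs. a foveated display
# with a small high-resolution region plus a low-resolution periphery.
# All numbers are assumptions for illustration, not EyeWay's.

ACUITY_PPD = 60      # ~60 pixels/degree roughly matches 20/20 foveal acuity
PERIPHERY_PPD = 10   # assumed lower resolution for the peripheral image

def pixels(fov_h_deg, fov_v_deg, ppd):
    """Pixel count for a rectangular FOV at a given pixels-per-degree."""
    return fov_h_deg * ppd * fov_v_deg * ppd

# Assume a 100 x 70 degree FOV and a 10 x 10 degree foveal region.
conventional = pixels(100, 70, ACUITY_PPD)
foveated = pixels(10, 10, ACUITY_PPD) + pixels(100, 70, PERIPHERY_PPD)

print(f"conventional: {conventional / 1e6:.1f} MP")
print(f"foveated:     {foveated / 1e6:.1f} MP")
print(f"savings:      ~{conventional / foveated:.0f}x")
```

With these assumptions, the foveated approach needs roughly 1/20th the pixels; note also that enlarging the FOV only grows the cheap peripheral term, which matches the point that processing grows modestly with FOV.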

More Socially Acceptable – Much Less Light

Hololens 2 with “Front Projection”

Because no light is generated that does not go into the eye, the area around the eye is not unnaturally lit up, as demonstrated in the eye box image above. All that can be seen is some specular reflection of a small spot of light on the cornea.

Because there is so much less light being generated, only a very small amount can be front projected (light projected away from the eyes). With today’s prototype, you can see a very tiny spot of light if you look very carefully from an off-angle.

Simultaneously supporting high-resolution and large FOV

Foveated displays’ big promise is to simultaneously support a wide field of view with high perceived resolution. In theory, the EyeWay approach might support both. With EyeWay, both the foveal and peripheral images follow the eye to increase the perceived FOV.

Image quality compared to other Laser Scanning Displays

As I stated in the first and second articles on EyeWay, I was very impressed by the image quality of EyeWay Vision for a laser scanned display. It’s the first laser scanned display I have seen that didn’t have the image problems normally associated with laser scanning, such as scan lines, flicker, floater shadows, and/or speckle.

I have seen multiple direct (non-waveguide) LBS displays, and before EyeWay, they all caused issues with floaters (particles in the eye) casting shadows on the retina. EyeWay’s optics has addressed this fundamental issue.

I should note here that I am making allowances for the fact that the EyeWay display is not yet tracking the eye, and thus the transition from the foveal to the peripheral display is very noticeable (above right). While good enough for AR uses, the resolution and quality of EyeWay’s foveal display are less than those of, for example, Lumus Maximus and Nreal.

Vergence Accommodation Conflict (VAC)

EyeWay is controlling the apparent focus distance of the foveated region of the display to reduce VAC. VAC occurs when the apparent distance due to stereo displays does not agree with where the eyes are focusing. Famously, Magic Leap made reducing VAC one of its cornerstones (see, for example, my 2018 article) and why Magic Leap One had dual waveguides.

EyeWay has a way of adjusting the apparent focus distance of the light in the foveal region. They say they can even adjust the focus within subregions of the foveal display (but not individual pixels). This accuracy is likely good enough, but once again, we will have to see how it works in practice.

Both high brightness and Transparency

Because all of the light is aimed into the pupil, EyeWay can provide high brightness without requiring high power and its associated heat. In theory, they could provide high brightness over a wide FOV.

Thanks to the efficiency and brightness of the approach, it should be possible to support high transparency combiners, either with curved mirrors or holographic films.


EyeWay’s Biggest Challenges

While EyeWay has smart people and some interesting technology, it is still a long way from being a product. Below, I outline what I see as their most serious challenges.

Holographic mirror

Perhaps the biggest long-term issue I see is going from a deeply curved semi-reflective combiner to a flat or much less curved holographic film while maintaining image quality. While a holographic combiner is not necessary for EyeWay’s foveated display to work, it seems to be a key ingredient in reducing the size of the system.

It helps to understand the function of the highly curved semi-mirror combiner in the current design (left two figures below). Light rays from the laser scanner are spread to form the FOV and then directed by the fovea tracking/steering mirror to the curved semi-mirror combiner. The combiner then makes the light rays converge so they pass through the pupil and illuminate the FOV on the retina. In this way, the light rays can go through the pupil at the angles necessary to illuminate the whole FOV.

On the surface, it may seem counter-productive first to spread the scanned image and then make the light converge. But it is necessary to get the light rays to where they need to be on the retina.

On EyeWay’s prototype, the lasers are above and not “on-axis,” at least vertically, with the eye. Being off-axis results in some distortion that needs to be “pre-corrected.” This correction appears to be done by a curved mirror, which I have indicated with red dots in the figure on the right.

Instead of using a deeply curved semi-mirror combiner, the often proposed solution (not just by EyeWay) with direct laser retinal scanning is to use a holographic film mirror. The figure below shows North Focals using a Luminit holographic mirror, a similar holographic mirror used by Bosch for their retinal scanning AR glasses, and from a Facebook paper and presentation on their R&D headset.

In both the Bosch and North Focals pictures above, you can see the roughly circular holographic mirror in the lens. This indicates that the hologram is affecting light rays from the real world as they pass through. Both designs are monocular and place the image outside the straight-ahead view of the eye (as indicated on the Bosch booth photo), so this effect may be less visible to the user in normal use. Also, both designs support a very small FOV of about 15 degrees with basic “informational” rather than photographic image quality.

Examples of Diffraction Grating Light Capture

The correction for being off-axis may be made in part by lenses or mirrors before the holographic mirror. It is possible to program some or all of the off-axis correction into the hologram.

While holograms can be used to make some seemingly magical flat optical devices, there’s a catch; there is a reason you don’t see them used in high-quality optics. Similar to a diffractive waveguide, the image quality generally suffers. When the hologram becomes large enough for a large FOV, there will be noticeable detrimental effects on the real-world image. Also, there will likely be capturing of off-axis light, as is seen with diffraction-based waveguides (above right).

What about the eye-tracking IR pathway via the holographic mirror?

A holographic mirror is a unidirectional device; the usable light path goes in only one direction. The current curved partial-mirror combiners support a bi-directional optical path that lets the view of the retina return for eye tracking. It is therefore unclear how EyeWay will support a return path for retinal tracking with a holographic mirror.

Retinal eye-tracking necessary, but is it sufficient?

As I wrote earlier, eye tracking down to the retinal level as Eyeway is doing seems necessary. But the question becomes, is it sufficient?

A basic question is whether the eye-tracking will be fast and accurate enough. Then the next question is whether the display can react fast enough. A deeper issue will be whether, even with excellent eye tracking, they can fool the human vision system without artifacts.
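To make the speed-and-accuracy question concrete, here is a back-of-the-envelope latency budget. The saccade velocity and foveal-region size below are my own assumed round numbers for illustration, not measurements of EyeWay’s system:

```python
# Back-of-the-envelope tracking-error budget: during a saccade the eye
# can move at several hundred degrees per second, so the end-to-end
# latency from eye tracking to displayed photons translates directly
# into angular error between where the foveal image lands and where
# the fovea actually is. Assumed numbers, not EyeWay data.

SACCADE_PEAK_DEG_S = 500.0   # assumed peak saccadic velocity (deg/s)
FOVEAL_REGION_DEG = 5.0      # assumed high-acuity foveal region (deg)

def tracking_error_deg(latency_ms, eye_velocity_deg_s=SACCADE_PEAK_DEG_S):
    """Angular error accumulated over the track-to-display latency."""
    return eye_velocity_deg_s * (latency_ms / 1000.0)

for latency_ms in (2, 5, 10, 20):
    err = tracking_error_deg(latency_ms)
    ok = "within" if err < FOVEAL_REGION_DEG / 2 else "outside"
    print(f"{latency_ms:>3} ms latency -> {err:4.1f} deg error ({ok} foveal region)")
```

Under these assumptions, even a few milliseconds of total latency during a fast saccade can push the foveal image off the fovea, which is why the question is not just tracking accuracy but how fast the whole display pipeline can react.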

Human vision is incredibly complex and still not well understood, as suggested by the motion-streak paper Intrasaccadic motion streaks jump-start gaze correction discussed earlier. We cannot predict whether a foveated display will work well enough to completely trick the eye without seeing it with a wide range of content and viewing conditions. A display that works “most of the time” would be very distracting when it doesn’t work.

Consider the case where the eye moves while a real-world object (or even a conventional display) remains stationary. Relative to the retina, the object has moved even though it will be perceived as stationary. So what will the display have to present to the eye to fool it that the virtual objects are stationary when the eye moves? Does it need, for example, some level of blurring, and if so, where?

It could very well be that EyeWay will get their foveated display working only to discover one or more subtle but even more difficult to solve issues with how human vision works.

Aside: Lesson from history: Motion blur was a big topic at Siggraph in the 1980s.

The discussion of how the eye perceives motion reminds me of presentations at Siggraph in the early 1980s, when motion blur in computer animation was a big topic of discussion. At Siggraph 1984, I saw “The Adventures of André & Wally B” by the (then) Lucasfilm Computer Division (spun out and bought by Steve Jobs as Pixar in 1986). It was their first animation with motion blur. At Siggraph 1984, Lucasfilm also presented a paper on motion blur with an iconic ray-traced still of pool balls (right).

It so happens that the Pixar Image Computer used Multiport Video DRAMs, which I had a hand in creating just a few years earlier. Back then, “Pixar” was the name they called their hardware.

It turns out that when real objects are moving, the human visual system expects to see some blur. When shown animation without motion blur, humans will sense that the motion is jumpy, and the moving object can even look like it is breaking up at the edges (see here for a simple demonstration with and without motion blur). Without the blur, the visual system assembles the image in a way that looks worse/strange.

The point here is that the human visual system does complex and not fully understood processing to create what a person perceives. Human vision is not like any camera; it is not objective but rather performs some level of interpretation, filling-in, and reconstruction, especially with moving objects. Some display illusions fool the eye while others confuse it. I expect that foveated displays will uncover some new issues that will have to be resolved.

“Hard AR”

EyeWay talks about “Hard AR,” meaning that their images will seem more solid and less ghost-like. How solid an object appears to be is a function of the relative brightness of the virtual image versus the real world it overlays. Any bright display should be similarly “hard.”

The virtual image needs to be about eight times brighter than the real world for the virtual image to look solid. Unfortunately, if it is a bright sunny day outside and you are looking at concrete, which will reflect about 7,000 to 10,000 nits, this might mean needing the display to go to about 70,000 nits, which is so bright that it will be uncomfortable to view.
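The arithmetic behind the figure above is simple enough to write down; this short sketch just multiplies the article’s roughly-8x rule of thumb by the quoted background luminances:

```python
# Worked version of the brightness arithmetic above: for a virtual image
# to look "solid," it needs roughly 8x the luminance of the real-world
# background it overlays (the article's rule of thumb, not a standard).

SOLIDITY_RATIO = 8  # rule-of-thumb ratio for a solid-looking image

def required_display_nits(background_nits, ratio=SOLIDITY_RATIO):
    """Display luminance needed to look solid over a given background."""
    return background_nits * ratio

# Sunlit concrete reflects roughly 7,000 to 10,000 nits.
for background in (7_000, 10_000):
    print(f"{background:>6} nit background -> "
          f"{required_display_nits(background):,} nit display")
```

That gives 56,000 to 80,000 nits, consistent with the “about 70,000 nits” figure in the text.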

Many products use an ambient light sensor, which varies the brightness of the whole image based on ambient light. One can theorize that if you could map the real-world view from a camera, optically centered on the eye, to where that view lands on the retina, it might be possible to modulate the virtual image brightness based on localized ambient light; essentially, make the image brighter based on the localized brightness of the real world. At least in theory, this might work, as the eye detects brightness relatively across the scene rather than absolutely; I discussed human vision being a function of relative brightness in September 2020.

Many have also theorized using something like an LC panel in or on AR glasses to block light. I only bring this up because I get asked about it all the time concerning hard edge occlusion. But at the distance of glasses from the eye, the light-blocking would be highly out of focus. It would also not line up with the image on the retina.

While on the subject, there are also many more bulky and impractical approaches, such as the ASU hard edge occlusion approach I discussed in 2019. Cogni-Trax appears to be using a hard edge occlusion method that looks like a variation on the ASU method (see their patent, which does not cite the ASU work as prior art). There are other methods I have heard of that I think will prove to be similarly impractical for hard edge occlusion.

Size and weight reduction

Certainly, the proverbial “elephant in the room” is the size and weight of the EyeWay prototype. It is an optical breadboard held up by pulleys, highly reminiscent of Ivan Sutherland’s famous 1968 Sword of Damocles.

On top of adding functionality over the existing prototype, EyeWay has to make massive reductions in size. It is not at all clear that everything will scale down to a glasses-like form factor. They also have to allow for optical paths to support two LBS engines combined with the steering mirror.


Conclusions

EyeWay is trying to solve many of AR’s key technical challenges simultaneously. They have some very bright people who have thought through how they could solve these problems. But trying to tackle all these problems at once will likely take many years to develop and perfect, let alone scale down to a glasses form factor.

I’m seriously impressed with the image quality of the foveal display, considering they are using laser scanning. I’m further impressed by how EyeWay manipulates the character of the light to eliminate the problems I have seen with near-eye LBS systems in the past.

A large obstacle to making the device smaller could be the combiner in the form of a holographic mirror. While holographic mirrors exist, I am concerned about the damage done to virtual and real-world images.

The next major step for EyeWay is to demonstrate the integration of their eye-tracking with their foveal display. As far as I know, nobody has ever (at least publicly) demonstrated a foveal display that tracks the eye (as opposed to “fixed foveal displays” such as Varjo’s). Avegant has talked about foveal displays, but I don’t know of any open demonstrations of one working, nor have I seen it work, so I suspect it has issues.

If EyeWay can prove their eye-tracking foveal display works, then on a sheer technical basis, it should be valuable, even if the form-factor issues have not been resolved. The proof that foveated displays can work would be valuable in its own right.

To my mind, what EyeWay is doing would be of most interest to a company with a long-term perspective on AR. The sheer knowledge gained from a working foveated system could give insight into the human vision that could benefit many different types of products. EyeWay’s retinal tracking could prove valuable in its own right as there are many uses for highly accurate eye tracking.

Karl Guttag


    • Thanks, but I don’t quite follow what you are talking about with respect to “LG+Google’s OLED on glass.”

      If you are talking about transparent OLEDs, they won’t work for a near-to-eye display as they would be out of focus.

      • Thanks for the reference. Interestingly, one of the authors, Nikhil Balram, who was at Google in 2018, is now the CEO of EyeWay Vision Inc. (EyeWay Vision USA).

        For those that don’t know, the paper is for a VR display aimed at better supporting foveated rendering. Basically, the display has a high resolution of 3840 × 4800, and they were using foveated rendering to reduce the computational load and bandwidth. Researchers at Google, Microsoft, Nvidia, and elsewhere have shown that foveated rendering (only) into a high-resolution display will work, particularly in VR. A VR headset or computer monitor with eye-tracking foveated rendering (only) is a much easier job than having the display move while tracking the eye. Essentially, the only debate is how large the foveated region needs to be and the level of eye-tracking required so the user does not see the transition.

        Foveated rendering is just a small part of the job EyeWay has to do. It is well known that foveated rendering will work. EyeWay has the much harder job with a “foveated display” of having the display track the eye without the eye (human vision) sensing it is moving.

      • Thanks for clearing that up. Interestingly, I also only now noticed they call it a “foveated pixel pipeline”; perhaps that’s a better description than “foveated rendering,” which is mostly about software and defaults to “fixed foveated” as of now. Interesting times (ahead) I guess.

    • I’m more than a little skeptical that Apple will do anything optical (see-through) AR anytime soon. I’m even a bit doubtful about the VR headset being released soon, at least as a product. I expect Apple to keep exploring how they can “augment” the iPhone with Lidar. The most obvious use would be in photography to simulate a shorter depth of field for things like portraits, and they could also use Lidar for enhancements in games, such as having things hide and appear in AR-like games (ala Pokemon Go). There are also those speculating that the Lidar on all the iPhones will be “mapping the world” for Apple to use in Apple Maps and other purposes.

      The Facebook/Ray-Ban glasses seem to be more about dipping a toe in to try a few things. They are supposed to announce tomorrow as I write this. The expectation is that it will be mostly audio but could have a camera and will not have a significant display. I don’t have any inside information.

      I have seen the Avegant announcement and hope to have more to say about it in the not too distant future.

  1. Thank you for another great article.
    I am interested in where your comment, “The virtual image needs to be about eight times brighter than the real world for the virtual image to look solid,” comes from.
    I would appreciate it if you could share the reference.

    • The 8X comes from my own experience working with both front projected images and AR glasses. It is also the point at which colors start looking saturated.

      You want at least 2x (as in the display brightness to the eye to equal the background after any light-blocking) for things like text to be readable. At 2X, things look pretty translucent and colors are pretty washed out.

  2. Thanks for the follow up Karl.

    I don’t want to be the pessimist here but I feel like you’re being maybe a bit too optimistic with EyeWay?

    I don’t think there’s anything that Microsoft and Avegant haven’t also tried previously (patents and research papers) with LBS and foveation.

    It seems like for this to work, not only is exceptional eye tracking required, but there’s also the issue of the display tech involved putting many limitations on what eye optics can be used. So there are two big challenges here, and again, I don’t think others like MS, Avegant or Varjo haven’t tried to solve this before.

    The thing I personally don’t like with EyeWay is that it seems like eye tracking does not only need to be perfect for the foveal resolution to work all the time, but for the overall image to reach the retina. Seeing artifacts or reduced resolution due to eye tracking hiccups is one thing, but losing the entire augmented view or going blind if used for VR? That’s problematic.

  3. Any chance you can visit Vuzix to see what they’re up to with their upcoming tech? I would love to hear your thoughts on how their upcoming volume hologram waveguide optics compare to those of other companies such as WaveOptics.

  4. Thank you for a great article.
    I’m wondering, are there any problems with external acceleration forces acting on the fast scanning mirror of the laser scanner in a real-world environment where the subject is moving around? I have read about that kind of problem with MEMS mirrors in LIDARs for automotive applications, where bumps in the road will throw the mirror off because of its inertia.
