Magic Leap – No Fiber Scan Display (FSD)

Sorry, No Fiber Scan Displays

For those who only want my conclusion, I will cut to the chase: anyone who believes Magic Leap (ML) is going to have a Laser Fiber Scanned Display (FSD) anytime soon (as in the next decade) is going to be sorely disappointed. The FSD is one of those concepts that sounds like it would work until you look at it carefully. The technology was developed at the University of Washington in the mid to late 2000s; they were able to generate some very poor quality images in 2009 and, as best I can find, nothing better since.

The fundamental problem with this technology is that a wiggling fiber is very hard to control accurately enough to make a quality display. The problem is particularly acute when the scanning fiber has to come to near rest in the center of the image. It is next to impossible (and certainly impossible at a rational cost) to have a wiggling fiber tip, with its finite mass and its own resonant frequency, follow a highly accurate and totally repeatable path.

Magic Leap has patent applications related to FSDs showing two different ways to try to increase the resolution, provided they could ever make a decent low resolution display in the first place. Effectively, they have patents that double down on the FSD. One is the "array of FSDs," which I discussed in the Appendix of my last article; it would be insanely expensive and would not work optically in a near eye system. The other doubles down on a single FSD with what ML calls "Dynamic Region Resolution" (DRR), which I will discuss below after covering the FSD basics.

The ML patent applications on the subject of FSDs read more like technical fairy tales of what they wish they could do, with a bit of technical detail and a few drawings scattered in to make it sound plausible. But the really tough problems of making it work are never even discussed, much less are solutions proposed.

Fiber Scanning Display (FSD) Basics

[Figure: ml-spiral-scan]

The concept of the Fiber Scanning Display (FSD) is simple enough: two piezoelectric vibrators connected to one end of an optical fiber cause the fiber tip to follow a spiral path, starting from the center and working its way outward. The amplitude of the vibration starts at zero in the center and then gradually increases, causing the fiber tip to both speed up and follow a spiral path. As the fiber tip accelerates, it moves outward radially. The spacing of each orbit is a function of the increase in speed.

[Figure: ml-fiber-scanning-basic]

Red, Green, and Blue (RGB) lasers are combined and coupled into the fiber at the stationary end. As the fiber moves, the lasers turn on and off to create "pixels" that come out of the spiraling end of the fiber. At the end of a scan, the lasers are turned off and the drive is gradually reduced to bring the fiber tip back to the starting point under control (if they just stopped the vibration, the tip would wiggle uncontrollably). This retrace period, while faster than the scan, still takes a significant amount of time since it is a mechanical process.
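
As a rough illustration (a toy model of my own, not taken from any ML or University of Washington document), the drive can be thought of as two sinusoids in quadrature at the fiber's resonant frequency, with an amplitude envelope that ramps up during the scan and then ramps back down for the retrace:

```python
import numpy as np

# Toy model (my own illustration, not from ML or UW documents) of a fiber-scan
# drive: two sinusoids in quadrature at the fiber's resonant frequency, with an
# amplitude envelope that ramps up during the scan and decays for the retrace.
resonant_freq_hz = 11_000      # assumed fiber resonance
scan_time_s = 1 / 80           # assumed time spent drawing the spiral
retrace_time_s = 1 / 240       # assumed time to bring the tip back under control
# Note: scan + retrace together fill one 1/60 s frame in this toy model.

t_scan = np.arange(0, scan_time_s, 1e-6)
t_retrace = np.arange(0, retrace_time_s, 1e-6)

env_scan = t_scan / scan_time_s                          # amplitude ramps up linearly
env_retrace = np.exp(-t_retrace / (retrace_time_s / 4))  # then decays back toward zero

t = np.concatenate([t_scan, scan_time_s + t_retrace])
envelope = np.concatenate([env_scan, env_retrace])

x = envelope * np.cos(2 * np.pi * resonant_freq_hz * t)  # horizontal piezo axis
y = envelope * np.sin(2 * np.pi * resonant_freq_hz * t)  # vertical piezo axis
# (x, y) traces a spiral out from the center, then collapses back for retrace.
print(f"spiral turns per frame: ~{resonant_freq_hz * scan_time_s:.0f}")
```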

An obvious issue is how well they can control a wiggling optical fiber. As the documents point out, the fiber will want to oscillate at its resonant frequency, which can be stimulated by the piezoelectric vibrators. Still, one would expect that the motion will not be perfectly stable, particularly at the beginning when the tip is moving slowly and has little momentum. Then there is the issue of how well it will follow exactly the same path from frame to frame when the image is supposed to be still.

One major complication I did not see covered in any of the ML or University of Washington (which originated the concept) documents or applications is what it takes to control the lasers accurately enough. The fiber tip speeds up from near zero speed at the center of the spiral to its maximum speed at the end of the scan. If you turned a laser on for the same amount of time and at the same brightness everywhere, the pixels would be many times closer together and brighter at the center than at the periphery. The ML applications even recognize that increasing the resolution of a single electromechanical FSD is impossible for all practical purposes.
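
To see how big this effect is, here is a small calculation (my own assumed numbers, for illustration only) of how far apart successive pixels land at different radii when the laser is clocked at a constant rate:

```python
import numpy as np

# With a constant pixel clock, how far apart do successive "pixels" land at
# different radii of the spiral? (My own assumed numbers, for illustration.)
pixel_clock_hz = 20e6          # assumed constant laser modulation rate
resonant_freq_hz = 11_000      # assumed fiber resonance (fixed angular rate)
max_radius_mm = 0.5            # assumed maximum tip swing from center

def pixel_spacing_um(radius_fraction):
    # Tangential tip speed is proportional to radius because the angular rate
    # is fixed by resonance, so pixel spacing grows linearly with radius.
    omega = 2 * np.pi * resonant_freq_hz                  # rad/s
    speed_mm_s = omega * radius_fraction * max_radius_mm  # mm/s along the path
    return speed_mm_s / pixel_clock_hz * 1000             # spacing in microns

for r in (0.02, 0.1, 0.5, 1.0):
    print(f"radius {r:4.0%} of max: pixel spacing ~{pixel_spacing_um(r):.3f} um")
```

With constant laser power and timing, the pixels near the center land tens of times closer together than those at the edge, which is consistent with the hot, bright center visible in the published FSD photos.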

Remember that they are electromechanically vibrating one end of the fiber to cause the tip to move in a spiral that covers the area of a circle. There is a limit to how fast they can move the fiber and how well they can control it, and since they want to fill a wide rectangular image, a lot of the circular scan area will be cut off.
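
For example (a simple geometric estimate of my own), a 16:9 rectangle inscribed in the circular scan uses only a little over half of the scanned area:

```python
import math

# Simple geometry (my own estimate): how much of the circular scan area is
# actually used when a 16:9 rectangle is inscribed in it?
aspect_w, aspect_h = 16, 9
diagonal = math.hypot(aspect_w, aspect_h)   # rectangle diagonal = circle diameter

rect_area = aspect_w * aspect_h
circle_area = math.pi * (diagonal / 2) ** 2
print(f"usable fraction of the scanned circle: {rect_area / circle_area:.1%}")  # ~54%
```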

Looking through everything I could find that was published on the FSD, including Schowengerdt (ML co-founder and Chief Scientist) et al.'s SID 2009 paper "1-mm Diameter, Full-color Scanning Fiber Pico Projector" and SID 2010 paper "Near-to-Eye Display using Scanning Fiber Display Engine," only low resolution still images are available and no videos. Below are two images from the SID 2009 paper along with the "Lenna" standard image reproduced in one of them; perhaps sadly, these are the best FSD images I could find anywhere. What's more, there has never been a public demonstration of it producing video, which I believe would reveal additional temporal and motion problems.

[Figure: 2009-fsd-images2]

What you can see in both of the actual FSD images is that the center is much brighter than the periphery. From the Lenna FSD image you can see how distorted the image is, particularly in the center (look at Lenna's eye in the center and the brim of the hat, for example). Even the outer parts of the image are pretty distorted. They don't even have decent brightness control of the pixels, and they didn't attempt to show color reproduction (which requires extremely precise laser control). Yes, the images are old, but there is a series of extremely hard problems outlined above that are likely not solvable, which is likely why we have not seen any better pictures of an FSD from ANYONE (ML or others) in the last 7 years.

While ML may have improved upon the earlier University of Washington work, there is obviously nothing they are proud enough to publish, much less a video of it working. It is obvious that none of the released ML videos use an FSD.

Maybe ML has improved it enough to show some promise and get investors to believe it was possible (just speculating). But even if they could perfect the basic FSD, by their own admission in the patent applications the resolution would be too low to support a high resolution near eye display. They would need to come up with a plausible way to further increase the effective resolution to meet the Magic Leap hype of "50 Megapixels."

Dynamic Region Resolution (DRR) – 50 Megapixels???

Magic Leap has on more than one occasion talked about needing 50 megapixels to support the field of view (FOV) they want at the angular resolution of 1 arcminute per pixel that they say is desirable. Suspending disbelief that they could even make a good low resolution FSD, they doubled down with what they call "Dynamic Region Resolution" (DRR).
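
As a rough sanity check (my own back-of-the-envelope numbers, with an assumed FOV that is not from ML's filings), 50 megapixels is roughly what 1 arcminute per pixel implies over a very wide field of view:

```python
# Back-of-the-envelope check (my assumed FOV, not Magic Leap's figures): how
# many pixels does a wide near-eye FOV need at 1 arcminute per pixel?
arcmin_per_degree = 60            # 1 degree = 60 arcminutes
fov_h_deg, fov_v_deg = 120, 100   # hypothetical wide field of view

pixels_h = fov_h_deg * arcmin_per_degree   # 7,200 pixels
pixels_v = fov_v_deg * arcmin_per_degree   # 6,000 pixels
total_mp = pixels_h * pixels_v / 1e6       # ~43 megapixels

print(f"{pixels_h} x {pixels_v} = {total_mp:.1f} MP")  # in the ballpark of "50 MP"
```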

US 2016/0328884 ('884), "VIRTUAL/AUGMENTED REALITY SYSTEM HAVING DYNAMIC REGION RESOLUTION," shows the concept. This would appear to answer the question of how ML convinced investors that a 50 megapixel equivalent display could seem plausible (even though it is not possible).

[Figure: ml-variable-scan-thin]

The application shows what could be considered a "foveated display," where different areas of the display vary in pixel density based on where they will be projected onto the human retina. The idea is to have high pixel density where the image will project onto the highest resolution part of the eye, the fovea, since that resolution is "wasted" on the parts of the eye that can't resolve it.

The concept is simple enough, as shown in '884's figures 17a and 17b (left). The idea is to track the pupil to see where the eye is looking (indicated by the red "X" in the figures) and then adjust the scan speed, line density, and sequential pixel density based on where the eye is looking. Fig. 17a shows the pattern for when the eye is looking at the center of the image, where they would accelerate more slowly in the center of the scan. In Fig. 17b they show the scanning density being higher where the eye is looking at some point in the middle of the image: they increase the line density in a ring that covers where the eye is looking.

Starting at the center, the fiber tip is always accelerating. For denser lines they simply accelerate less; for less dense areas they accelerate at a higher rate, so it sounds plausible. The devil is in the details of how the fiber tip behaves as its acceleration rate changes.
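
To make the idea concrete, here is a minimal sketch (my own toy model, not code or parameters from the '884 application) of slowing the radial growth of the spiral in a band around the gaze radius so the scan lines pack more densely there:

```python
import numpy as np

# Toy model (my own sketch, not from the '884 application) of a DRR-style
# spiral: grow the radius more slowly inside a band around the gaze radius so
# the scan lines pack more densely where the eye is looking.
def drr_spiral(n_turns=200, gaze_radius=0.5, band=0.15, dense_factor=0.3,
               samples_per_turn=360):
    theta = np.linspace(0, 2 * np.pi * n_turns, n_turns * samples_per_turn)
    r = np.zeros_like(theta)
    base_step = 1.0 / theta[-1]   # radial growth per radian for a uniform spiral
    for i in range(1, len(theta)):
        dtheta = theta[i] - theta[i - 1]
        in_band = abs(r[i - 1] - gaze_radius) < band
        step = base_step * (dense_factor if in_band else 1.0)
        r[i] = r[i - 1] + step * dtheta
    x, y = r * np.cos(theta), r * np.sin(theta)   # the scanned path
    return x, y, r

x, y, r = drr_spiral()
print(f"max radius reached: {r[-1]:.2f} (vs. 1.0 for a uniform spiral)")
```

Note that packing the lines more densely in one band means the same number of turns no longer reaches the full radius; the extra resolution has to be paid for with more scan time or coarser lines elsewhere.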

Tracking the pupil accurately enough seems very possible with today's technology. The patent application discusses how wide the band of high resolution needs to be to cover a reasonable range of eye movement from frame to frame, which makes it sound plausible. Some of the obvious fallacies with this approach include:

  1. Controlling a wiggling fiber with enough precision to meet the high resolution, and doing so repeatably from scan to scan. They can't even do it at low resolution with constant acceleration (see the rough precision numbers sketched after this list).
  2. Stability/tracking of the fiber as it increases and decreases its acceleration.
  3. Controlling the laser brightness accurately in both the highest and lowest resolution regions. This will be particularly tricky as the fiber increases or decreases its acceleration rate.
  4. The rest of the optics, including any lenses and waveguides, must support the highest resolution for the user to be able to see it. This means that the other optics need to be extremely high precision (and expensive).
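
To put rough numbers on the precision requirement in item 1 (my own estimate, assuming a 50 degree wide scan at 1 arcminute per pixel and a tip swing on the order of 1 mm, since the whole 2009 SID projector was 1 mm in diameter):

```python
# Rough estimate (my assumptions: ~50 degree wide scan at 1 arcminute per pixel,
# tip swing on the order of 1 mm) of how repeatable the fiber tip position must
# be for the pixels to land where they are supposed to.
fov_deg = 50
arcmin_per_pixel = 1
pixels_across = fov_deg * 60 // arcmin_per_pixel        # 3,000 pixels across

tip_swing_mm = 1.0                                      # assumed scan diameter
pixel_pitch_um = tip_swing_mm * 1000 / pixels_across    # ~0.33 micron per pixel

# To avoid visible wobble, the tip placement error needs to be a small fraction
# of a pixel, frame after frame.
tolerance_um = pixel_pitch_um / 4
print(f"{pixels_across} pixels across -> pitch ~{pixel_pitch_um:.2f} um, "
      f"tip repeatability needed ~{tolerance_um:.2f} um")
```
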
What about Focus Planes?

Beyond the above is the need to support ML's whole focus plane ("poor person's light field") concept. To support focus planes they need 2 to 6 or more images per eye per frame time (say 1/60th of a second). The fiber scanning process is so slow that even producing a single low resolution, highly distorted image in 1/60th of a second is barely possible, much less multiple images per 1/60th of a second to support the focus plane concept. So to support focus planes they would need an FSD per focus plane, with all its associated lasers and control circuitry; the size and cost to produce would become astronomical.
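
A quick scan-time budget (my own assumed numbers, not figures from ML) shows why multiple focus planes per frame from one scanner is a non-starter:

```python
# Back-of-the-envelope scan-time budget (my assumed numbers, not ML's). A
# resonant fiber scanner completes roughly one spiral turn per resonance cycle,
# and each turn is roughly one "line" of the image.
resonant_freq_hz = 11_000     # assumed fiber resonance, on the order of 10 kHz
frame_rate_hz = 60
retrace_fraction = 0.3        # assumed fraction of each frame lost to retrace

turns_per_frame = resonant_freq_hz / frame_rate_hz * (1 - retrace_fraction)
print(f"~{turns_per_frame:.0f} spiral turns per 1/60 s -> at best a ~{turns_per_frame:.0f}-line image")

for planes in (2, 6):
    print(f"{planes} focus planes from one scanner -> ~{turns_per_frame / planes:.0f} lines each")
```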

Conclusion – A Way to Convince the Gullible

The whole FSD appears to me to be a dead end, other than as a way to convince the gullible that it is plausible. Even getting an FSD to produce a single good low resolution image would take more than one miracle. The idea of DRR just doubles down on a concept that cannot even produce a decent low resolution image.

The overall impression I get from the ML patent applications is that they were written to impress people (investors?) who don't look at the details too carefully. I can see how one could get sucked into the whole DRR concept, as the applications give numbers and graphs that try to show it is plausible; but they ignore the huge issues that they have not figured out.

Karl Guttag

9 Comments

    • Thanks, I had seen the first video before but not the second one. It is impressive technology for use in endoscope cameras. The images do appear to be reasonably stable, which suggests that they are repeatable.

      But note that it has modest resolution and frame rate (about 250 spirals, or 500 pixels across), and you can't tell anything about the linearity of the video images they show of the inside of an animal. They just need to see things inside the body, and they don't care if a line comes out as a curve or if things are somewhat stretched or distorted. You also don't know if the distortion is different with each camera. The question is how you could scale up the resolution and do it accurately and repeatably.

  1. Hi Karl,

    That is pretty sad; I thought FSDs might bring naked eye 3D displays to reality soon: imagine each fiber being a macro pixel that can present a light field at high resolution. Several million fibers (without considering the cost) could create an astounding 3D effect that has never existed. It seems we will have to wait longer for it. Beyond that, I would like to know your opinion on what kind of technology could be a breakthrough for naked eye 3D in the near future.

    • There is a real issue here of the fundamental resolution limits of spatial (pixel) light modulator (SLM) display technologies (LCOS, DLP, OLED microdisplays). In addition to the problems of physically making the mirrors/displays work, you start to get to where diffraction effects become significant as the pixel sizes get to 4 microns and below.

      But just because there are known physical limits with the SLMs, it does not negate the fact that fiber scanning has its own set of problems. A huge issue is that they are using electromechanical scanning, and there are limits as to how fast and how accurately they can move the fiber. I would think that if there is a solution, it will be an all-electronic one that would not be so limited in speed or accuracy.

      Note that I have not seen or heard Magic Leap claim to be using Fiber Scanning Displays. Many articles are just assuming that FSDs will be used by Magic Leap due to their co-founder and chief scientist Brian Schowengerdt's work on FSDs, plus their appearance in a number of patent applications, combined with the occasional mention of 50 megapixel displays by Magic Leap.

  2. Though it may seem like nitpicking, there are a few crucial spelling errors/omissions in parts – i.e., "…problem is particularly true when the (the what, fiber I presume) has to come to rest or near rest in the center of the image…"
