Analyzing Apple Glass Leak (Part 2) – Akonia Waveguide with an LCOS MicroDisplay

Introduction

“Act in haste, repent in leisure,” they say. In my first article on FPT’s Jon Prosser’s video leak about Apple Glass, I didn’t discuss what just about every other tech magazine mentioned, namely Apple’s purchase of Akonia Holographics. I even wrote about Akonia’s patents back in November 2019:

Most of the few “serious” patents seem to come from companies acquired by Apple. Additionally, a simple search of patent applications assigned to Apple directly will not turn up all of their AR patents. For example, Apple acquired Akonia Holographics back in August 2018, but the Akonia patent applications show up on the US Patent Office site as still being assigned to Akonia and not Apple.

Apple AR Patent Clutter and Hiding AR Patents

When I plug the Akonia technology into my assumptions, it starts to solve the problems I had with how they could support IPD adjustments and have a thin glasses-like product. But to make this work, I had to break the “cost assumption” that Apple would get Apple-like margins (more on this next time) which unduly influenced my thought processes last time.

Great News for AR Display and Optics Companies after Magic Leap’s Debacle

I can’t count the number of times someone in the AR industry asked me, “What are we going to do WHEN Magic Leap implodes?” The Apple Glass leak is important not just to Apple watchers, but to every company working on AR display components. As I am going to discuss, I think the technology is out there to build a headset that could compete with what Apple appears to be developing. I am going to mention some of the optics and display companies that should be getting a fresh look.

While writing this article, word came in that Magic Leap has raised another $350M from someone (thought to be a medical company). It looks like they found another “investor” with more money and a bigger ego than brains (“P. T. Barnum” was proven right again).

The Apple Glasses story is several orders of magnitude more important than what happens with Magic Leap.

Bloomberg’s Mark Gurman Disputes Prosser’s Timeline

While I’m told FPT’s Jon Prosser has a good track record with his leaks, other Apple watchers still think the timeline for Apple is longer. Bloomberg’s Mark Gurman has tweeted that he thinks Prosser’s timeline is too aggressive and that Prosser may be confusing VR and AR releases. Prosser tweeted back that he was not confused about which product it was.

Even if Apple is intending to release an AR product as Prosser believes, Apple could always run into production snags or decide that there is a problem with the strategy which could cause delays.

Akonia, Owned by Apple, Still Filing Patents under Akonia

It turns out that Apple (mostly under Akonia’s name) has been filing new patent applications. While Apple owns Akonia, Akonia looks like it is operating as a somewhat separate entity. Employees still list Akonia as their employer on LinkedIn.

IPD Adjustment Was the Main Sticking Point Last Time

Last time, the big unsolved problem was how Apple was going to support IPD adjustment with a stereo (dual display) system. I got too caught up on the price point in my quick analysis and jumped to the erroneous conclusion that Apple could be using lower-resolution displays with simpler (non-waveguide) optics. But the simple low-cost optics and small display approach fails to address IPD adjustment; it requires a very non-Apple-like clunky mechanical adjustment. But if one assumes that Apple might accept less than Apple-like margins to get to a secondary revenue stream, then it gets much easier to explain what Prosser saw.

If one ignores the manufacturing cost of Akonia’s holographic-based waveguides, which can generate a large eye box, and then uses a reasonably high-resolution (say 720p or 1080p) field sequential color LCOS microdisplay, things start to fall into place. They could then use “electronic IPD adjustment,” which simply reserves part of the field of view for moving the image around to match the IPD.
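To get a feel for the trade-off, here is a toy Python sketch of how much field of view electronic IPD adjustment might cost. All the numbers (display width, total FOV, shift range) are my own illustrative assumptions, not anything from Apple or Akonia:

```python
import math

# Toy model of "electronic IPD adjustment": part of the display is reserved
# so the image can be shifted left/right to line up with each user's eyes.
# The FOV and shift range below are illustrative assumptions only.
def ipd_reserve(display_px_h: int, total_fov_deg: float, shift_deg: float):
    """Return pixels reserved per side and the usable FOV that remains."""
    px_per_deg = display_px_h / total_fov_deg
    reserve_px = math.ceil(shift_deg * px_per_deg)  # per side
    usable_fov = (display_px_h - 2 * reserve_px) / px_per_deg
    return reserve_px, usable_fov

# A 1920-pixel-wide (1080p) display over an assumed 50-degree FOV,
# with +/- 2 degrees of electronic shift:
px, fov = ipd_reserve(1920, 50.0, 2.0)  # -> 77 px per side, ~46 deg usable
```

The point of the sketch is that a high-resolution display makes the reserved margin a small fraction of the image, which is why electronic IPD adjustment pairs naturally with a 720p or 1080p microdisplay.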

Apple Glass could then have one or just a few IPD-model headsets. Having more than one IPD variation helps keep the image more in the sweet spot of the optics and supports a bigger FOV, as less has to be reserved for IPD adjustment. As examples, Hololens 1 (and 2) have a single model, whereas Magic Leap opted for models that covered two different IPD ranges. I can’t say right now which way Apple has chosen. Their choice may depend on the quality of the Akonia waveguides and the size of the optics’ “sweet spot.”

Apple Owns Akonia, But Why LCOS?

Once the Akonia holographic waveguide is assumed, then a field sequential color (FSC) LCOS display device becomes the obvious choice for several reasons. First of all, it technically makes the most sense and has been chosen by most companies using pupil expanding waveguides. FSC LCOS, with LED illumination, is probably the only display technology today that can simultaneously support:

  1. The high brightness required (with LED illumination)
  2. The high resolution to support the electronic IPD adjustment
  3. Small pixels to support a small optical engine
  4. A cost that would support a $499 retail price point with dual displays

Briefly, on the other display options: OLEDs are far from bright enough to work with a pupil-replicating waveguide display like Akonia uses. MicroLEDs are A) not ready, B) would be too expensive, and C) may still not be bright enough (a big issue today). DLP is probably too expensive and too big. Laser scanning would be too expensive and the optics required too bulky (see Hololens 2).
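To illustrate why brightness dominates this choice, below is a rough multiplicative loss-budget sketch in Python. Every transmission factor is an illustrative assumption (real numbers vary widely by waveguide design), but it shows how even a very bright source can end up dim at the eye after a pupil-replicating waveguide:

```python
# Rough loss budget for light going through a pupil-replicating waveguide.
# All transmission factors below are illustrative assumptions, not measurements.
losses = {
    "collimation/illumination optics": 0.50,
    "in-coupling into the waveguide": 0.10,
    "pupil replication (area in vs. area out)": 0.05,
    "out-coupling toward the eye": 0.30,
}

def nits_at_eye(display_nits: float) -> float:
    """Multiply the source brightness by each transmission factor in turn."""
    out = display_nits
    for t in losses.values():
        out *= t
    return out

# Under these assumed factors, a 1,000,000-nit source reaches the eye
# at only a few hundred nits:
eye_nits = nits_at_eye(1_000_000)
```

With factors like these, an OLED microdisplay at a few thousand nits has no chance, while an LED-illuminated LCOS engine, which can be driven to millions of nits, still gets a usable image through.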

Next we have the “patent trail” left by Akonia/Apple. In many of the patents, Akonia talks generically about the display, saying it could be anything, but most patents point to a 2-D pixel-array display. Several of the patents point to polarized optics that would be aimed at an LCOS or LCD display. And then one patent application filed by Akonia in August 2019 shows a compact FSC LCOS illuminator design.

Who Makes the LCOS Device and Who Else Could They Use?

We could round up the “usual” LCOS suspects, which include Raontech, my old company Syndiant, Omnivision (used in Magic Leap), Forth Dimension Displays (owned by Kopin), Jasper, and Himax (used in Google Glass and Hololens 1).

Then there is Compound Photonics (CP), which has been around for years with several management shakeups and nobody using their products. But I started hearing some good buzz about CP’s LCOS last year. Their May 2019 announcement of a 0.26-inch 1080p device with a 3.015-micron pixel has some very impressive specs in terms of pixel size, contrast, and color field rate (to stop the color breakup seen on headsets like Hololens 1). Then on April 26, 2020, CP announced the “World’s Smallest Wide Field of View 1080p Optical Engine Reference Design for Smart Glasses” (pictured on the left). This optical module is specifically designed for use with waveguides. Quoting CP’s release, “Prototype demonstration units have shown up to 2x brightness vs. other engine designs when projected into the new generation of 50°+ wide field of view (WFOV) waveguides.” CP’s module shows how small 1080p optics can be with a 0.26″ LCOS device (see picture next to a dime above). If Akonia’s concept for LCOS shown in their patent application above works, the module would be even smaller.

I’m not saying that CP is in Apple Glasses, as I think that most of the LCOS companies could meet Apple’s requirements, but just that CP seems to have the best fit and the most momentum. I may also be giving them more credit because of their work on MicroLEDs, which I am following due to their announcements with Plessey, followed by their announcement that CP is getting into MicroLED modules. I have a series of articles on MicroLEDs almost ready to go that keeps getting pushed back due to breaking news. One of the good things with LCOS is that there are many products from which to choose.

Optics For Others Competing With Apple

The only image I have of Akonia’s optics was taken by Akonia in December 2016, before they were acquired by Apple. Frankly, it is about on par with or worse than other diffractive optics technologies at the time, and much worse than Lumus reflective waveguides were back then. Since 2016, we can assume that Akonia has improved, but so has everyone else. I have personally seen dramatic improvements in everyone’s waveguide technology, both diffractive and reflective, over the last few years.

Still, I don’t expect the image quality with a holographic diffractive waveguide to be perfect. I would expect some issues with color purity. My understanding is that most of the waveguide companies do some level of “digital” correction to improve the purity, such as WaveOptics presented at the Photonics West AR/VR/MR conference this year.

Apple Glass appears to be using a (hologram) diffractive waveguide technology. I’m expecting the image quality will be similar to WaveOptics, which has one of the better diffractive waveguides. Dispelix came out of nowhere a year ago with their single-layer diffractive waveguide, and at the Photonics West AR/VR/MR conference in February 2020 they were showing image quality similar to WaveOptics. Both WaveOptics and Dispelix, IMO, looked much better than, say, Hololens 1&2 and Magic Leap One. Digilens uses a more photographic process to form its diffraction gratings and has been making impressively large and thin waveguides, some of which are aimed at consumer price points. Vuzix also has diffractive waveguide technology but generally supports smaller fields of view.

In terms of resolution and color uniformity, Lumus reflective waveguides are better than any diffractive waveguide I have seen. Back in October 2018, I compared Lumus to the diffractive optics of Hololens and Magic Leap, and it was not even close (see image below). Lumus also claims (I have not personally verified) to have a significant efficiency advantage (brightness for the same power) over any of the diffractive-based waveguides.

Prescription Lenses – Another Sticking Point

The next question is how Apple Glass will support prescription eye correction. The “easy way,” which most companies, including Magic Leap and Nreal (see right), use is a prescription insert.

A much more expensive and complex alternative would be to encase the waveguide in over-molded lenses similar to Tooz Technologies. But this is much harder to do with flat waveguides, as they need an air gap for the total internal reflection (TIR) to work, which greatly complicates matters. It also gets tricky to have the optics work for both the “real world” and the displayed image. Embedding the waveguide in the lenses would likely be a logistics nightmare, as there would be a multiplicity of combinations of IPD and prescription adjustments that would get molded into the optics. Besides, many people change their prescriptions as often as once a year, and it would entail getting entirely new glasses.
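A quick back-of-envelope count shows why molding prescriptions into the optics explodes into so many variations. The ranges below are my own hypothetical assumptions about a typical eyeglass prescription lineup, not anything from Apple:

```python
# Hypothetical SKU count if prescriptions were molded into the waveguide lenses.
# All ranges are illustrative assumptions about a typical prescription lineup.
sphere_options = 49    # -6.00D to +6.00D in 0.25D steps
cylinder_options = 13  # 0 to -3.00D in 0.25D steps (astigmatism strength)
axis_options = 18      # astigmatism axis in 10-degree steps
ipd_models = 2         # two IPD ranges, as Magic Leap chose

skus = sphere_options * cylinder_options * axis_options * ipd_models
# Tens of thousands of distinct lens variants to manufacture and stock,
# before even considering frame sizes or styles.
```

Even with these modest assumed ranges, the count lands around 23,000 variants, which is the logistics nightmare in concrete terms.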

The more I think about it, the more I think Apple has no choice but to use some kind of prescription insert. I expect Apple will try to do a good styling job to hide the added inserts.

Akonia/Apple Patents With Some Explanation

I have selected some key figures from about 22 US patents and applications by Akonia and Apple to give an idea of what Akonia/Apple are doing. I already covered application ’163 above, which shows using an FSC LCOS microdisplay.

US2019/0391393 – Pupil Expansion

The ’393 patent application shows the basic concept. At first glance, it looks very much like most other diffractive waveguides such as Hololens or Magic Leap (see below). The one subtle difference that I will go into more later is that Akonia does not use an “entrance grating” but rather uses either a prism or cut surface (1210 in the Akonia figure, left), more like Lumus (right), to get the light started at the desired angle for TIR.

Using a cut prism at the entrance may give Akonia some efficiency advantages over other diffractive waveguides in that there is one less diffraction loss.

US 2020/0088931 – Input Coupling

Akonia’s ’931 patent application on input coupling has some of the best images for giving a high-level overview of the light flow through the optics. This application has a series of diagrams drawn at different levels of detail and from different angles, and I have picked a few (left).

It shows a pixel-type display 355, almost certainly LCOS, with collimation optics 360 going into the input prism 36. The light then proceeds down a horizontal waveguide “cross couple” 365. Embedded in the horizontal coupler is a “skew mirror” (Akonia’s name for a holographic grating) before the light enters waveguide 370, which has a holographic grating that will cause the light to exit the waveguide toward the eye.

The second figure (Fig. 23 in the application) I copied from the ’931 application shows a more top-down view of the same optical flow. It starts to show where the holographic film is located.

Figure 24C gives another view of the light flow from the input prism 2425-c through the holographic exit grating 2460-c.

US 2020/0117003 Optical System with Dispersion Compensation – Assigned to Apple

This application, filed in the same time frame as the others and with many of the same inventors as above, was assigned to Apple. It was filed a month or so later, and it included an inventor from California, home of Apple, whereas the other applications list only Colorado (home of Akonia) inventors. This application is the first I have seen with Apple’s name on it that uses inventors from Akonia. As it was only published a month ago, we will have to see if it is the start of a trend or just an exception.

Conceptually, a (hologram) diffraction grating functions like a mirror tilted at an angle, even though the holographic grating is physically parallel to the surfaces of the waveguide. The hypothetical tilted mirror is shown diagrammatically with a series of tilted mirrors 708, 710, and 712 above. This figure shows that with a holographic grating, Akonia can tune the mirror effect of the hologram based on the wavelength of the light in order to correct for the color dispersion caused by the preceding optics (the dispersion shown with 728, 730, and 732).
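The idea can be sketched with the basic grating equation. In the toy Python below (the wavelengths, pitch, and target angle are my assumptions, not values from the patent), a single fixed-pitch grating sends blue, green, and red to different angles, while a hologram that presents a different effective pitch per wavelength can steer all three back to a common angle:

```python
import math

def diffracted_angle_deg(wavelength_nm, pitch_nm, incident_deg=0.0, order=1):
    """First-order grating equation: sin(out) = sin(in) + m * lambda / pitch."""
    s = math.sin(math.radians(incident_deg)) + order * wavelength_nm / pitch_nm
    return math.degrees(math.asin(s))

def pitch_for_angle_nm(wavelength_nm, target_deg, incident_deg=0.0, order=1):
    """Effective pitch that steers this wavelength to the target angle."""
    return order * wavelength_nm / (
        math.sin(math.radians(target_deg)) - math.sin(math.radians(incident_deg))
    )

waves = (460, 532, 630)  # assumed blue, green, red wavelengths in nm

# One fixed 1000 nm pitch disperses the three colors to different angles...
fixed = [diffracted_angle_deg(w, 1000.0) for w in waves]
# ...while a per-wavelength effective pitch (what a wavelength-multiplexed
# hologram can record) steers all three to the same 45-degree angle:
matched = [diffracted_angle_deg(w, pitch_for_angle_nm(w, 45.0)) for w in waves]
```

This is only the textbook mechanism behind the figure; the patent’s actual hologram design is surely more involved.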

Shown (below) is a single-wavelength grating with a single periodic function. Below that is a complex function made up of six overlapping/superimposed diffraction gratings. In theory, the hologram(s) can be programmed to any desired function. But I have not seen images through these gratings, so I don’t know how well they work in practice. I’m constantly reminding people that not everything in a patent works out in practice. Researchers at other companies seem to prefer surface relief (stamped or deposited) gratings.
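As a simple numerical illustration of the figures (the spatial frequencies below are made-up values, not from Akonia), superimposing gratings in a hologram amounts to summing several periodic index modulations into one smooth “analog” profile:

```python
import numpy as np

# Made-up spatial frequencies (cycles per micron) for six multiplexed gratings.
freqs = [1.0, 1.3, 1.7, 2.1, 2.6, 3.2]
x = np.linspace(0.0, 10.0, 2000)  # position across the film, in microns

# A single-wavelength grating is one periodic function...
single = np.sin(2 * np.pi * freqs[0] * x)
# ...while a multiplexed hologram records the sum of several superimposed ones:
multiplexed = sum(np.sin(2 * np.pi * f * x) for f in freqs)
```

Each superimposed component still diffracts its own wavelength/angle, which is what lets one film do the work of several gratings.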

A key difference between a “holographic” and a typical (usually surface relief) grating is that the hologram can be more “analog” with smooth transitions, as seen in the Akonia figures above. Surface relief diffraction gratings are more “digital,” on or off. Hololens 1 used slightly tilted gratings originally developed by Nokia. A Magic Leap paper shows using a major and a minor grating. Everyone seems to have their own tweak on diffraction gratings with some claimed advantage. It is impossible to know which works best without seeing them all side by side.

Conclusion – It Can Be Done But at What Price?

As I pointed out above, the technology does exist to build what Prosser says Apple will have soon. The good news for everyone else interested in AR is that the technology exists for others to build similar headsets. By similar, I mean waveguide-based headsets that support better than 50% transparency and are reasonably thin (a step closer to glasses than Nreal).

With the computing separate and depending on the iPhone for the compute, Apple is not trying to compete with VR games as Magic Leap tried. They will likely be targeting games and interactions that are less compute-, communication-, and SLAM-centric. I would particularly be worried about the “motion to photons” latency. Nreal gives a clue as to what might be possible.

Even squeezing manufacturers, moving the computing to the phone, and offering many fewer capabilities than HL2 and Magic Leap, it seems unlikely that Apple is going to get the kind of margins it normally expects at $499. Setting the price at $499 suggests that Apple wants to move considerable numbers of units versus making money on each unit. Apple appears to be thinking in terms of a secondary revenue stream, which I plan to discuss in the near future.

Karl Guttag

18 Comments

  1. Another great article, looking forward to the Micro LED stuff. Cheers! M.

  2. In Bernard Kresse’s book, Fundamentals of Wearable Computers and Augmented Reality, he states-

    “The efficiency of with LCoS or LCD transmission microdisplays remains low (2%-4% typically).”

    What is your take on battery life/size/weight needed to run 2 displays and how this will integrate into a “sleek” Apple Glass design.

    • I could not find that quote in Kress’s book, as I was trying to understand the context.

      LCOS is radically better in efficiency than LCD at high resolution, so the statement on the surface makes no sense.

      The big “losses” with pupil-replicating waveguides are the etendue coupling into the waveguide, the pupil replication loss (basically area-in versus area-out), and the losses in the diffraction gratings for diffraction-based waveguides or the partially coated mirrors for multiple-mirror designs (ex. Lumus). The losses are so huge that even MicroLEDs starting with over 1 million nits aren’t bright enough. LCOS with LED illumination does work.

      Non-pupil-replicating optics are much more efficient (on the order of 1000X). This is what you will see used with OLEDs in the 800 to 12,000 nit range, and you won’t see OLEDs used with pupil-replicating waveguides. Non-pupil-replicating optics, for a large FOV (greater than about 20 degrees), require large optics (this is discussed on page 128, section 14.1 of Kress’s book).

      One thing very interesting with the Akonia design is that the light enters the waveguide via a wedge/prism similar to Lumus. This would seem to save a diffraction loss and might help efficiency.

      The trade-off is that if you want a much greater than 20-degree FOV, the thin optics with a small optical engine are all pretty inefficient.

      • The quote – “The efficiency of with LCoS or LCD transmission microdisplays remains low (2%-4% typically).”

        came from

        Fundamentals of Wearable Computers and Augmented Reality
        Chapter 5 – Optics for Smart Glasses, Smart Eyewear , Augmented Reality and Virtual Reality
        Headsets
        Bernard Kress
        page 103

        (my mistake, it’s not “Kress’s book” – he just contributed Chapter 5)

        https://www.google.com/books/edition/Fundamentals_of_Wearable_Computers_and_A/QxUqCgAAQBAJ?hl=en&gbpv=1&dq=Fundamentals+of+Wearable+Computers+and+Augmented+Reality&printsec=frontcover

        Beyond the increased power from using 2 displays, wouldn’t the larger 720p and 1080p displays you now suggest further add to the power requirements as opposed to the 400 x 300 to 800 x 600 pixels you estimated in your previous article?

        Do you have any data on power consumption or efficiency comparisons for LCoS vs DLP vs LCD vs OLED vs LBS? My understanding is that many specifications are held close to the vest by manufacturers. I have seen a lot of vague terminology used in comparisons. Even within a display type there can be “radical” variations.

        So how will 2 – 1080p (or 720p) LCOS displays fare in terms of packing the power needed into a sleek Apple Glass design and provide a decent battery life?

  3. What FOV can be expected with this technology?

  4. Karl, what do you think of LETs instead of LEDs?
    As in OLET, LEFET, QDLEFET?

    Are *they* the future instead of Micro LED?
    https://onlinelibrary.wiley.com/doi/full/10.1002/adfm.201904174

    • Thanks for the reference. I hadn’t heard of them before, and I don’t understand what advantage they would have over inorganic LEDs. I noticed that they have an organic layer that might be problematic at high brightness. I certainly could be missing something in a quick look.

      Inorganic LEDs have a manufacturing advantage of being highly mainstream. I would be looking more for a technology that would emit light over a tighter angle or emit much more light per unit area so the light on a per-pixel basis could be better collimated.

  5. I doubt very much Prosser’s outline is anywhere close to reality so an analysis is a waste of time.

    • What is your reason to doubt him? He has an “act” that some people don’t like, but my understanding is that he has a reputation of being right most of the time.

    • well there isn’t a second source ….and it’s not anything concrete …they might have an announcement this year and they might have it out by 2021-22 or even after that
      ….so yeah it could be way off or wrong for various reasons but it all fits ….we need analysis and other sources to figure out if it’s legit

  6. Hey KarlG,

    Really interesting analysis you provide here. You are certainly an expert on AR display technologies! I’m quite excited from these new developments in your current guess on AppleGlasses technology as opposed to the quality tradeoffs previously guessed in part 1 (they’re known for never ‘jumping the gun’ on a new technology, so a quality compromise of that level would be unexpected and disappointing).

    I’m putting together a youtube video detailing how AR works, and wanted to include AppleGlasses and the possible technologies behind it. Would you be okay if I used this article as one of my sources and credited you accordingly? I think this stuff is super cool, and want to spread that interest to more people. Thanks again for your expertise! Will be following to see what you think after/if prototype pictures are leaked.

    • Thanks,

      No problem using this blog as a source with credit.

      • Thanks KarlG, I really appreciate that!

        I’ve been scratching my head over two questions recently and was hoping you’d be able to provide a little insight. I have one question concerning waveguide/diffraction-grating AR displays more generally, and a second question related to a specific Apple patent that is likely related to Apple Glass.

        1) Virtual Images

        I am trying to understand how the diffraction-grating/out-coupler is displaying a virtual image for our eye. Are the rays traveling from the out-coupler collimated, or is it a bit more complicated than that?

        My understanding is that to produce a virtual image, the near-eye display must out-couple the light rays at specific angles such that their “virtual origin” would be a point in 3D space that makes sense to our eye and brain (i.e. as an object a few feet away). I assumed the rays would not be collimated because then we would be unable to simultaneously focus on the virtual image and real objects in front of us (due to the differing object distances).
        – If it’s true they must be angled, would the diffraction grating have enough tunability to allow for different focal distances of the image (use cases: virtual projections on a table in front of us vs on the far wall)?
        – How can one keep the “edges” of the out-coupled image from being completely messed up? For a large FOV, I feel that the farther out you go from the center, eye-rotation and slightly larger eye-relief could drastically mess up the optics. Would diffraction-gratings need some free-form properties? (this may be a full article so no need to answer it in full if it is complicated)

        2) Patent #: US20200089014 (link: http://www.freepatentsonline.com/y2020/0089014.html)

        This patent filed by apple (recently) is supposed to “provide the eyebox with uniform intensity light” i.e. even brightness across the display. It uses beam-splitter structures inside the waveguide to replicate beams of light, and then they are coupled out of the waveguide at different locations, so a single beam of light is refracted into two parallel beams and thus cover a larger area of the eye-box.

        I can see how this would help achieve a larger spread of light, but I don’t get how this would be compatible with a high-resolution display. For instance, if a single pixel from the LCOS is meant to ultimately become a single pixel on our retina, then how would splitting a single light ray into two still be able to achieve this one-to-one mapping? Perhaps I’m missing something obvious.

        Thanks again for your expertise!

        Best,
        Jeff

  7. I’m surprised you didn’t mention Apple might be using the display tech from Mirasol which they acquired from Qualcomm in Hsinchu Science Park, Taiwan. Again, the assumption is their first product needs to support full motion video, true color etc. Perhaps they are considering a product more in the line of what Vuzix has been promoting. Apple has been investing heavily in display technology (MicroLED) in Taiwan as well. To me it makes sense Apple would integrate a homegrown display technology and test it in the low volume AR space first and then move it to the watch business as factory capacity/yields ramp.

    • I understand the Mirasol concept, and I can’t imagine how it would apply at all. It was a technology that was trying to compete with eInk with color. I think Apple’s MicroLED effort will be directed more at direct-view displays for watches and phones. Playnitride in Taiwan is working on MicroLEDs for both direct-view displays and microdisplays that could be used in AR. But the sense I get is Playnitride is more serious about the direct-view efforts. As I see it, the technologies for making direct-view displays and microdisplays for AR are going to be significantly different even though they are both being called “MicroLEDs.”

  8. […] article builds on Part 1 and Part 2 in this series on Jon Prosser’s Apple Glass […]
