Magic Leap just announced the "Magic Leap One" in, of all places, a Rolling Stone article rather than the technical press. Actually, "announce" is too strong a word for what Magic Leap did; it is more of a "public admission" or a "teaser," since they revealed so little. Interestingly, most companies wouldn't announce in late December, figuring that the announcement would be overlooked in the middle of the holiday season while many reporters are on vacation.
Usually, this blog is about deeper technical analysis of Magic Leap and other display technologies, but I thought my readers would be interested in my take on what could be gleaned from the little that was shown. Please understand this is "instant analysis" and subject to change as I learn more; there will likely be updates as more information becomes available.
The more I look at this, the more issues I am finding. So to get this out today, I'm going to give my quick takes below and go into a little detail on just the "see-through" aspect; I'll go into more depth later:
Frankly, just about everything above was expected based on the information and rumors available. Perhaps the biggest technical surprises are not being able to see the person's eyes at all and the size of the Lightpack (I suggest renaming it the "Bigpack"). The glasses are a little more hideous-looking (to me and others) than expected and similar to the Magic Leap design patents.
I'm fond of saying, "it is often more interesting what is not said," and in this case, there is a lot that has not been said. The Rolling Stone article and Magic Leap's web page have not given us much to go on. All the photos are "stock" photos provided by Magic Leap, where they can control the angles of the shots, what is revealed, and what is hidden. There is no price, and availability has been given only as sometime in 2018. There are zero specs with respect to resolution, field of view, brightness, and see-through percentage. There are no through-the-optics photos or videos (with maybe a slight exception in the next paragraph), and no uncontrolled photography or video of the product in use. We don't even know whether the pictures were taken with the glasses turned on, or whether they were even pictures of functional units. Everything is being tightly controlled by Magic Leap.
There was also a 5-second video released yesterday that was either shot in a very dark room or through very dark glasses. Magic Leap has, as reported by Business Insider, been evasive as to whether it was "through the optics" or a composite shot. Either way, the video is so short, so dark, and so devoid of content that it is hard to discern much.
My very first reaction, and the biggest optical thing I can see wrong, is that I can't see the user's eyes. Either Magic Leap has doctored the photos, or their optics are heavily scattering and/or blocking the light between the real world and the eyes. For reference, compare the Magic Leap Lightwear photo on the left with a similar view of Hololens on the right, where you can still clearly see the wearer's eyes. If true (if not true, then the photos are doctored), this serious disruption of the light path to the eyes is a major human-factors and image-quality problem.
Humans are very sensitive to the look of another person's eyes; as has been said, "the eyes are the window into a person's soul." People instinctively pick up cues, and when the eyes are missing, the person looks strange, more cyborg than human. Plus, it makes people wonder what the person is doing and/or hiding.
If the optics are so badly scattering/blocking the light path to the eyes that you can't see them, then the optics must also be having a major effect on the view of the real world (which Magic Leap is not showing yet). I expect that you will see diffraction rainbows (scattering) and a severe darkening of the real world, so that it will be like wearing smudgy dark sunglasses indoors.
If Magic Leap were not so hyped up, with $1.9 billion invested so far, this would not merit the attention it is getting. There was next to no information given, and if anything the headset looks worse than people were expecting. If another startup had shown what was shown today, almost nobody would care.
In the end, I keep coming back to not being able to see the eyes of the user. To me, this is a fatal flaw if it is true. They are compromising terribly on human factors and on the view of the real world.
Anyway, that is my quick take. I’m sure I will be finding other things to discuss and hope to elaborate more later.
Regarding the “where are the eyes” recurring theme, I believe the eyes will be visible in the end product. These are probably digital renders and not the actual goggles.
It has been reported that Magic Leap admitted to retouching the photos, but they claim the base photos are “real.”
I said in the article that there is a possibility that Magic Leap retouched them. But then what was the point of this whole “announcement,” to show people that it was bigger and uglier than people hoped?
I guess the point of this reveal is that it simply signals the “Day one” of their “road to release”.
I am certainly not impressed by the bulky design but their vision and general aesthetics prevail in my mind. Their site for example. Pretty great.
The announcement feels a bit rushed and is basically about what ML One will look like …..and nothing more.
By the way, they had one patent that was pretty similar to the version they are showing (https://i.redd.it/0s5j314h01sz.jpg), and they have never made any claims about the design or size of it; all they have ever said has been generic babble about AR/MR glasses.
There was also a patent on the processing unit, so I'm not shocked by the design of that part. It does look pretty odd and impractical in some ways, but I can imagine some reasons for it. Splitting it like that gives it a big surface area for cooling, if that matters at all, and if it weren't split in two it would have been fat, and you can't really stick that in your pocket and use it comfortably. They made the external part circular to look more spacey, or whatever, and not so much like a rectangular 90s phone on your belt.
….but yeah it’s not a super refined design
If the display combiner is based on their patent drawings of a stack of waveguides with a different output focus for every waveguide, then the cumulative dispersion of every output grating may have this effect of blocking a view of the eyes.
It could be or the images could be fake/Photoshopped as many people are claiming.
This article says that Magic Leap admits that some of it was altered: https://arstechnica.com/gaming/2017/12/magic-leap-finally-announces-a-headset-but-its-vague-rendered-in-photoshop/?comments=1
"Update, 3:05 pm: A Magic Leap spokesperson sent a statement to Ars Technica to explain the hardware images posted today: 'The photos are not renderings, and the only retouching that was done was to edit out some sensitive IP.'"
You have not covered the Leia display range from what I could find. They are a breed of light field displays; maybe the ML uses the Leia diffraction panel as the image combiner, or what they call the photonics chip?
Leia uses this under an LCD panel to generate edge-lit light 'bundles' and overlays the actual image information on top via the LCD panel. Since the HMD/eye views the combiner from one side, the same principle applies, with the only difference that it is a semi see-through display to enable augmentation.
I wonder how that would work for ambient light hitting the diffraction panel; it certainly would work for the projected image, but not for the real-life light?
The Leia light field display (https://www.leiainc.com/ and not to be confused with Leia, the fog screen company) and how it works is interesting; it uses what I call "illumination side control" to steer the light to the various sub-images of a light field. It is a "true" but limited and crude light field, and you give up a huge amount of resolution. Magic Leap can't be using true light fields, as they would have horribly low resolution. Basically, what light fields do is trade resolution in X and Y for depth in Z, and you end up with on the order of 50 to 100X or more lower resolution, as the rough arithmetic below shows.
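To make that trade-off concrete, here is a minimal sketch of the arithmetic. The panel size and view counts are purely hypothetical, chosen only to show the order of magnitude; they are not Magic Leap or Leia specs.

```python
# Rough arithmetic behind trading X/Y resolution for Z in a "true" light field
# display. All numbers here are hypothetical illustrations.

def lightfield_resolution(panel_w, panel_h, views_x, views_y):
    """Effective per-view resolution when a panel is subdivided into
    views_x * views_y angular views (the crude way depth is obtained)."""
    eff_w = panel_w // views_x
    eff_h = panel_h // views_y
    loss = (panel_w * panel_h) / (eff_w * eff_h)
    return eff_w, eff_h, loss

# Example: a 2000x2000-pixel modulator split into 8x8 views.
w, h, loss = lightfield_resolution(2000, 2000, 8, 8)
print(f"Per-view image: {w}x{h} pixels ({loss:.0f}x fewer pixels per view)")
# -> 250x250 pixels per view, a 64x reduction; 10x10 views would be 100x.
```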
If you go to this post of mine on a Magic Leap patent, you will see they are also using illumination side control to switch between two paths: https://www.kguttag.com/2016/12/16/magic-leap-focus-planes-too-are-a-dead-end/ . This is my "most likely" candidate for what Magic Leap is doing. It supports two focus planes and uses the LEDs to, in effect, select the optical path.
Thanks Karl.
I am fully aware that fogging glasses don’t mean light field… or laminar flow screens in general.
Hong Hua (ML) stated her biggest dream in display development would be dynamic foveated displays, where pixel density and order can be adjusted; part of this can be done computationally.
If it were possible to address the "pixels" on the Leia panel via the 'edge illumination', even only in zones, then you could computationally adjust the imaging device to subdivide the image into 'light field pixels' where needed. Could you not map a light field portion onto an otherwise high-resolution image, so that not the entire frame is degraded in resolution by the 'light field subdivision'? Each frame would consist of a sub-frame of non-light-field content (with the occluded object), then a second sub-frame containing the light field portion where the Leia grating is active and delivers the 'object portion' in lower resolution. It's a bit backwards, as the foveated portion would have to be higher resolution than the periphery.
In addition, we are assuming that all of this happens optically. What if the main focal plane is focused via eye tracking over the entire field, but only the out-of-focus parts are processed in sequencing? Image quality degradation would be less noticeable. With a 'priority' order, the 'focus' part of the image could be cycled more than the other parts. Optically this might not be correct, but the human brain often fills in gaps 'magically'.
First, I gave the link and mentioned fog screens for the benefit of others that may be reading this.
It is interesting that Leia can switch between light field and normal display mode and demonstrates the concept of “illumination side control”. Most technologies that switch the light do so post-modulation/imager. I immediately made the connection between what Leia was doing and the Magic Leap patent.
The problem is people are struggling to get decent angular resolution without having to throw in light fields and the like. Everything you add degrades the image in some way, usually in contrast, artifacts, and resolution. There are a lot of tricks you can do that are interesting, but most of them don’t make sense outside a demo.
I'm not following everything you are writing about, but I do agree that foveation will likely play a bigger part in the future. Display foveation has its own set of issues, though, as you need to keep moving the high-resolution part of the image.
I tried to package my thoughts into a post of my own, with attached graphics. Maybe it is now clearer to understand. While doing so, some new thoughts came up that I included.
I am curious what you think, whether this is totally on the wrong track or not.
https://themagicalworldofsakie.wordpress.com/2017/12/23/hmd-edge-illuminated-diffraction-grating-lcd-light-field-display/
In this comment thread Karl states that x,y resolution is a trade-off for z in light field displays, but for other readers who may be confused, and for the sake of completeness, time can also be used as a trade-off.
I’m not sure you’ve seen the video they released but to your point about display brightness and “pop” check this out:
https://youtu.be/GmdXJy_IdNw
You can see lamps in this picture that are turned on to illuminate the room but are barely visible.
Yes, everything is pretty dark, suggesting that they block a lot (more than 50%) of the real-world light. To really be "see-through" you should block less than 20% of the light; otherwise, it will be like wearing dark sunglasses indoors. It can even be a safety hazard in an industrial situation, or when walking around outdoors, if you are wearing glasses much darker than necessary.
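As a rough illustration of how quickly stacked see-through optics get dark, here is a minimal sketch. The per-layer transmittance values are assumptions for illustration only, not measured numbers for any product.

```python
# Why stacked combiner layers darken the real world quickly.
# The per-layer transmittance values below are illustrative assumptions,
# not measured Magic Leap (or anyone else's) numbers.

def see_through_transmission(layer_transmittances):
    """Fraction of real-world light reaching the eye through stacked layers."""
    total = 1.0
    for t in layer_transmittances:
        total *= t
    return total

# Example: six waveguide/grating layers at 90% each plus an 85% outer cover.
layers = [0.90] * 6 + [0.85]
t = see_through_transmission(layers)
print(f"Real-world transmission: {t:.0%}")  # ~45%, i.e., over half the light blocked
```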
One of the "Holy Grails" of AR is what is known as "hard edge occlusion," where you block light in focus with the image. This is trivial to do with pass-through AR and next to impossible to do realistically with see-through optics. You can do special cases if all of the real world is nearly flat. This is shown by some researchers at the University of Arizona with technology that is licensed to Magic Leap (the PDF at this link can be downloaded for free: https://www.osapublishing.org/oe/abstract.cfm?uri=oe-25-24-30539#Abstract). What you see is a lot of bulky optics just to support a real world with the depth of a bookshelf (essentially everything in the real world is nearly flat).
Without hard edge occlusion, you are left with trying to dominate the light in the real world. The brighter the room appears, the brighter the display must be to appear solid and not translucent. The problem is that you can only chase the brightness up so far before you start hurting the person's eyes; the sketch below gives a feel for the numbers.
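Here is a back-of-the-envelope sketch of that brightness chase. The ambient level, see-through transmission, and contrast target are all assumed for illustration, not specs from any headset.

```python
# Back-of-the-envelope on "dominating" the real world with added light.
# The ambient level, see-through transmission, and contrast target below
# are assumptions for illustration only.

def required_display_nits(ambient_nits, see_through_transmission, contrast):
    """Display luminance (at the eye) needed so the virtual image is
    `contrast` times brighter than the real-world background seen
    through the optics."""
    background = ambient_nits * see_through_transmission
    return background * contrast

# Example: a 250-nit indoor scene, 45% transmission, and a 10:1 target
# so the image reads as roughly "solid" rather than translucent.
print(required_display_nits(250, 0.45, 10))    # ~1,125 nits to the eye
print(required_display_nits(5000, 0.45, 10))   # outdoors: ~22,500 nits
```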
The closest anyone has gotten to the "holy grail" of optical occlusion is using a miniature and optimised version of the "tensor display" architecture with multiple layers of LCDs close to the eye: http://www.cs.unc.edu/~maimone/media/glasses_ISMAR_2013.pdf
There have been a few other attempts and patents besides Hua et al. for “flat” optical occlusion:
https://www.google.com/patents/US20120206452
https://patents.justia.com/patent/20160171779
The University of Arizona one Karl linked really reminds me of this one:
https://www.researchgate.net/profile/Mark_Billinghurst/publication/4040677_An_occlusion_capable_optical_see-through_head_mount_display_for_supporting_co-located_collaboration/links/00463524646de116c2000000.pdf
I think on-axis electrochromism or cell-less LCD or my own attempts at off-axis photochromism are the most viable candidates for occlusion in optically transparent HMDs:
http://www.freepatentsonline.com/y2017/0090194.html
The U of NC paper from 2013 looks interesting and shows the difficulty of the problem. I used to be on discussion panels at Siggraph with Henry Fuchs back in the mid 1980’s when he was working on computational memory devices (Pixel Planes). Both of us have drifted to display devices over the years.
The Hua patents are licensed to Magic Leap (where she consults part-time) and are part of the same work as the link I pointed to.
The 2003 U of Arizona paper is interesting and once again shows the difficulty of optical occlusion.
The big problem is getting the occlusion masking to be in focus with the real world. If you put a small, "pixel-size" black dot on the front of a pair of glasses, it will have next to no effect (most of the light will go right around it) unless the dot is directly in the path of a laser aimed at your eye.
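A simple geometric-optics sketch of why that is, using assumed dot and pupil sizes:

```python
# Why a pixel-sized black dot on the lens cannot do hard-edge occlusion.
# Simple geometric-optics estimate; the dot and pupil sizes are assumptions.

def fraction_blocked(occluder_mm, pupil_mm):
    """With the eye focused on the distant real world, a small opaque dot
    near the lens is completely out of focus: for any scene point it only
    blocks the bundle of rays that happens to pass through it, roughly the
    ratio of the dot's area to the pupil's area."""
    return (occluder_mm / pupil_mm) ** 2

# Example: a 0.05 mm (roughly pixel-sized) dot versus a 4 mm pupil.
print(f"{fraction_blocked(0.05, 4.0):.4%} of the light blocked")  # ~0.0156%
# Even a 1 mm dot only blocks ~6%: the scene dims slightly but is not occluded.
```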
Hi Karl:
I was expecting that you'd have a quick response to today's announcement — thanks! Magic Leap's web site also says "Our lightfield photonics generate digital light at different depths." Do you suppose this means a multi-planar display, like the Avegant "light field" HMD? If so, this could contribute to the low transmission efficiency. Also, I'd be curious about the Avegant IP involved.
ML got funded because they had variable-focus light field display demos, similar to Avegant's, years before Avegant.
Ronnie likes to slather on hype. This definitely sounds like a multi-planar display.
If you listen to this video, Doug Lanman of Oculus appears to say that the multi-planar method really does not work well: https://youtu.be/Rp-zR-gpjQo?t=2571 (the link queues up the video at the point where he is talking about it). Ed Tang of Avegant is next to him.
Well, the images they used look like renderings, and they didn't bother with Photoshopping the eyes into the picture.
Most likely they won't ship anything in the next 6 months at least, so no detailed, proper reviews anytime soon.
From other articles, Magic Leap says they touched up photos but did not use 3-D rendered images. If that is not what it looks like, then the Rolling Stone reporter would have to be dishonest for not saying that what he used was not the same as the pictures.
By the way, if you could see the eyes it would look like this: https://twitter.com/JohnPaczkowski/status/943494031379533824 . They might go with a sunglasses look instead of more transparent-looking glasses.
What about the sensors on the ML One? What are you guys thinking?
Similar conclusions…
themagicalworldofsakie.wordpress.com/2017/12/20/magic-leap-thoughts-hopes-or-dreams/
Thanks. I liked the FOV analysis on the blog.
You can use these drawings, although the scale is not clear: http://www.wipo.int/designdb/en/showData.jsp?SOURCE=HAGUE&KEY=DM095652 . Also, the Rolling Stone article mentioned something about a rectangle on its side, which when you think about it is pretty weird, especially if it's per eye, because you need good overlap of the two images.
Cool. Scale is never the problem, as it has to fit on an average adult head, and that dimension is well enough documented. The interesting part I picked up from these drawings is that the HMD does not close at the back. It makes sense; it needs a bit of springiness to hold on to the wearer's head.
If you take imaginary cross-sections through different parts of the HMD body, you can only wonder what is inside. There is not a heck of a lot of space compared to other HMDs.
From what I can estimate, there is barely space for the sensor stack; judging from the released pictures, every space large enough holds a sensor/camera.
Which makes me wonder again whether the optical part is largely coupled via optical fibers.
If anybody wants, I can throw it back into CAD and put dimensions in place.
We really can’t see what’s going on with the cable between the Lightwear and the Lightpack. Why have two cables? Data/power for the left and right eyes? And why is the Lightpack so bulky? A very large battery?
Having a flexible strap around the head will make the Lightwear more comfortable or feel more lightweight (weight also being supported on rear/sides of head, rather than just on the bridge of the nose and ears for other AR glasses), but I guess the Lightpack is actually quite heavy and the cables will make wearing the thing quite awkward.
All will be revealed in due course, but for now it’s fun to tire-kick the ML!
I believe Lightpack really means light pack; it's a photonics unit, or simply a light source. I speculate there are two fiber-optic 'umbilicals' going to the left and right projection units on the HMD. That's why the light pack and GPU unit are separated from one another.
This would explain why they are not detachable at the same time. Another speculation of mine, and Karl will know much better, is that the unit is so lossy that they need a fair amount of power to feed through the 'system', hence the separate unit.
I highly doubt they are using fiber optics between the Lightpack and the headset, but it is an interesting theory why they split them that way. It looks to me more like it was a weight distribution issue. I don’t know how they can get past safety issues if they can’t be detached which tends to suggest that this is still very much a prototype. No matter what the reason or technology used, the whole cables down the back is a lousy and potentially dangerous solution.
Very good points. I wanted to write that the Lightpack actually has two optical engines in it and then there are two fiber-bundle endoscopes and power/data running up the left and right cables, but it seemed so unfeasible that I deleted it. I don’t think it’s ML’s fabled FSD, because it’s too impractical to develop for just a prototype ‘reveal’ like they did with Rolling Stone; it must just be fiber-coupled optical engines.
A reason the external Lightpack would need to be so big is not because of the optics but because of the size of the battery needed to overcome optical coupling losses and meet an acceptable contrast ratio. Having the optical engines in the Lightpack means the weight (and weight distribution) and heat dissipation in the Lightwear are reduced, which is a reasonable idea. If it is just two electrical cables, then yes, having the cable tethers at either side of the headset will balance the head. In either case, it is actually just a single, non-detachable system, which of course is hopeless for consumers – like a heavily-tethered medical AR headset but for the entertainment segment.
UA – These were exactly my thoughts too. On the other hand, since the HMD 'wraps' around the head, a single connection centered at the back could have achieved similar balancing. There is plenty of space to run wiring internally.
I am pretty convinced they placed the main optical engine externally for battery, space, and heat reasons.
With all the possible optical approaches they have outlined in the various patents, it becomes somewhat obvious that the unit would be quite light-hungry.
Hi Karl, What do you think of this being a mash up of LCoS modulators with their internally developed light sources? I am hearing some buzz about how ML couldn’t make their “endoscopic butterfly scanning” technology (which they originally licensed from the UWash/HIT lab) work through production so had to pivot to a more standard tech stack.
In that case, what would really set ML apart from Hololens? The design/build is meh – it may work for some, and for others it won't. Their demo renders are not the 'cinematic reality'/Weta Digital quality we were promised – barely looking better than what Hololens already delivers on 2014-era hardware. From what I hear, MSFT is also going to release a better, more integrated version of HL in 2018. So what unique proposition could ML bring?
Thnx!
I wrote about my best guess as to what they are doing in this article: https://www.kguttag.com/2016/12/16/magic-leap-focus-planes-too-are-a-dead-end/
This is from a Magic Leap patent and is consistent with all the available information. If they are doing this (basically two “planes” of waveguides) they will gain some “plane depth” but at the expense of worse image quality.
It is a common fallacy to compare what a company is doing in “the lab” to what another company has on the market, or in other words, “you have to shoot ahead of the duck”. Hololens first shipped in March of 2016, the best case is that Magic Leap is two years behind. They look like an incremental improvement in the best case.
Interesting that few question this: prescription lenses won't work for "developer" devices. The whole point of a developer device is to create an experience and then share that experience with an audience who most likely won't own the device. How many Hololens units were purchased exclusively for the use of only one person?
Maybe prescription lenses will make sense when these become mass-market consumer devices, like smartphone customization, but until then …
These are the type of trade-offs everyone is making. There are lots of companies that have put out products with a diopter adjustment that does not support astigmatism, and then there are those with clip-in inserts. Magic Leap has a semi-rigid and only semi-adaptable frame, which in effect ties it to a single user. Both the requirement for prescription optics and the lack of adjustability are barriers to entry and are killers for sharing a headset. In this one regard, Hololens, with its single headset and enough eye relief to support user glasses, is far superior. So yes, practically speaking, this is a mess.
You also have to realize that it is unlikely you will wear the Magic Leap device all day (this is "Glass-holes" on steroids). So you have to carry around a backpack with the protective case to put the unit into when you are not wearing it, along with the glasses you will put on when you take the prescription headset off.
This unit looks like it is targeted at the same market that today plays VR video games at home, not something to take out into the outside world. This in turn means that the market size could be limited to the super-high end of the VR market (i.e., not very big).
Any comment on the visual occlusion of the robot with the presenter? That seemed noteworthy.
Could you be more specific about what you are referencing?
Occlusion of virtual objects comes down to the SLAM capability and rendering the object so that it appears to be occluded. It generally works best if the real thing occluding the virtual object is "far" away; as things get closer, the parallax error becomes worse as a percentage (see the sketch below). Occlusion of real objects by virtual ones is not possible for all practical purposes with optics (there are some techniques that only work in limited cases, such as https://www.osapublishing.org/abstract.cfm?uri=FiO-2017-FW5C.2), since the optics only add light and it is extremely difficult to remove real-world light on a pixel-by-pixel basis (what is known as "hard edge occlusion").
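To give a feel for the parallax/registration point, here is a minimal sketch assuming a fixed 2 cm tracking error; the numbers are illustrative, not measurements from any headset.

```python
# Why SLAM-based occlusion of virtual objects degrades for nearby real objects:
# a fixed tracking/depth error is a larger angular (and percentage) error the
# closer the occluding object is. The 2 cm error below is an assumption.

import math

def angular_error_deg(position_error_m, distance_m):
    """Angular misregistration at the eye for a given tracking error."""
    return math.degrees(math.atan2(position_error_m, distance_m))

for d in (0.5, 1.0, 3.0, 10.0):
    err = angular_error_deg(0.02, d)  # assume a 2 cm position/depth error
    print(f"occluder at {d:>4} m: {err:.2f} deg error ({0.02 / d:.1%} of distance)")
# 0.5 m -> ~2.3 deg (4%); 10 m -> ~0.11 deg (0.2%): near occlusions show seams.
```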
This part of the article:
“Miller wanted to show me one other neat trick. He walked to the far end of the large room and asked me to launch Gimble. The robot obediently appeared in the distance, floating next to Miller. Miller then walked into the same space as the robot and promptly disappeared. Well, mostly disappeared, I could still see his legs jutting out from the bottom of the robot.
My first reaction was, “Of course that’s what happens.” But then I realized I was seeing a fictional thing created by Magic Leap technology completely obscure a real-world human being. My eyes were seeing two things existing in the same place and had decided that the creation, not the engineer, was the real thing and simply ignored Miller, at least that’s how Abovitz later explained it to me.”
Does your comment address that observation?
If the photos of the headset provided by Magic Leap in the article are correct, then the optics are blocking a large part of the real-world light. This allows you to "dominate" the brightness reaching the eye. This is not occlusion; this is light dominance, and it does not work if the "real world" is not dark, or made dark by wearing, in effect, dark sunglasses. If you look at the short video that was released (https://www.youtube.com/watch?v=OLtDeonCAYE), perhaps a through-the-optics video (Magic Leap has been ambiguous on this point so far as I have found), you will notice that the room is VERY dark.
Interesting, (and disappointing if true). The Rolling Stone author seemed to make a big deal that the images didn’t look translucent or superimposed like other AR headsets.
I just asked the author some questions on a Reddit AMA (ask me anything). The author said the room was dark/dim. It is easy to “dominate” when the room is dark as you have essentially a dark background. Not much of a “demo” of AR if Magic Leap made the room dark.
I have seen this all the time, where people see products at a show and think one is better than another when in fact the biggest difference is that one company has a better-controlled demo setup to show off their display product. If you want your display to look bright, make the room dark.
Also, link failing for me. Can you summarize the technique in that paper? Thanks
Sorry about that, I tried to take you straight to the article. You will have to go to the link below and then click on the “pdf.” The pdf should be free, but apparently they don’t let you get to it directly. Hopefully the link below works for you (and I have changed it in my first comment back to you)
https://www.osapublishing.org/abstract.cfm?uri=FiO-2017-FW5C.2
Just thinking out of the box, putting together some of the different things that were said over time by R&D members of ML, the 'invisible eye' issue, and more.
The HMD has a lot of cameras included, somewhat more than are actually required with today's approaches. In addition, ML has talked a fair amount about computation, which is currently less a part of the discussion, as we focus only on the optical properties.
What if the HMD overlays a lot of the natural image information over the 'dim' real-world perception through the optical stack, fusing the real-world image with a computed version? A lot of talk has gone into foveation and how they trick the human brain into seeing what ML wants the viewer to see. Add to it the focal-plane sequencing with the LCOS device. If they had a dual approach of using the focal-plane sequencing blended only onto the foveated section of the image, while using 'generic' methods for the peripheral parts, then one could get away with less resolution at a higher frequency. In addition to my other comments about optical fiber and a powerful light source, it would start to make sense why more power is required in illumination.
Having said that, very accurate tracking would be required, beyond what I have seen in the market until now. But the entire occlusion issue would be solved by actually displaying a complete composite of an image while keeping the real-life image to a bare minimum. This would make the ML more of a VR unit than AR/MR.
Only a thought; I am by no means an expert.
What I tried to explain is so abstract that I can barely fit it into my head, so maybe I will try a diagram instead. I am not a native English speaker, so putting my scattered thoughts into text is another challenge on top of the subject matter.