A few days ago I published a story on the Disney-Lenovo optics and wondered why they didn’t use much simpler “bug-eye” combiner optics similar to the Meta-2 (below right), which currently sells in a development-kit version for $949. It turns out that the very same day, Mira announced their Prism headset, a totally passive headset with a mount for a phone and bug-eye combiners, at a “presale price” of $99 (proposed retail $150). Furthermore, in looking into what Mira was doing, I discovered that back on May 9th, 2017, DreamWorld announced their “DreamGlass” headset using bug-eye combiners, which also includes tracking electronics and is supposed to cost “under $350” (see the Appendix for a note on a lawsuit between DreamWorld and Meta).
The way both of these work (Mira’s is shown on the left) is that the cell phone produces two small images, one for each eye, that reflect off two curved semi-mirror combiners that are joined together. The combiners reflect part of the phone’s light and move the focus of the image out in space (because otherwise a human could not focus on something so close).
Mira has definitely built production-quality headsets, as there are multiple reports of people trying them on and independent pictures of the headset, which looks to be a near-final if not finished product.
DreamWorld had not, at least as of their May 9th announcement, demonstrated a fully functional prototype, per Upload’s article. What may appear to be “pictures” of the headset are 3-D renderings. Quoting Upload:
“Dreamworld’s inaugural AR headset is being called the Dreamworld Glass. UploadVR recently had the chance to try it out at the company’s offices but we were not allowed to take photos, nor did representatives provide us with photographs of the unit for this story.
The Glass we demoed came in two form factors. The first was a smaller, lighter model that was used primarily to show off the headset’s large field of view and basic head tracking. The second was significantly larger and was outfitted with “over the counter” depth sensors and cameras to achieve basic positional tracking. “
The bottom line here is that Mira’s appears nearly ready to ship, whereas DreamWorld still has a lot of work left to do and at this point is more of a concept than a product.
DreamWorld’s “Shot Directly From DreamWorld’s AR Glass” videos were shot through a combiner, but it may or may not be their production combiner configured with the phone in the same place as in the production design.
I believe the views shown in the Mira videos are real, but they are, of course, shot separately: the people in the videos wearing the headset, and what the image looks like through the headset. I will get into one significant problem I found with Mira’s videos/design later (see the “Mira Prism’s Mechanical Interference” section below).
While both DreamWorld and Mira have similar optical designs, on closer inspection it is clear that there is a very different angle between the cell phone display and the combiners (see left). DreamWorld has the cell phone display nearly perpendicular to the combiner, whereas Mira has the cell phone display nearly parallel. This difference in angle means that there will be more inherent optical distortion in the DreamWorld design, whereas the Mira design has the phone more in the way of the person’s vision, particularly if they wear glasses (once again, see the “Mira Prism’s Mechanical Interference” section below).
Almost all see-through designs waste most of the display’s light in combining the image with the real-world light. Most designs lose 80% to 95% (sometimes more) of the display’s light. This in turn means you want to start with a display 20 to as much as 100 times (for outdoor use) the brightness of a cell phone. So even an “efficient” optical design has serious brightness problems when starting with a cell phone display (sorry, this is just a fact). There are some tricks to avoid these losses, but not if you are starting with the light from a cell phone’s display (broad spectrum and very diffuse).
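To give a feel for the numbers involved, here is a rough brightness budget sketch. The ~600-nit phone brightness is my own assumed ballpark figure for a 2017-era iPhone, not a number from Mira or DreamWorld; the efficiency range matches the 80% to 95% losses described above.

```python
# Rough see-through brightness budget. Assumes a ~600-nit phone display
# (a ballpark figure, not a measured spec) and optical efficiencies of
# 5-20%, i.e., 80-95% of the display's light lost in the optics.

def effective_brightness(display_nits, optical_efficiency):
    """Approximate brightness of the virtual image reaching the eye."""
    return display_nits * optical_efficiency

phone_nits = 600  # assumed typical smartphone peak brightness
for eff in (0.20, 0.10, 0.05):
    nits = effective_brightness(phone_nits, eff)
    print(f"{eff:.0%} efficient optics -> {nits:.0f} nits")
```

Even the optimistic 20% case yields only ~120 nits, which has to compete against a real-world background that can itself be over 100 nits indoors and thousands outdoors.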
One thing I was very critical of last time with the Disney-Lenovo headset was that it appeared to be blocking about 75% to 80% of the ambient/real-world light, which is equivalent to dark sunglasses. I don’t think any reasonable person would find blocking this much light acceptable for something claiming to be a “see-through” display.
From several pictures I have of Mira’s prototype, I very roughly calculated that they are about 70% transparent (light to medium-dark sunglasses), which means they in turn are throwing away 70+% of the cell phone’s light. One of the images from Mira’s videos is shown below. I have outlined with a dashed line the approximate active FOV (the picture cuts it off on the bottom), which Mira claims covers about 60 degrees, and you can see the edge of the combiner lens (indicated by the arrows).
What is important to notice is that the images are somewhat faded and do not “dominate”/block out the real world. This appears true of all the through-the-optics images in Mira’s videos. The room, while not dark, is also not overly brightly lit. This is going to be a problem for any AR device using a cell phone as its display. With AR optics you are going to throw away a lot of the display’s light to support seeing through to the real world, and you also have to compete with the light that is in the real world. You could turn the room lights out and/or look at black walls and tables, but then what is the point of being “see-through”?
I also captured a through-the-optics image from DreamWorld’s DreamGlass video (below). The first thing that jumps out at me is how dark the room looks and that they have a very dark table. So while the images may look more “solid” than in the Mira video, most of this is due to the lighting of the room.
Because the DreamWorld background is darker, we can also see some of the optical issues with the design. In particular you should notice the “glow” around the various large objects (indicated by red arrows). There is also a bit of a double image of the word “home” (indicated by the green arrow). I don’t have an equivalent dark scene from Mira so I can’t tell if they have similar issues.
Mira (only) supports the iPhone 6/6s/7 size display and not the larger “Plus” iPhones, which won’t fit. This gives them 1334 by 750 pixels to start with. The horizontal resolution first has to be split in half, and then about 20% of the center is used to separate the two images and center the left and right views with respect to the person’s eyes (this roughly 20% gap can be seen in Mira’s video). This nets about (1334/2) × 80% = ~534 pixels horizontally. Vertically they may have slightly higher resolution of about 600 pixels.
Mira claims a FOV of “60 degrees,” and generally when a company does not specify whether it is horizontal, vertical, or diagonal, they mean diagonal because it is the bigger number. This would suggest that the horizontal FOV is about 40 degrees and the vertical is about 45 degrees. This nets out to a rather chunky 4.5 arcminutes/pixel (about the same as the Oculus Rift CV1 but with a narrower FOV). The “screen door effect” of seeing the boundaries between pixels is evident in Mira’s videos and should be noticeable when wearing the headset.
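The back-of-envelope math in the last two paragraphs can be reproduced directly; the panel size is the iPhone 6/6s/7 spec, while the 20% center gap and ~40-degree horizontal FOV are the estimates made above:

```python
# Reproducing the article's per-eye resolution and angular-resolution
# estimates for the Mira Prism. Panel is the iPhone 6/6s/7 (1334 x 750);
# the 20% center gap and 40-degree horizontal FOV are estimates from
# the discussion above, not official specs.

panel_h, panel_v = 1334, 750
per_eye_h = panel_h / 2         # screen is split left/right for two eyes
usable_h = per_eye_h * 0.80     # ~20% lost to the gap between the views
print(round(usable_h))          # ~534 pixels per eye horizontally

fov_h_deg = 40                  # estimated horizontal FOV
arcmin_per_pixel = fov_h_deg * 60 / usable_h
print(round(arcmin_per_pixel, 1))  # ~4.5 arcminutes per pixel
```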
I’m not sure that supporting a bigger iPhone, as in the Plus-size models, would help. This design requires that the left and right images be centered over the eyes, which limits where the pixels in the display can be located. Additionally, a larger phone would cause more mechanical interference issues (such as with glasses, covered in the next section).
A big problem with a simple bug-eye combiner design is the location of the display device. For the best image quality you want the phone right in front of the eye and as parallel as possible to the combiners. You can’t see through the phone so they have to move it above the eye and tilt it from parallel. The more they move the phone up and tilt it, the more it will distort the image.
If you look at the upper right (“A”) still frame from Mira’s video below, you will see that the phone is just slightly above the eyes. The bottom of the phone holder is touching the top of the person’s glasses (large arrow in frame A). The video suggests (see frames “B” and “C”) that the person is looking down at something in their hand. But as indicated by the red sight line I have drawn in frames A and B, the person would have to be looking largely below the combiner, and thus the image would at best be cut off (and not look like the image in frame C).
In fact, for the person with glasses in the video to see the whole image they would have to be looking up as indicated by the blue sight lines in frames A and B above. The still frame “D” shows how a person would look through the headset when not wearing glasses.
I can’t say whether this would be a problem for all types of glasses and head shapes, but it is certainly a problem that is demonstrated in Mira’s own video.
Mira’s design may be a bit too simple. I don’t see any adjustments other than the headband size. I don’t see any way to work around, say, the headset running into a person’s glasses as happens above.
Mira’s design is very simple. The combiner technology is well known and can be sourced readily. Theoretically, Mira’s Prism should cost about the same to make as a number of so-called “HUD” displays that use a cell phone as the display device and a (single) curved combiner, which sell for between $20 and $50 (example on right). BTW, these “HUDs” are useless in daylight, as a cell phone is just not bright enough. Mira needs a somewhat more complex combiner, hopefully of better quality than some of the so-called “HUDs,” so $99 is not totally out of line, but they should be able to make them at a profit for $99.
First, let me say I have discussed Mira’s Prism more than DreamWorld’s DreamGlass above because there is frankly more solid information on the Prism. DreamGlass seems to be more of a concept without tangible information.
The Mira headset is about as simple and inexpensive as one could make an AR see-through headset, assuming you can use a person’s smartphone. It does the minimum of enabling a person to focus on a phone that is so close and combining its image with the real world. Compared to, say, the Disney-Lenovo birdbath, it is going to make both the display and the real world more than 2X brighter. As Mira’s videos demonstrate, the images are still going to be ghostly and not very solid unless the room and/or background is pretty dark.
Simplicity has its downsides. The resolution is low, and the image is going to be a bit distorted (which can be corrected somewhat by software at the expense of some resolution). The current design appears to have mechanical interference problems with wearing glasses. It’s not clear whether the design can be adapted to accommodate glasses, as doing so would seem to move the whole optical design around and might necessitate a bigger headset and combiners. Fundamentally, a phone is not bright enough to support a good see-through display in even moderately lit environments.
I don’t mean to be overly critical of Mira’s Prism, as I think it is an interesting low-cost entry product, sort of the “Google Cardboard” of AR (it certainly makes more sense than the Disney-Lenovo headset that was just announced). I would think a lot of people would want to play around with the Mira Prism and find uses for it at the $99 price point. I would expect to see others copying its basic design. Still, the Mira Prism demonstrates many of the issues with making a low-cost see-through design.
DreamWorld’s DreamGlass on the surface makes much less sense to me. It should have all the optical limitations of the much less expensive Mira Prism. It is adding a lot of cost on top of a very limited display foundation using a smartphone’s display.
It should be noted that what I refer to as bug-eye combiner optics is an old concept. Per the picture on the left, taken from a 2005 Link/L3 paper, the concept goes back to at least 1988, using two CRTs as the displays. This paper includes a very interesting chart plotting the history of Link/L3 headsets (see below). Link’s legacy goes all the way back to airplane training simulators (famously used in World War II).
A major point of L3/Link’s later designs is that they used corrective optics between the display and the combiner to correct for the distortion caused by the off-axis relationship between the display and the combiner.
The basic concept of dual large combiners in a headset is obviously an old idea (see above), but apparently Meta thinks that DreamWorld may have borrowed a bit too much, without asking, from the Meta-2. As reported in TechCrunch, “The lawsuit alleges that Zhong [Meta’s former Senior Optical Engineer] “shamelessly leveraged” his time at the company to “misappropriate confidential and trade secret information relating to Meta’s technologies”.
There are at least two other contenders for the title of “Google Cardboard of AR”: namely the Aryzon and Holokit, which both separate the job of the combiner from the focusing. Both put a Fresnel lens between the phone and a flat semitransparent combiner. These designs are one step simpler/cheaper (and use cardboard for the structure) than Mira’s design, but are more bulky with the phone hanging out. An advantage of these designs is that everything is “on-axis,” which means lower distortion, but they have chromatic aberration (color separation) issues with the inexpensive Fresnel lenses that Mira’s mirror design won’t have. There may also be some Fresnel lens artifact issues with these designs.
Magic leap just had several new patents released. Thought they would be of interest to you.
Thanks, I have taken a quick look through the patent applications (note they are applications and not patents per se). There are some that are just extensions of what they already filed and some that might be called “fencing patents,” where they are filing for patents on concepts that they may or may not be using anytime soon.
Good research and discussion, although you missed the pre-existing Seebright headset (Ripple) that shipped a development kit of the fresnel/beamsplitter format last year and which has a video of its design on its website in action (http://seebright.com).
It is also worth noting that both Aryzon and Holokit would be violating their patent (https://www.google.com/patents/US20170017088) were it to be granted (based on a line going back to 2012).
Thanks for the information. The Seebright design is definitely similar. Whether their designs would violate a patent application would depend on the claims that are allowed (as it is only an application) and the interpretation of those claims, and is eventually for a court to decide. It is interesting that Seebright has a somewhat different configuration from Holokit or Aryzon.
Thanks for posting this excellent information.
How would you recommend approaching the task of designing a device with a bug eye combiner? I’ve been looking at getting a light simulation add-on for my CAD software but I’m not sure if that’s overkill for this (or underkill?) or if all I need is some equations.
Also, what kind of limits do you expect there would be on focussing with the combiner alone? Is it possible to collimate the cellphone’s display like this so the user can focus on a distant object as well as the virtual image? (I realise that everything on the display would be at the same focal distance.) Mira’s “Stereo Best Practises” says you should avoid putting objects more than 10ft from the user and that might not suit my purposes.
I would suggest hacking one that you can buy soon (say Mira’s). I’m not an optics designer but I think you would get a much better feel for it that way. You can also buy mirrors with about a 150 or so mm focal length to experiment with.
You can’t collimate an image that you can see with a single curved mirror. To do that, you would have to have the display at the focus (half the radius of curvature) of the mirror, at which point the image magnification goes to infinity and the image is unstable (it gets increasingly unstable as you approach the focus).
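A quick sketch of the underlying mirror equation, 1/d_o + 1/d_i = 1/f (with f = R/2 for a spherical mirror), shows the behavior I’m describing; the 150 mm focal length matches the experimenting suggestion above, and the display distances are illustrative assumptions:

```python
# Mirror equation sketch: 1/d_o + 1/d_i = 1/f, with f = R/2.
# As the display (object) approaches the focal point, the virtual image
# distance and magnification blow up, which is why the image becomes
# unstable near the focus. All distances in mm and purely illustrative.

def virtual_image_distance(d_object_mm, focal_mm):
    """Image distance for a concave mirror; negative = virtual image behind it."""
    return 1.0 / (1.0 / focal_mm - 1.0 / d_object_mm)

focal = 150.0  # mm, the ballpark focal length suggested for experimenting
for d_o in (100.0, 140.0, 149.0):
    d_i = virtual_image_distance(d_o, focal)
    mag = -d_i / d_o  # lateral magnification
    print(f"display at {d_o:.0f} mm -> virtual image {abs(d_i):.0f} mm away, {mag:.0f}x")
```

With the display at 100 mm the virtual image sits a comfortable ~300 mm out at 3x magnification, but at 149 mm (just inside the 150 mm focus) it jumps to over 22 meters at 150x, so any tiny mechanical shift swings the image wildly.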
I’m not quite sure what you are asking with respect to the cell phone. You could perhaps put a Fresnel lens over a cell phone to affect its apparent focus distance.
I misunderstood what the combiner was doing. I’ve ordered some parts, and I’ll start experimenting.
I believe these low-cost headsets might find their place on the market by being quite inexpensive and utilizing smartphones people already have in their pockets (think of Gear VR, but for AR/MR). They are certainly not supposed to replace high-end devices; I believe we can agree on that. There are obvious drawbacks with these simple designs, and we will see whether people buy into them or not; it’s about content, and there are many other factors too. But I personally believe there is a market for it…
Anyway, I wanted to know your opinion about how the 6DOF world tracking in the Mira Prism headset is done. Based on the videos they released, you can see they are developing a Unity-based SDK and are tracking images using Wikitude, with plans to support more in the future. How do you think they deal with the tracking when using the front-facing camera? The spherical combiner in front of the smartphone screen has to cause some distortion. Also, they can’t use ARKit for world tracking because it is not yet able to work with a front-facing camera. That is something that, e.g., the Aryzon you mention in your article can utilize based on its design, but it obviously comes with other compromises elsewhere.