This article compares and contrasts the Vuzix Ultralite, Lumus Z-Lens, and DigiLens Argo waveguide-based AR prototypes I saw at CES 2023. I discussed these three prototypes with SadlyItsBradley in our CES 2023 video. It also briefly discusses Avegant’s related AR/VR/MR 2022 and 2023 presentations about their new, smaller LCOS projection engine and Magic Leap 2’s LCOS design to show some other projection engine options.
It goes a bit deeper into some of the human factors of the DigiLens Argo. This is not to pick on the Argo; rather, it has more features than the others and demonstrates some common traits and issues of trying to support a rich feature set in a glasses-like form factor.
When I quote various specs below, they are all manufacturers’ claims unless otherwise stated. Some of these claims are based on where the companies expect the product to be in production. No one has checked the claims’ veracity, and most companies typically round up, sometimes very generously, on brightness (nits) and field of view (FOV) specs.
This is a somewhat long article, and the key topics discussed include:
The Vuzix Ultralite and Oppo Air Glass 2 (top two on the right) each use a 640 by 480 pixel, green-only Jade Bird Display (JBD) MicroLED per eye; both were discussed in MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7).
They each weigh about 38 grams, including frames, processing, wireless communication, and batteries. Both are self-contained, with integrated wireless, battery, and processing, and both support about a 30-degree FOV.
Vuzix developed their own glass diffractive waveguides and optical engines for the Ultralite. They claim a 30-degree FOV with 3,000 nits.
Oppo uses resin plastic waveguides and a MicroLED optical engine developed jointly with Meta Bounds. I have seen prototype resin plastic waveguides from other companies for several years, but this is the first time I have seen them in a product getting ready for production. The glasses (described in a 1.5-minute YouTube/CNET video) include microphones and speakers for applications including voice-to-text and phone calls. Oppo also plans to support vision correction with lenses built into the frames. Oppo claims the Air Glass 2 has a 27-degree FOV and outputs 1,400 nits.
Lumus’s Z-Lens (third from the top right) supports up to a 2K by 2K full/true-color LCOS display with a 50-degree FOV. Its FOV covers 3 to 4 times the area of the other three headsets’, so it must output more than 3 to 4 times the total light. It supports about 4.5x the pixels of the DigiLens Argo and over 13x the pixels of the Vuzix Ultralite and Oppo Air Glass 2.
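As a rough sanity check on those ratios, the arithmetic can be worked through as follows. Assumptions (not stated by the manufacturers): “2K by 2K” means 2048 × 2048, and FOV area scales with the square of the FOV angle, a small-angle simplification.

```python
# Rough check of the pixel-count and FOV-area ratios quoted above.
# Assumptions: "2K by 2K" = 2048 x 2048, and FOV area scales with the
# square of the FOV angle (a small-angle simplification).

z_lens_pixels = 2048 * 2048      # Lumus Z-Lens
argo_pixels = 1280 * 720         # DigiLens Argo
ultralite_pixels = 640 * 480     # Vuzix Ultralite and Oppo Air Glass 2

print(z_lens_pixels / argo_pixels)       # ~4.55x the Argo's pixels
print(z_lens_pixels / ultralite_pixels)  # ~13.65x the Ultralite's pixels

# FOV area ratios: 50 degrees vs. 30 (Vuzix/Argo) and vs. 27 (Oppo)
print((50 / 30) ** 2)  # ~2.8x the area
print((50 / 27) ** 2)  # ~3.4x the area
```

These round-number checks line up with the roughly 4.5x, 13x, and 3-to-4x figures quoted in the text.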
The Z-Lens prototype is a demonstration of display capability and, unlike the other three, is not self-contained and has no battery or processing. A cable provides the display signal and power for each eye. Lumus is an optics waveguide and projector engine company and leaves it to its customers to make full-up products.
The DigiLens Argo (bottom, above right) uses a 1280 by 720 full/true-color LCOS display. The Argo has many more features than the other devices, with integrated SLAM cameras, GNSS (GPS, etc.), Wi-Fi, Bluetooth, a 48MP color camera (with 4×4 pixel “binning,” like the iPhone 14), voice recognition, batteries, and a more advanced CPU (Qualcomm Snapdragon 2). DigiLens intends to sell the Argo for enterprise applications, perhaps with partners, while continuing to sell waveguides and optical engines as components for higher-volume applications. As the Argo has a much more complete feature set, I will discuss some of the pros and cons of its human factors later in this article.
Below is a composite image from four photographs taken with the same camera (OM-D E-M5 Mark III) and lens (fixed 17mm). The pictures were taken at conferences, handheld, and not perfectly aligned for optimum image quality. The projected display and the room/outdoor lighting have a wide range of brightness between the pictures. None of the pictures have been resized, so the relative FoVs have been maintained, and you get an idea of the image content.
The Lumus Z-Lens reflective waveguide has a much bigger FOV, significantly more resolution, and much better color uniformity at the same or higher brightness (nits). Reflective waveguides also appear to have a significant efficiency advantage with both MicroLEDs and LCOS, as discussed in MicroLEDs with Waveguides (CES & AR/VR/MR 2023 Pt. 7). It should also be noted that the Lumus Z-Lens prototype has only the display and optics, with no integrated processing, communication, or battery, whereas the others are closer to full products.
A more complex issue is power consumption versus brightness. Today’s LCOS engines are much more efficient (by 10x or more) at displaying full-screen bright images than MicroLEDs with similar waveguides. MicroLED’s big power advantage occurs when the content is sparse: its power consumption is roughly proportional to the average pixel value, whereas with LCOS, the whole display is illuminated regardless of the content.
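The trade-off above can be sketched with a toy model. The 10x full-white efficiency gap and the linear scaling with average pixel value are assumptions taken from the paragraph above, not measurements of any specific engine:

```python
def relative_power(avg_pixel_level, lcos_power=1.0, microled_full_white_penalty=10.0):
    """Toy model of display power vs. content (arbitrary units).

    Assumes LCOS draws roughly constant power regardless of content
    (the illumination is always on), while MicroLED power scales with
    the average pixel level but is ~10x less efficient at full white.
    """
    lcos = lcos_power
    microled = microled_full_white_penalty * lcos_power * avg_pixel_level
    return lcos, microled

# Sparse content (arrows and a little text, ~2% average pixel level):
print(relative_power(0.02))  # ~ (1.0, 0.2) -- MicroLED wins easily
# Full-white frame:
print(relative_power(1.0))   # ~ (1.0, 10.0) -- LCOS wins
# Under these assumptions, break-even falls at a 10% average pixel level.
```

The break-even point is sensitive to the assumed 10x penalty; the qualitative conclusion (MicroLED for sparse “data snacking” content, LCOS for dense imagery) holds across a wide range of values.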
If and when MicroLEDs support full color, the efficiency of nits-per-Watt will be significantly lower than monochrome green. Whatever method produces full color will detract from the overall electrical and optical efficiency. Additionally, color balancing for white requires adding blue and red light with lower nits-per-Watt.
Vuzix has an impressively small optical engine driving Vuzix’s diffractive waveguides. Seen below left is a comparison of Vuzix’s older full-color DLP engine compared with an in-development color X-Cube engine and the green MicroLED engine used in the Vuzix Ultralite™ and Shield. In the center below is an exploded view of the Oppo and Meta Bound glasses (joint design as they describe it) with their MicroLED engine shown in their short CNET YouTube video. As seen in the still from the Oppo video, they have plans to support vision correction built into the glasses.
Below right is the DigiLens LCOS engine, which uses a fairly conventional LCOS design (Omnivision’s LCOS device, with driver ASIC showing). The dotted line indicates where the engine blocks off the upper part of the waveguide. This blocked-off area carries over to the Argo design.
The DigiLens Argo, with its more “conventional” LCOS engine, requires a large “brow” above the eye to hide it (more on this issue later). All the other companies have designed their engines to avoid this level of intrusion into the front area of the glasses.
Lumus developed their 1-D pupil-expanding reflective waveguides over nearly two decades, and they required a relatively wide optical engine. With the 2-D Maximus waveguide in 2021 (see: Lumus Maximus 2K x 2K Per Eye, >3000 Nits, 50° FOV with Through-the-Optics Pictures), Lumus demonstrated their ability to shrink the optical engine. This year, Lumus further reduced the size of the optical engine and its intrusion into the front lens area with their new Z-Lens design (compare the two right pictures below of the Maximus to the Z-Lens).
Shown below are frontal views of the four lenses and their optical engines. The Oppo Air Glass 2 “disguises” its engine within the industrial design of a wider frame (and wider waveguide). The Lumus Z-Lens, full color and with about 3.5 times the FOV area of the others, has about the same frontal intrusion as the green-only MicroLED engines. The Argo (below right) stands out with the large brow above the eye (the rough location of the optical engine is shown with the red dotted line).
Another significant improvement with Lumus’s Z-Lens is that, unlike Lumus’s prior waveguides and all diffractive waveguides, it does not require an air gap between the waveguide’s surface and any encapsulating plastics. This could prove to be a big advantage in supporting integrated prescription vision correction or simple protection. Requiring air gaps with waveguides creates numerous design, cost, and optical problems.
A full-color diffractive waveguide typically has two or three waveguides sandwiched together, with air gaps between them plus an air gap on each side of the sandwich. Everywhere there is an air gap, there is also a desire for antireflective coatings to remove reflections and improve efficiency.
Older LCOS projection engines have historically had size problems. We are seeing new LCOS designs, such as the Lumus Z-lens (above), and designs from Avegant and Magic Leap that are much smaller and no more intrusive into the lens area than the MicroLED engines. My AR/VR/MR 2022 coverage included the article Magic Leap 2 at SPIE AR/VR/MR 2022, which discusses the small LCOS engines from both Magic Leap and Avegant. In our AWE 2022 video with SadlyItsBradley, I discuss the smaller LCOS engines by Avegant, Lumus (Maximus), and Magic Leap.
Below is what Avegant demonstrated at AR/VR/MR 2022 with their small “L” shaped optical engines. These engines have very little intrusion into the front lenses, but they run down the temple of the glasses, which inhibits folding the temple for storage like normal glasses.
At AR/VR/MR 2023, Avegant showed a newer optical design that reduces the footprint of their optics by 65%, including shortening them to the point that the temples can be folded, similar to conventional glasses (below left). It should be noted that what is called a “waveguide” in the Avegant diagram is very different from the waveguides used to show the image in AR glasses; Avegant’s waveguide is used to illuminate the LCOS device. In their presentation, Avegant also discussed various drive modes of the LEDs that give higher brightness and efficiency in green-only and black-and-white modes. The 13-minute video of Avegant’s presentation is available at the SPIE site (behind SPIE’s paywall). According to the presentation, the optics are 15.6mm long by 12.4mm wide and support a 30-degree FOV with 34 pixels/degree, with 2 lumens of output in full color and up to 6 lumens in a limited-color outdoor mode. Avegant expects about 1,500 nits with typical diffractive waveguides in full-color mode, which would roughly double in the outdoor mode.
The Magic Leap 2 (ML2) takes reducing the optics one step further and puts the illumination LEDs and LCOS on opposite sides of the display’s waveguide (below and described in Magic Leap 2 at SPIE AR/VR/MR 2022). The ML2 claims to have 2,000 nits with a much larger 70-degree FOV.
As seen in the pictures above, all the waveguide-based glasses have transparency on the order of 80-90%. This is a far cry from common birdbath optics, with typically only 25% transparency (see Nreal Teardown: Part 1, Clones and Birdbath Basics). The former Osterhout Design Group (ODG) made birdbath AR glasses popular, first with their R6 and then with the R8 and R9 models (see my 2017 article ODG R-8 and R-9 Optic with OLED Microdisplays), which served as the models for designs such as Nreal’s and Lenovo’s A3.
Several former ODG designers have ended up at Lenovo, the design firm Pulsar, Digilens, and elsewhere in the AR community. I found pictures of Digilens VP Nima Shams wearing the ODG R9 in 2017 and the Digilens Argo at CES. When I showed the pictures to Nima, he pointed out the progress that had been made. The 2023 Argo is lighter, sticks out less far, has more eye relief, is much more transparent, has a brighter image to the eye, and is much more power efficient. At the same time, it adds features and processing not found on the ODG R8 and R9.
Another social aspect of AR glasses is front projection, known as “Eye Glow.” Most famously, the Hololens 1 and 2 and the Magic Leap 1 and 2 project much of their light forward. Birdbath-optics-based glasses also have front-projection issues, but these are often hidden behind additional dark sunglasses.
When looking at the “eye glow” pictures below, I caution that these are random pictures and not controlled tests. The glasses were at radically different brightness settings, and the ambient light is very different between pictures. Also, front projection is typically highly directional, so the camera angle has a major effect (and there was no attempt to search for the worst-case angle).
In our AWE 2022 video with SadlyItsBradley, I discussed how several companies, including Dispelix and DigiLens, are working to reduce front projection. Lumus’s reflective approach has inherent advantages in terms of front projection. The DigiLens Argo (pictures 2 and 3 from the right) has greatly reduced eye glow. The Vuzix Shield (with the same optics as the Ultralite) has some front projection (and some projection onto my cheek), as seen in the picture below (4th from the left). Oppo appears to have fairly pronounced front projection, as seen in two short videos (video 1 and video 2).
DigiLens has primarily been a maker of diffractive waveguides, but through the years, it has made several near-product demonstrations. A few years ago, they went through a major management change (see my 2021 article, DigiLens Visit), and with the new management came changes in direction.
I’m always curious when a “component company” develops an end product. I asked DigiLens to help clarify their business approaches and received the following information (with my edits):
As I was already familiar with DigiLens’ image quality, I didn’t check it out that closely on the ARGO; rather, I was interested in the overall product concept. Over the last several years, I have seen improved image quality, including better uniformity and progress on the “eye glow” issue (discussed earlier).
For the type of applications the “enterprise market” ARGO is trying to serve, absolute image quality may not be nearly as important as other factors. As I have often said, “Hololens 2 proves that image quality is not critical for the customers that use it” (see this set of articles discussing the Hololens 2’s poor image quality). For many AR markets, the display information consists of simple indicators such as arrows, a few numbers, and lines. In terms of color, it may be good enough if only a few key colors are easily distinguishable.
Overall, DigiLens has issues with color uniformity across the field of view similar to all the other diffractive waveguides I have seen. In the last few years, they have gone from having poor color uniformity to being among the better diffractive waveguides I have seen. I don’t think any diffractive waveguide would be widely considered good enough for movies and good photographs, but they are good enough to show lines, arrows, and text. But let me add a key caveat: what all companies demonstrate are invariably cherry-picked samples.
While the Argo’s 30-degree FOV is considered too small for immersive games, it should be more than sufficient for many “enterprise applications.” I discussed why very large FOVs are often unnecessary in AR in this blog’s 2019 article FOV Obsession. Many have conflated VR immersion with AR applications that need to support key information with high transparency, light weight, and hands-free operation. As Professor and decades-long AR advocate Thad Starner pointed out, requiring the eye to move too much causes discomfort. I make this point because a very large FOV comes at the expense of weight, power, and cost.
The diagram below, from DigiLens, outlines the ARGO’s key features. I won’t review all the features, but I want to discuss some of their design choices. Also, I can’t comment on the quality of the various features (SLAM, WiFi, GPS, etc.) as A) I haven’t extensively tried them, and B) I don’t have the equipment or expertise. But at least on the surface, in terms of feature set, the Argo compares favorably to the Hololens 1 and 2, albeit with a smaller FOV than the Hololens 2 but with much better image quality.
As stated above, DigiLens’ management team includes experience from RealWear, which acquired a lot of technology from Kopin’s Golden-i. Like the ARGO, Golden-i was a system-product outgrowth from display-component maker Kopin, with a legacy dating to before 2011, when I first saw it. Even though Kopin was a display device company, Golden-i emphasized voice recognition with high accuracy, even in noisy environments. Note the inclusion of 5 microphones on the ARGO.
Most realistic enterprise-use models for AR headsets involve significant, if not exclusively, hands-free operation. The basic idea of mounting a display on the user’s head is so they can keep their hands free. You can’t be working with your hands while holding a controller.
While hand-tracking cameras remove the need for a physical controller, they do not free up the hands; the hands are busy making gestures rather than performing the task. In the implementations I have tried thus far, gestures are even worse than physical controllers in terms of distraction, as they force the user to focus on the gestures to make them (sometimes barely) work. One of the most awful experiences I have had in AR was trying to type a long WiFi password (hidden by asterisks as I typed) using gestures on a Hololens 1; my hands hurt just thinking about it. It was a beyond-terrible user experience.
Similarly, as I discussed with SadlyItsBradley about Meta’s BCI wristband, using nerve and/or muscle-detecting wristbands still does not free up the hands. The user still has their hands and mental focus slaved to making the wristband work.
Voice control seems to have big advantages for hands-free operation if it can work accurately in a noisy environment. There is a delicate balance between not recognizing words and phrases, false recognition or activation, and becoming too burdensome with the need for verification.
In what I see as a futile attempt to sort of look like glasses (big ugly ones at that), many companies have resorted to skull-gripping features. Looking at the skull profile (right), there really isn’t much that will stop the forward rotation of front-heavy AR glasses unless they wrap around the lower part of the occipital bone at the back of the head.
Both the ARGO (below left) and Panasonic’s (Shiftall division) VR headsets (right two images below) take the concept of skull-grabbing glasses to almost comic proportions. Panasonic includes a loop for a headband, and some models also include a forehead pad. The Panasonic Shiftall uses pads pressed against the front of the head to support the front, while the ARGO uses an oversized nose bridge, as found on many other AR “glasses.”
The ARGO supports a headband option, but it requires removing the skull-grabbing ends of the temples and replacing them with a headband.
As anyone who knows anything about human factors with glasses knows, the ears and the nose cannot support much weight, and the ears and nose will get sore if much weight is supported for a long time.
Large soft nose pads are not an answer. There is still too much weight on the nose, and the variety of nose shapes makes them not work well for everyone. In the case of the Argo, the large nose pads also interfere with wearing glasses; the nose pads are located almost precisely where the nose pads for glasses would go.
As was pointed out by Microsoft with the Hololens 2 (HL2), weight distribution is also very important. I don’t know if they were the first with what I call “the bustle on the back” approach, but it was a massive improvement, as I discussed in Hololens 2 First Impressions: Good Ergonomics, But The LBS Resolution Math Fails! Several others have used a similar approach, most notably the Meta Quest Pro VR (which has very poor passthrough AR, as I discussed in Meta Quest Pro (Part 1) – Unbelievably Bad AR Passthrough). Another ergonomic feature of the HL2 is that the forehead pad eliminates weight from the nose and frees up that area to support ordinary prescription glasses.
The problem with the sort-of-glasses form factor so common in most AR headsets today is that it locks the design into other poor decisions, not the least of which is putting too much weight too far forward. Once it is realized that these are not really glasses, it frees up other design features for improvement. Weight can be taken out of the front and moved to the back for better weight distribution.
Perhaps the best ergonomic/user feature of the Hololens 1 & 2 over most other AR headsets is that they have enough eye relief (distance from the waveguide to the eye) and space to support most normal eyeglasses. The ARGO’s waveguide and optical design have enough eye relief to support wearing most normal glasses, yet it still requires specialized prescription inserts.
You might notice some “eye glow” in the CNET picture (above right). I think this is not from the waveguide itself but is a reflection off of the prescription inserts (likely, they don’t have good anti-reflective coatings).
A big part of the problem with supporting eyeglasses goes back to trying to maintain the fiction of a “glasses form factor.” The nose bridge support gets in the way of the glasses, but it is required to support the headset. Additionally, hardware in the “brow” over the eyes, which could have been moved elsewhere, may also interfere.
Another technical issue is the location and shape of their optical engine. As discussed earlier, the Digilens engine shape causes issues with jutting into the front of glasses, resulting in a large brow over the eyes. This brow, in turn, may interfere with various eyeglasses.
It looks like the Argo started with the premise of looking like glasses, putting form ahead of function. As it turns out, the result is, for me, an unhappy compromise that neither looks like glasses nor has the Hololens 2’s advantage of working with most normal glasses. Starting with comfort and functionality as primary goals would have led to a different form factor for the optical engine.
While MicroLEDs may hold many long-term advantages, they are not ready to go head-to-head with LCOS engines regarding image quality and color. Multiple companies are showing LCOS engines that are more than competitive in size and shape with the small MicroLED engines, while also supporting much higher resolutions and larger FOVs.
Lumus, with their Z-Lens 2-D reflective waveguides, seems to have a big advantage in image quality and efficiency over the many diffractive waveguides. Allowing the Z-lens to be encased without an air gap adds another significant advantage.
Yet today, most waveguide-based AR glasses use diffractive waveguides. The reasons include that there are many sources of diffractive waveguides and that companies can make their own custom designs, whereas Lumus controls its reflective waveguide I.P. Additionally, Lumus has only recently developed 2-D reflective waveguides, which dramatically reduce the size of the projection engine driving their waveguides. But the biggest reason for using diffractive waveguides is that Lumus waveguides are thought to be more expensive; Lumus and their new manufacturing partner, Schott Glass, claim they will be able to make waveguides at competitive or better costs.
A combination of cost, color, and image quality will likely limit MicroLEDs to ultra-small, light glasses with low amounts of visual content, known as “data snacking” (think arrows and simple text, not web browsing and movies). This market could be attractive in enterprise applications, but I’m doubtful that consumers will accept monochrome displays. I’m reminded of a quote from an IBM executive in the 1980s who, when asked whether resolution or color was more important, said: “Color is the least necessary and most desired feature in a display.”
Not to pick on the Argo, but it demonstrates many of the issues with making a full-featured device in a glasses form factor: as SLAM (with multiple spatially separated cameras), processing, communication, batteries, etc., are added, the overall design strays away from looking like glasses. As I wrote in my 2019 article, Starts with Ray-Ban®, Ends Up Like Hololens.
Thanks Karl for another great article! I agree with the image quality advantage of reflective vs diffractive waveguides, but I believe you’re massively overestimating the efficiency advantage of Lumus’ solution…
Thanks. Why do you think the efficiency difference is much less?
Much appreciate the last few articles. Thank you for the detailed coverage.
On the subject of front projection, it appears Rokid Max is using some type of coating(https://youtu.be/Yx6-75T-JX0?t=209) to practically eliminate this issue(https://youtu.be/Yx6-75T-JX0?t=468).
Looks like Birdbath and Freeform designs do not have to suffer from this flaw after all.
Would you have any idea as to what kind of coating this might be?
Does this involve polarization at the source?
Thanks for the feedback.
As far as the birdbath front projection goes, the “special coating” is probably a polarizer (as you seem to have surmised). The most common birdbaths incorporate a polarizing beam splitter whether or not the display is polarized to begin with (see my Nreal teardown article: https://kguttag.com/2021/06/01/nreal-teardown-part-1-clones-and-birdbath-basics/). Thus, the light projecting forward is polarized and, therefore, can be blocked. I haven’t worked through it, but it might take some combination of quarter or half-wave plates for the real-world light to make it through the beam splitter.
Regardless, less than 25% of the real world’s light makes it through to the eye with a birdbath design, so indoors, it is like wearing moderately dark sunglasses.
Also, birdbath designs using either OLED-Microdisplays or small LCDs are also not very bright or very efficient. Waveguide-based designs deliver about 10X – 30X more nits per Watt for the same FoV. Waveguide designs can be more than 85% transparent while delivering thousands of nits per Watt, where most birdbath designs take about 1 Watt to deliver about 100 nits and are about 25% transparent.
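Putting rough numbers on that comparison (the ~1 Watt for ~100 nits birdbath figure and the 10x-30x waveguide advantage are the assumptions from the reply above, not measurements of any specific product):

```python
# Rough nits-per-Watt comparison using the figures quoted above
# (illustrative assumptions, not measurements of any specific product).

birdbath_nits_per_watt = 100 / 1.0  # ~100 nits from ~1 Watt

waveguide_low = 10 * birdbath_nits_per_watt   # 10x advantage -> ~1,000 nits/W
waveguide_high = 30 * birdbath_nits_per_watt  # 30x advantage -> ~3,000 nits/W

# Power needed to reach 1,000 nits to the eye at a similar FOV:
print(1000 / birdbath_nits_per_watt)  # 10.0 W for the birdbath (impractical)
print(round(1000 / waveguide_high, 2))  # 0.33 W at the high end for a waveguide
```

The roughly 30x spread in required power is why waveguides dominate when high brightness (e.g., outdoor use) is a requirement, despite their higher cost.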
Thanks for confirming my suspicions. Rokid markets it as “Reduce forward light leakage by 90%” so some leakage still remains(https://global.rokid.com/products/rokid-max).
I am certainly not defending birdbath or freeform here. My angle is more on the side of actual availability and budget.
While first gen could only do 100nits, Nreal Air is specced at 400nits and Rokid Max is listed at up to 600nits (https://global.rokid.com/products/rokid-max). Can only assume their power consumption is comparable.
I fully support AR development, however for the moment, my primary use cases will be screen replacement and VR headset replacement at modest FOVs(avoiding the bulk of VR in public). In other words, high transparency and high brightness are not the main deciding factors in this particular instance. Price, FOV and market availability, on the other hand, are.
I am personally not in favor of waiting for the app ecosystem since VR already has a sufficient amount of “apps” that can be utilized by AR glasses through OpenVR or OpenXR. Keeping AR and VR ecosystems separate is not something I support. Both use cases I am interested in are ready today.
Question remains as to why it is taking so long to introduce LCoS with reflective waveguides to the market.
Is everyone happy to spend the years waiting for microLEDs to resolve uniformity issues?
Is there some inherent business or quality assurance disadvantage to using LCoS?
What is your take on this?
The ODG-type birdbath (as they were the ones to make it popular) is very much in a box regarding light from the display to the eye vs. transparency to the real world. Referring to my diagram of the Nreal (https://i0.wp.com/kguttag.com/wp-content/uploads/2021/05/Nreal-Optics-Diagram010-copy.jpg?ssl=1), they are limited by having to A) polarize the OLED and B) use the front partial spherical mirror. They can gain a bit by using better components (mostly the polarizers). They can trade better display brightness for less real-world light via the reflectivity of the partial spherical mirror. The newer Micro-OLEDs with microlens arrays and some other improvements have roughly tripled their brightness over the (fairly old) OLEDs used in the earlier designs for about the same power. Let me add that there is no “spec police,” and the nits specified may not be what they will measure. Companies often exaggerate brightness and contrast-related specs.
Waveguides and their optics are more expensive. Ignoring size and transparency, the image quality of diffractive waveguides is much worse than a Micro-OLED birdbath, and that of reflective waveguides is somewhat worse. Birdbath with OLED has pretty good image quality, but it suffers by comparison in terms of transparency, brightness/efficiency, and bulkiness.
BTW, per your previous question, you should notice on the Nreal diagram (linked to above) that they have a front polarizer to reduce eye glow. I even showed the effect with and without the polarizer in one of the pictures (see: https://kguttag.com/2021/06/04/nreal-teardown-part-2-detailed-look-inside/)
Thanks for directing me to the with/without polarizer picture.
I must have missed where Nreal Light introduced this as I remember seeing a few videos where front projection was pretty bad.
Having thought about it, I have a feeling everyone is awaiting uLEDs due to display giants plans for mass production. LCoS may be a smaller fish in terms of meeting consumer demand in terms of numbers.
Ability to add multiple panels side by side may be another major advantage of uLEDs outside of size and brightness(uOLED panel size for reference https://youtu.be/8gM4x12RSzE?t=319).
Please correct me as I am likely wrong here.
Birdbaths with a few uLED panels per eye may be the way to go in the near future since reflective waveguides cannot match birdbath’s image quality.
How much would uLED’s increase in brightness improve birdbath transparency?
Also wondering how difficult it would be to extend birdbath FOV to 100deg by adding a few uLED panels together side by side?
It may be a long wait. uLEDs are an interesting future technology, but they still have many issues regarding image uniformity, color, resolution, and cost/yield. uLEDs are also very inefficient with waveguides due to their Lambertian (or worse) emitter light output, which can be improved somewhat with micro-optics. Still, they are about an order of magnitude worse than a typical LCOS engine when dealing with a full-white image. uLEDs have the advantage that light output is proportional to the average pixel value, whereas LCOS takes about the same power regardless of the screen content.
There are a LOT of image, cost, and alignment-quality issues with trying to blend two side-by-side panels. I also don’t think using a color X-Cube to combine R, G, and B is a long-term solution.
The ODG/Nreal/etc.-style birdbath has two partial reflectors: the beamsplitter (often a polarizing beamsplitter) and the front curved element. Instead of a polarizing beamsplitter, one could use a partial-mirror type. Say both were 90% transmissive (and thus 10% reflective). You would have 81% net transmissivity. Sounds good, but then you would have a ton of unpolarized light (~10x brighter than what goes to the eye) being front-projected, with no way to stop it other than neutral-density filters that would then kill transmissivity.
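A minimal sketch of that arithmetic (assuming ideal, absorption-free partial reflectors characterized only by their transmit/reflect split, a simplification of the real optics):

```python
def net_see_through(t_beamsplitter, t_curved_mirror):
    """Real-world light passes through both partial reflectors in series."""
    return t_beamsplitter * t_curved_mirror

# The 90%-transmissive / 10%-reflective non-polarizing case from the text:
print(round(net_see_through(0.90, 0.90), 2))  # 0.81 -> 81% net transmissivity

# Display light reaching the eye under the same split: reflect off the
# beamsplitter (10%), reflect off the curved mirror (10%), then transmit
# back through the beamsplitter (90%).
to_eye = 0.10 * 0.10 * 0.90
print(round(to_eye, 3))  # 0.009 -> under 1% of the display light
```

Under these idealized numbers, very little display light reaches the eye, which is why such designs end up trading the curved mirror’s reflectivity against see-through transparency.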
An alternative birdbath is what Google Glass and Raontech use (see https://kguttag.com/2019/10/23/samsung-ar-design-patent-whats-inside/). In this design, the curved mirror is at the end, is fully reflective (and thus more efficient), and you are only looking through the beamsplitter. The downside of this design is that it is usually fully “encased” in plastic or glass and thus is heavier and more expensive, and you end up looking through a thick optical element that causes “swimming.” Then you are back to whether to use a polarizing beam splitter which is 50/50, or a non-polarizing beamsplitter which will have serious front projection issues.
I’m a bit dubious about side-by-side panels, although Ostendo is trying to develop something along these lines with their MicroLEDs. AntReality (see video: https://www.youtube.com/watch?v=-_JQHzNo1HY&t=3156s) has developed a dual display (using Micro-OLEDs), but you have most of the same issues as with a conventional birdbath.
Thanks for entertaining my questions and a reference to AntReality.
It is probably much easier to make long, narrow MicroLED panels than to deal with the side-by-side alignment issues.
MicroLEDs may be some years out, although I do believe a 100deg FOV birdbath is possible with current gen microOLEDs.
Surprisingly, Tilt5 demonstrated this in action. Single panel per eye and up to 110deg FOV, albeit without the curved mirror.
Not entirely sure why Jeri chose Intel and LCoS; I suspect it had to do with firmware-customization freedom and light collimation. Unfortunately, these choices add some bulk to the top section and USB3.
There’s always hope for a compact next gen.
Could she do the same thing with MicroOLED or was it collimated light she was after with LCoS?
I would be compelled to get Tilt5 with a VR cover and USB-C Alt Mode.
Also, why wouldn’t she add a VR cover?
Essentially, some extra tilt of the display panel/lens extends the reflection surface, adding the extra FOV.
Tilt5’s name should be more like Tilt35. Nreal Air’s panel is already tilted but not enough to make it interesting (https://youtu.be/8gM4x12RSzE?t=300)
Add 2560×2560 microOLEDs and this becomes the first set of glasses worth getting.
Would I be incorrect in thinking that a 100deg FOV birdbath version of Tilt5 with a reflective mirror could fit into the Nreal Air form factor?
Pardon me, Nreal Air panel is not actually tilted (https://youtu.be/8gM4x12RSzE?t=346)
Tilt5’s is tilted inwards which extends FOV surface by allowing for a shallower angle of the beam splitter in relation to the eye.
The question comes down to whether the Nreal Air would obtain a FOV similar to Tilt5’s 110deg if its panels were tilted inwards in the same way?
Tilt5’s is tilted inward at about the same 45 degrees as a birdbath beamsplitter. Tilt5 does this to (roughly) line up the direction of the projected image with the eye.
The Tilt-5 reflects the image off the game board, which is huge if further away, compared to the curved mirror in a birdbath. The Tilt-5 does not have to move the focus of the image because of the distance of the game board from the eye. In a birdbath, the curved mirror is there to move the focus. The FOV sets the size of the beamsplitter and curved mirror, which become dramatically bigger as the FOV increases. As a simple example, they would have to be infinitely big to support a 180-degree FOV. You may note the size of the beamsplitter in the Tilt-5, then imagine the curved mirror required. It starts to get to be like the 2017 Disney/Lenovo Star Wars AR (https://www.engadget.com/2017-08-31-lenovo-star-wars-jedi-challenges.html)
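A rough way to see why the optics blow up with FOV: a combiner at a fixed distance from the eye must subtend the whole field, so its size grows like the tangent of the half-FOV, which diverges as the FOV approaches 180 degrees. A quick sketch (the 25mm eye-relief figure is an illustrative assumption, not any product’s spec):

```python
import math

# Scaling argument only, not a real optical design: an on-axis combiner
# at distance d from the eye must subtend the full FOV, so its half-width
# grows like d * tan(FOV/2), blowing up as the FOV approaches 180 degrees.
def combiner_half_width_mm(fov_deg, eye_relief_mm=25.0):
    return eye_relief_mm * math.tan(math.radians(fov_deg / 2))

for fov in (30, 60, 110, 170):
    print(f"{fov:>3} deg FOV -> ~{combiner_half_width_mm(fov):.0f} mm half-width")
```

Doubling the FOV far more than doubles the combiner (and the curved mirror behind it has to be bigger still), which is why wide-FOV birdbaths get bulky so quickly.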
I don’t see how without doing something more radical, like what Ant Reality did with dual displays. “Simple” birdbath optics are boxed in. To get a bigger FOV, they have to grow three-dimensionally, with a bigger beamsplitter and curved mirror. They likely have to grow the display panel as well. So you will end up with something very big. You may notice that the Nreal Lite got smaller by going to a smaller FOV.
Tilt-5 is NOT a birdbath. The slanted beamsplitter directs the image out, and there is no need to change the focus of the projected image. Its FOV is a function of the projector’s projection lens.
You don’t seem to understand how a birdbath works. The beamsplitter has to be at about 45 degrees, or the image will be distorted. Slanting differently will just mess up the image’s distortion and focus.
I would call Tilt5 a birdbath with retro-reflective material acting as an external mirror. Nothing against Tilt5 here. Wishing for them to come up with an optional VR cover and to shrink size down to normal glasses like Nreal Air.
When I refer to tilting the panel I am implying that the beam splitter will be tilting equally with the panel i.e. the 45 degree angle will be maintained.
Main reason for tilting is to increase FOV whilst keeping thickness of the glasses unchanged. Without tilting increased size of beam splitter would cut into the eye.
Appreciate the reference to Lenovo Mirage. Have to keep in mind that it is only that large because it uses phones for screen(https://www.cnet.com/a/img/resize/13a6dc0220456d9ed2612b92877d9cee18ac602a/hub/2017/12/14/e706d1cd-51be-42ec-a03a-3a1207815fc9/02-lenovo-star-wars-jedi-challenges.jpg?auto=webp&width=1200).
Mirage’s actual size of beam splitter and mirrors should fit into a Wayfarer frame based design.
FOV is 60deg horizontal by 33deg vertical. The key spec here is 60deg horizontal. If it were 60×60 with a square panel, the diagonal FOV would be 85deg, which is close to the lower end of VR.
This makes me think that 60-70deg horizontal FOV birdbath glasses are a possibility (vertical will depend on the panel and lens assembly) if a larger Wayfarer frame is used and the panels are tilted inwards. A 70deg diagonal would be a bit low in my view. Tilting would also create space for a slightly larger panel.
Would you agree based on this example?
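As a side note on the diagonal math: the 85deg figure comes from combining the angles Pythagorean-style, which is a common shortcut but overestimates at wide fields; for a flat display plane, the geometrically exact diagonal combines in tangent space. A quick check of both:

```python
import math

# Two ways to estimate diagonal FOV from horizontal x vertical FOV.
def diag_fov_approx(h_deg, v_deg):
    """Pythagoras directly on the angles (common shortcut)."""
    return math.hypot(h_deg, v_deg)

def diag_fov_exact(h_deg, v_deg):
    """Exact for a flat display plane: combine half-angle tangents."""
    th = math.tan(math.radians(h_deg / 2))
    tv = math.tan(math.radians(v_deg / 2))
    return 2 * math.degrees(math.atan(math.hypot(th, tv)))

print(f"approx: {diag_fov_approx(60, 60):.1f} deg")  # ~84.9
print(f"exact:  {diag_fov_exact(60, 60):.1f} deg")   # ~78.5
```

So a hypothetical 60×60 panel would come out closer to 78-79deg diagonal exactly, though the ~85deg shortcut is fine for ballpark comparisons.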
The name “birdbath” comes from the curved mirror with the light coming in and out of it looks similar to a birdbath.
The beamsplitter is there to direct the display light on-axis into the curved mirror and then allow it to pass to the eye on the return path (in the ODG/Nreal/etc. type birdbath). If you tilt the combination of the birdbath and beamsplitter, the image will miss the eye. You are stuck with the display being approximately 90 degrees relative to the eye with a birdbath.
Correction and apologies for confusion:
To keep the reflection on-axis with the mirror, the panel tilt would have to be twice the beam splitter tilt.
In practice, if the beam splitter needs to be tilted 15deg off 45 in order to take full advantage of the Wayfarer frame area at the same thickness, then the panel would have to be tilted inwards 30deg off horizontal.
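The factor of two is just the plane-mirror rule: rotating a mirror by theta rotates the reflected ray by 2*theta. A minimal 2-D check using direction angles (the specific angles are only an example):

```python
# Plane-mirror rule in 2-D: reflecting a ray with direction angle d off a
# mirror surface at angle m gives a new direction of 2*m - d (degrees).
def reflect_dir_deg(incoming_deg, mirror_deg):
    return 2 * mirror_deg - incoming_deg

# A downward ray (-90 deg) off a 45-deg beamsplitter:
base = reflect_dir_deg(incoming_deg=-90, mirror_deg=45)

# Tilt the beamsplitter 15 deg (to 60 deg):
tilted = reflect_dir_deg(incoming_deg=-90, mirror_deg=60)

print(tilted - base)  # 30: the reflected ray moved twice the 15-deg tilt
```

So a 15deg beamsplitter tilt swings the reflected beam 30deg, which is why the panel must be counter-tilted by twice the splitter tilt to land the beam back on the mirror axis.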
I have a feeling Tilt5 is tilting for same reason to keep thickness under control at a high FOV.
Still wonder what FOV they would get with a VR cover (curved mirror), given how large the beam splitter is. That is assuming the panel is tilted appropriately to keep the reflection on-axis with the mirror.
Thank you for tolerating my ramblings and pointing out the flaws. Hoping there is nothing else I missed conceptually i.e. in how polarizers work at various angles of incidence or in some other fundamental way.
I’m not following you on how to go off 45-degree angles with a birdbath and have it work. You might want to look at AntReality AR and how they do it, but they have some solid optics and use TIRs to get things back on-axis.
As for Tilt5, the angle of the splitter is based on getting the projected image lined up with the eye’s view. The projector is above the beam splitter. Ideally, the projected image would appear as if it came from the eye. The better the projected image is lined up with the eye’s view, the better aligned any “occlusion” will be. For example, if you hold your hand in front of you and the alignment is perfect, you will see your hand but not a shadow cast by your hand on the retro-screen. The worse the alignment of the projected image to the eye, the more real objects will cast shadows.
AntReality has a single panel “type A” solution rated at up to 85deg FOV which reinforces the possibility of an 85deg birdbath.
Although a 9mm lens will probably cause a “swimming” effect as you call it.
An easier way to explain: if both the panel and the beam splitter are tilted 15deg off the mirror axis, then the reflection off the beam splitter will also be 15deg off axis.
We then tilt the panel an extra 15deg to return reflection back onto mirror axis.
As a result more area can be taken up by the beam splitter at the same thickness of the glasses. A larger panel can also be used at a given thickness.
I must be missing something here. Are beam splitters designed to only reflect at 45deg?
Would you be able to tell me why this wouldn’t work?
On the other hand, current-gen birdbaths have a problem with the picture getting blurry towards the edges. The clear window is, in fact, a limited proportion of the total FOV.
Is this something lens manufacturers can address or is it a fundamental issue with the lenses?
Can LC lenses or other types of lens tech address this? Or is this not even a lens problem?