[Nov. 1, 2018 Update – A systems engineer from Raontech contacted the blog to let me know that the severe asymmetry in the LCOS response noted with the DK-Vision prototype (which uses prototype Raontech displays) does not occur on their production 1080p devices, and they have verified this with my test patterns. I have not had a chance to check it myself, but I believe the problem was solvable. My reason for pointing the issue out was so it would not be thought to be part of the Lumus optics in the pictures shown.]
As I have stated in Part 1 and Part 2 of my review of the ML1, the ML1’s image is exceedingly soft/blurry. The ML1’s optical quality is so poor that it is cutting the effective resolution by roughly half both horizontally and vertically.
I have wanted to do this “shootout” since CES in January this year
(2018). It is a chance to compare three different waveguide-based headsets: the two more famous ones, Hololens and Magic Leap, which use diffraction waveguides, and one from Lumus, a less well-known company with a non-diffractive waveguide. I was impressed with the image quality of Lumus compared to the diffraction waveguides when I visited their booth at CES in 2017 and 2018.
For this article, I’m primarily going to be comparing the resolution of the Magic Leap One (ML1), Microsoft HoloLens, and Lumus DK-Vision. All three use “total internal reflection” (TIR) to support a thin “waveguide.”
The ML1 and HoloLens use a series of diffraction gratings to make the light enter and exit the waveguide. Hololens has a waveguide for each of red, green, and blue, whereas the ML1 has two per color, for a total of 6 waveguides, to support its focus planes concept. I discussed the ML1 and Hololens waveguides some more in part 1 (see the “Background on Diffraction Waveguides” section).
Lumus, with what they call a “Light-guide Optical Element” (LOE), has a single waveguide that works on all colors. The thickness of their one LOE is similar to the stack of multiple (one per red, green, and blue) thinner waveguides on Hololens. They simply cut the waveguide’s entrance at an angle to get the light to enter (rather than use a color specific diffraction grating), and then they use a series of very specially designed partial mirrors to cause the light to exit.
For taking the pictures, I used the same Olympus OM-D E-M10 Mark III mirrorless camera with the same 14-42mm lens for all of the through-the-optics pictures. For the closeup images, it netted between about 3.4 and 6+ camera pixels per pixel displayed by the various headsets. You can click on the pictures to see them at much higher resolution. I took hundreds of pictures of each display and then picked the best area within the test pattern to try to show each device at its best.
The test patterns are based on ones I have found useful in testing other displays. There is a series of 320 x 240 sub-patterns for testing resolution that can be replicated as many times as needed to fill out the display resolution. There are some large circles to detect color purity across the display. I also replace one or two sub-patterns with a picture of a colorful scene with a person in it to check color. For anyone that would like to verify or challenge my results, the test patterns used can be found HERE.
At a high level, the three headsets have similar optical architectures. All three of the headsets use field sequential color (FSC) and Liquid Crystal on Silicon (LCOS) microdisplays, but they come from different LCOS manufacturers. Each of them has “projector optics” that collimate and manipulate the image for injection into their respective waveguides. Where the headsets most differ is in their waveguide structures. I will try where possible to disambiguate the effects caused by differences in the non-waveguide parts of the system, particularly their different LCOS devices.
The ML1 uses a diffractive waveguide and blocks about 85% of the real-world light. You might note how you can’t see the user’s eyes in the picture on the left. It has a 1280 x 960 LCOS microdisplay made by Omnivision, but as will be demonstrated, the effective resolution is much lower. It has about a 45° diagonal and 40° horizontal field of view (FOV).
The ML1 supports about 220 cd/m2. In displaying the test pattern, I was only able to see the ML1 displaying about 1160 of the 1280 horizontal pixels it is supposed to have. I suspect the missing ~120 pixels are being used for interpupillary distance (IPD) adjustment. The iFixit teardown (in which I helped) showed that the ML1 uses a custom-resolution device made by Omnivision, and it is likely using the same 4.5-micron pixel pitch as Omnivision’s 1080p and 720p devices.
The Microsoft HoloLens uses diffractive waveguides similar to the ML1 and blocks about 60% of the light which lets about 2.7x more light through than the ML1. In the picture on the right, you can see the wearer’s eyes, but they are darkened. It uses a 1366 x 768-pixel LCOS microdisplay made by Himax and has roughly a 35° diagonal and 30° horizontal field of view (FOV).
I was able to see ~1270 of the 1280 horizontal pixels in the test pattern when viewed at full frame. HoloLens supports about 320 cd/m2 at full brightness or about 1.5x brighter than the ML1. It is likely that the Himax FSC LCOS device used in HoloLens is using an ~6-micron pixel pitch based on their nearest publicly specified device.
Lumus’s DK-Vision displays 1920 x 1080 pixels (1080p), or roughly double that of the ML1 and HoloLens. I first wrote about the DK-Vision in my CES 2018 AR Overview, and there is a flyer on the Lumus Website. The Lumus LOE has a series of polarized semi-mirrors that act as vertical FOV and pupil expanders. The Lumus headset blocks only about 20% of the light, letting through ~5.3x more light than the ML1 and ~2x more than HoloLens. I don’t have a direct measurement of the brightness of the Lumus headset, but based on the camera settings, it very roughly has 1,000 cd/m2, making it about five times brighter than the ML1 and three times brighter than the HoloLens (Lumus also has military products that go up to 6,500 cd/m2). Lumus claims the DK-Vision can currently support up to 2,000 cd/m2 and will eventually support up to 3,000 cd/m2, or about an order of magnitude brighter than the ML1 and Hololens. The 1080p Raontech LCOS device used in the Lumus headset has a 6.3-micron pixel pitch.
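The transmission ratios quoted above follow directly from the blocking percentages measured for each headset (ML1 ~85%, HoloLens ~60%, Lumus ~20%). A quick sanity check:

```python
# Real-world light transmission implied by the blocking percentages
# quoted in this article (fractions of light blocked by each headset).
blocked = {"ML1": 0.85, "HoloLens": 0.60, "Lumus DK-Vision": 0.20}
transmitted = {name: 1.0 - b for name, b in blocked.items()}

# Ratios of transmitted real-world light between headsets
lumus_vs_ml1 = transmitted["Lumus DK-Vision"] / transmitted["ML1"]      # ~5.3x
lumus_vs_hl  = transmitted["Lumus DK-Vision"] / transmitted["HoloLens"] # ~2.0x
hl_vs_ml1    = transmitted["HoloLens"] / transmitted["ML1"]             # ~2.7x
```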
I want to be clear that while the ML1 and HoloLens are low volume developer kit products, the Lumus headset is only a reference design. The ML1 and HoloLens have many features that are not included in the Lumus demo device. The Lumus device, while it does have cameras, inertial measurement, and an Android processor, does not have all the features (such as SLAM) found in Magic Leap and Hololens.
The LCOS device in the Lumus unit is also an early prototype (non-production) 1080p device from Raontech and as such is not necessarily indicative of their final product. The color balance and grayscale response were frankly not very good, which is not uncommon for a prototype rather than a product. I did make some white balance adjustments to the photos as a) it was a prototype, b) the goal was to compare the optics, and c) it is not clear that the Raontech device will be in a final product, as Lumus uses LCOS devices from various manufacturers.
As the Lumus headset is one of a few existing prototypes, they are likely hand-picking and assembling the units and therefore it may not be indicative of the final production product. At the same time with manufacturing volume, the quality should improve. Lumus has recently concluded a manufacturing partnership with Quanta which could improve their ability to manufacture in volume. I also want to note that I had special access to the Lumus prototype (see the disclosure in the appendix).
[Nov. 1, 2018 Update – A Raontech engineer contacted me to let me know that Raontech’s production 1080p devices do not exhibit the asymmetry noted below, as verified with my test patterns. While I have not had a chance to verify the claim, I believe that it is a solvable problem.]
A key thing that affected the results is that the black versus white response of the Raontech LCOS panel was highly asymmetrical, strongly favoring black over white. One-pixel wide white lines are barely visible, whereas one-pixel wide black lines are nearly twice as wide as they should be. There is always some level of difference in the black versus white response of any liquid crystal based display due to the LC behavior and drive, but the Raontech asymmetry is extremely bad relative to other LCOS devices I have seen. This asymmetry also affects the overall look and even the grayscale/color response. I would hope Raontech would adjust the liquid crystal formula/processing in their LCOS device [Update: Raontech claims this problem does not exist on production 1080p units].
The pictures below show the whole field of view for each headset. These pictures are not to scale. The horizontal FOV of HoloLens is ~30°, the ML1 is ~40°, and the Lumus is about 35°. Each sub-pattern (with the numbered circle and variable sized text) in the test pattern is 320 x 240 pixels. The Lumus prototype is showing almost double the horizontal pixels of the ML1 and HoloLens. You may also notice that while Lumus and Hololens have roughly a 16:9 (HDTV-like) wider aspect ratio, the ML1 is squarer with its nominal 4:3 ratio.
Viewing the whole test pattern gives an overall feel for the image quality and how the color varies across the FOV. The pictures have about 2,600 to 2,900 camera pixels horizontally (if you click on the thumbnails) which is not enough to fully evaluate the resolution of images that are between 1,140 and 1920 pixels wide. One would like well more than two camera pixels (samples) per display pixel based on basic sampling theory and the Nyquist rate.
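The sampling concern above can be made concrete. With roughly 2,600 to 2,900 camera pixels across images that are between 1,140 and 1,920 display pixels wide, the worst case falls below the roughly two-samples-per-pixel one would want from the Nyquist criterion (a rough back-of-envelope check, not a formal MTF analysis):

```python
# Camera samples per display pixel for the full-FOV photos.
# Nyquist suggests wanting well more than 2 camera pixels per display pixel.
camera_px_min, camera_px_max = 2600, 2900    # camera pixels across the image
display_px_min, display_px_max = 1140, 1920  # display pixels across the FOV

worst = camera_px_min / display_px_max  # ~1.35 samples/pixel (Lumus, 1080p)
best  = camera_px_max / display_px_min  # ~2.54 samples/pixel (ML1)

nyquist_ok = worst > 2.0  # False: the full-FOV shots undersample the 1080p image
```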
Both HoloLens and the ML1 have significant issues with color shifting across the FOV which is even more evident in the black on white images to follow. Lumus only has some shifting in the far corners. In the Lumus image, the smaller text appears to be dimmer/fading which is a result of the LCOS device behavior.
I did notice that the Lumus DK-Vision has some vertical pincushion distortion not seen in the ML1 or HoloLens. I suspect the pincushioning is in their projection optics and not the waveguide itself. There are also some double images in some of the sub-patterns (particularly sub-patterns 26 and 36) on the far right. It should be noted that this is a prototype and not a production product so hopefully, some or all of this will be improved before production.
Lumus (~1920 pixels horizontally)
Next, we have the black on white background images. The Lumus LOE waveguide has dramatically better color and brightness uniformity across the field compared to both the HoloLens and Magic Leap diffraction waveguides. The Lumus display does show a slight dark band around row 5 of the sub-patterns (numbers 51 to 56), which I suspect is caused by the matching of the LOE partitions. Additionally, the lower left corner has some color shift and dimming. Since the asymmetry of the LCOS device they are using favors black over white, the small text does not fade the way it did with the white text on black background.
For this next set of pictures, the camera is zoomed the same amount for the larger image, so you can see roughly equal parts of the FOV for each device, each at roughly the same scale. In each picture, there is an inset of just the 3-, 2-, and 1-pixel wide lines. Hololens has about the same number of horizontal pixels as the ML1 but has a smaller FOV, and thus the image and the inset image are smaller when scaled similarly. It should be noted that the inset for the Lumus DK-Vision has been magnified an additional 2x so you can see the detail.
So as viewed by the eye, the two-pixel wide lines on the DK-Vision are about the same width as the 1-pixel wide lines on the ML1 and are better modulated than the 2-pixel wide lines (which are twice the size) on the ML1. The DK-Vision appears to have around 4x the horizontal and vertical angular resolution of the ML1.
The black on white background pictures are below. A white background tends to show how much the optics scatter light. It should be noted in the case of the ML1 that the “black” one-pixel-wide lines do not get very black compared to both Hololens and the DK-Vision, which indicates that the ML1 optics/waveguides are scattering light:
Finally, below is the best 64 x 64 pixel sub-pattern for each headset. The sub-pattern has a set of 3-pixel wide lines with 3-pixel wide spaces, followed by 2-pixel wide lines and spaces, followed by 1-pixel wide lines and spaces. The purpose of this sub-pattern is to test the effective resolution of the optics and is based on the widely used 1951 USAF resolution chart.
The LCOS devices in the three headsets are different and asymmetrical regarding their black-to-white and white-to-black transitions, as is demonstrated by the white on black versus the black on white images. The black on white field images show if there are issues with scattering in the optics.
As stated previously, the Lumus image has been enlarged to twice the relative size of the HoloLens and ML1 images. This means that the 2-pixel wide lines on Lumus in real life are very close in size to the 1-pixel lines on the ML1.
HoloLens can show 1 pixel wide lines, but they are not particularly sharp. The LCOS appears to be slightly biased in favor of white over black (one pixel white lines appear wider than the black lines). You may notice the white on black lines are wider (upper left) than the black on white lines (lower left). The contrast of the black on white lines is lower due to scattering (glow) of light in the HoloLens optics (waveguide and/or projector).
The ML1 shows almost no modulation of the single pixel lines. Even the two-pixel wide lines are not particularly sharp when compared to the other two devices. From other experiments, I have noticed that the “on-off” contrast of the ML1 is better than HoloLens, but as the black lines on the white background demonstrate, the scattering of white light is significantly worse on the ML1. You should note how even the 3-pixel wide lines look very soft and rounded.
Similar to HoloLens, the ML1’s LCOS seems to have a slight bias in favor of white over black. Based on the available information, the ML1’s Omnivision microdisplay pixels are physically smaller (4.5 microns versus about 6 microns), and the need to do more magnification may be contributing to the much lower effective resolution of the ML1. Based on what I am seeing, the ML1 has at best maybe half of its stated resolution in each direction.
The ML1’s resolution at the default image size was so poor that I did a test to try and get more precision on its effective resolution. With the test pattern locked in space, I gradually moved toward it and stopped when I could just discern the 1-pixel wide lines. I then noted how many pixels were visible within the FOV and took a picture (below). The result was that at about 710 pixels wide across the FOV, I could start to see the four discrete lines. They still were not well modulated, but at least there were four lines visible.
The Lumus optics can resolve 1-pixel wide lines from a 1920 x 1080 display. The angular resolution is more than triple that of the ML1 in each direction, and more like 4x. The limitation on the resolution of the Lumus system is the LCOS microdisplay. As can be seen in the white on black versus black on white images, the LCOS is highly asymmetrical in favoring black over white. This is particularly bad considering that the physical pixel sizes of the LCOS device are the biggest of the three at 6.3 microns. With the white on black, the 1-pixel white lines are very dim and thin, while the black on white lines are very wide.
Looking carefully at the enlarged image from the Lumus headset below, you can even see a series of faint horizontal and vertical lines from the gaps between the LCOS pixel mirrors, which are much less than the width of the pixels (and right at the limit of what the camera and lens used can detect). In this case, the resolution is being limited by the LCOS device. I will caution you that this is a “best case” sub-pattern (as it was for the other devices).
The ML1’s resolution is dismal when compared with HoloLens and Lumus. I used the same equipment and spent much more time trying to get the best images possible with the ML1. You certainly don’t want to be reading text on a Magic Leap One.
HoloLens, being from a big established company, more or less hits the numbers dead on. There is a little cropping, but the 1-pixel lines are about what you would expect; they are a little soft but not bad. The “on-off” contrast of the Hololens is a bit low at a little better than 100:1.
The DK-Vision is in a different league in terms of resolution. But it is limited by the asymmetry of the LCOS device which is hopefully something that will be fixed.
Based on my observations, the ML1 tries to display about 1160 pixels horizontally over a 40° horizontal FOV (about 45° diagonally). That works out to 2.06 arcminutes (one arcminute = 1/60th of a degree) per pixel. Because the ML1 blurs so much, it effectively can display only about four arcminutes per pixel. The HoloLens displays about 1024 pixels over about 30.5° horizontally, or about 1.78 arcminutes/pixel, more than double the ML1’s effective resolution. The DK-Vision displays 1080p pixels over approximately a 35° FOV (~40° diagonally); this works out to about 1.08 arcminutes per pixel, or about four times the effective resolution of the ML1.
While this article is primarily focused on resolution, I want to add a few other observations on some significant differences I noticed between the headsets.
One thing that immediately impressed me with the Lumus LOE over the Magic Leap and Hololens diffraction waveguides is the uniformity of the color and brightness across the FOV. Both Magic Leap and Hololens colors shift and ripple wildly across the FOV, as seen in the full-screen pictures above. While the DK-Vision has some issues, particularly in the corners, it is obviously far better than the other two.
There are also major advantages for Lumus in the amount of transparency. Lumus only blocks about 20% of the real-world light, where Magic Leap blocks about 85% and HoloLens 60%. This is the difference between being barely noticeable and putting on dark sunglasses. The ML1 significantly dulls the real world. Another human factors issue is that you can see the wearer’s eyes with Lumus, whereas they are blacked out with the ML1 and significantly darkened with HoloLens.
The Lumus headset is also about an order of magnitude brighter than either the ML1 or HoloLens. This is necessary for the image to stand out with so much more transparent optics and to support outdoor use. Lumus claims to have a significant light efficiency advantage over the diffraction waveguides, and while I have not been able to verify this claim, it is believable. All three headsets use LCOS which should have similar reflectivity, but I suspect that the ML1 is losing some efficiency due to the dual focus planes (it likely happens in several places). If you simply crank up the power to the LEDs, they will get hotter and less efficient, which in turn leads to needing more heat management which adds bulk and weight, and this very quickly will spiral out of control.
Both the DK-Vision and ML1 use external battery packs, while the Hololens has internal batteries, but I don’t think this is a major factor in the differences in brightness. I think it is the light losses and heat management that limit the brightness.
In short, while not perfect, the Lumus optics are much more what I would expect of an “Augmented Reality” display in terms of overlaying virtual information on the real world. The Hololens is “serviceable” for indoor use in good lighting, but not nearly bright enough for outdoor use. The ML1 is pretty much a different type of VR game-playing device that is only usable within a narrow range of lighting conditions.
I don’t know why Magic Leap and Microsoft both decided to use diffractive waveguides and I would welcome their response to this analysis. Both Magic Leap and Microsoft know about Lumus’s waveguides (I was in the Lumus booth with major individuals from both companies at CES 2017) and maybe they have their business or technical reasons. The issues I found with both the ML1 and Hololens are consistent with about a dozen other diffractive waveguides I have seen. In terms of every major factor including being transparent (and not causing artifacts), resolution, color uniformity, and brightness (optical efficiency) Lumus seems to win by a wide margin.
Back in early May 2018, I gave a paid presentation to Lumus on my perceptions and predictions for the AR market. I have done similar work for other companies. Prior to them asking me to give the presentation, I had already written favorable things about the Lumus LOE relative to the diffractive waveguides I had seen. Lumus additionally gave me special access to take pictures through the optics of the DK-Vision prototype during AWE 2018 in late May which are being used with their permission in this report. While I got permission from Lumus to use the pictures for this article since I had taken them in private, they did not have any control over my analysis.
I would like to thank Ron Padzensky for reviewing and making corrections to this article.
It would be funny if they are actually using lower res for the creator edition and will go to 1280×960 for the ML2 and call it a day… but I’m guessing that’s not what’s happening.
I doubt they would use a 1280 x 960 display device and then deliberately make the optics so poor it only passes the lower resolution.
I suspect that the focus planes are at least part of the problem. Working from the eye out they have two blue waveguides, two green, and then two red and in each case the “far plane” is further from the eye. This means that the “far” red image must pass through 5 diffraction gratings to get to the eye.
Another huge difference between the ML1 and Hololens is that the Hololens entrance gratings separate the various colors to their waveguides based on wavelength. The entrance gratings are bigger and stacked sequentially. With the ML1 and its focus planes, they direct the light to each waveguide based on spatial separation. There are six spatially separate waveguides and six separate LEDs illuminating from different angles. I suspect that some of the resolution is lost in going through the small entrance diffraction gratings, but I am not sure about this.
The main point I’m getting from skimming through is that Lumus has noticeably outperformed what Microsoft and Magic Leap have spent billions trying to do. Can you give reasons to explain this?
For starters, you mentioned that Lumus is using a single waveguide for all colors, which they call the Lightguide Optical Element (LOE). How is this different from the diffraction waveguides used by Microsoft and Magic Leap for their offerings? Can you explain in detail how the LOE works beyond what you described in the article? How thick is the LOE waveguide?
Magic Leap and Hololens have a series of thin waveguides. For the ML1 there are 6 layers, with each layer being about 0.33mm plus about a 0.027mm air gap (the air gap is necessary to make the waveguide’s TIR work), or about 0.357mm per layer, or about 2.14mm for the whole stack. Then there are some thicker protection layers on both sides, and the whole thing is about 2.8mm thick. I think each HoloLens layer is thicker, but they have only 3 of them, so the net is similar. Lumus’s single waveguide is about 2mm, or very close to the same overall thickness before protection is considered.
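The stack arithmetic above is easy to verify:

```python
# ML1 waveguide stack thickness from the per-layer figures quoted above.
layer_glass = 0.33   # mm of glass per waveguide layer
air_gap = 0.027      # mm air gap per layer (needed for TIR to work)
layers = 6           # two waveguides each for red, green, and blue

per_layer = layer_glass + air_gap   # ~0.357 mm
stack = layers * per_layer          # ~2.14 mm before protection layers
```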
For a “waveguide” to work, the light must be injected at about 45 degrees (the precise angle has several variables) so it will totally internally reflect down the glass. The diffractive waveguides do this with diffraction gratings (that are wavelength/color specific), and Lumus does this by simply cutting the edge of the waveguide at about 45 degrees. For light to exit the waveguide, the light must be bent back to normal to the glass. The diffractive waveguides do this with another diffraction grating. Lumus does this with a series of polarized semi-mirrors at about 45 degrees. The Lumus semi-mirrors are not wavelength dependent and thus bend all wavelengths the same (technically, very close to the same).
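The reason ~45 degrees works is that it exceeds the critical angle for total internal reflection at a glass-air interface. A quick check, assuming a typical refractive index of ~1.5 (an assumption; the actual waveguide glass index is not published):

```python
import math

# Critical angle for TIR at a glass-to-air boundary: sin(theta_c) = n_air/n_glass
n_glass = 1.5  # assumed typical index; the real waveguide glass may differ
critical_deg = math.degrees(math.asin(1.0 / n_glass))  # ~41.8 degrees

# Light injected at ~45 degrees exceeds the critical angle, so it is
# totally internally reflected and stays guided down the glass.
tir_at_45 = 45.0 > critical_deg  # True
```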
The above description is greatly simplified. Unfortunately, I can’t point you to an article on how it works short of reading the Lumus patents (Lumus’s website does not really explain it). You should note that while the Lumus LOE has a number of slats, each slat is reflecting essentially most if not all of the image (not each slat reflecting a different part of the image). The reason for the number of slats is to create a larger “eye box,” or area where you can see the image.
See below for a diagram of the thickness of layers of the ML1 waveguide.
For how the Lumus LOE works, I would start with their oldest Patent which explains the basic concept:
Thanks for the great comparison and review,
Do you have any testing of the “white field” performance of the ML1, HoloLens, and Lumus?
Given the limited manufacturing tolerances in assembling the LOE and in the coatings on the beam splitters in the LOE, it looks like it is still a challenge for the LOE to achieve perfect uniformity on a “white field.” Do you have any idea how Lumus can solve it?
I don’t have any testing on the “white field” beyond the pictures that were published of the test patterns. Every near-eye optical device has challenges, just some more than others.
The Lumus LOE has a much purer white across the field than the diffraction-based waveguides. That jumped out at me right away. There was also some coloration in the bottom left corner and a little bit of banding about 3/4th of the way down. But viewed by eye, it is vastly better than the other waveguides.
I’m not an expert on the processing of the LOE so I don’t know how they can improve it. I know they are going from making the LOE in their small in-house lab at Lumus to using their manufacturing partner Quanta, a large maker of optical components.
What is the eyebox size for each of these technologies? Have you looked into it?
I am wondering how (if any) pupil expansion (duplication to be realistic) is done for LOE.
The size of the eyebox is a good question. I didn’t try to measure them (I don’t have the equipment set up), but I didn’t find any issue with the ML1, Hololens, or Lumus in terms of eye box. They are all generous compared with many other AR headsets I have tried.
The reason for all the “slats” with the Lumus LOE is to expand the vertical eye-box/pupil.
Thanks for the research. I enjoy your articles and interview on the ARShow.
I would like to point out that you are using Lumus’ high end R&D LOE, one that will probably not see the light of day for at least another 5 years, if ever. MS HoloLens is manufacturing v2.0 and Lumus hasn’t even put out a product. If a lab mock-up was the end game, Lumus would rule the sector. But there’s probably a reason their LOEs haven’t escaped the lab. If you didn’t have access to the inferior models, it may prove beneficial for your audience to know that. Thanks.
Thanks for your comments.
First, I agree, as pointed out in the article, that I was comparing a prototype to a “product” (although neither Hololens nor the ML1 are high volume products). I think you may be confusing it with the two-directional-expansion Lumus Maximus ~55-degree FOV LOE that they demonstrated at CES 2017, which Lumus has admitted is going to take time to develop. The ~40-degree diagonal (~35 degrees horizontal) Lumus prototype uses a simpler LOE without the 2-D expansion of the 55-degree FOV version.
Lumus has been selling a similar “side shooting” 720p LOE for years. In particular, I know it is being used by Daqri in various products. Additionally, the Lumus PD-14 has been in military use for several years. So the Lumus LOE has “escaped the lab” but is not in high volume (but then neither are Hololens or the ML1).
My reason for picking the 1080p prototype was that it was a “top shooter” that supported configurations similar to Hololens and Magic Leap.
Any comment on color break-up? With two focal planes for the ML1, the 360 Hz field update rate will likely mean each focal plane updates at 180 Hz, which is generally too low and may induce color break-up. But these are superimposed images, so it may be reduced somewhat. Have you looked at that issue yet?
They only use one focus plane at a time. There is a noticeable jump when they switch between focus planes. I have not studied their field sequential rate in detail, but it is clear that they have a significantly (I think more than 2x) faster sequencing rate than Hololens.
The whole two focus plane concept is not “integrated” but rather two modes, and it takes several seconds to switch between modes. The far focus kicks in at about 30 inches away, so most activities are in “far” focus mode. The near plane also clips at 14.5 inches. So this leaves a very narrow range where “near focus” is used, about 14.5 inches to 30 inches. There is some hysteresis in the region around 30 inches to keep it from toggling back and forth at a specific point, but you can make it toggle by looking at an otherwise flat image “edge on” so that a slight change in where you look will cause the focus plane to change. The whole thing works (rather crudely) by tracking where your eyes/pupils are looking and triangulating back to where it thinks you are looking.
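That hysteresis behavior can be sketched as a simple state update. This is only an illustration of the described behavior, not Magic Leap's actual logic; the exact switch thresholds around ~30 inches are not published, so the band below is an assumption:

```python
# Hypothetical sketch of a two-mode focus-plane switch with hysteresis.
NEAR_CLIP = 14.5   # inches: the near plane clips closer than this
TO_FAR = 32.0      # assumed threshold: switch near -> far above this distance
TO_NEAR = 28.0     # assumed threshold: switch far -> near below this distance

def update_focus_plane(mode: str, gaze_distance_in: float) -> str:
    """Return 'near' or 'far' given the current mode and gaze distance."""
    if mode == "near" and gaze_distance_in > TO_FAR:
        return "far"
    if mode == "far" and gaze_distance_in < TO_NEAR:
        return "near"
    return mode  # inside the hysteresis band: keep the current mode
```

At ~30 inches (inside the band) the mode depends on history, not just distance, which is what keeps it from toggling at a single point.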
Thanks Karl. I did not realize these image planes were presented in this way. It does sound clunky and not very helpful, especially considering the complexity and optical trade-offs.
The only way I could see it being used is in a “close mode” that stays in close mode. The jump from near to far is not at all smooth. But then you are in a narrow range between 14.5 inches when it clips and about 30 inches when it changes to far mode. This is a very short range if you are standing.
I would love to see Magic Leap explain how they think it is going to be used without all the Light Field nonsense in their technically silly article:
It looks to me like they didn’t know what they were doing but raised a lot of money and in the end, the two focus planes was an excuse to say they tried.
You may well be close to the truth
Thanks for the comparison and review.
I am working as a SW/System Engineer at RAONTECH.
I also wondered whether our MP (mass production) version had already fixed the concern you raised about our LCoS.
So, I personally ran a test using our MP Dev. Kit with your test pattern (thanks for sharing it).
With our MP version LCoS, there is no black/white asymmetry, and the display is clear.
I think, as you expected, the DK-Vision you tested used our engineering sample.
Our LCoS also has a distortion correction feature, so things like the optics’ pincushion distortion can also be compensated.
I will add a note to the article.
I have added a prominent update note to the article stating that Raontech does not see asymmetry issues with the production 1080p device. I left the original comments in the article (with the update note added) as they go with the pictures that I took of the DK-Vision prototype.
About the “black/white asymmetry” problem, adjusting the definition, brightness, and contrast of the image may give a different result. I tested in PowerPoint (definition 100%, brightness 55%, and contrast -8%). Maybe the “black/white asymmetry” was affected by the dynamic range and other parameters of the camera.
I’m not quite sure I understand what you wrote. I can tell you that my camera was not fooled. I shot in RAW format where I could control everything. It is possible that the LCOS was not being driven correctly.
Karl, I see you comparing Lumus, Magic Leap and Hololens optics
Have you tried the Vuzix Blade yet?
Yes. Overall, the image quality in terms of color control is similar to Hololens, as it uses similar diffractive waveguide technology. It uses a DLP for the display device and has a significantly higher color sequence rate than Hololens, and thus much less color breakup. The resolution and image size are less than Hololens, and it has none of the SLAM sensing required for mixed reality. They are going after more of a “data snacking” than an “environmental” application.
Vuzix has stated that they are working on their NextGen device with Qualcomm’s new chip as well as using Plessey MicroLED displays. What do you think about that combination of tech in their next device? That will surely be far more than data snacking and leapfrog the form factor and capability compared to Hololens and ML.
[…] Firstly, and perhaps most crucially, the devices themselves speak to different user behaviors, with Hololens' entirely self-contained design emphasizing total freedom of movement more than the occasionally awkward semi-tethered design of the Magic Leap One. The ML1's Lightpack tether also precludes, or at the very least problematizes, the addition of an external worn power source. This is important given that both devices will need to address the issue of long-term field usage, and neither offers more than 3 hours of autonomy. In addition, the diffractive waveguide technology powering the displays of both HMDs is used to varying effect; ML1 offers a slightly wider FOV, but by many accounts lacks sharpness and definition compared to Hololens. […]
I just saw MSFT win a military contract for Hololens. The articles I have read talk about addressing the FOV issues. I would assume they have done so if they won the contract over Magic Leap and others. Do you have any guesses as to how they may have improved the FOV from the previous version? Are there other factors that make Hololens the right choice for this application?
I think you are thinking too one-dimensionally about FOV.
I don’t think either one is a particularly good fit for the military, but IMO Hololens is better than Magic Leap. Magic Leap blocks too much light, is not bright enough, blocks far too much peripheral vision, and has worse light-capturing artifacts (flashes of color due to real-world light sources). FOV would be way down my list of deciding factors.
[…] Raontech was demonstrating their WQHD (2560×1440) field sequential color LCOS microdisplay. They also made a point to show me that they no longer had the asymmetry I discussed in my article on the Lumus 1080p engine. […]
[…] demonstrates the poorness of Magic Leap’s optics with a direct comparison. As I discussed in my comparison of Magic Leap to Hololens and Lumus, Magic Leap optics significantly blur/soften the native display resolution and only deliver about 640 by 480 […]
[…] Lumus reflective waveguides are better than any diffractive waveguide I have seen. Back in October 2018 compared Lumus to the diffractive optics of Hololens and Magic Leap and it was not even… (see image below). Lumus also claims (I have not personally verified) to have a significant […]
[…] is noticeably brighter than the Hololens 1, which this blog reported as being 320 nits (cd/m2) in Magic Leap, HoloLens, and Lumus Resolution “Shootout” (ML1 review part 3). The HL2 also blocks about 60% of the real-world light (as measured by a meter). With the […]