With all the hype around Hololens and Magic Leap (ML), Osterhout Design Group (ODG) often gets overlooked. ODG has not spent as much (though it is still spending tens of millions), but it has many more years of experience in the field, albeit primarily in the military/industrial market.
I don't know about all the tracking, image generation, wireless, and other features, but ODG should have the best image quality of the three (ODG, Hololens, and ML). Their image quality was reasonably well demonstrated in a short "through the optics" video ODG made (above and below are a couple of crops from frames of that video). While you can only tell so much from a YouTube video (which limits the image quality), they are not afraid to show reasonably small text and large white areas, both of which would expose problems in lesser-quality displays.
Update 2016-12-26: A reader, "Paul," wrote that he has seen the "cars and balls" demo live. He said that while the display was locked down, the cubes were movable in the demo. Paul did not know where the computing was done; it could have been done on a separate computer. So it is possible that I got the dividing line between what was "real" and what was pre-planned a bit off. I still don't think that they detected that there were a clear and a black cube, and much of the demo had to have been pre-planned/staged. It is certainly not a demonstration of what would happen if you were wearing the headset.
As I wrote last time, I'm not a fan of marketing hyperbole, and I think calling their 1080p per eye a "4K experience" is at best deliberately confusing. I also had a problem with what the independent reporter Jame Mackie said about the section of the video starting at 2:29 with the cars and balls in it, linked to here. What I was seeing was not what he was describing.
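To put numbers on the "4K experience" claim: two 1080p displays together have only about half the pixels of one true 4K frame, and each eye actually sees just a 1080p image. A quick back-of-the-envelope check (plain arithmetic of my own, not anything from ODG):

```python
# Two 1080p eyes versus one true 4K (UHD) frame, by raw pixel count.
per_eye = 1920 * 1080        # 2,073,600 pixels per eye
both_eyes = 2 * per_eye      # 4,147,200 pixels across both eyes
uhd_4k = 3840 * 2160         # 8,294,400 pixels in a single 4K frame

print(f"both eyes / 4K = {both_eyes / uhd_4k:.0%}")  # prints: both eyes / 4K = 50%
```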
The sequence starts with a title slide saying, "Shot through ODG smart-glasses with an iPhone 6," which I think is true as far as what is written goes. But the commentary by Jame Mackie was inaccurate and misleading:
So now for a real look at how the Holograms appear, as you can see the spatial and geometric tracking is very good. What really strikes me is the accuracy and positioning. Look how these real life objects {referring to the blocks} sit so effortlessly with the Holograms
I don't know what ODG told the reporter or whether he just made it up, but at best the description is very misleading. I don't believe any tracking is being done, and all the image rendering appears to have been generated off-line.
Before getting into detail on the "fake" part of the video, it is instructive to look at a "real" clip. In another part of the video there is a sequence (starting at 3:25) showing the tape in a label maker being replaced.
In this case, they hand-held the camera rig with the glasses. In the first picture below, you can see on the phone that they are inserting a virtual object, circled in green, that is missing from the "real world."
As the handheld rig moves around, the virtual elements move and track with the camera movement reasonably well. There is every indication that what you are seeing is what they can actually do with tracking and image generation. The virtual elements in three clips from the video are circled in green below.
The virtual elements in the real demonstration are simple, with no lighting effects or reflections off the table. In the video, Jame Mackie talks as if he actually tried this demonstration rather than just describing what he thinks the video shows.
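For context on why camera movement matters so much: tracked AR works by re-estimating the headset's camera pose every frame and re-projecting the virtual objects from that new pose. Below is a minimal sketch of that per-frame projection (my own illustration using a standard pinhole camera model; the names are hypothetical and this is not ODG's actual pipeline):

```python
import numpy as np

def project_point(world_point, camera_from_world, fx, fy, cx, cy):
    """Project one 3D world-space point to pixel coordinates for one frame.

    camera_from_world: 4x4 rigid transform from a SLAM-style tracker.
    fx, fy, cx, cy: pinhole intrinsics of the display's virtual camera.
    """
    p = camera_from_world @ np.append(world_point, 1.0)  # world -> camera space
    x, y, z = p[:3]
    return np.array([fx * x / z + cx, fy * y / z + cy])  # perspective divide

# With real tracking, camera_from_world changes every frame, so the virtual
# label stays "glued" to the real label maker as the rig moves around.
# With a locked-down camera, camera_from_world never changes, and simply
# playing back a pre-rendered video is indistinguishable from tracking.
```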
The first clue that the cars-and-balls video was set up/staged is that the camera/headset never moves. If the tracking and everything were so good, why not prove it by moving the rig with the headset and camera?
Locking the camera down makes it vastly easier to match up pre-recorded/drawn material. As soon as you see the camera locked down with a headset, you should be suspicious of whether some or all of the video has been faked.
Take a look at the picture below showing the camera rig setup, and particularly at the two edges of the black cube inside the orange ovals I added. Notice the highlight on the bottom half of each edge and how it looks like the front edge of the clear plastic cube. It looks to me like the black cube was made from a clear cube with the inside colored black.
Now look at the crop at left from the first frames showing the through-the-iPhone-and-optics view. The highlight on the clear cube is still there, but strangely the highlights on the black cube have disappeared. Either they switched out the cube or the highlights were taken out in post-processing. It is hard to tell because the lighting is so dim.
2016-12-16 Update: After thinking about it some more, the rendering might be in real time. They probably knew there would be a clear and a black cube and rendered accordingly, with simpler rendering than ray tracing. What is unknown is whether the headset or another computer did the rendering.
According to comments by "Paul," he has seen the system running. The headset was locked down, which is a clue that some "cheating" is going on, but he said the blocks were not in a fixed location.
Looking "too good" is a big giveaway. The cars in the video, with all their reflections, were clearly rendered with much more complex ray tracing that was computed off-line. Look at all the reflections of the cars at left. The cars reflect off both the table and the clear cube, and the flashing light on the police car also acts as a light source in the way it reflects off the cube.
One of the first things that I noticed was the clear cube. How are the cameras and sensors going to know it is clear, and how it will reflect/refract light? That would require a lot of expensive sensing and processing just to handle this one case.
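To give a sense of the rendering cost alone: even if the sensors somehow classified the cube as clear, light passing through it has to be traced with Snell's law at every pixel that sees the cube, usually recursively and with a reflection term as well. A minimal sketch of a single refraction step (my own illustration, assuming an index of refraction of roughly 1.5 for clear acrylic):

```python
import numpy as np

def refract(incident, normal, n1=1.0, n2=1.5):
    """Refract a unit ray passing from medium n1 into medium n2 (Snell's law).

    incident: unit direction of the incoming ray.
    normal: unit surface normal, pointing back toward the incoming ray.
    Returns the refracted direction, or None on total internal reflection.
    """
    eta = n1 / n2
    cos_i = -np.dot(normal, incident)
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None  # total internal reflection; no transmitted ray
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * incident + (eta * cos_i - cos_t) * normal

# A ray tracer evaluates something like this (plus reflection and Fresnel
# weighting) for every pixel that sees the cube, often to several bounces.
# That is cheap off-line but was far beyond a 2016-era headset in real time.
```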
On the right is a crop from a frame where the reflection of the car is wrong. Based on prior frames, I have outlined the black cube with red lines. But the yellow car is visible when it should be hidden by the black cube. There is also a reflection in the side of the cube around where the rendered image expects the black cube to be (the orange line shows the reflection point).
2016-12-26 Updates (in blue): Based on the available evidence, the video uses some amount of misdirection.
The video was pre-rendered using a ray-tracing computer model of a clear cube and a perfectly black, shiny cube on a shiny black table. They knew that a clear and a black cube would be in the scene and locked down the camera. They may have used the sensors to detect where the blocks were in order to know how to render the image.
They either didn't have the sensing and tracking ability or the rendering ability to allow the camera to move.
Likely the grids you see in the video are NOT the headset detecting the scene but exactly the opposite; they are guides telling the person setting up the "live" shot where to place the real cubes to match where they were in the model. They got the black cube in slightly the wrong place.
The final video was shot through the optics, but the cars and balls were rendered running around a clear and a black cube on the assumption that the cubes would be there when the video was played back. No tracking, surface detection, or complex rendering was required, just the ability to play back a pre-rendered video.
I'm not trying to pick on ODG. Their hype is so far less than what I have seen from Hololens and Magic Leap. I don't mind companies "simulating" what images will look like, provided they indicate that the effects are simulated. I certainly understand that through-the-optics videos and pictures will not look as good as simulated images. But when they jump back and forth between real and simulated effects and other tricks, you start to wonder what is "real."
I think you’re off on this one. I personally have seen the cube and balls demo. Based on my experience, I saw nothing that couldn’t be rendered in real time 3D and the cubes were movable in the demo. Though the display was rigidly mounted at that time so it’s possible the displays were tethered to a PC. I am skeptical of a standalone Android based configuration being able to do that along with SLAM.
Thanks, Paul. I can see how they could have rendered some of it in real time. But at a minimum, they had to know beforehand that there would be a "clear cube" and a "black cube," and something about the orientation of the cubes and the display. Some demos are like a magician's trick: you think what you are seeing is "random" when actually it is carefully staged and you are being misdirected.
Also, why would they have to lock down the display if they were doing it for "real"? I guess they could map out where the clear and black cubes are and then compute the imagery, either in real time or with batch processing, to match where the cubes were. But once again, the problem was at best constrained to the type of cubes and their general placement.
So I could be off a little on the dividing line between what was "real" and "fake," but they are not doing it all with the headset. It would be nice if they just said what they did for "real" and what was a simulation or done off-line. In other words, what are they really demonstrating, and what should users expect their device to do?
Karl, good day.
Is the Hololens resolution 1268 × 720 per eye? Correct?
If yes, which LCOS model is in Hololens? I can't find a panel with this resolution on the Himax website.
Thank you, Bob.
Bob, I don't know if they are using the older HX7318 (1366×768, WXGA) or some newer device that they don't list on their website. Himax does not list all the devices they make on their website.
Himax produces custom displays for virtually all of their customers. They receive a large amount of NRE to create a very customer-specific solution for each application. It is in all of their financial disclosures, and discussed heavily in their quarterly conference calls.
You are not going to find a catalog part for these devices.