As a follow-up to my last post, I thought I would show why Google Glass is most likely using a transmissive panel. It all comes down to size and shape.
Shown to the left is a Kopin transmissive panel more than capable of the resolution shown in the Google Glass videos. The picture I found on-line happened to have a dime in it, and I used that dime to scale it to the same size as the Microvision laser beam steering (LBS) engine with its “dime picture” in the second image. I roughly scaled a Google prototype with a color filter LCOS panel to the same scale in the third image. The Kopin panel is only about 2mm thick, but it does require optics, so I approximately scaled Figure 8 from U.S. Patent 6,747,611, filed by IBM in the year 2000, which shows a near-eye transmissive optical engine and gives a Kopin panel as an example.
The Microvision engine is for a projector and does not include the “wave guide” that relays the image out to the eye. You are also looking at it from the top down, but it is about 6mm thick, which is similar to the others. Part of what makes the Microvision engine so big is the need to combine and aim the 3 independent lasers at a single mirror as shown (in an older post I showed the combining path). The engine is on the order of 5 times too big to fit into the space available in the Google Glasses, and that does not include the electronics for Microvision’s LBS, which take about as much volume as the optics.
Next we have the color filter LCOS which is much more compact than Microvision’s LBS but has an awkward “T”/”L” shape to it caused by the orientation of the beam splitter with the panel on one side and the LED on the other. As I wrote in my prior post, this would not fit in the barrel shape of the newer Google Glass design.
Lastly, the IBM patent has a figure that shows a transmissive panel optical engine that looks remarkably similar to the Google Glass units that have been seen. The optical path is straight through and comparatively compact. There is an adjustment knob (2600) that enables the apparent focus point (according to the patent) to be adjusted from about 18 inches to infinity. Google Glass is said to be set for far vision (near “infinity”) and therefore dispenses with this adjustment.
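The focus adjustment follows from simple magnifier optics: the panel sits just inside the focal length of the lens, and sliding it slightly changes the distance of the virtual image the eye sees. Here is a rough thin-lens sketch; note that the 20mm focal length is my own illustrative assumption, not a figure from the patent.

```python
# Thin-lens sketch of a magnifier-style near-eye focus adjustment.
# The 20 mm focal length is an assumed, illustrative value --
# the IBM patent does not specify the optics' focal length here.

def virtual_image_distance_mm(f_mm, s_mm):
    """Distance of the virtual image when the panel sits s_mm from a
    simple magnifier lens of focal length f_mm (panel inside focus)."""
    if s_mm >= f_mm:
        return float("inf")  # panel at/past the focal plane: image at infinity
    return s_mm * f_mm / (f_mm - s_mm)

f = 20.0  # assumed lens focal length, mm
for s in (19.2, 19.6, 20.0):  # panel-to-lens spacing, mm
    d = virtual_image_distance_mm(f, s)
    if d == float("inf"):
        print(f"panel at {s} mm -> apparent focus at infinity")
    else:
        print(f"panel at {s} mm -> apparent focus at about {d / 25.4:.0f} in")
```

With these assumed numbers, moving the panel less than a millimeter sweeps the apparent focus from roughly 18 inches out to infinity, which is why a small knob suffices and why a fixed far-vision setting costs Google nothing mechanically.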
Another thing to note is that there is only an LED, a panel, and a single lens to generate the image, plus the beam splitter (doing the function of the thinner wave guide used by Google). This is a relatively inexpensive device, as the LED, a low-resolution transmissive panel, and the lens combined cost on the order of $10 (and probably less in high volume).
In the category of “everything old is new again,” note how closely Fig. 8 (copied at the left) from the IBM patent filed in 2000 resembles the Google Glass of about 13 years later (left below). The main difference is that the “computer based device” (today a cell phone) is now wirelessly connected. A feature shown in the IBM patent is a sliding light shield to support viewing images without the distraction of the background. Google Glasses would require looking at a black background to clearly see the image in the transmissive wave guide.
Google’s design “cops out” and requires a nose bridge, which others, including the IBM patent and Golden-i, avoid. The nose is very sensitive to any weight on it, particularly over time, and a nose bridge interferes with wearing glasses. Google has said that the device can be attached to a person’s eyeglass frames, but this is very problematic with the variety of frames on the market and the added off-balance weight.
The point I would like to make (again) here is that the display technology to make Google Glasses has been available for over a decade, and as my prior post on virtual reality displays pointed out, the limiting factor is the use model (how you use it), which is heavily constrained by how you control it. I don’t see it as practical to have people talking to their devices and looking shifty-eyed and blinking, not to mention looking like somebody who escaped from a lab.
Maybe someday they will add gesture recognition so you can type on a virtual keyboard, but I don’t know of anyone who has perfected this technology yet. Also, the images that Google has shown to date are pretty low resolution (on the order of only 320 by 240 pixels) and only fill a small part of one’s vision. I don’t see people doing a lot of internet browsing with the current Google Glass. Then we have the privacy issues, as in: when someone looks at you shifty-eyed through their Google Glass, are they signaling to the computer to look up your information?
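To put that resolution in perspective, a quick back-of-the-envelope comparison helps; the comparison displays below are examples I chose for scale, not figures from Google.

```python
# Back-of-the-envelope pixel counts. The VGA and XGA comparisons are
# my own illustrative choices, not anything Google has published.
glass_px = 320 * 240   # reported Google Glass image size
vga_px   = 640 * 480   # classic VGA
xga_px   = 1024 * 768  # a modest laptop panel of the era

print(f"Glass image: {glass_px:,} pixels")
print(f"VGA has {vga_px / glass_px:.0f}x the pixels")
print(f"XGA has {xga_px / glass_px:.1f}x the pixels")
```

Even decade-old VGA has four times the pixels, which is why serious web browsing on the current image seems unlikely to me.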
One last thing: believe it or not, I’m not trying to be negative about Google Glasses; I’m just trying to relate my experience and knowledge of near-eye displays. I think even some people associated with Google Glasses are playing it down a bit, trying to get people to understand that they are still looking for how people will use it. Maybe someday they will have a high-resolution color display that fits in a contact lens, selectively blocks out the real world, and picks up brain waves to control it, but it looks to me like that day is still a ways off in the future.