I’m curious what people think will be the near eye microdisplay of the future. Each technology has its own drawbacks and advantages that are well known. I thought I would start by summarizing the various options:
Color filter transmissive LCD – large pixels with 3 sub-pixels and lets through only 1% to 1.5% of the light (depends on pixel size and other factors). Scaling down is limited by the colors bleeding together (LC effects) and light throughput. Low power to panel but very inefficient use of the illumination light.
Color filter reflective (LCOS) – same approach as CF-transmissive, but the sub-pixels (color dots) can be smaller; scaling is still limited by the need for 3 sub-pixels and color bleeding. Light throughput on the order of 10%. More complicated optics than transmissive (requires a beam splitter), but shares the low power to the panel.
Field Sequential Color (LCOS) – Color breakup from sequential fields (“rainbow effect”), but the pixels can be very small (less than 1/3rd the size of color filter pixels). Light throughput on the order of 40% (assuming a 45% loss in polarization). Higher power to the panel due to the changing fields. Optical path similar to CF-LCOS, but taking advantage of the smaller size requires smaller but higher quality (high MTF) optics. Potentially mates well with lasers for a very large depth of focus, so that the AR image is in focus regardless of where the user’s eyes are focused.
Field Sequential Color (DLP) – Color breakup from FSC, but it can go to higher field rates than LCOS to reduce the effects. The device and control are comparatively high powered and have a larger optical path. The pixel size is bigger than FSC LCOS due to the physical movement of the DLP mirrors. Light throughput on the order of 80% (it does not have the polarization losses) but falls as the pixel gets smaller (the gap between mirrors is bigger than with LCOS). I’m not sure this is a serious contender due to cost, power of the panel/controller, and optical path size, and nobody I know of has used it for near eye, but I listed it for completeness.
OLED – Larger pixel due to the 3 color sub-pixels. It is not clear how small this technology will scale in the foreseeable future. While OLED is improving, progress has been slow; it has been the “next great near eye technology” for 10 years. It has a very simple optical path and potentially high light efficiency, which has made it seem to many like the technology with the best future, but it is not clear how it scales to very small sizes and higher resolution (the smallest OLED pixel I have found is still about 8 times bigger than the smallest FSC LCOS pixel). Also, the light is very diffuse and therefore the depth of focus will be low.
Laser Beam Steering – While this one sounds good to the ill-informed, the need to precisely combine 3 separate laser beams tends to make it not very compact, and it is ridiculously expensive today due to the special (particularly green) lasers required. Similar to field sequential color, there are breakup effects from having a raster scan (particularly since, unlike a CRT, there is no phosphor persistence) on a moving platform (as in a head mount display). While there are still optics involved to produce an image on the eye, it could have a large depth of focus. There are a lot of technical and cost issues that keep this from being a serious alternative any time soon, but it is in this list for completeness.
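The throughput figures in the list above come from multiplying per-stage light losses. Here is a minimal sketch of that arithmetic; only the ~45% polarization loss and the ballpark totals come from the discussion above, while the other per-stage factors are illustrative assumptions:

```python
# Rough illumination-throughput estimates for the display options above.
# Only the ~45% polarization loss and the final ballpark figures come from
# the text; the other per-stage factors are illustrative assumptions.
def throughput(*stage_efficiencies):
    """Multiply per-stage efficiencies into an overall light throughput."""
    total = 1.0
    for e in stage_efficiencies:
        total *= e
    return total

# FSC LCOS: ~55% survives the polarizer (45% loss), times other optical
# losses (beam splitter, fill factor, etc.) assumed here to net ~73%.
fsc_lcos = throughput(0.55, 0.73)          # ~0.40, matching the ~40% above

# Color-filter panels additionally lose roughly 2/3 of the light to the
# filters (each sub-pixel passes about one third of white light).
cf_lcos = throughput(0.55, 0.73, 1 / 3)    # on the order of 10-13%

print(f"FSC LCOS ~{fsc_lcos:.0%}, CF LCOS ~{cf_lcos:.0%}")
```

The point of the sketch is simply that each added stage (polarizer, color filter, beam splitter) multiplies down the light that reaches the eye, which is why the color-filter options end up an order of magnitude below DLP.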
I particularly found it interesting that Google’s early prototype used a color filter LCOS and then they switched to field sequential LCOS. This seems to suggest that they chose size over the issues with field sequential color breakup. With the technologies I know of today, this is the trade-off for any given resolution; field sequential LCOS pixels are less than 1/3rd the size (typically closer to 1/9th the size in area) of any of the existing 3-color devices (color filter LCD/LCOS or OLED).
It should also be noted that in an HMD, an extreme “premium” is put on size and weight in front of the eye (weight in front of the eye creates a series of ergonomic and design issues). This can be mitigated by using light guides to bring the image to the eye and locating a larger/heavier display device and its associated optics in a less critical location (such as near the ear), as Olympus has done with their Meg4.0 prototype (note, Olympus has been working on this for many years). But doing this has trade-offs with the optics and cost.
Most of this comparison boils down to size versus field sequential color versus color sub-pixels. I would be curious what you think.
Hi Karl, (1) I’m not well qualified to talk about the color break-up/rainbow effect in field-sequential color because I’m quite insensitive to it personally. However it is possible that the effect would be less in a head-mounted display than with a projection screen because the display is relatively stationary with respect to the head. Hence the break-up would be proportional to eye movements only rather than head+eye movements. (2) Surely there wouldn’t be a 45% loss due to polarization with field-sequential color LCOS if naturally polarized laser diodes are used?
The color breakup problem with HMDs is that the image comes from the end of a “boom” or “glasses” that are not rigidly anchored to the head. The image itself is physically small so even small movements translate into large angular movements of the image. It is not much of a problem when you are sitting but more of an issue if you are moving, running, or jumping out of airplanes (as per Google’s promotion video). You do tend to eliminate general head movements as a source of breakup, but if say you are running, the display will be bouncing relative to your eye. This is a big issue for use of field sequential color in HUD for automotive applications as well.
Certainly lasers would eliminate the 45% loss due to polarization, but I don’t think they are a realistic assumption for the next few years. Still, I should have pointed out that this loss would not occur with LCOS if lasers are used.
hi Karl, thanks for the interesting comparisons. I always thought the power savings along with faster response times and higher contrast ratios gave OLEDs the advantage
Power savings is an important factor. Response time I don’t think is a significant issue for any of the technologies; LCOS is certainly more than fast enough for typical uses. It is not clear that the contrast ratio differences will be that big an issue either. Particularly with see-through displays like Google Glass, contrast ratio can’t be that important, but brightness to give contrast against the surroundings becomes an issue. You can play silly games with contrast numbers that don’t result in a significant difference. In a high class/quality movie theater you only get a REAL (ANSI) contrast ratio of not much more than 300:1, so all the numbers games of 10,000:1 “on-off contrast” are just marketing.
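The point that brightness against the surroundings matters more than intrinsic contrast can be made concrete with the standard ambient-contrast relation: the contrast the eye actually sees is (white + ambient) over (black + ambient). A quick sketch with illustrative luminance numbers (all values below are assumptions, not measurements of any particular display):

```python
# Effective contrast of a see-through display against ambient light.
# contrast = (display_white + ambient) / (display_black + ambient)
# All luminance numbers below (in nits) are illustrative assumptions.
def effective_contrast(white_nits, black_nits, ambient_nits):
    return (white_nits + ambient_nits) / (black_nits + ambient_nits)

# Even a display with a huge intrinsic on/off ratio collapses outdoors:
indoor  = effective_contrast(300, 0.03, 50)      # dim scene behind the image
outdoor = effective_contrast(300, 0.03, 3000)    # bright daylight background
print(f"indoor ~{indoor:.0f}:1, outdoor ~{outdoor:.1f}:1")
```

With these assumed numbers the same panel delivers roughly 7:1 indoors but barely over 1:1 in daylight, which is why the on-off contrast spec is nearly irrelevant for a see-through display and brightness dominates.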
Some of this will come down to use. If the display is meant for “information snacking” and is see-through, then resolution, color quality, and even contrast are pretty much non-issues. Historically (PC, television, and smartphone), displays tend to drift to higher and higher resolution. I think it is almost axiomatic: if near eye does not end up needing high resolution, then it will not be a very big market.
I’m interested in how much resolution, versus size and cost, will be a factor.
hi Karl and hope you are enjoying the weekend. While I am certainly not an expert, I agree that if the displays are used for ‘info snacking’ the display quality is secondary to the price. It does seem that the OLED microdisplays have been grabbing market share from the LCD makers such as Kopin in applications that require higher res, higher contrast, more brightness, and power savings. I believe the big question is whether they can be manufactured in large production quantities to meet cost and demand needs. thanks again for your feedback and expertise… definitely educational!
I don’t know of a lot of real Head Mount Display products using miniature OLEDs (or any other display device, for that matter). The biggest market for microdisplays to date has been near eye viewfinders for cameras and video recorders, but the problem there is that they are very cost sensitive.
Kopin claims to have 98% of the military market, and while I don’t know how accurate this number is, I would guess they have the vast majority of the market. I think you will find that OLED has a bigger marketing “footprint” in that they are making more noise, but today head mount displays of all types are a very small market.
The bigger question is what will happen if or when this market takes off. What will drive the market, and what resolution and size requirements will it have? The “physics” involved with the various display technologies will favor one technology over another based on the requirements (size, large depth of focus, field sequential color breakup).
Conventional wisdom favors OLEDs (seems good) and we will see if this is justified.
I’d be interested to know what the comparative costs are of the different technologies. That is, just the display device: DLP, CF-LCoS, etc. I had suspected that Google’s first model used CF-LCoS because it was cheap. Some of the first picoprojectors used CF-LCoS. Also, why is a transmissive LCD so low transmission?
It is tricky to understand all the costs because a lot comes down to the R&D effort spent to develop high volume manufacturing to get the per unit costs down, and I don’t have the numbers for the various manufacturers. At least in theory, DLP should be more expensive to make because it is a more complex and less standard process, but TI has spent over a decade and over $1B perfecting it. The fundamental costs of FSC LCOS are low because it uses relatively old, standard CMOS processes, and because the pixel sizes are small, the device sizes are small. But the LCOS processing at most companies is not that mature/well developed. I don’t think DLP is really a serious candidate for near eye, if for no other reason than the power consumption of the panel and the controller.
Color filter LCOS should be more expensive for a given resolution because it adds the color filter, which has a cost to apply and a yield loss, on top of being at least 3 times bigger. It has an advantage in that it can use white LEDs, which are generally cheaper.
I don’t have any solid information on why Google’s earlier prototype had CF LCOS and then they switched to FSC LCOS. Maybe Himax was in the process of making them a special FSC device but it wasn’t ready so they prototyped with the available CF device.
As far as the low light throughput of transmissive panels, there are several factors. First, there is about a 75% light loss from the color filters off the top. The next big problem is that the electric fields from neighboring sub-pixels (color dots) overlap and thus bleed the colors together. This is worse for transmissive than reflective LCOS because the LC is thicker (and thus the electric fields spread more). They put a black mask to hide the region where the colors are significantly mixed. I also think Kopin is using LC formulas that are less susceptible to the problems from the adjacent fields but are less light efficient. One of the reasons HMD companies that were using transmissive panels are looking at FSC LCOS is to get a brighter near eye display for outdoor use.
So just what is the pixel size of FSC LCOS ?
From the recent eMagin (EMAN) 10-K:
“Our WUXGA OLED-XL microdisplay provides higher resolution than most HD (High Definition) flat screen televisions. With a triad sub-pixel structure this display is built of 7,138,360 active dots at 3.2 microns each.”
That is 9.6 microns full color.
Their SXGA comes in at 8.1 microns full color.
There are field sequential color (FSC) LCOS pixels that are smaller than 3 by 3 microns, or in other words about 9 times smaller than the OLED pixels (in area; 1/3rd the length and height). From the description you quoted, each color sub-pixel/dot is 3.2 microns wide by 9.6 microns long, such that with 3 of them they come to 9.6 x 9.6 microns. The SXGA apparently is 8.1 by 8.1, so each sub-pixel is likely (8.1/3) by 8.1.
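The pixel-area comparison above can be checked directly from the quoted numbers (the 9.6 and 8.1 micron full-color OLED pixels come from the 10-K figures; the 3 by 3 micron FSC LCOS pixel is the size cited above):

```python
# Pixel-area comparison from the numbers quoted above. The OLED figures
# come from the eMagin 10-K quote; the "smaller than 3x3 micron" FSC LCOS
# pixel is the figure given in the discussion.
oled_wuxga_pixel = 9.6 * 9.6     # microns^2, 3.2 um triad sub-pixels
oled_sxga_pixel  = 8.1 * 8.1     # microns^2
fsc_lcos_pixel   = 3.0 * 3.0     # microns^2 (some devices are smaller)

ratio = oled_wuxga_pixel / fsc_lcos_pixel
print(f"WUXGA OLED pixel is ~{ratio:.1f}x the area of a 3 um FSC LCOS pixel")
```

The ratio works out to roughly 10x in area (about 3x in each linear dimension), consistent with the "about 9 times smaller" figure above, and slightly more if the LCOS pixel is under 3 microns.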
At low resolution, the pixel size is not so much a determinant in the overall display size, but at higher resolution it becomes a very significant factor.
Currently Emagin uses a Side by Side pixel layout but is moving toward a Stacked design which will allow for a smaller footprint.
The question then is how small the stacked design will be. At some point with a 3-color per pixel design, the 3 colors have to be individually controlled and the light has to get out. Due to factors of the stimulation/control source (be it an electric field that controls a material or light that stimulates, say, a phosphor), there are some physical limits to keep the colors from bleeding together. Stacking can help pack things together, but it will still have limits.
Karl – your LCOS bias is showing
You need to do some checking on the current state of OLED and some of your claims on pixel size are misleading as to the total BOM cost of a system
Firstly, you used the example of 3 micron vs 9 micron pixels - that is going to drive a high cost difference in optics - yes, you could make an HD 3 micron LCOS, but the optics to take that out to say a 65 deg FOV where you can use that resolution would be horrendous - resolution is useless if the MTF of the optical system means you cannot see the pixels due to distortion. The smaller the pixel, the harder that gets, and the wrong choice means not just added cost but weight and compromises in eyebox etc - smaller is anything but better past a certain point. Even if the resolution is lower, you still have the same magnification factors, so optics remains an issue. The truth is that whilst we can make chips and components smaller, we cannot address fundamental material issues on what light does through a material - some of the new work on new methods of building up optics shows promise, but that will remain limited to military uses due to cost for some time.
Also, for most applications, and particularly AR, field sequential LCOS is useless unless you go to a 3 panel system and some sort of horrid-to-make cube - even then you are limited on the base frame rate since you have to dither to deliver grey scale. Also, a LOT of LCOS solutions only really perform properly when HEATED to 40C, which means significantly higher power costs.
It is not just an issue of being fixed to the head but that in a dynamic environment, between head and eye movement, 60Hz FS is just not up to the task and the image will smear. I suspect this is because in your past you were more focused on fixed projectors - HMDs with head tracked VR and true AR will need non-sequential color at much higher than 60Hz to avoid all manner of human factor issues - have a look at the truly nasty Silicon Micro Display ST1080 to see why FS-LCOS is not a fit. Even Google Glass will need to solve this for just data, let alone true AR, which they are clearly not doing.
An OLED panel with far better contrast ratio will be a better fit for both night and day, temperature means nothing from -45 to +70C (2 reasons that OLED is largely displacing LCD in military applications), and the MICROsecond non-sequential response times mean that it is ideally suited to VR/AR since you can take the frame rate and PWM high enough to eliminate motion artifacts without compromising on color depth.
LCOS and LCD have their place but to really solve the human factors issues, OLED is going to be the best fit long term.
I concur with your statement, “resolution is useless if the MTF of the optical system means you cannot see the pixels”. Some colleagues and I did some measurements on a range of picoprojectors (LCoS, DLP and LBS) and this was one of our conclusions. If you are interested, it will be published in Optics and Photonics News in May.
So when you say reflective, do you mean the prototype one from Himax? I think that is also transmissive in a sense, since the light travels the path twice, going to and back from the silicon electrode. It seems they just use pigment to make the color filter, so the cost won’t be a problem. I think the problem may be the size of the LC; if the LC size can shrink down like camera pixels have without noise problems, then a color filter may be a better way. But still, when you think about larger projectors, color filter may cost the lowest since they do not need complex optics.
I’m not sure I understand your question. Kopin uses a “transmissive panel” with glass on both sides, and light passes through it. The early prototype by Himax with the color filters has glass on one side and silicon on the other; the light passes through the glass (with the filter on it), then through the LC, to reflect off the pixel mirrors, which also control the LC. With reflective LCOS the light passes through the LC twice (in and out), so the LC is about half as thick.
The LC thickness is determined by the optical properties of the LC needed to change the polarization of the light. As I said, nominally for reflective it is half as thick. Consider that for a reflective device the LC might be 1.5 microns, while for transmissive it is on the order of 3 microns thick. If you start talking about sub-pixels that are also 3 microns across, it becomes pretty obvious that the electric fields are going to cover a good bit of the neighboring sub-pixel/color, and this limits how small they can make a pixel and still have color.
The problem is VERY different for camera pixels. In that case there is no LC with electric fields to control the light. The camera sensor only has a vacuum/gas between the cover glass with the filter and the pixel, so camera pixels are only limited by diffraction. What is limiting the liquid crystal scaling is the spread of the electric fields and their effect on the liquid crystal. There is a limit to how thin the LC can be and still control the pixel, and the thicker the LC, the more the electric field spreads.
The Himax pixel size seems pretty big; the LC they use seems to be 3 microns thick, and the pixel 10 x 7.5 microns or larger. When I calculate the total size, I find that the reflective panel is actually only 1/3 to 1/4 of the total board. They must have done some customized packaging for Google (discrete panel and controller) so it can reach the size of the prism (2cm x 1cm or smaller).
Sorry, but I am totally confused with what you are talking about.
The older Google prototype used a color filter LCOS panel. I would think the LC layer is less than 2 microns.
The newer Google Glass that is being shown uses field sequential color and I would expect the LC layer to be 1.0 to 1.5 microns. Generally the FSC LCOS LC layer will be thinner to support faster switching times (there are a lot of factors that affect switching speed and among them is the electric field strength which is affected by the LC layer thickness).
There are a lot of variables that affect the board size (I assume you are referring to the PCB board size). It gets shaped based on the form factor you are fitting and the connector that is being used. The word/rumor I hear is that Himax did a custom design for Google.
I was looking at the specifications of the Syndiant SYL2271, which is FSC LCOS, and it looks pretty impressive. Why do you think the Himax FSC LCOS would be chosen for Google Glass over this unit?
I don’t know exactly why but a big factor is that while Syndiant has some great technology in their silicon design (much more advanced and better in several key ways than Himax), Himax had the money to invest in LCOS “assembly” whereas Syndiant uses contract manufacturing. This put Himax in a better position on cost and the ability to ramp production. While it probably happened after the deal went down with Himax and Google, Syndiant has made some deals with JVC (I don’t know the details of the deal as it went down after I left) that may give Syndiant a way to be much more competitive in terms of manufacturing.
Thank you for the very thorough treatment of near eye options. I am curious about your take on the viability of Virtual Retinal Display options as well as systems that use contact lenses in the dual focal plane approach. Do either of these approaches hold promise to overcome the shortcomings of near eye projection?
By “Virtual Retinal Displays,” I’m assuming you are referring to laser scanning directly into the eye, but it can also be used for any near eye image device. All near eye displays have a “virtual image” that projects on the retina because there is no way to get the eye to focus on something so near the eye. The image is “virtual” in that there is no physical image such as on a projection screen.
Laser Beam Scanning (LBS), while it sounds simple, has many drawbacks in terms of cost (very high), image quality (resolution is poor and there is speckle), and power (it takes a lot of power to control the scanning mirror, and the lasers are not as efficient as LEDs). It is totally uncompetitive with the other display technologies, and I don’t see this changing any time in the next 5 years.
I’m not totally sure what your question is relative to “contact lenses in the dual focal plane approach.” If you are talking about dual focus contact lenses, I would think they could cause complications with respect to near eye displays that focus at a given point. For example, with bifocal glasses, the lenses are designed to focus far away through the upper part of the lens and close up through the lower part.
please comment on the post by ‘A’ if you do not mind. I was wondering what your thoughts were on this post. much appreciated
First, you should note that the post was originally automatically identified as spam (probably due to the lack of a legitimate email address), so “A” is anonymous, but I thought he made some good technical comments, which is why I manually dug it out of the “spam bin.” I don’t want to prevent good discussion of technical issues.
There are definitely issues with the quality of lenses (MTF) involved with small pixel sizes; I missed that point, and it could be a serious limitation to making small pixels. Also, it should be noted that, as with Apple’s Retina displays, the pixels do not have to be fully resolved to affect the image quality (thus the lens does not have to fully resolve the pixels). There are other factors, such as the f-number of the optics required, that affect the optics cost.
The claim that OLED has the technical high road has been made for a long time (over 10 years that I personally know about), and who knows, that may eventually be the case. OLED also claims the high road in large televisions, but that has been slow in coming, and even with all the hoopla about Samsung and LG having OLED TVs, neither company has plans to ship them in large numbers (at least as of the last I have read). For high volume applications, cost will always be an important factor.
“A” claims OLED is making big headway in military displays, but Kopin claims to have 98% of that market. I don’t have the numbers and don’t know all the factors that the military is considering (not the least of which is pull from politicians). One thing is for sure: the military does things that are impractical (for long times, if not forever) in high volume consumer markets.
I think “A” is wrong about “image smearing” with LCOS. LCOS is able to maintain very high field rates such that motion smearing is not an issue (but the “rainbow effect” is still an issue, because LCOS is not even close to being fast enough to eliminate all “rainbow” issues).
“A” is also overstating the importance of contrast ratio in most typical HMD environments. LCOS can certainly achieve high enough contrast ratios. Contrast ratio has become more of a marketing bragging right concept.
Hopefully, that about covers it.
While OLED TVs have gotten off to a slow start, smartphones have clearly adopted OLED technology. Samsung has recently reported worldwide sales for the Galaxy S3 with its OLED display at 50 million units. I have read sales estimates for the new Galaxy S4 with an OLED display at between 75 and 100 million units annually.
Can you name anyone using LCOS for smartphones, and what the annual units sold might be?
The unit sales for LCOS in “smartphones” are very small (there are a few LCOS projectors in smartphones in India and China), but that is beside the point. Certainly OLEDs are being used in large numbers as still relatively large cell phone displays. But the question is whether this extrapolates to large TV displays (which looks like it may “someday,” but is not going to be LCD type volumes for many years) and, more to the point, microdisplays for head mount displays. There are several million LCOS devices in near eye displays, with most of this volume being in camera viewfinders (I know Panasonic is one of the big users). Kopin uses a “lift-off” transmissive LCOS, so I’m not sure if this counts, as most people think of LCOS as being reflective.
The conventional wisdom (whoever sets it) seems to favor that someday OLED will be the preferred device for near eye displays, but that day is definitely not today. There are additional issues with making a microdisplay OLED that have made them very expensive to date. Additionally, LCOS and transmissive panels can use laser illumination, which lets them leverage the very low f-number of laser light, which gives unique properties (like very large depth of focus). My point is that it is not cut and dried.
hi Karl, any thoughts or comments on the new OLED microdisplay announced yesterday?
I just saw the news release on eMagin introducing a digital interface on an SVGA (800×600 pixel) OLED. While OLEDs continue to make progress, the last I heard they were still costing well more than $500 apiece in “volume.” At this price point they are going to be limited to very specialized applications no matter what their performance.
What is not clear to me is at what rate they could fall in price. I don’t know what factors have limited OLED microdisplay cost reduction, but there have been companies working on them for over a decade and yet they are still over 20X more expensive than other microdisplays of similar resolution.
The question becomes whether they can become cost competitive with the other technologies fast enough. We are seeing this play out in the large flat screen market, where OLEDs are not expected to make a big impact for many years, if ever; the issue being that LCD displays are “good enough” and continue to improve, so that the advantages of OLEDs are not big enough to justify their increase in cost except maybe to very select customers. On the other hand, we are seeing Samsung using OLEDs in cell phones, and this is certainly driving OLED development.
thanks Karl. I appreciate the prompt reply. have a great day
Has anyone ever used Google Glass to watch a movie or surf websites? I think the picture quality, especially contrast, will be really bad in this kind of see-through setup. So, is Google Glass really limited to displaying GPS-style brief directions and taking pictures secretly? The idea of a wearable display is fascinating and even good as long as it is a good display. A good display means you should be able to enjoy a movie at least, right? Any comment?
I have seen some comments that it is “ok” for watching short clips but not for whole movies.
I definitely agree that the display would seem to have very limited uses. The contrast by the very nature of being a see through display is going to be quite low. Beyond this the image size is small, only in one eye, and above the normal field of view.
Google Glass in its current incarnation seems primarily aimed at “data snacking” with very limited, low resolution information. The optics are such that there is a very small field of view (it fills only the upper corner). On-line videos show it having only 6 lines of text, at least in the application shown, which could be supported by about an 80 pixel high display. Reportedly the display resolution is 640×360 (known as quarter-HD or qHD), but with such a small field of view, higher resolution text may not be readable or at least quickly grasped. From a display resolution point of view, GG is a big step backwards compared to modern smartphones.
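As a sanity check on the "6 lines of text on roughly an 80 pixel high area" estimate (the 640×360 resolution and the 6-line observation come from the discussion above; the per-line pixel height is an assumption):

```python
# Rough text-capacity arithmetic for a small near eye display.
# Hypothetical numbers: 6 lines observed, ~13 pixels per text line assumed
# (glyph height plus line spacing).
lines_shown    = 6
px_per_line    = 13                     # assumed glyph + spacing height
used_height    = lines_shown * px_per_line
display_height = 360                    # reported qHD panel height

print(used_height)                      # 78, close to the ~80 pixel estimate
print(f"fraction of panel height used: {used_height / display_height:.0%}")
```

Under these assumptions the text occupies only about a fifth of the panel's vertical resolution, which is consistent with the point that the small field of view, not the panel itself, limits how much information can be usefully shown.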
Is ‘Lumiode’ a sufficiently different alternative?
Modulated LED array, creating an emissive microdisplay.
Like OLED but potentially directional and brighter.
Thanks, I was not aware of Lumiode. It might be interesting some day, but today they have only a 50 by 50 pixel prototype display and are hoping to have 320×240 by next year. What will count in the end is the long term cost of manufacturing versus the advantage in image quality. I’m curious how the technology can scale over time and how expensive it is to process. Near eye OLEDs are still way too expensive for anything other than military applications after more than 10 years since first sampling. In theory, large panel OLED TVs are “better” than LCD TVs with LED (“white” or RGB) backlights, but they will cost so much more that only a very few are expected to be made in the next several years.
I don’t know much about LCOS or OLED, except the basics, but I’m wondering if you have ever read up on a company called MicroOLED. They claim to have overcome the cons of LCOS, making chips that give higher resolution and drain less power. If that really is the case, what is the possibility of their chips replacing most of the LCOS microdisplay market? What could their cons be? Price?
I don’t know anything specific about the company MicroOLED (of Grenoble France). I’m a bit more familiar with eMagin as they have been around for many years in the U.S. and then there is obviously Samsung that has made major investments in OLEDs of various form factors large and small. OLEDs have been touted as a solution since at least the late 1990’s when I first got involved in LCOS. OLEDs have been marketed as the holy grail of small displays for the last 15 years but they have yet to successfully compete.
I don’t follow OLEDs much these days, but Organic LEDs have historically been plagued by instability/lifetime issues and very high costs. Being “organic,” the least amount of oxygen (due to manufacture or leaks in the seal) will cause them to break down. Last I looked, they were still well over 10X more expensive than an equivalent resolution LCOS solution.
While the cost and lifetime are the issues most people talk about, another serious issue is brightness. If you want to make a see-through display that is visible outdoors in sunlight, most OLEDs have had a problem providing enough brightness. A transparent display used in sunlight can need about 20X the brightness of a non-transparent display used indoors. With LCOS you separate the problem of illumination, using LEDs that have more than enough brightness, from modulation (controlling the light) by the LCOS device. Relatively small and cheap LEDs can provide plenty of light. OLEDs, in contrast, struggle to provide enough light without seriously degrading their lifetime (driving the OLEDs harder can severely reduce their lifetime).
You may note that the biggest application (at least that I know of) is in Samsung’s cell phones. Cell phones are a bit of a special case in that they are generally not displaying for many hours a day and they get replaced after a few years. This means that the display does not have anywhere close to, say, the operating hours of a television in its expected lifetime. Also, at least on the Samsung OLED displays I have observed, they limit the brightness of the display based on the content (if a large area of the display is bright, they dim the whole display).
I think for Samsung, cell phone displays represent a market for them to perfect the technology precisely because cell phones don’t have the operational lifetime issues of other displays and yet are high volume, giving them a lot of “learning curve.” In spite of this, you have not seen OLEDs being used yet in any products where cost is an issue. Maybe they will someday, but the question is whether that is still 1, 5, or 10 or more years away.