Contact


122 Comments

  1. Hello Karl

    I work for QD Laser, Inc., which is developing an SHG green laser.
    I have met the CEO of Syndiant in Japan, and know the technology well.

    I am a believer in flying-spot-type projection.
    BUT it is not easy for it to take off, due to the many issues in the CE world that you know well, and I mostly agree with your comments.

    How I wish pico projection could jump over the chasm and break into the CE world…

    Looking forward to your next article.

    Michael

    • Hi,

      I visited QD Laser a couple of years ago on one of my visits to Japan and am aware that QD lasers were aimed at flying spot = LBS. I don't remember if we met, as there were a number of people at the meeting. I have been following lasers for projection for a long time.

      The price and performance points for consumer products are very demanding.

      KarlG

      • Hi Karl

        I appreciate your blog. Actually, I am the same person as Mike Takeda… I am sorry to say I cannot comment under my real name on behalf of QD Laser, because I do not want to affect stocks such as MVIS. So I do my best to be fair in my blog comments.

        As far as I know, your blog is the most credible in the world on pico projection for the CE world.

        I may know more about what is happening with DGL and SGL, but it shall not be made public, at least not by me.
        Since I worked for Sony for 20 years, I know the CE world very well. Laser prices are in a chicken-and-egg situation now.

        So I wish we could correspond directly by email.

        Best regards and stay warm,

        Michael

      • Are you talking about zSpace the company or something else? I have not seen them personally, but taking a quick look at their patents, they use an LCD display and polarized glasses to support 3-D stereo viewing. The trick is that they have a special screen with every other pixel polarized oppositely. They can then use polarized glasses, as with a 3-D movie, so each eye gets its part of a stereo image. They also track the glasses and the pointer device.

        Unlike a movie where everyone sees the same image, by tracking the glasses they can adjust the image based on the user’s position.

  2. FYI

    Karl Guttag Weighs In On Himax And Google Glasses
    Mar 5, 2013 2:43 PM | about stocks: HIMX
    In the wake of my article on Google Glass and Himax, we had the honor of a response from Karl Guttag himself. Karl is one of the world’s foremost authorities on LCOS technologies (among others). His credentials are too extensive to list here, but I encourage everyone to check them out here: https://www.kguttag.com/about-karl-guttag/

    His response seems to have sparked the precipitous reversal in HIMX shares today. Keep in mind that I neither endorse nor condemn today’s move. That being said, I maintain my belief that shares of HIMX are poised to triple. How and how fast it gets there is not something I can forecast (in fact, I’m a terrible trader). My strength — and the basis of Pipeline Data — is in fundamental analysis.

    Without further ado, here’s Mr. Guttag’s note…and my reply.

    _______________________________________

    Mark,

    Thanks for referencing my blog and your kind words about my expertise, but I would like to correct/comment on a few things in your article and related comments.

    As my blog (and other comments) have pointed out, I think it is unlikely that Google is using a Himax LCOS panel in the newer design. I was just pointing out the fact that the old prototype used a Himax panel. Himax’s current LCOS site does have the appearance of being “abandoned” with broken pages and missing links.

    It is no simple matter to go from reflective LCOS to a transmissive panel, as the technologies to form the transistors are radically different. The major technical difference is that a transmissive panel requires the transistors to be on a glass/clear substrate rather than silicon. Other companies such as Kopin and Epson are much more established players and are much more likely to be providing a transmissive panel than Himax. I don't know what information you got from Himax, but there is a chance that there was a communication error.

    Himax has sold their color filter LCOS panels into products in China and/or India for several years, so selling 20K in a quarter would not necessarily indicate a build-up for Google Glass.

    Additionally, as my blog points out, I am more than a little skeptical that any head-mounted display, including Google Glass, is ready for "prime time" high-volume sales any time soon. Just because a lot of companies are looking at and researching something does not mean that it is about to happen in high volume. Head-mounted displays, while solving some problems, have a whole host of issues that I have yet to be convinced have been solved.

    Karl

    ________________________________________

    Hi Karl,

    First, I thank you for your note…and more so for the expertise you routinely share on your blog. I believe it is a must-read on the topics you cover.

    In regards to your comments, I concede all technical points. You are by far the expert on this subject. Along those lines, I read and understood your thoughts regarding the likelihood of Google utilizing a transmissive panel in the commercial-launch version of Google Glass.

    It would be understandable to dismiss Himax on that basis. Indeed, their website could use some updating, but I assure you that it’s functional. http://www.himaxdisplay.com/en/product/info.asp

    Comparisons to past iterations (via the Internet Archive's Wayback Machine) show that changes are made fairly regularly. Most importantly, its ties to LCOS are alive and well. My colleagues and I have contacted several folks in the industry to check on the status of each competitor's offerings. Specific to HIMX, I asked the company directly. This was the response:

    Mark,

    I just heard back from the CFO, Jackie Chang, on your question.

    The answer to your first question is “Yes” the Company does offer a transmissive color filter device.

    Best regards,

    John Mattio
    MZ Group |Senior Vice President – MZ North America

    I have no doubt about your assessment of the difficulty involved in moving from reflective LCOS to a transmissive panel. Not surprisingly, my communications with Gartner Group yielded no refutation of any of your points either. However, I believe that the feedback from HIMX's CFO implies that they have indeed crossed those tough but surmountable hurdles. Indeed, $80 million of annual R&D spending can go a long way.

    Further, as your excellent detective work revealed, they were in the prototype. While that's no guarantee of being designed into the final product, indications seem to point in that direction, including the company's claim to offer a transmissive CF device, along with its ability to offer Google a color sequential device, should they choose to pursue that path in future versions. I haven't been able to identify any other company that offers this combination of capabilities.

    As for your skepticism regarding the readiness of Google Glass for prime time, that is surely the debate du jour! Sergey Brin certainly did his best to show what Glass can currently do and moved up Google’s timeline for launch. I can only assume that they feel confident in the progress they have made.

    I have no doubt that the first iteration will reveal issues that will need to be addressed. Battery life and price come to mind. That being said, it is my understanding that HIMX is mulling a multi-fold increase in its manufacturing capacity, which is already measured in the millions.

    Frankly, I wouldn’t consider a couple million units to be a runaway hit. However, for HIMX it would represent substantial upside relative to what is currently baked into the Wall Street estimates. Such upside and future prospects could move the stock toward the “growth” category in investors’ minds which would warrant a EV-basis P/E far in excess of the single-digits it commands today.

    In short, I don’t refute any of your points — our facts are not in conflict. Rather, your research and mine seem complementary.

    Thanks again for your generous contribution. I welcome the honor of any further thoughts. My curiosity is always piqued by the pursuit of the truth and you provide much to the world in that regard!

    Kindest Regards,

    Mark G

    Disclosure: I am long HIMX.

    Stocks: HIMX

    • geoffreyporter,

      Sorry, but for some reason my "auto spam" filter thought your post on 03/05/2013 was spam, and when I was cleaning out my spam folder tonight I found it. I had to manually "un-spam" it.

      Karl

  3. Any update on the availability of dgl and pricing?

    Since early 2012, have there been any significant advances in reducing form factor size?

    I see that the guy at Microvision, the head of R&D, used to be at Texas Instruments. Did you know of him when you were at TI? Your opinion on his skills?

    • As far as direct green laser pricing goes, it is pretty much doing what I predicted back in November 2011; namely, they are too expensive for high-volume products. Witness the nearly total lack of products using them in the market (just a very expensive test-the-market HUD by Pioneer). It is also not clear what products are going to drive volume to justify high-volume DGL development. Right now there is approximately zero volume in DGL. DGLs, in addition to being very expensive "per lumen," don't put out many lumens. There is a lot of progress with blue lasers stimulating phosphors (so-called "hybrid" projectors), but you just can't buy a high-powered DGL for any amount of money.

      I don’t think I ever crossed paths with Dale Zimmerman while at TI.

  4. What do you make of the April 3, 2013 announcement by Microvision that they signed a development agreement with a Fortune 100 electronics company to incorporate MicroVision's PicoP display technology into a display engine that could enable a variety of products? They paid Microvision a $4.6 million development fee over 13 months. They go on to say that "the companies have begun commercial negotiations with the expectation that licensing and component supply agreements would constitute the next stage of engagement leading to the OEM's introduction of commercial product."

    Secondly, Microvision appears to no longer be making their own pico projector and has gone to a licensing model. Then, in Sept 2012, they signed an agreement with Intersil to partner on the development of ASICs for the PicoP, in order to:
    – integrate advanced features such as virtual touch and proximity sensing;
    – increase brightness;
    – reduce power consumption and component size;
    – lower overall cost.

    Your comments on the Intersil ASIC agreement and whether it makes sense?

    Thank you

    • Nothing much seems to change at Microvision; they have been a "startup" for 20 years (founded in 1993)! All the while, as one product fails in the market, they announce some new "big deal." By all appearances, the Pioneer HUD has failed (it only sold a few units, with no follow-on orders to speak of), so now they have a new deal they are working on. Maybe someday we will see if the $4.6 million development fee turned into anything that will make money; more likely it will fade away like so many other "big deals" they announced with much fanfare in the past. I think the Intersil deal is being much overblown as well; WHERE are the products based on this deal?

      Wow, 20 years of failure and they have a “new business model.” What choice did they have? They were out of money. Each time they fail they announce a bunch of new “goals.” WHERE are the products? By going to a licensing model they reduce their burn rate. The problem is WHO is going to build products and then pay Microvision significant amounts of money?

      Microvision misled people about the state of direct green lasers. In 2011 they implied that DGL would be cost effective in 2012 and 2013; once again, here we are in 2013 and WHERE are the products using direct green lasers?

      Microvision has a 20-year track record of selling new stock with a new story. The problem is that the story never pans out into real products. Their image quality is terrible, their costs are ridiculously high, they have poor power efficiency, and the few times they actually get a product into the market, it fails miserably.

      What on their list of "new goals" has made them competitive? Touch and proximity sensing can be done better and cheaper using camera sensors (Microvision's virtual touch is VERY primitive), other technologies are an order of magnitude brighter with products in the market using LEDs, the competition has better power consumption per lumen than Microvision, and the competition has much lower cost. Microvision is way behind and not catching up.

      • I find your negative take on LBS most unsettling. Consider this year-old article, which you commented on below:

        http://www.globalsources.com/gsol/I/Mobile-phone/a/9000000125366.htm

        MEMS laser scan engines, meanwhile, are considered a suitable solution for smartphones and HUDs owing to advantages of better color saturation, focus-free function, good heat dissipation and low power consumption. In addition, these have a high contrast ratio, reaching 5,000:1. As such, suppliers expect MEMS penetration to rise beginning next year.

        and

        For luminous flux, DLP modules are aiming at 9 lumens, and LCoS and MEMS more than 30 lumens. MEMS units have the highest among the three technologies with up to 15 lm/W, which can rise further to 20 lm/W next year and 40 lm/W in 2015 or 2016.

        Progress is often on its own schedule, and as you've stated elsewhere on the blog (I think you were talking about VR), just because a lot of companies are doing research into something doesn't mean mass production is just around the corner.

        What do you feel about LBS currently?
        Or are you dead set against it still?
        Are there no redeeming qualities in it?
        Has image quality improved?
        Has brightness improved?
        Power consumption?
        Cost?
        Size?
        Usability?
        3D scanning ability, i.e., "virtual touch" as you called it?
        Are they still "way behind" and "not catching up"?

        I am a shareholder in MVIS, and although I would rather hear good, positive things about the tech, I would also like a fair, updated assessment even if it includes negative points. The key word here is fair.
        Where do you stand on LBS today?

      • I like to think my take is objective. If it is "negative," it is because the information about LBS is all negative. It is a concept that "looks good on paper but does not work out in practice."

        If you go around on internet forums (Yahoo, Reddit, Investor Village) you have to understand that you have a VERY pre-selected audience of "true believers." Microvision went public back in the mid 1990's and so is a 20+ year old "startup" that has lost over $500M in shareholder money to date and continues to lose about $2M a month whether they ship units or not (generally they lose more when they ship more, not a good trend). People who stop believing in the stock after many years of failure normally just go away, particularly when faced with the religious believers who attack them for challenging the "religion." The faithful there, who often have little to no real technical understanding, think they understand things better than those who do understand the technology.

        I don’t consider globalsources to be a reliable source of information. They are one of many self proclaimed “analyst” publications that make their money by selling reports to companies that WANT them to print big numbers. From what I have seen of their reports I seriously doubt they have a clue what they are reporting on and just reprint what they are told by their the people buying their report (this is usually zero line between their sales and analyst). But there is a built in bias for them to report big numbers as this will sell more reports (if they report the market is small, the companies won’t buy the report).

        In the case of the MEMS laser scanning claims you quoted, these come directly from the MEMS laser scanning companies without any scrutiny. 5000:1 contrast is a meaningless number when you realize that even a very dimly lit room will have about 1 lumen per square foot. So if you have, say, a 20 lumen projector (about the most an LBS projector can get into a cell phone form factor) and you project a 2 square foot image, you only have about 10:1 contrast. If you turn on a little light, your contrast goes down to much below 10.
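
        To put rough numbers on that (a back-of-envelope sketch in Python; the figures are the ones assumed above, and screen gain is ignored):

            def effective_contrast(projector_lumens, image_area_sqft, ambient_fc):
                # White level is projected light plus ambient; a "black" pixel
                # still reflects the ambient room light, which sets the floor.
                projected_fc = projector_lumens / image_area_sqft  # foot-candles on screen
                return (projected_fc + ambient_fc) / ambient_fc

            print(effective_contrast(20, 2.0, 1.0))  # ~11:1, no matter the 5000:1 spec
            print(effective_contrast(20, 2.0, 5.0))  # ~3:1 with a bit more room light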

        The luminous flux numbers you quoted are nonsense/garbled. I don't think anyone has hit 40 lm/W measured "end to end," including the display, control, and light source. I think the best is in the 15 to 20 range today.

        The projection market in general has been shrinking. It is losing out everywhere to smaller, lighter, higher-resolution direct-view displays. Nothing in LBS changes that. The image quality of LBS is poor, the power efficiency is worse, and the cost is much higher. It is focus free (but so can DLP and LCOS be if they use laser illumination). Microvision keeps going because they keep selling/diluting the stock, and thus their stockholders keep subsidizing LBS.

        As a technology for projecting images, it is dead, IMO. They will keep getting people to try it, and the products will keep failing to find a lasting place in the market beyond some very limited sales. In the real world you find very few situations where you can use a pico projector. Additionally, when objectively measured, the resolution of LBS is consistently about 1/2 in each direction (or worse) of what they claim. Their image quality is still VERY POOR compared to other technologies. Also, their power consumption per lumen is consistently much higher when objectively measured.

        Microvision plays a bit of a shell game: when LBS fails at one thing, they switch to something else (they went from near-eye to pico projectors), and now they are touting gesture recognition. LBS is frankly a lousy way to do gesture recognition. Among the problems: it is too slow, and you can get "double images" if the hand is moving in the "wrong" direction (in the direction of the scan), or the hand can be missed entirely or appear to jump (if the motion is opposite the scan). You can get vastly better and cheaper gesture recognition elsewhere. The better gesture technologies today use time-of-flight cameras, which give much higher resolution in addition to distance, without the "temporal aliasing" caused by scanning.

        Both the technical and the market momentum are behind other solutions, particularly flat panel displays and cameras, which have massive market momentum as well as technical success. IF you want projection, there are better ways to do it, and projection in general is a declining market. LBS is a lousy way to do gesture recognition, even for 3D, compared to time-of-flight cameras (which leverage the technology advances from the hundreds of millions of camera chips being sold each year). There may be some small-volume niche applications, but not a large, broad-based market that supports a component supplier that only gets a small cut. There is a fundamental "catch-22" with Microvision: any market they can point to with any volume needs a very low price, which they can't meet.

        The one area where Microvision has moved is LIDAR, which is pretty much very low-resolution 3-D scanning. I don't really follow this market/technology, but people have been doing it for a long time without Microvision, and I seem to be seeing them go more and more to camera-based technology to lower costs.

  5. Hello Karl,

    You have a very interesting blog here.
    I would like to ask your opinion/advice on the usability of our packaging technology for LCOS devices.
    We have a unique method for depositing borosilicate glass at low temperatures and use it to make glass caps with a thin glass dam (5 to 20 µm high), manufactured at wafer level ( http://www.lithoglas.de/app_wlcsp.html ). The advantage is that the glass cap, including the dam, is made of a very reliable material with a CTE well matched to silicon. The cap dimensions are highly precise, and very low-profile caps can be realized. They can be applied by pick & place after singulation or at wafer level.

    Looking at LCOS devices, I assume that such glass lids could be used to encapsulate the actual optical LCOS area in packaging. Would you agree, and could you please comment on the specific packaging requirements of LCOS chips?

    Your expertise would be very much appreciated.
    Thanks & best regards,
    Ulli

    • I’m not sure if I fully understand what you are proposing, but I took a quick look at the web page. At least on first look it would not appear to work LCOS packaging as it the dimension of your dam/gap are different (much smaller) for LCOS than say for camera sensors. Depending on the Liquid Crystal, the typical cell gap range for the LC in LCOS devices ranges from a little less than 1um to almost 2um (and the trend is to smaller gaps). The gap has to be very small and very precisely held across the device. Even if the gap could be made 1um with your glass process, I can foresee significant manufacturing issues in trying to use it.

      I wish I could be of more help, but I don't think it would work for LCOS manufacturing.

      Regards,
      Karl

      • Hello Karl,

        Thank you for your quick response.
        Very precise gap heights of 1 or 2 µm or less are easily feasible, as we use a well-controlled (plasma-assisted) e-beam evaporation process. The lateral patterning of the glass films is done by simple lift-off lithography (mask aligner).
        Nonetheless, the caps (or wafer-level caps) need to be bonded to the device. This could be achieved by glue bonding (~1 µm bond line with wafer-level processing) or metal/alloy bonding techniques.
        I would be very interested to learn where you foresee the issues. Could you maybe comment on how the glass lids currently used are fabricated and attached to the device?

        Thanks & best regards,
        Ulli

      • Most LCOS today is assembled using glass beads of the desired cell-gap thickness suspended in glue. Silicon, being crystalline, does not conform/bend without cracking, but the glass side tends to conform/bend.

        There are some variables in the processing. The older way is to leave a fill port in the glue, which allows the LC to be filled by first applying a vacuum, then immersing the cells in LC, releasing the vacuum, and sealing the fill port. Some more advanced fill techniques require an extremely precise amount of LC to be applied to the cell and a completely encircling bead of glue (with spacer beads) surrounding the cell, so that when the glass is applied there is no separate seal process. After the cell is filled and sealed, the device is heated to the "clearing point" (about 80°C to 100°C) of the LC and then cooled.

        The issue I see with your process is making a good seal. In the bead-and-glue process, it is mostly glue with a few beads, which gives a large amount of surface contact. I'm also concerned with how your more solid glass ridge would expand and contract relative to the silicon under temperature cycling.

        I can’t say it wont’ work, but it takes many millions of dollars of investment to perfect a sealing process. I’m not sure they would see what advantage your process would have to invest in development a new process.

        Karl

      • Hello Karl,

        We have developed a wafer-level glue bonding/sealing process that gives us a glue bond line thickness of about 1 µm, with very good reliability results.
        However, I fully agree that it will be difficult to enter LCOS packaging if we cannot point out a clear advantage of our technology over the established processes. Therefore we are trying to better understand the specific requirements.
        Can you please comment on the temperature budget and typical reliability requirements (THS, TCT) for LCOS devices?

        Thanks & best regards
        Ulli

    • Compound Photonics has been around for about 5 years, but I have never seen any of their projected images. The images in the video are too brief, small, and often out of focus. Usually companies that have great-looking displays are not afraid to show a lot more. I don't know of Compound Photonics showing their images publicly anywhere. I know they use frequency-doubled green lasers, and it is very hard to remove speckle given the generally narrow line width these lasers produce. There was talk of widening the line width of frequency-doubled green lasers, but I have not seen any results.

      I also don’t understand where Compound Photonics expects to sell their product. As I point out in Whatever happened to pico projectors embedding in phones? I don’t think there is going to be a market for embedding pico projectors into things like cell phones.

      So far, I have seen CP pick up a lot of "distressed properties" such as Syntax-Brillian, the Motorola Com-1 fab in Phoenix, and "Europe's Largest Gallium Arsenide Semiconductor Manufacturing Facility from RFMD," which is quite a lot of buying for a company that has never been known to produce much other than some old Brillian near-eye products. It baffles me.

    • Thanks,

      I don’t follow the direct green lasers a close as a used to. From everything I can tell, direct green lasers are still a long way from being viable in high volume products. If anything the situation looks works than it did 2 years ago. There are not a lot of applications for DGL other than projection (there is some in Gun sights and some very specialized application) and the projection market as a whole is starting to shrink. There are still no lasers that can compete on cost, light output, and efficiency with LEDs for projector applications.

      I’m sorry to say this because at one time I too had big hopes for DGL, but when you look at the possible applications that might have some volume, DGL don’t look to be economically viable for a long time. “Large Venue” such as movie theaters which are starting to use lasers are using frequency doubled green lasers because they can output so much more light today, more efficiently, and are less costly.

      Google Glass and similar near-eye products will be much smaller and cheaper using LEDs. It would be absolutely silly to even consider lasers for all but the most extreme (expensive) near-eye products.

      If you look at the market, there was exactly one DGL projector product in the market in 2013, the Pioneer HUD, and they have turned around and gone with LEDs in their newer (and less costly) design.

      Maybe there will be a surprise at CES this year (but not likely), but I don't see DGL being competitive for many years going forward. I expect that the laser companies will concentrate on blue lasers stimulating "white" phosphors for making all manner of lighting. There could be a huge market for blue lasers (in lighting), but I don't see it for direct green lasers right now.

  6. Dear Karl,

    Thank you kindly for your incredibly helpful blog and insights into pico projectors. I wondered if you would mind sharing your thoughts on the best type of projector to use in a bright room to project colorful images onto a 30″ x 60″ surface at a distance of 50 to 60″. I looked at several pico projectors, but the image is quite washed out by the ambient light. Do you know of one that might be worth trying, or is there another type of projector that you might recommend for this kind of application? I appreciate any thoughts or guidance you may have.

    Sincerely,

    Erin

    • There isn’t a pico projector that is going to be bright enough to support that large and image with much ambient light. Pico projector are generally in the 20 to about 300 lumen range and for a larger image in ambient light you would want over 1000 lumens.

  7. Hello Karl,

    I like your website and your expertise. Did you go to CES this year? Sony launched a small laser projector to turn tables into screens. Do you know how many lumens this projector has? Is it enough for a kitchen table? Is it Sony's own development? If not, what technology are they using? None of the videos from CES show any scan lines, which is very surprising to me. Did they come up with a new technology, or is it not a real laser projector?

    Best regards,

    Chris

    • I didn’t get to see the Sony tabletop at CES but I think they if they are using a laser to illuminate a panel (either LCOS/SXRD or DLP). It could be a “hybrid” where they use red LEDs or laser and a blue laser that also creates green using phosphors (this Hybrid approach was made famous by Casio).

      A "laser projector" is one that uses lasers for illumination. This includes laser scanning (which is what you are referring to with the "scan lines") and lasers illuminating a panel, which is much more common. Using a blue laser hitting a phosphor to create green would also be called a "laser projector" by a loose use of the term.

      There is no way that they could have a laser beam scanning projector that is bright enough, with a large enough image, to work with any significant amount of ambient light and still be safe to use. Also, you would see noticeable speckle. All the signs point to it being some form of "laser illuminated" projection, and since no speckle was reported, I'm guessing it is likely a "hybrid" laser design (using green phosphors).

      I wouldn’t get my hopes up for this to make it as a product. You need a LOT of lumens to overcome the typical ambient light on something as big as a table. This stuff was just “demo-ware” in my opinion.

  8. Hi Karl – I really appreciate the awesome information on your blog.
    We are in the process of developing a vehicle HUD and are working through finding the right technology for us (doesn't wash out, price, size, etc.). What are your thoughts on the best tech to use?

    Cheers!

    • I saw your video. I certainly wouldn’t use a transmissive LCD as they block too much light.

      Unfortunately, it could be a conflict of interest to help you right now.

    • The short answer is that there is no “best” de-speckler. There are options that may or may not work/help depending on the application.

      The best type of de-speckling will depend on the type of projector (a panel such as LCOS or DLP, or laser scanning), the application, and the size and cost of the projector. A small pico projector is a whole different problem than a large theater projector. A pico projector is generally only going to use one laser per color and has very tight cost constraints, whereas a large theater projector will have multiple lasers per color (and the speckle goes down with the number of mutually incoherent lasers) and can afford to employ more expensive techniques. Additionally, most of the best options for reducing speckle are not possible/won't work with laser beam scanning.

      The basic problem is the coherence of the laser light. Projector applications want the extremely high f-number (and low étendue) of laser light, as it makes the optical system very efficient with much smaller optics. It should be noted that even a 20X or more reduction in instrument-measurable speckle can still be objectionable to a human.

      In applications where there is a screen (front or rear projection), a simple, cheap vibration of the screen (say, with a cell-phone-type vibrator) is quite effective in reducing speckle. But in cases where vibrating the screen is not possible/practical, getting rid of visible speckle can be quite difficult.

      The "ideal" way is to remove the speckle at its source, in the design of the laser diode (or, in the case of large projectors, by using multiple lasers that are mutually non-coherent). A "good" laser has a very narrow "line width" (bandwidth), generally on the order of 0.1 nanometers, with high coherence. Projectors need "less good" lasers with wider bandwidths, and laser diode companies are claiming bandwidths on the order of 5 nanometers, which has a significant impact on speckle.
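
      For a feel of why the wider line width matters (a back-of-envelope addition, not from the original comment, using the standard coherence-length relation L ≈ λ²/Δλ; the 520 nm green wavelength is just an example):

          wavelength_nm = 520.0  # example green wavelength

          for linewidth_nm in (0.1, 5.0):
              coherence_um = wavelength_nm ** 2 / linewidth_nm / 1000.0  # nm -> um
              print(f"{linewidth_nm} nm line width -> ~{coherence_um:.0f} um coherence length")
          # ~2704 um (2.7 mm) at 0.1 nm vs ~54 um at 5 nm: far less opportunity
          # for the coherent interference that produces visible speckle.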

      Among the design characteristics to reduce speckle in diode lasers is "mode hopping," the changing of the laser coherence/wavelength at a moderately high frequency (10 kHz to 100 kHz). The speed of the mode hopping is high enough that it works for panel-based applications, which average out the hopping over enough time, but it shows up as modulation of the image (noise) in laser beam scanning applications.

      Panel applications have many more options than laser beam scanning. In addition to using multi-mode lasers, they can accept spreading out the tight laser beam, disturbing it with a combination of multiple path lengths and/or time-varying path lengths (by electrical path modulation and/or spinning diffusers), and then recombining/homogenizing the light.

      Hopefully the above is at least a start at answering your question. If you could be more specific about the application, I might be able to be of more help.

    • I have not heard anything specific other than what has been in the news. Lemoptix was one of many companies that tried to develop a laser beam scanning (LBS) solution; there are probably more than 20 companies that have developed LBS demo products. Lemoptix was targeting automotive heads-up displays (HUD) and near-eye augmented reality glasses. They have been around for a while (since at least 2010, or about 5 years) with no product. I think they may have had a bigger chance with HUD than near-eye.

      I’m not sure what Intel saw in them. Maybe they just wanted to cover a bet just in case and the price was right as they saw it. Usually when Intel invests it has something to do with other products they make that would couple to the product they are investing in, but I would not make too much of it at this point.

  9. Hello Karl,

    When we were both at T.I., I did product definition on the bipolar memory mapper, the 9911 DMA controller, and the 9909 Floppy Disk Controller.

    Perhaps the "killer app" for LCOS pico projectors is to integrate the projector into cases for the laptops and smartphones of presentation professionals in sales and marketing. That would eliminate the need to put the projector in the phone. The cases could be expensive "status" items (and allow for a separate power supply).

    …Just a thought. Have a great day!

    • Wow, Rick, thanks for connecting and for the trip down memory lane. I actually started working on the 9909 for a few weeks when I first joined TI (I remember studying the encoding method and the issues with short- and long-term timing variation). But then they moved me to the 9918 VDP (the first "Sprite chip," used in the ColecoVision, the TI Home Computer, and the MSX computer in Asia), and I have been involved with pixels for most of the 37 years since. I was thinking that the 9909 silicon was designed in Bedford, UK, but I could be wrong on this (a number of chips whose architecture I led were designed in Bedford/Northampton).

      Embedding projectors in laptops and phones is problematic. The big issues are having enough brightness to work in typical situations and what/where you are going to project onto. I outlined this in the following article: https://www.kguttag.com/2013/08/04/whatever-happened-to-pico-projectors-embedding-in-phones/

  10. Dear Mr. Karl Guttag,

    I’m a Japanese video game historian. I’m deeply interested in the history of video game graphics, and your articles related to 9918 are always very helpful for me. Thanks a lot!

    By the way, have you ever read the following website?

    http://www.videogamehouse.net/gamemain/cartsfh/gamevisiondemo/

    Please take a look at the 'TOUCAN'S TRIVIA' part. Anthony Cote says Milton Bradley "actually created the 9918 graphics chip in house and TI manufactured the die for us." Of course, the 9918 was actually created by you and other TI staff, so his memory must be mistaken somewhere. But still, do you think there is any possibility that Milton Bradley people could have been involved in the 9918's development to some degree? Please let me know your opinion.

    Best regards,

    • Thanks for your interest in the 9918. Of course you know it was widely used in Asia in the MSX computer by ASCII Microsoft.

      What a lot of people don't know is that Yamaha did a register-level-compatible superset of the 9918 (a sort of Z80 version) that was used in the Nintendo Game System. What I don't know is why TI let them do it (maybe there was a licensing deal).

      As far as "TOUCAN's TRIVIA," I had not seen that before, and it is clearly untrue in general and an out-and-out lie about the 9918. I never heard of anyone from Milton Bradley having anything to do with the 9918, and they certainly had nothing to do with its design. Whether they worked with TI on the 99/4 before I got on the program, I don't know. There was nothing defined in the way of a display chip beyond wanting background graphics and a number of "Sprites" (my understanding is that David Ackley coined the term that everyone else then used). Whether Milton Bradley met with the Home Computer group or not I can't say, but they had nothing to do with the 9918.

      The 9918 was my first chip. Pete Macourek and I did all the architecture and design related to Sprites and the timing of all the memory fetches for the background (the coordination of memory fetches was the constraining resource). As ours was the very first consumer video chip to talk directly to DRAM, it was really tricky to get enough memory accesses (which led, many years later, to my work on synchronous DRAMs and VRAMs/GRAMs). We were told to do 4 or 5 "Sprites" (TI's term), and Pete and I worked out how to do up to 8, with the limitation of 4 per line. How it works is outlined in Patent 4,243,984 (http://www.google.com.jm/patents/US4243984)

  11. Hello Mr. Guttag,
    Thanks for all the effort you’ve put into this area. It’s helpful and fascinating.

    I was watching videos of the Sony and Celluon, thinking about the unique bi-directional scanning of the Microvision engine you described, and I noticed the effect of the laser scanning and the video camera being slightly out of phase (the same mismatch that normally causes "rolling bars" when filming a CRT image), producing a curious left-to-right wash effect on the MP-CL1 (most pronounced beginning at the 4m55s mark):
    https://youtu.be/_VZZJnfgGTI?t=4m55s

    I don’t know if it would serve any purpose beside satisfying curiosity, but have you ever seen or tried to acquire video of the LBS image from a high-speed camera? It would certainly illustrate things like beam alignment and fluctuations of scan speed from the edge to the center of the image. I’m sure there are one or two science-minded youtube users with high-FPS cameras who would be responsible and happy enough to accommodate someone with a projector to lend.

    One last question. It's been over a year since the Lenovo concept phone was unveiled; to your knowledge, are there any new products on the immediate horizon using the bTendo engine or offering any improvements over the current Microvision engine? If one were holding off on buying a Sony/Celluon, how long might they be waiting before a new LBS projector comes to market? I haven't found any announcements that aren't years old.

    Thanks,
    Joe

    • Joe,

      As my articles have pointed out, the scanning process is generally worse than that of CRTs. In particular, it is interlaced and offset such that much of the image is only refreshed at 30 Hz, and because of the bi-directional scan (CRTs only have the beam on in one direction), the resolution varies horizontally (the outer parts of the image are 1/2 the resolution of the center).

      You don’t need a high speed video camera to isolate the scan process, all you need it a decent DSLR. I have captured many images with a Digital SLR cameras to capture individual scans (if you set the shutter to about 1/100th of a second you tend to capture over 1/2 of the screen of one filed with a roll bar in it) and I have used a few of these in my articles on the scan process. You can see the beam alignment and other issues in these still pictures if you lock down the camera and the projector and capture a number of images (getting the “roll bar”/field separation in roughly the same place).

      I don’t have any information good or bad on any production bTendo/STMicro based phone/projectors. It is hard to know with “technology demos,” many times they never make it to market and/or they take a number of years solve all the problems.

      The people pushing Microvision stock are saying (put this in the RUMOR category) that there will be a 50 lumen version of the Sony engine. This certainly might happen. I would not expect any dramatic improvement in measured resolution (the effective resolution of the "720p" devices is more like 640 by 360 pixels), but they might claim more (like the lie that they have 1920 by 720 resolution). Qualper (a Chinese phone maker) has on the market a ~$1055 cell phone with a Microvision LBS in it. I have no idea what the real lumens or resolution is (their specs are highly suspect), but likely it uses the same LBS mirror (and maybe the same Sony engine).

      LBS is one of those things that sounds much better in theory than it works in practice. Beyond that, you have to look at the practical application of pico projectors, as I have outlined in my article on the subject: https://www.kguttag.com/2013/08/04/whatever-happened-to-pico-projectors-embedding-in-phones/

  12. Hi Karl.

    I found your blog very informative, with very useful knowledge about AR and pico projection.

    I want to ask a question. I live in Turkey. I'm thinking of a near-eye display that will act as a second screen for smartphones. It would be a helpful product for performing the religious ceremonies of some people here. I would prefer it to be see-through.

    For the optical module part, how should I proceed? Based on your knowledge, can you advise a company that will design the housing for the optical module? It should be a prism type. I want to go with the cheapest mass-production-cost solution and need at least 420p resolution.

    • I assume you mean you would like a near-eye solution that is see-through. It sounds like you only want modest resolution (about VGA, or 640 x 480 lines).

      A "prism type" will be less expensive than, say, a light-guide module (the light guides themselves end up being expensive).

      I don’t have a ready list of optical companies for near eye as most of these efforts are captive (Google, Microsoft, Vuzix, etc) to the company making the whole system. iView LTD http://www.iviewdisplays.com/en/products/products.php?cid=001 and ShinyOptics http://www.shinyoptics.com.tw/ are a couple that I know of that sell near eye optical engines. Lumus http://lumus-optical.com/ has a light guide based engine that is reportedly expensive.

    • Fritz,

      I have no plans right now to review the Motorola Mod Projector unless someone gives me a phone with a projector attached. I have looked at a number of DLP projectors through the years. I would guess that the Mod Projector is using the newer and smaller "tilt and roll" panels, which have "square/Manhattan pixels" that eliminate the "diamond pixel" artifacts I have written about with the earlier DLP pico projectors. My expectation is that the projector will be much more efficient due to the larger tilt angle of the tilt-and-roll pixels, but I hear this comes with some loss in contrast.

      Is there something specific you would like to know in this regard?

      Karl

  13. Hi Karl,

    I noticed an interesting pico projector development in my email inbox today: the UO Smart Beam Laser, which appeared at #1 on the Amazon best-seller list. I think it's the first LCOS-panel, laser-based projector to be commercially marketed.

    The projector is composed of lasers and LCOS panel(s) and is rated at 60 lumens.

    Since we have exchanged thoughts in the past about laser pico projectors, it's good to finally see a laser being used in a "panel optical system" rather than the limited LBS scheme. As a result, the UO is marketed as KID SAFE.

    I hope you might look under the hood of the UO Smart projector, or a similar one, for an in-depth review in the future. I'm kinda curious about the optics and illumination mechanism of this projector. –Ken

    • Ken,

      Sorry to be so long getting back to you. I have looked at the UO projector, but it was about a year ago. First, as a matter of disclosure, the panel inside is a Syndiant LCOS panel that I worked on when I was at Syndiant.

      The good news is that it is a "true" 720p projector, it is laser Class 1, and it is focus free. It has about 4X (2X in each of X and Y) the real/measurable resolution of the so-called 720p Celluon PicoPro and Sony MP-CL1.

      The bad news is that, at least on the one I measured a year ago, it is more like 45 to 50 lumens and not 60. More troublesome was that the color balance was pretty far off (red deficient, with a blue-green tint). Ironically, the PicoPro is too red and thus blue and green deficient. This is a matter of bad "tuning" in manufacturing, and hopefully it has improved in the last year.

      The UO optics have "100-percent offset," which is what you want in a projector. This means that if you set it down on a table, the image shoots upward and will not hit the table top, AND it is optically keystone corrected (with no loss in resolution); you don't need a "kickstand" or the like as you find with the PicoPro and the Sony MP-CL1. There is a little downside to this, as there are some chroma aberrations as you go from the bottom (which is best) to the top (which is worse); this kind of goes with the territory in doing 100% offset on a budget. This issue is really only noticeable on high-resolution black-and-white content and is not a killer defect.

      Frankly, the color/red-tint problem is the one "stopper" for me on the product. I told my friends at Syndiant to tell UO that they should fix this. I don't know if they have fixed it in the last year (hopefully they have). If it were me, I would support some form of user tint control in ANY pico projector. It turns out that US and European users tend to like warmer/redder/lower-color-temperature "whites," whereas most Asian users prefer a slightly colder/bluer/higher-color-temperature white, so there is no "best" answer (but both the UO and the PicoPro are way off even the "black body curve" for white, so they need more than just a color temperature adjustment). I'm also not wild about the cube form factor; I wish they had gone with a flatter shape like the PicoPro or the Sony device.

      Karl

  14. Hello Karl,

    You asked on the IV board about the Robohon projector specifications.

    The warning label is here (output 1 mW):
    http://cdn.gsmarena.com/imgroot/news/16/01/robohon-ces2016//inline/-728x/gsmarena_015.jpg

    The technical data of the laser projector is:
    1 mW, class 2 laser, R: 638, G: 518, B: 448 nm

    Pictures of the two available modules from Sharp:
    http://news.mynavi.jp/photo/articles/2016/01/19/wearable_expo/images/004l.jpg
    The left one measures 19 x 32 x 8 mm, the right one 23 x 40 x 8 mm.

    The full Robohon specification is here (Projector: HD (1,280 × 720 pixels) or equivalent):
    https://translate.google.de/translate?hl=de&sl=ja&tl=en&u=https%3A%2F%2Frobohon.com%2F

    • Fritz,

      I later found the picture of the label saying it is Class 2. This also means that it has less than 22 lumens of light output (it could be a lot less), as it is an LBS projector (even though I have not found them giving a spec/claim for the number of lumens). My guess is that it is on the order of 10 to 15 lumens, but I have not had a chance to measure it.
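
      For a feel of how raw laser power maps to lumens (a rough sketch using the Robohon wavelengths above and approximate CIE photopic sensitivities; note that, roughly speaking, a scanned-beam Class 2 rating reflects what a 7 mm pupil can intercept from the moving beam rather than the projector's total output, so the total power is well above 1 mW; the R/G/B power split below is made up):

          # Approximate CIE photopic sensitivities V(lambda) at the Robohon wavelengths.
          V = {638: 0.19, 518: 0.67, 448: 0.04}

          def lumens_from_mw(power_mw):
              # lumens = 683 lm/W * sum over colors of optical watts * V(lambda)
              return 683.0 * sum(mw / 1000.0 * V[wl] for wl, mw in power_mw.items())

          # A made-up power split that lands near the 10 to 15 lumen guess above:
          print(lumens_from_mw({638: 30.0, 518: 20.0, 448: 15.0}))  # ~13 lumens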

      Thanks for the image of the Sharp laser projection module, I had not seen that before.

      The 1280 x 720 pixels is a marketing statement and has nothing to do with its true/measurable resolution. The effective resolution is closer to 640 by 360 pixels; this is inherent in the Microvision beam scanning process with what they (falsely) call a 720p scanning mirror. You can have a projector with a 640x360-resolution DLP and feed it 1920x1080 content that gets scaled down, but that does not make it an HD projector, any more than the RoboHon has 720p resolution.

  15. Hi Karl,

    I’m currently an undergrad student in Arizona, studying physics – but on the weekends I’m a skydiver. Going skydiving is an amazing experience, which if you’ve never been, I highly recommend. And although it’s gripping, I constantly find my mind elsewhere, even in free fall. You see, I want to race. My dream is to one day have full face helmets that can display an augmented reality path through the air to fly down – like floating hoops. If you’ve seen Star Trek, it would be really similar to Captain Kirk and Kahn’s space jump scene (https://www.youtube.com/watch?v=4DHE7VS7lyw). Imagine if skydivers could sync their helmets up, and all race down the same track in the sky! That would be so cool!

    So that’s my dream, and I’m working towards it. I decided to start by just trying to put a HUD in a helmet, and I’ve just recently done that (https://www.youtube.com/watch?v=BaTm-QxCoEE) by using a pico projector and a projection screen (a rear projection screen, unfortunately, that’s all I’ve been able to get a hold of so far). However, the display is barely visible in full sun. While googling for solutions I came across Navdy, and you. Any advice for increasing display visibility in very sunny conditions?

    Also, if you had to put a display in a full-face helmet, would you use pico projection? I'm a novice when it comes to comparing optical technologies for real-world applications. I decided to go with projection since it was the most intuitive to me and required parts I could easily get my hands on (projector, projection screen, etc.). But I've wondered a lot if there would be a much better tool for the job, like a transparent LCD screen. Especially since it seems like a full-face helmet inherently has less strict constraints for an HMD (like you noted in your HMD post).

    Anyway, any advice would be greatly appreciated, and I’m a big fan of your blog – thank you!

    Jack

    • Jack,

      I’m can’t tell from a quick look at your video your configuration but it looks like you just have some flat panel display reflecting off the front plate.

      The problem with HUD’s in full sunlight is that you need very high “nits” (Candela-per-meter-squared) which is a measure of light per solid angle. A typical flat panel phone or tablet has about 500 nits, when you reflect it off say an un-coated glass plate you might get 15-20% reflected (depends on the angle) which would have you down to only 100 nits. You could use a brighter panel or the like but you are a long ways off. For a HUD in sunlight you want on the order of 15,000 nits or a factor of 15X. By having say a 50% light blocking in the visor you could cut this by a factor of 2. Common LCD panels are designed to have a wide viewing angle for ease of use and thus the light is very diffuse they are typically about 500 nits in the center and don’t fall off that much to the outsides. What you want is something where ALL the light is concentrated over a small angle.

      I don’t think any transparent display is going to work in the sunlight, period. A transparent LCD would not have a light source for making the image and a transparent OLED would never have high enough nits (the light is too diffuse and not bright enough).

      You have an additional problem in that you really want the "focus point" of the image to be a meter or two beyond the face plate so the image will be reasonably in focus when you look ahead.

      You should know that with a typical "HUD" using a projector, you DON'T project directly onto the windshield or combiner (it does not work, as you need to have a "real image"). If you shoot a projector directly at the glass, it will bounce off at an angle that will not go toward your eye, and it would not be an image you could see anyway; try looking at a projector in a mirror, and you either see a bright circle of the projector lens or nothing, never an image you could recognize. What is done with car HUDs is that you create a very bright, small image with very high nits and then use optics to magnify the image and move it out in space; the windshield or a separate combiner then acts as a semi-transparent mirror to reflect the magnified image back. Automotive HUDs use a small LCD screen with focused, bright LEDs to generate the image. As a more power-efficient alternative to the LCD screen with bright LEDs, a pico projector can form an image on a small high-gain screen (usually transmissive, but it can be reflective), and then the optics (which might include a spherical combiner) magnify the image and move the focus point.

      Note that the combiner acts as a mirror. If it is curved, it is going to distort and/or magnify the image in some way. You will notice that separate combiners are either flat or roughly spherical. The spherical ones magnify and move the image out in space. When going with a flat (or nearly flat) combiner, the magnification and moving of the focus are usually done by one or more curved mirrors. Continental's website has some good diagrams of how their automotive HUD optics work (see http://continental-head-up-display.com/)

      An alternative to the "auto HUD" design would be a near-eye AR-type design. You would need one much brighter than typical (by about 5X to 10X), i.e., designed for outdoor use (there are some, but I think they are expensive).

      There are several companies I have heard of working on motorcycle helmet HUDs that might work for you (some may be "near eye" and some may be more like an "auto HUD"), but I don't know of anyone that has successfully done one. Skully (https://www.skully.com/) tried a complete helmet design but has recently gone bankrupt and is being sued: http://money.cnn.com/2016/08/11/technology/skully-indiegogo-weller-lawsuit/. Unfortunately for you, while they are closer in terms of needing to fit in a helmet, they are not trying for a large field of view.

      A much lower-tech and much less expensive (in development and final product) AR way to go would be more of the VR route, a la Oculus Rift and the many similar products. You would have an LCD/OLED screen with optics in front of it to focus the image out in space, a camera built into the goggles to show you the "real world," and you would then combine the "AR" imagery. Sadly, this would leave you effectively watching TV as you skydive.

      Anyway, those are some ideas,
      Karl

      • I left out another potential "HUD" technology, which is a "phosphor film" from Sun Innovations (http://www.slideshare.net/suninnovations/sun-film-turnswindshieldintoheadupdisplay), but I don't think this will work, so I am including it only to be more complete about the options. With this technology you project UV and/or deep blue light (which means needing a special projector), and if you want color you need multiple different wavelengths. This is a "direct view" technology that will make the image appear to be on the glass surface, which is NOT what you want, as your "glass" is likely only a few inches from your face, too close for your eyes to focus. Another issue with it as a HUD technology is that the phosphors radiate in all directions, and thus it will have a problem in direct sunlight, as the light is not concentrated in the direction you want (toward the viewer's eye).

        Also, beyond all the optical issues, if you want AR pathways in the sky, you have the major issue of how to keep the AR image registered with a device mounted on your head. I suspect the image will bounce all around while you are buffeted by the air as you fall. This will be particularly true with a "near eye" display, as small movements of the device turn into very large movements of the image when it is so near your eye.

  16. Hi Karl,
    Firstly, congrats on your website, it’s very good!
    Well, I’m trying to make an augmented reality adapter for a bike helmet. My idea is to reflect an LCD screen (smartphone or smartwatch) off the visor. I was thinking of using a spherical visor in order to magnify the image. This is my doubt: which diameter could I use? Do you know the curvature of the Meta 2 visor? According to my calculations, I found 40cm to be a good curvature. What do you think?
    Additionally, a good AR viewer is the Universe2go.

    Thank you,
    Victor

    • Thanks,

      First, did you really mean 40cm = 400mm? And was that the “radius” or the “focal length”? Either way it sounds way too flat / too large a radius. Even a 400mm radius would be a 200mm focal length, which is about what people are using for automotive aftermarket HUDs, which have much larger distances from the display to the combiner and from the combiner to the eye.

      I assume this is a “home brew” for yourself rather than a “product.” The big thing you need to accomplish with a spherical combiner is getting it to FOCUS; the magnification will sort of go along for the “ride,” so to speak, and drive the LCD size. The apparent focus distance is a function of the focal length of the combiner/mirror, the distance of the “object” (the LCD in your case), and the distance from the eye to the combiner. As the object (LCD) approaches the focal length of the mirror, the magnification and apparent focus distance go to infinity. While you want to move the focus distance out pretty far, as you approach the focal length everything becomes pretty unstable.

      Hopefully you know that the focal length of a spherical mirror is 1/2 the radius of curvature (you didn’t say whether the 40cm was the radius of curvature or the focal length; this is a big source of confusion in talking to people, as some spec the radius and some the focal length). You should also know that you need a very good surface on a mirror, as any errors are multiplied by 2 (in and out); if it is at all wobbly, the image will be very distorted. You need an optical quality mirror/combiner. Also, for this configuration to work, the combiner HAS to be reflectively coated (say about 20% to 50% reflective). The other thing to know is that because the combiner/mirror is curved, the ends are nearer the flat display than the center, and thus it will magnify more in the center than at the sides (something you can deal with by pre-distorting the image).

      With a helmet setup, the LCD is probably going to be near the combiner, and the eye is also near the combiner. So you will need to move the focus quite a bit just for your eye to see it, and then some more to move the focus out in space. It might even be tough to make it work well at all with a single mirror/combiner unless the combiner is pretty far from your eye (which is what the Meta 2 does).

      But if you want to give it a try, I would start with something closer to about a 200mm radius / 100mm focal length or a 100mm radius / 50mm focal length. A cheap way to experiment is to buy inexpensive concave mirrors meant for educational use (they are available on Amazon). These will let you play with the configuration and see what you need (unless you are an optical expert with modeling software, this is easier, faster, and cheaper than modeling).
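      If you want to sanity-check a configuration before buying mirrors, the spherical mirror arithmetic above is easy to put in a few lines of code. Below is a minimal back-of-the-envelope sketch (my own illustration, using the paraxial thin-mirror equation; a real off-axis combiner will behave worse):

          #include <cstdio>

          // Paraxial spherical-mirror sketch: f = R/2 and 1/do + 1/di = 1/f.
          // Distances in mm. A negative image distance means a virtual image
          // behind the mirror, which is what you want from a combiner.
          int main() {
              const double radius = 200.0;    // radius of curvature (mm)
              const double f = radius / 2.0;  // focal length = half the radius
              for (double dObj = 40.0; dObj < f; dObj += 20.0) {
                  double dImg = 1.0 / (1.0 / f - 1.0 / dObj); // mirror equation
                  double mag  = -dImg / dObj;                 // lateral magnification
                  printf("LCD at %5.1f mm -> virtual image at %7.1f mm, mag %4.2f\n",
                         dObj, dImg, mag);
              }
              return 0;
          }

      Note how the image distance and magnification blow up as the LCD approaches the 100mm focal length; that is the instability I mentioned above.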

      Hopefully this is helpful; get back to me if you have more questions.

  17. Hi Karl,
    Thank you very much for the explanation. The radius is 200mm (400mm diameter).
    My idea is something closer to the Meta 2, but only for 2D images (not side by side).
    The idea is to give cyclists useful information about the ride.
    I’ll look for those mirrors on eBay. Thank you again.

    • I think you are going to find that with a 200mm radius of curvature for the combiner, you will have to have the display pretty far away. I’m guessing the Meta 2 has about a 75mm radius of curvature.

      Another big issue for you will be how to get the nits (candelas per square meter) bright enough for daytime use. It is not an issue of the total lumens/light; it is an issue that the light is so diffuse.

      Anyway, good luck.

  18. Karl, I was reading your recent article about waveguides, Magic Leap, Hololens, and ODG. There is a company called Vuzix that is also developing waveguide smartglasses, and they presented a video with amazing graphics incorporated into what looks like regular sunglasses. You didn’t mention them: https://www.youtube.com/watch?v=x-F_o8SM_XU It seems that their image quality is very good.

    • Thanks,

      Actually, if you stop the Vuzix video you pointed to at, say, 17 seconds, you will see a very large bluish glow around everything. This is classic “waveguide glow.” I was thinking about using this in my article to illustrate waveguide glow but decided that the Hololens was a better example to use.

  19. Thanks Karl, that means that what I see is what it is, not a lab-created video. That “glow” is what I should see if I wear the goggles, not a “Photoshop”-style lab-created video. Taking all that into consideration, along with the sunglasses form factor, I consider the “waveguide glow” reasonable. It’s not present, or perceptible, in all of the video; they also have another video: https://www.youtube.com/watch?v=hyxppK0sy6U

    • I firmly believe the glow is caused by the waveguide and is not a deliberate effect. In the video you pointed to, the character is soft/blurry and, very importantly, is at the bottom of the screen. With waveguides the glow can be directional. In the other videos you will notice that the glow is below the big object. This glow would be cut off in the video you reference. I’m not saying they did this deliberately, but clever editing and setup can hide a lot of problems.

      What you would like to see in a demo, to PROVE how good (or bad) the waveguide is, is a mix of simple sharp geometric shapes (a large square is a good starting point) and some high resolution information such as SMALL text. You want to see whether there are “high frequency” (resolution) errors and “low frequency” large-area errors.

  20. Hi Karl,
    Really enjoying your blog. Have you any commentary on the “CastAR” system, which seems to be an interesting, and simplified, take on AR: a sort of “reverse AR” using head-mounted pico-projectors. It has a fairly specific use case, as it requires the viewable area (e.g., a desktop) to be covered in retro-reflecting material, but I really like what appears to be the relative simplicity and practicality of the idea. There are some videos online and clearly it works, and it’s definitely in its own niche, which may or may not succeed against “regular” VR/AR headsets. I just wonder if you have any commentary about it.

    • Thanks for asking, I always like looking at “different” display technologies. Unfortunately, “different” is often worse.

      Frankly, CastAR to me looks at best like an extremely limited concept. To me it is just a bad idea to project an image from a person’s head into the real world and to require a special screen. The video https://www.youtube.com/watch?v=iHgXT4UBi_I from 2015 shows a lot of the limitations. Note the screen has folds in it, and I would expect that unless you treat it very well it is going to get progressively worse over time. Any small head movement causes the whole image to jump around; remember there is a lot of “leverage,” in that a very small head movement gets multiplied by the distance (see the rough numbers below); they are then one or more frames behind in processing the head movement and changing the image, which I think would be nausea inducing.
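      The “leverage” is easy to quantify with simple geometry. Here is a rough sketch (my own numbers, just for illustration): a head rotation of theta sweeps the projected image across a screen at distance d by about d*tan(theta).

          #include <cstdio>
          #include <cmath>

          // Rough projected-image jump for small head rotations at ~2 m.
          int main() {
              const double PI = 3.14159265358979;
              const double d_mm = 2000.0;  // head to retro-reflective screen
              for (double deg : {0.25, 0.5, 1.0, 2.0}) {
                  printf("%4.2f deg head turn -> ~%5.1f mm image shift\n",
                         deg, d_mm * tan(deg * PI / 180.0));
              }
              return 0;
          }

      Even a half-degree twitch moves the image about 17mm at 2 meters, and that is before adding a frame or more of tracking latency.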

      EDIT: They are not using LBS, but rather field sequential LCOS. I looked at the video too hastily.

      The bottom line is that this is, at best, a specialty concept IMO. If you are going to wear headgear, you might as well go with a combiner solution. It will take much less power and produce a far better image, and the tracking will be vastly better.

      • LCOS projectors, apparently; see the teardown by the excellent “Mike’s Electric Stuff.”

        Mmmm, the extreme sensitivity to head movement is a good point; they use IR camera tracking at a reported 120fps.
        Thanks!

      • Thanks; on my first quick look, I thought it was LBS. It still looks like a backwards way to do everything. It is not clear what advantage it would have.

  21. Dear Karl,
    I just happened to come across your site. I have read a ton of your posts; you do great work. I have a question, if you don’t mind. I was reading up on OLED vs. LCOS and was wondering what your thoughts are about Himax Technologies. I ask because if AR/VR is headed towards OLED, doesn’t that mean Himax is out of the game before it starts? And that something like KOPIN is strategically in a good position? Thank you for your time.

    • I just got back from CES and have a fresh perspective on these issues, having talked to a lot of people, mostly companies using the various display technologies. I will be getting into this over the coming weeks.

      OLEDs today can’t come close to the brightness of DLP and LCOS (by about 10X) using LED illumination. I think the OLED Microdisplays are going after the VR market, at least for the foreseeable future.

    • Yes, I have seen the video. As you wrote, not a lot of concrete information. I generally go by the rule, “if the specs were good, they would give the specs.”

      There is a lot of talk about how wonderful and important the application would be, but they never say why theirs is so much better. They do emphasize the vergence/accommodation conflict, but there are other approaches to solving this issue. I would feel a lot better if they would publish their information and images rather than just talk about how wonderful it is.

  22. Hello Karl!

    How useful are OLEDs in displays for virtual reality helmets compared with LCDs? Does it make sense to say that their widespread introduction implies a significant improvement in the consumer qualities of virtual reality?

    Thanks!

    • I assume you are referring to flat panel (phone size) displays. The big optical advantages of OLEDs are that they have a darker “black” and a wider viewing angle (no variation in color or brightness with viewing angle). OLEDs also have power consumption, thinness, and weight advantages. The downsides are lifetime and cost, but both of these are improving. Thus purpose-built VR headsets such as the Oculus and HTC use OLED panels. So as a practical matter, OLEDs appear to already be the panel of choice for non-see-through VR.

      It is a different issue for AR (see-through) and lightweight non-see-through near-eye displays. These displays generally want to be small and light and use silicon substrates, and in this area LCOS and DLP dominate. OLED is making inroads in non-see-through displays but can’t output the light necessary to support see-through well (as I pointed out, ODG, while using OLED in a “see-through” display, blocks a huge percentage of the ambient light). With LCOS and DLP, manufacturers can crank up the illuminating LEDs to get 10X to 50X the light levels of the available OLEDs; this is particularly critical for see-through displays, which typically lose 80% to 95% of the image light to support being see-through.

  23. Dear Karl,

    I am currently preparing a conference talk about speckle reduction methods for LBS projectors and was wondering whether I could use an image from your blog which demonstrates what speckle noise from an LBS projector actually looks like (naturally giving credit to you on the respective slides of the presentation).

    This one, for example, would do the job:
    https://www.kguttag.com/wp-content/uploads/2012/01/IMG_0326-white-test.jpg

    This one looks even better (/worse), but this one is from an LCOS device, right?
    https://www.kguttag.com/wp-content/uploads/2012/01/Focus-Free-at-Angle-0576-Crop.jpg

    Or maybe you have some other image that convincingly shows the speckle noise of an LBS projector?

    Many thanks in advance and best regards,
    F. Doetzer

    • No problem with you using the pictures with credit.

      Those pictures are now about 5 years old. Things have gotten better in the newer products, but the speckle is still present. The Celluon pictures are a bit more up to date (look through https://www.kguttag.com/?s=Celluon ). I don’t have anything handy for a newer LCOS projector.

      Of course, the human eye does NOT see speckle the same way a camera does. The camera settings, sensor size, and aperture, not to mention any processing the camera does, will greatly affect the speckle captured.

      There are lots of tricks that have been tried, but the biggest improvements in small projectors come from laser diodes with wider line widths. Big theater projectors have to use frequency-converted lasers that have narrow line widths, and they vibrate the screen to eliminate the perceived speckle (of everything I have seen, vibrating the screen works best with narrow line width lasers); it does not take much, as something like a cell phone vibrator will work for a fairly large area of screen.
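      For reference, the textbook first-order model (standard speckle theory, e.g., Goodman; not specific to any product) is that averaging M independent speckle patterns, whether from polarization, wavelength, angle, or screen-motion diversity, reduces speckle contrast by roughly the square root of M, which is why getting speckle below the visibility threshold takes a lot of diversity:

          #include <cstdio>
          #include <cmath>

          // First-order speckle averaging: contrast C ~ 1/sqrt(M) for M
          // independent speckle patterns (polarization alone gives M = 2).
          int main() {
              for (int M : {1, 2, 16, 100, 400})
                  printf("M = %3d patterns -> speckle contrast ~%.2f\n",
                         M, 1.0 / sqrt(static_cast<double>(M)));
              return 0;
          }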

      • Thank you for your reply and the permission to use your pictures!

        You are right – the perceived speckle contrast crucially depends on the conditions under which it is observed. This will be discussed later in my talk.
        Said picture of the speckled test pattern is just intended as a motivation at the beginning, so it does not have to be quantitatively accurate at all.

        Larger linewidths only help if you have a screen with relatively large surface roughness or volume scattering.
        Vibrating the screen is in fact the only method where you don’t get any tradeoffs. In the case of an LBS projector you would have to vibrate really fast, though. And, obviously, it cannot be integrated into the projector directly.

        Unfortunately, due to the small size of the scan mirrors, there is not much you can do about speckle in LBS projectors except polarisation, laser linewidth or vibrating screens.

        Best regards,
        F. Doetzer

      • Sorry for taking so long to reply to this message. I left for a U.K. trip on the day it came in and overlooked it.

        Per my article (https://www.kguttag.com/2015/07/13/celluonsonymicrovision-optical-path/), I think Sony was using “path length diversity” to reduce speckle with laser beam scanning. But you are generally correct that there are more tricks that can be used with laser-illuminated panels than with laser scanning. The biggest impact on speckle seems to be laser line width, as you really need orders of magnitude reduction in speckle with a narrow line width laser. If you start with narrow line width lasers, about the only thing that will “work” is screen vibration. Large (theater) laser projectors, such as Dolby Cinema, all use frequency-doubled lasers, which have very narrow line widths, and they use screen vibration (which works very well) to reduce speckle.

  24. Dear Sir Guttag,
    I have been very interested in the Varjo foveated display, and it poses a problem for me: I wanted to patent a 180° FOV AR technology, but if Varjo’s system arrives on the market, my optical system will have trouble hitting the market. However, I have spent a few days trying to see how they could improve the system, and I have found around 5 ideas. In optics I am an idiot, but could I use a wide microdisplay of the same length as the big display (just in length) and sweep it along the horizontal line?
    It could adapt to the position of the eye.
    With eye tracking, it would decrease the resolution around the circle of vision to look natural.
    There are other ways, but I prefer to keep those to myself.

    Am I wrong?

    • I can’t fully follow what you are doing. Foveation (particularly in rendering, and less so in the display) is a fairly hot topic today, so many people are working in the area. Also, Varjo has indicated that they are not talking about all they are doing and have claimed to have patented multiple ways to do a foveated display. I don’t think you could afford a microdisplay the “same length as the big display,” as you put it, even if you used the microdisplay’s wide direction against the typical larger display’s narrow direction. You have to combine them optically, as Varjo does, because of the circuitry and wiring, AND you have to have them focus at the same distance.

  25. Hi KarlG,

    The report you wrote is amazing. It really helped me understand a lot about the world of AR/VR.

    I have a quick question I want to ask you.

    What is the best ambient contrast ratio for a user to read content on AR devices? And what are the maximum and minimum ratios?

    Let’s say we want to design an AR device without any shielding cover; how many nits do we need in order to see the content outdoors on a sunny day?

    Thank you very much for answering.

    • I’m not sure I understand your first question. But people can generally make out text (depending on size) with a contrast ratio of about 2 to 1, though colors will be very washed out. For “barely decent color,” I think you want a contrast ratio of at least 8 to 1 (colors will be washed out, but at least you can easily tell the colors). NOTE: this contrast ratio is usually dominated by the display brightness versus the ambient light and has next to nothing to do with the display device’s contrast (see the next paragraph).

      The other issue is the “native” on-off contrast of the display device/optics. With only about 100:1 (as with, say, Google Glass), you will clearly see the boundaries of the display device’s “black” (supposed to be clear) in a dimly lit room. You really would like something in the range of 400:1 on/off contrast or more to get rid of the visible display rectangle. So to be clear: for visibility outdoors you care about the brightness (in nits = candelas per meter squared), and in dark environments you care about the display contrast.

      Your question about outdoor use without any shielding might be a shocker (it was to me the first time I looked at it). To work well outdoors without shielding, you usually want 3,000 nits or more. Industrial outdoor computer screens can have 5,000 to 6,000 nits. This compares with needing less than 200 nits indoors (the SMPTE spec for a movie theater is only 55 nits). When I designed the HUD for Navdy, my spec was greater than 15,000 nits (when driving, you can’t avoid driving into the sun). Consider that new white concrete on a sunny day has about 17,000 nits. This is why, when people talk about using OLED microdisplays in see-through headsets (ala ODG), it makes no sense to me. The brightest OLED microdisplays start with 2,000 or fewer nits and then lose about 80% to 90% in a reasonably transparent combiner, leaving less than 400 and likely less than 200 nits; this will be invisible in sunlight unless you are wearing almost the equivalent of welding goggles instead of sunglasses.
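      To make the OLED arithmetic above concrete, here is the brightness budget in code form (just a quick sketch; the nits and loss figures are the rough ones from this paragraph):

          #include <cstdio>

          // See-through brightness budget: a bright OLED microdisplay through
          // a reasonably transparent combiner versus a sunlit background.
          int main() {
              const double oledNits   = 2000.0;   // bright OLED microdisplay
              const double background = 17000.0;  // white concrete, sunny day
              for (double loss : {0.80, 0.90}) {  // combiner light loss
                  double toEye = oledNits * (1.0 - loss);
                  printf("%2.0f%% loss -> %4.0f nits vs %5.0f nit background"
                         " (contrast ~%.2f:1)\n", loss * 100.0, toEye,
                         background, (toEye + background) / background);
              }
              return 0;
          }

      The resulting contrast of roughly 1.01:1 to 1.02:1 is the “invisible in sunlight” case.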

      • Thank you for your reply.

        In the first paragraph, how do you define the contrast? Assume the outdoor background is 500 nits and the light emitted from the AR device is 1,000 nits at a full white image; is the ratio 1500:500 or 1000:500?

        In your opinion, what’s the best solution (display) for an AR device right now?

        Thank you!

      • Typically, contrast is measured between the brightest pixels the display can generate and the black level. For a see-through display, the “black” level is whatever you are looking at with whatever optics are in the way (you could have, say, sunglasses to drop the brightness of the ambient light).

        In your example, if the background is 500 nits and the display is 1,000 nits, then a “white” pixel on the display will be 1000+500 nits (in this case the background adds to the white) and the black will be 500 (actually a little more). With a 500 nit background, the contribution of the display device to the black will be negligible and safe to ignore. Say you had a 1,000 nit display with a native contrast of only 100:1; it would add 10 nits to the “black,” and if it were 1000:1 it would add only 1 nit. Either way it has only a very minor effect on the contrast ratio (the display contrast only becomes a major factor in dim environments). So the contrast ratio will be about 1500/500 or 3:1 (generally expressed with one side of the ratio equal to 1).
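        The same arithmetic as a tiny function (just the worked example above in code form):

            #include <cstdio>

            // Ambient contrast for a see-through display: a "white" pixel is the
            // display plus the background; "black" is the background plus the
            // display's own leakage (white level / native on-off contrast).
            double ambientContrast(double displayNits, double backgroundNits,
                                   double nativeContrast) {
                double white = displayNits + backgroundNits;
                double black = backgroundNits + displayNits / nativeContrast;
                return white / black;
            }

            int main() {
                printf("native  100:1 -> %.2f:1\n", ambientContrast(1000, 500, 100));
                printf("native 1000:1 -> %.2f:1\n", ambientContrast(1000, 500, 1000));
                return 0;
            }

        Both cases come out at roughly 3:1, showing how little the native contrast matters against a bright background.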

        There really is no “best” AR display right now. They all have strengths and what I would consider fatal flaws. There is a reason that you don’t see people using them in everyday life or even in industrial applications beyond some very selective studies.

  27. Karl,
    Correct me if I am wrong: the Microvision engine uses magnetically driven MEMS. Do you know of any commercial (in production, being sold) three-laser (RGB) projectors using electrostatically driven MEMS?

    • While I know of Daqri, I have not followed them that closely. I know they make big (ugly) helmet-like devices for the more industrial market. They are typically big, heavy, and expensive.

      Daqri also bought Two Trees Photonics, which was working on laser-hologram displays for automotive HUDs. I know Two Trees Photonics had some success with Jaguar/Tata Motors before Daqri bought them, but I have not heard anything since.

    • First, it is a patent application and not a patent, at least yet.

      Second, the Google application puts collimator mirrors behind a group of pixels, which makes no sense to me. This would lead to a hodgepodge set of tiled images if it worked at all. Micro-LEDs are an important display type, but the way they are used in this application seems ridiculous.

      As far as the comparison to the AR3000 goes, it is a bit apples to oranges. The AR3000 glasses are waveguide-based with DLP, and while thin, they compromise on image quality and are pretty low resolution (WVGA per eye). They may work for what I think are their intended industrial applications, but they will not be confused with a high-volume consumer product.

  28. Hi Karl

    Love your site!

    I was wondering whether you will be at CES 2018 and which booths you are looking forward to seeing.

    Thank you

    • Thanks Mo,

      I have a pretty big list this year. Basically, I am looking at almost everything in the AR and VR sections of the South Floor. I’m also very interested this year in any micro-LED announcements, such as the one by Samsung. I will also be checking out the automotive display area for any heads-up displays.

      Karl

  29. Hello Karl,
    I am a student at Ramaiah Institute of Technology, Bangalore. Thank you for your informative blogs on augmented reality and near-eye displays. I have found them very helpful.

    My team and I are currently developing a prototype of a smart augmented reality safety helmet for bikers, which enables early warning and detection of dangers on the road and renders this information into the biker’s field of view along with navigational information. We are using a DLP pico projector for this purpose.

    We would like to know which waveguide would be ideal for this application and where it would be available for purchase.

    Thank You

  30. Dear Karl Guttag,

    I am part of an engineering startup from the University of New South Wales. I just finished reading your blog about CES aftermarket HUDs, part 2, and after looking at the Navdy prototype you built, I felt a great sense of relief, as everything you mentioned in the blog confirmed everything I have learned myself in the past 2 months. Before the blog I had just two questions, one of which was answered when you wrote about the reflectivity of the combiner that the Navdy uses. May I ask what material the Navdy uses for the projector screen? So far I have gathered it is some sort of high-gain material; however, what I don’t understand is what the layer of plastic over the “silver layer” is, and why it is necessary. Also, if you don’t mind, may I ask you a few more questions, perhaps in private?

    Yours Sincerely
    Andrew

  31. Karl

    Not sure if you saw it, but in an SEC filing released today, eMagin disclosed that the buyers of its recent private stock offering included Apple, LG, Immerex, and Valve. I doubt any of these buyers were simply looking for an investment.

    • That is interesting. For these companies, it could be just “bet covering.” After all, if Apple or LG were really serious, either could buy eMagin (with a market cap of about $50.7M) and not even notice it (pretty much just petty cash). Heck, both of them have bought out startups for a lot more money.

      It is a strange mix, with Immerex being a startup and Valve being a software company that is working with LG. For Immerex and Valve, it may have something to do with assuring supply.

  32. Hello, I’m researching micro-LED for automotive HUDs and its advantages over current technologies such as DLP and LCD. Most of the technical info I have found compares micro-LED to OLED in TVs, wearables, and automotive clusters, but not HUDs. Could you point me to any articles or research about this? Any help is greatly appreciated. Thank you.

  33. Dear Karl,
    Thank you for your quick reply. In your opinion, do you think micro-LEDs are better for auto HUDs, if they can overcome existing manufacturing and design challenges?

    • Someday micro-LEDs will likely be useful. The problem is that it is really hard to judge how far along they are in terms of manufacturability. You really have to dig down to know whether you are looking at a one-off prototype or something that will scale down in cost and up in yield.

  34. Hi Karl,

    I really hesitate to bother you with this, but I’m sort of stuck on something. I’m trying to design and build an era-appropriate graphics/sound card for my friend’s TRS-80 Model 1 Computer. It was his first computer when he was a kid. As you probably know, that first generation of 8-bit computers (TRS-80 and Apple 1) had pretty rotten graphics and sound, and your TMS9118/TMS9918/etc. series of VDPs was a welcome change to the hobbyist-class stuff that preceded it.

    I’ve designed a circuit to interface with a TMS9118 (along with a couple of SN76489 sound chips and an input buffer for an Atari-style joystick), but before I crank out a PCB that will plug into the address/data bus of the TRS-80, I wanted to just get to know the chip. So, I’ve got it in a breadboard with a couple of appropriate DRAMs wired up and a rather specific crystal that was non-trivial to acquire, and I’m trying to drive it with an Arduino UNO. I can get it to power up, and I can get the Arduino to control the RESET* pin as necessary, but beyond that, it seems like the chip is just ignoring my register update commands (I’m driving two ports on the Arduino, with 8 pins connected to the data pins on the TMS9118, and 3 pins connected to the bus management pins [CSW*, CSR*, and MODE]). The Arduino is clocked separately from the TMS9118, but it looks like that’s not unusual in how this chip got used in industry (CPU and VDP don’t seem to have to share the same clock), so I’m struggling with what exactly this chip needs to see in order to accept a register 0 command and a register 7 command. I’m just trying to change the background color at this point. Thus far I just get a nice, unchanging, black background color. I can tell that it’s not doing anything with my commands because one of the commands I’m sending is supposed to turn off the Interrupt signal, and I can still see that signal on the scope.

    Is there something special you need to do at the end of each register write to tell the VDP “Okay, I just sent you a command, now do something.” I’ve read and re-read the programming guides until I’ve practically memorized them, but clearly I’m doing something wrong. Is there some kind of timing magic I need to be careful of?

    Any guidance you might be able to offer would be greatly appreciated, and I certainly apologize for bothering you with something this far back in the Wayback machine. Thanks!

    • As I remember it, the host interface is asynchronous to the registers. I think the only synchronization was on the signal that said the host had loaded something into the data and address registers going to/from the DRAM.

      In 1979, I designed and had built an interface to a TI-59 pocket calculator, so you can’t get a much more different clock than that. You can find my handwritten documentation on the TI-59 to 9918 interface here: http://spatula-city.org/~im14u2c/vdp-99xx/e3/1979-or-1980_TI59_Calculator_to_9918_Interface.pdf. Hopefully, you know about this website: http://spatula-city.org/~im14u2c/vdp-99xx/, which has a bunch of documentation I scanned.

      It was very simple to get the 9118/9918 up and running. NOTE: the 9918/9118 was really a “little endian (LSB = bit zero)” design with big endian numbering (the numbering was changed by TI marketing after the design was done). This really screwed people up with the DRAM interface, but it could also cause people to get the bits wrong in the registers.

      I even remember getting the VDP up and running when I was probing it with a test station and you only had to get a few pins connected and wiggling. Pretty much the VDP can’t help putting out RAS and CAS if it is at all alive.

      A few suggestions:
      1. Hopefully, the chip you have is good (I think it is out of warranty :-)).
      2. Get the power and crystal hooked up and the pins should be wiggling. You might have to hit the reset pin. I always started by looking at CAS and RAS. As I remember it, the VDP will start putting out RAS and CAS if the Crystal is working (along with a bunch of other things).
      3. I would then try and change the background color and look at the video output. But it sounds like you can’t write to Register 7.
      4. Another trick is to try and write something to the DRAM and see if the write to DRAM signal activates (it will only blip so you have to have the scope triggered on it).

      I know the advice above is pretty basic and you may have already tried it. Perhaps the biggest help I can give is that IF you have a good VDP and you have connected it up correctly, then it is pretty simple to get it working (no issues with clocks between the CPU side and the VDP). The VDP pin numbering was a constant problem back in the day and it screwed up many first-time designs.
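      If it helps, here is roughly what I would expect the register-write sequence to look like from an Arduino (a minimal sketch; the pin assignments are made up, so adjust them to your wiring, and double check the timing against the data sheet). With MODE high, a register write is two bytes strobed in with CSW*: the data byte first, then 0x80 OR’ed with the register number.

          // Hypothetical Arduino-to-TMS9918 hookup -- pin numbers are examples.
          // DATA_PINS[i] drives the bus line with bit weight 2^i; remember TI's
          // manuals number bit 0 as the MSB, so that line is CD(7-i) in the docs.
          const int DATA_PINS[8] = {2, 3, 4, 5, 6, 7, 8, 9};
          const int PIN_MODE = 10;  // MODE
          const int PIN_CSW  = 11;  // CSW* (active low)
          const int PIN_CSR  = 12;  // CSR* (active low, hold high for writes)

          void busWrite(uint8_t value) {
            for (int i = 0; i < 8; i++)
              digitalWrite(DATA_PINS[i], (value >> i) & 1);
            digitalWrite(PIN_CSW, LOW);   // the VDP latches the byte on CSW*
            delayMicroseconds(2);         // far slower than the VDP needs
            digitalWrite(PIN_CSW, HIGH);
            delayMicroseconds(2);
          }

          // Two-byte register write with MODE = 1: data, then 0x80 | register.
          void vdpWriteReg(uint8_t reg, uint8_t value) {
            digitalWrite(PIN_MODE, HIGH);
            busWrite(value);
            busWrite(0x80 | (reg & 0x07));
          }

          void setup() {
            for (int i = 0; i < 8; i++) pinMode(DATA_PINS[i], OUTPUT);
            pinMode(PIN_MODE, OUTPUT);
            pinMode(PIN_CSW, OUTPUT); digitalWrite(PIN_CSW, HIGH);
            pinMode(PIN_CSR, OUTPUT); digitalWrite(PIN_CSR, HIGH);
            vdpWriteReg(1, 0xC0);  // 16K DRAM, display on, interrupt disabled
            vdpWriteReg(7, 0xF4);  // white text on a dark blue backdrop
          }

          void loop() {}

      If register 7 writes like this still don’t change the backdrop color, I would go back and check the bit ordering on your data bus wiring.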

      Let me know how it goes.

  35. Hi, my name is Metodija Mihajlov and I follow every post that you publish on your blog.
    Love your work.
    I have a question; if you can answer it, I will be happy, or better yet, it would be great to see you write about it.
    Magic Leap is making noise because they collected lots of money from investors and they need to brag about it. So they are still not there, and we know that.

    What about companies that have been working for decades on the same projects and holding patents as their property for a long time, companies like Himax Technologies or Akonia or any others that you think are on the right track to build the hardware? Can you post something about their work and patents? As it is, it seems that no one is able to build the hardware and this amazing thing (AR) will not happen. At the beginning, content was the problem. Now, as I see it, we have content, as so many companies are building it. And we don’t have the hardware.

    Best Regards
    Miki

    • AR is a very hard problem as I have written about for years on this blog. All the various technologies have their problems, some worse than others.

      The best AR display looks terrible when objectively compared and measured against even a cheap LCD TV or monitor.

  36. Karl: What do you think about the technology of the company that Apple recently acquired, Akonia Holographics? They claim to have over 200 patents and that “Akonia Holographics is pioneering the world’s first commercially available volume holographic reflective and waveguide optics for transparent display elements in smart glasses. Volume holography offers a unique combination of performance, transparency and low cost that will revolutionize the smart glass display industry. With its ultra-clear, full-color performance, Akonia’s HoloMirror™ technology enables the thinnest, lightest head worn displays in the world. Looking forward, Akonia has already defined the technology required to achieve future improvements including even greater field-of-view (FOV) and light efficiency.” http://akoniaholographics.com/products/index.html

    • You are basically requoting marketing copy. Everyone says their technology looks great in the marketing statement.

      Take a look at the image put out by Akonia (https://mspoweruser.com/wp-content/uploads/2017/01/akonia.jpg); it is a glowing hot mess. Holograms have optical issues very similar to those of diffraction gratings (Microsoft, Magic Leap, Vuzix Blade, and Digilens).

      Akonia was also going to revolutionize mass storage (it didn’t happen). They have something that Apple at least thinks it wanted, or Apple could just be covering an R&D bet just in case. It is tough to read too much into what is, for Apple, a petty cash purchase.

  37. Hello Karl,

    Really big fan of your work. It’s great to find somewhere you can get straight to the facts!

    I’m an industrial designer working in consumer electronics, and I’ve been fascinated with the VR/AR field for a few years now. I’m curious about your opinion, as I’m looking to move into the field. If you had to choose one display technology/method for the industry to focus on, what would it be and why?

    Thanks,
    Marcus.

  38. Hi Karl,

    I would like to hear your thoughts about how you picture the upcoming years, and see what you would predict with just a set of basic rules.

    Imagine I could provide you with screens from the future of almost any resolution you want, any form factor, and any maximum brightness (8k x 8k, 5x5mm, and 5 million nits is possible, for example); then what design would succeed at making the best possible AR, and then VR, headset (actually goggles, because they would be thin)?

    I guess you have already tried to think about things like this, so I would be curious to hear your reasoning and conclusions about it.

    Best,
    Em

    • There are a few problems in your question. First, an 8k by 8k display would want, say, a 6-micron pixel (much smaller than that and the optics become impossible), which makes the panel far larger than 5mm. If you had 8,000 pixels in 5mm, the pixel pitch would be 0.625 microns, or just about one wavelength of visible light; diffraction would cause all kinds of problems. If you could fit an 8k by 8k display into 5mm on a side, the optics would cost a fortune and would be huge. You are almost forced to consider foveation, where the display is either moved or adapted to have variable resolution based on where the eye is looking. With AR, you have to keep everything in “balance”; if you go too extreme with, say, the display, it will make the optics impossible.
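      To make the arithmetic concrete (a trivial sketch using just the numbers from the paragraph above):

          #include <cstdio>

          // Pixel pitch vs. panel size for an 8k x 8k display.
          int main() {
              const double pixels = 8000.0;
              double pitchIn5mm = 5.0 * 1000.0 / pixels;  // microns
              double sizeAt6um  = pixels * 6.0 / 1000.0;  // mm
              printf("8k across 5 mm -> %.3f um pitch (visible light ~0.4-0.7 um)\n",
                     pitchIn5mm);
              printf("8k at 6 um pixels -> %.0f mm panel per side\n", sizeAt6um);
              return 0;
          }

      So an 8k display at a buildable 6-micron pitch is 48mm on a side, which is why the optics get huge.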

      VR is a whole different issue, as VR headsets typically don’t have the form factor constraints. They are easily going to get to displays that are 8K in the future and will likely curve them for a wider FOV.

      It is the size constraints that make AR so tough when you want something small, very light, very thin, and very near the eye. You also get into all sorts of issues when you allow for vision correction.

      • Thanks for the reply Karl

        About my question, the 8k x 8k spec was just a random dream example; you could choose a more realistic one 🙂

        It’s just that the only way we can picture the future of the technology is through the somewhat optimistic marketing speeches of companies’ CEOs/CTOs, so having your more technical, unbiased point of view on how you think it will likely happen is interesting to know.

        Cheers,
        Em
