Halliday AR(not?)/AI Glasses

Introduction

I'm in the process of organizing all the information I collected at CES and want to combine it with things I see at SPIE's AR/VR/MR next week. Many companies showed AR glasses with AI capability at CES, and many more are in the works. Next Wednesday at 4:05 PM, I will be on a panel at SPIE's AR/VR/MR to discuss the subject.

Most AR glasses with AI use diffractive waveguides with monochrome green MicroLEDs, or with three MicroLED chips combined via an X-Cube for full color. Some are using LCOS or DLP full-color microdisplays. I also expect to see some using Lumus's reflective waveguides rather than diffractive waveguides. Others are using birdbath optics (ex., the old Xreal) or freeform optics (ex., the Xreal One Pro and P&G Metalens 2). I plan to discuss these other designs and their trade-offs after SPIE AR/VR/MR.

Halliday stands out in that it has a very different optical design for its AR/AI glasses, and it received a lot of attention from the media at CES. I only had time to get to their booth early Friday morning, and unfortunately, there was no one at the booth at that time. At least one YouTube video I saw during CES reported that Halliday was using lasers, but I have since confirmed that it uses a green MicroLED with projection optics. The Reddit topic Halliday Glasses – Smart Glasses with AI Assistant reported (correctly) that the optics work similarly to the MojoVision contact lens display.

This article considers the pros and cons of Halliday’s optics in AR glasses as well as some of the other Halliday design decisions.

Halliday Glasses

Rather than using waveguides or other combining optics (such as a birdbath or freeform optics), Halliday glasses have a single (monocular) projector that projects directly into the eye. It uses a monochrome green MicroLED for the display device and a set of mirror optics (more details on the optics later).

The projector is manually aimed in the direction of the eye via a horizontal slider and up/down rotation. Rotating the front ring of the projector lens changes the diopter/focus to adjust for vision differences.
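
For readers unfamiliar with diopter adjustments, the relationship is simple: a myopic (negative) spherical prescription of D diopters corresponds to a far point of 1/|D| meters, so the focus ring only needs to move the virtual image from infinity in to that distance. Below is a minimal sketch of that conversion; the prescriptions are illustrative round numbers, not Halliday specifications.

```python
# Minimal sketch of the diopter-to-focus relationship behind a focus ring.
# Prescriptions below are illustrative examples, not Halliday specs.

def virtual_image_distance_m(prescription_diopters: float) -> float:
    """For a myopic (negative) prescription, the far point is 1/|D| meters,
    which is where the virtual image should sit to appear sharp without
    corrective lenses. 0 D means the image can stay at infinity."""
    assert prescription_diopters <= 0, "sketch handles only myopia for simplicity"
    if prescription_diopters == 0:
        return float("inf")
    return 1.0 / abs(prescription_diopters)

for rx in (0.0, -1.0, -2.0, -3.0):
    print(f"{rx:+.1f} D -> focus the virtual image at {virtual_image_distance_m(rx):.2f} m")
```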

To see the image, the user must rotate their eyes up to look into the projector located at the top of the frame. The slider and up/down adjustment must be set so that the user can rotate their eye to look axially into the projector. While the adjustment does not have to be perfect, the eye box is much smaller than with most other optics, such as waveguides, but larger than with direct laser scanning designs such as North's Focals (see: North's Focals Laser Beam Scanning AR Glasses – "Color Intel Vaunt").

Shown below are a still from Halliday's Kickstarter video (below left) and one from a ben's gadget reviews video (below right) demonstrating how the user has to rotate their eyes upward to see the image. If you want to see what it takes and how it feels, put on a pair of glasses and try to look at the middle of the top of the frames.

Looking up like this with the eyes is not very comfortable for most people to do for any length of time, and it is the first big drawback of Halliday's glasses. As my friend David Bonelli (from Pulsar) pointed out, it is not really "AR," as you are looking at an image inside the frames rather than one overlaid on the real world.

Another "human factor" is the social issue. The user will not be looking at people when using the display but rather looking up very awkwardly. It will be obvious to any observer that the person is looking up at something.

Halliday’s “Cassegrain Telescope”/MojoVision Optics

Halliday's optics work similarly to the MojoVision contact lens optics, but they are scaled up since they are farther from the eye. A figure (below left) from a Spy Eye (which became MojoVision) patent shows how the optics work. Light from the MicroLED that hits the curved secondary mirror (350) is reflected to the larger concave primary mirror, which then directs the "image forming rays" out around the outside of the small secondary toward the eye. The effect of the primary and secondary mirrors is to move the focus from very close (as the display is very close to the eye) to very far (likely near infinity). You can see the primary mirror blocking the light in the closeup of the display optics (below right).
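
To make the "move the focus from very close to very far" step concrete, here is a minimal sketch that applies the standard mirror equation (1/f = 1/s_o + 1/s_i) twice, once for the convex secondary and once for the concave primary. All focal lengths and spacings are made-up round numbers chosen only to show the principle; they are not MojoVision or Halliday dimensions.

```python
# A minimal two-mirror "reverse telescope" sketch using the mirror equation
# 1/f = 1/s_o + 1/s_i (real-is-positive sign convention). All dimensions are
# hypothetical, chosen only to show how a display a few millimeters away can
# be re-imaged near infinity; they are not Halliday/MojoVision specs.

def mirror_image(f_mm: float, s_o_mm: float) -> float:
    """Image distance for a mirror of focal length f and object distance s_o."""
    return 1.0 / (1.0 / f_mm - 1.0 / s_o_mm)

# Step 1: the small convex secondary (negative focal length) forms a reduced
# virtual image of the MicroLED display sitting ~4 mm away.
s_i1 = mirror_image(f_mm=-6.0, s_o_mm=4.0)   # ~ -2.4 mm (virtual, behind the mirror)

# Step 2: that virtual image becomes the object for the concave primary.
separation = 10.0                            # secondary-to-primary spacing, mm
s_o2 = separation + abs(s_i1)                # ~ 12.4 mm

# If the primary's focal length matches this object distance, the output rays
# are collimated (image "at infinity"), which is what a relaxed eye wants.
f_primary = 12.4
denominator = 1.0 / f_primary - 1.0 / s_o2
print("secondary's virtual image:", round(s_i1, 2), "mm")
print("output collimated (image near infinity)" if abs(denominator) < 1e-9
      else f"output image at {1.0 / denominator:.1f} mm")
```

The key point is that the two mirrors act like a very short, folded collimator: the display sits millimeters away, yet the eye receives nearly parallel rays, as if looking at a distant image.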

Looking again at the MojoVision patent figure (above left), you will see "stray rays 347." One of the inherent problems with this design is that LEDs tend to output Lambertian (somewhat diffuse) light, which means that light rays from the LEDs are emitted over a wide range of angles. The range of angles can be reduced somewhat with microlenses, which most MicroLED microdisplays have. However, even with microlenses, a large percentage of the light rays will still miss the secondary mirror and the absorbing sidewalls and come out as "stray rays." These stray rays cause some level of overall glow everywhere in the image, but most prominently in a ring around the outside of the image. This glow and ring can be seen in the smartphone view above (upper right).
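
To put a rough number on the problem: an ideal Lambertian emitter puts a fraction sin²(θ) of its total flux into a cone of half-angle θ. The sketch below (cone angles are arbitrary examples, not measured values for Halliday's panel) shows how much light falls outside even a fairly generous collection cone; microlenses narrow the emission and improve these numbers, but they do not eliminate the stray light.

```python
# Rough illustration of why a near-Lambertian MicroLED leaks so much stray
# light: only flux emitted within the collection cone of the small secondary
# mirror is useful. For an ideal Lambertian emitter, the fraction of total
# flux inside half-angle theta is sin^2(theta). The angles below are made-up
# examples, not measured Halliday values.
import math

for half_angle_deg in (10, 20, 30, 45):
    captured = math.sin(math.radians(half_angle_deg)) ** 2
    print(f"half-angle {half_angle_deg:2d} deg: ~{captured * 100:4.1f}% captured, "
          f"~{(1 - captured) * 100:4.1f}% potentially stray or wasted")
```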

After CES, Halliday put out a YouTube video, Halliday AI Glasses: CEO Shares Insights Behind the Vision and Concept of Our Product! This video includes some through-the-optics views taken with a cell phone camera, and a few still captures are shown below. The stray light causes some glow around the individual text characters. As the amount of text in the image grows (from left to right), the ring of (stray) light around the outside gets brighter. BTW, I do appreciate that Halliday put out honest/true through-the-optics videos.

The images below are screen captures from ben's gadget reviews video of the Halliday glasses with the display on and off, taken from different angles. In the lower right image, you can see the primary mirror and the back of the secondary mirror. The upper right image shows the horizontal and up/down rotational adjustments of the projector.

Below is a diagram of a Cassegrain telescope from Wikipedia, which shows why I call the MojoVision/Halliday optics a "reverse telescope." The MicroLED display is placed roughly where the eye would be, and the projected image comes out where the light would normally enter the telescope.

Pros of Halliday's Optics

Diffractive Waveguide “Rainbow”

Halliday’s optical design has major advantages in terms of brightness/efficiency and the ability to use ordinary prescription lenses. Below is an outline of what I see as the most obvious advantages of Halliday’s optics:

  • Efficiency and Brightness – By shooting light directly into the eye, this optical design is vastly more efficient than any combiner/waveguide approach. This means that the display can be bright enough for outdoor use while using much less battery power.
  • Works with ordinary lenses of any prescription – Since the projector projects directly into the eye and not off of or through the prescription lenses, any ordinary lens will work.
  • Privacy and no forward projection – Since the light is projected directly into the eye, essentially nothing is projected forward for others to see.
  • Eliminates the capture of external light that causes rainbow artifacts in diffractive waveguides (example on the right).
  • It does not block or in any way disturb the forward view. Long-time AR expert Thad Starner considers this critically important (see AWE 2024 Panel: The Current State and Future Direction of AR Glasses and FOV Obsession).

Halliday, in their marketing material, claims that their glasses are less bulky and heavy than waveguide-based AR headsets. While this may be true of "full-featured" AR glasses, there are many "minimalist" AR glasses with waveguides that are similar in size and weight.

Cons of Halliday's Optics

Unfortunately, there are also some severe drawbacks inherent in Halliday’s optical design.

  • Having to look up with the eyes is a major fundamental problem, as has been discussed previously. I think for most people, it will be painful to use for long periods, and it will be distracting to see people looking awkwardly up to see the image.
  • The eye box, or the region where the image can be seen, is relatively small. If the projector is not aimed well or the glasses shift position, it might not be possible to see the image at all.
  • The glow/loss of contrast due to stray light is inherent in the optical design.
  • It would seem to be limited in the ability to increase resolution. The display device has to be small, or the optics will get too large to fit in the frames. Due to physics, there is a limit to how small the pixels can be, so going to a higher resolution will likely require a bigger display and bigger mirrors (see the rough sizing sketch after this list).
  • Supporting color, which many consumers will want, will likely have to wait for full-color MicroLEDs to become available at a reasonable price point.
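
As a rough illustration of the resolution point in the list above, the active area of the display scales directly with pixel count at a given pixel pitch, and the projection mirrors have to grow with it. The pitches and resolutions below are hypothetical round numbers, not Halliday's panel specs.

```python
# Back-of-the-envelope check on the resolution argument: panel width scales
# linearly with horizontal pixel count at a fixed pixel pitch. All numbers
# are hypothetical, not Halliday's actual panel specs.

def panel_width_mm(h_pixels: int, pitch_um: float) -> float:
    """Active-area width in millimeters for a given pixel count and pitch."""
    return h_pixels * pitch_um / 1000.0

for h_pixels in (320, 640, 1280):
    for pitch_um in (5.0, 2.5):
        print(f"{h_pixels:5d} px @ {pitch_um} um pitch -> "
              f"{panel_width_mm(h_pixels, pitch_um):.2f} mm wide")
```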

Halliday’s Input Methods

Halliday will be supporting capacitive touch on the frames, a capacitive touch ring (included with the glasses), and voice input. According to Halliday's CEO, the touch ring is their preferred input method. I don't know anyone who particularly likes capacitive touch on the frames, but it seems to be a necessary basic level of input on most AR glasses. Halliday says they will be supporting voice input, but the CEO says it is his least preferred input method.

The problem I find with ring and slider-type input devices is that you are typically forced into a sequential tree, where you have to go through a series of branches to get to what you want. Voice lets one go directly to what they want (if they can remember the right word). I think one of the long-term expectations of AR/AI glasses is that AI will improve so that it will better understand what you want from a few words. You certainly don't want to be going through a series of selection menus by voice with other people around.
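
As a toy illustration of the point above, the sketch below contrasts a scroll-and-click menu tree (every level costs another selection) with a voice front end that maps an utterance directly to an action. The menu items and commands are invented for illustration only; they are not Halliday's actual UI.

```python
# Toy comparison of sequential (ring/slider) navigation vs. direct voice
# dispatch. Menu structure and commands are invented for illustration only.
from typing import Optional

MENU = {
    "Notifications": {"Messages": "open_messages", "Calls": "show_calls"},
    "Navigation":    {"Walking": "nav_walking", "Cycling": "nav_cycling"},
    "Settings":      {"Brightness": "set_brightness", "Focus": "set_focus"},
}

def selections_to_reach(menu: dict, action: str, depth: int = 1) -> Optional[int]:
    """Count how many sequential menu selections are needed to reach an action."""
    for value in menu.values():
        if value == action:
            return depth
        if isinstance(value, dict):
            found = selections_to_reach(value, action, depth + 1)
            if found is not None:
                return found
    return None

# A voice/AI front end can (ideally) map a phrase straight to the action.
VOICE = {"navigate cycling": "nav_cycling", "read my messages": "open_messages"}

print("ring selections to start cycling navigation:",
      selections_to_reach(MENU, "nav_cycling"))                    # 2 levels deep
print("voice: 'navigate cycling' ->", VOICE["navigate cycling"])   # one utterance
```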

Speakers but No Camera – Like Even Realities’ G1

As I wrote in Even Realities G1: Minimalist AR Glasses with Integrated Prescription Lenses, I think it is a big miss for AR/AI glasses, including Halliday, not to have camera input. It is one of the features that seems to have made the Meta Ray-Bans take off. Beyond the obvious picture-taking, cameras enable AI image recognition, which is a gateway to endless possibilities.

Both Halliday and Even Realities have given the size and power consumption (and thus weight) excuse for not including a camera. I appreciate that supporting the power of the camera and, more importantly, the wireless data communication to a smartphone will drive up the power budget. I know about the old Google Glass "Glasshole" memes, but there are now cameras everywhere and on every smartphone, and I think camera input will be a fundamental requirement if AR/AI glasses are going to succeed.

Halliday’s Post-CES CEO Video Discussing Design Decisions

I very much appreciate Halliday’s “Halliday AI Glasses: CEO Shares Insights Behind the Vision and Concept of Our Product!” video (below). The CEO goes through the operation and what was behind many of their design decisions. While I don’t agree with all their decisions, it is interesting to understand the philosophy of their design, and I used this information when writing this article.

Conclusion

There is a lot to like about the way Halliday has marketed their product. I particularly appreciated that their latest videos include true/real images through the optics. I like the way the CEO came out and explained their design decisions. They were very successful at CES in garnering media attention, both with the press and with media influencers. But all that said, and with absolutely nothing personal intended, on a technical level I don't like it as a product.

While their optical approach is different and has some significant advantages, I believe the disadvantages are so severe that they greatly outweigh the advantages. I think the need to look up so severely is a huge problem. They may be able to get away with it in a short demo, but I believe the average person (there may be exceptions) will find it painful with longer use. Additionally, a person regularly looking up is going to look strange.

As stated in the article, I think anyone wanting to make "AI" glasses is going to have to figure out how to support a camera. I don't think that "lower power and lower weight" is a good excuse, and if "privacy/Glasshole" concerns are the issue, then AI glasses will be doomed from the start. Camera input, I think, will be the bigger potential driver of AI glasses.

I also believe Halliday's optical design is a bit of a dead end, both in terms of improving resolution, due to the size limitation of the display, and in terms of image quality, due to the stray light.

Appendix: Some of My History – The TMS320C80 Multimedia, Video and Image Processor

Thanks to this blog, I am best known today as someone who writes about Augmented and Mixed Reality. But from 1977 through 1997, I was a Graphics, Imaging, and CPU I.C. designer and architect. My work included:

  • Designed the TMS9918, the first Sprite Chip (the term “Sprite” was coined for this design). I co-defined how the sprites worked and defined the DRAM interface (the 9918 was the first consumer device to connect directly to DRAMs).
  • Led the design of two 16-bit CPUs (TMS9995 and TMS99000)
  • Defined and led the design of the first programmable graphics processor, the TMS34010 (and later the TMS34020), and was the technical leader of the TMS340 family from 1983 through 1990. The TMS34010 was the first CPU to directly interface with DRAM and Multiport Video Memory (they were co-designed).
  • Defined the first Multiport Video Memory (VRAM) and helped make it an industry standard. This development directly led to the first Synchronous DRAM (SDRAM).
  • Led the definition and technical team developing the TMS320C80 MVP, the first fully programmable image processor that integrated a RISC CPU with Floating Point and Four 32-bit DSPs on a single chip.

It was the TMS320C80 MVP effort that I was reminded of when making my statement about the importance of video/image input for AR/AI glasses. I started working on the TMS320C80 in 1989 (it took five years to develop, design, and make the 320C80, starting with a team of two people in 1989). I believed then, and I still believe now, that image input is the future. Below are links to a 1982 IEEE CG&A article and a 1994 Byte article that I wrote, which give an overview of the TMS320C80.

Karl Guttag

4 Comments

  1. Certainly reminds me of older near eye displays like the Vuzix M300. Occluded, limited FOV, and having to look away to read. I think the biggest improvement is inclusion of LLM, which is great for functionality, but has nothing to do with smart glasses.

  2. According to the information provided by Halliday, eye strain or difficulty in clearly viewing the module’s content doesn’t appear to be a significant issue. This module is not designed for long periods of staring, much like a smartwatch—when you need it, you just glance at it. Some YouTube reviewers have also mentioned that it may take a little time to find the optimal display position, but once it’s in the right spot, the display is quite clear.

  3. I don’t think Halliday is trying to position itself as “AR Glasses,” and honestly, looking up to see the screen and having a green monochrome display aren’t really negatives for Halliday. In fact, I think this display style works great in a professional setting. I don’t want people knowing I’m wearing smart glasses or being able to see what’s on my screen, and the monochrome display is subtle and low-key enough that it doesn’t get in the way of my day-to-day tasks. A lot of other smart glasses either go all-in with full-color screens or skip the screen entirely and just use audio, which, isn’t ideal for a work environment.
