This article was published on August 21, 2018

Forget smart speakers — AR headsets are the true home for AI assistants



In an interview with The Verge, Magic Leap CEO Rony Abovitz revealed that his company will be working on two AI assistants for its much-anticipated augmented reality headset Magic Leap One, which shipped to developers in select cities this month.

Given that we’re still in the throes of an artificial intelligence hype cycle, it’s easy to dismiss such comments as an attempt to jump on the AI bandwagon.

Many companies have already exaggerated the capabilities of their artificial intelligence features to create hype around their products and services and secure funding for their companies.

Therefore, it is only natural to take Magic Leap’s AI claims with a grain of salt.

That is evident in The Verge’s coverage of Magic Leap One’s debut: the company’s plans to create AI assistants for the AR headset get only a passing mention at the end of the article.

And to be fair, Abovitz needs a reality check on his vision for AI assistants (which I’ll get to later).

But the truth is that artificial intelligence and AI assistants will play a very important role in the future of augmented reality and mixed reality headsets (and to a lesser degree VR headsets). The two will probably be very interdependent.

AI assistants will find their most useful domain in AR applications. Meanwhile, AR headsets will largely depend on AI assistants to let their users accomplish tasks and interact with applications.

AI assistants are a luxury for computers and smartphones

[Image: Siri on an Apple iPhone 4s. Credit: Depositphotos]

Advances in artificial intelligence in recent years have enabled computer software to perform tasks that were previously exclusively limited to humans.

Breakthroughs in machine learning and deep learning have propelled AI subsets such as computer vision, natural language processing and generation, and voice recognition.

These technologies have enabled our computing devices to better process and understand the world surrounding them. They’ve become key components in domains such as health tech, facial recognition, autonomous vehicles and more.

But more importantly, they’re enabling us to interact with our computers and environment in new ways.

The new capabilities in AI have led to an increase in interest and funding in the industry.

Many companies are frantically looking for ways to incorporate AI into their applications, because analysts have pushed the idea that machine learning will be the differentiator in most industries.

But in many cases, the companies develop artificial intelligence applications that are either irrelevant or broken. Or both.

A stark example is AI assistants and chatbots. Under the illusion that users will like anything powered by AI, startups develop AI assistants and chatbots that try to accomplish tasks such as making purchases, ordering a cab, and scheduling appointments on smartphones and computers.

But in most cases, dedicated apps for those tasks already exist on our phones and computers, and those apps have interfaces that are less confusing and easier to use.

For instance, why would I want to tell a chatbot to call an Uber when, with a couple of taps, I can open the app itself, which provides a richer experience and more options for making the request?

The key here is that computers and smartphones have been designed for interaction through their traditional input mediums: keyboards, mice, touchscreens, buttons.

Their applications have also been designed to offer the most options and features when the user is looking directly at the screen.

So, it’s entertaining to watch The Rock talk to Siri in the famous commercial that aired last year, but the truth is that most users would rather go through their schedule and email by opening the respective apps directly, because that’s where all the features are and how those applications were meant to be used.

How many times a day do you use Siri on your iPhone? Probably very few. And even fewer on your MacBook.

And let’s face it, listening to your email while cooking dinner doesn’t make sense, because you’ll end up distracted from either your email or your cooking (the latter is worse).

Smartphones and computers are probably the wrong nail for the AI assistant hammer. To be clear, AI assistants are very powerful tools and will continue to be an integral part of computing devices.

But they’ll always be a complementary feature, a nice-to-have, not a critical must-have.

In order to make their AI assistants and chatbots more relevant, companies try to broaden their features without taking into consideration the limits of deep learning and neural networks.

But some of the promises that companies make are simply not achievable with current blends of AI. Consequently, those features either end up making too many mistakes and frustrating users, or the companies that develop them are forced to hire humans to make up for the shortcomings of their AI.

Smart speakers are a limited use case for AI assistants

[Image: Amazon Echo with Alexa]

Smart speakers are an example of hardware in which AI assistants are very pertinent. Most smart speakers have no graphical user interface: no app icons, no menus, no buttons, no touchscreen, no keyboard. The only way users can interact with them is through their microphones.

That’s why the role of AI assistants such as Alexa, Siri and Google Assistant is much more important in smart speakers.

The more capable their artificial intelligence is, the more useful the smart speakers become.

But smart speakers are meant for specific environments: your car, home, or office. You don’t carry them around with you. And there are only so many things a device can accomplish with a speaker as its sole output.

As soon as you try to perform complex, multi-step tasks with smart speakers, their limits become accentuated and they can fail in spectacular and dangerous ways.

A recent report by The Information reveals that only 2 percent of Amazon Echo users use the smart speaker for shopping. Of those who did use Echo to shop, only 10 percent did so again.

Other reports show that aside from privacy and security concerns, the inability to see product details, compare products, and choose between them is one of the major hurdles.

This further proves the point that smart speakers and their AI assistants are only suitable for simple tasks.

Some manufacturers are adding touchscreen displays to their devices to make them more capable. But those displays are more likely to push users to use the assistant less and the touchscreen’s user interface more.

AR headsets are begging for suitable user interfaces

[Image: Microsoft HoloLens AR headset]

Augmented reality is one of the fastest-growing sectors of the tech industry. The AR/VR market is slated to be worth over $100 billion in the next few years, and most of the revenue will go into AR and its more specialized subset, mixed reality.

All of the large tech companies have become involved in the space, creating AR software, hardware or both.

There are currently two main mediums for augmented reality applications: mobile (e.g. ARCore, ARKit) and head-mounted displays (HMDs) (e.g. Google Glass, Microsoft HoloLens, Magic Leap).

There are also stationary AR devices such as the Lampix, but those are specialized and limited in their use cases.

High-end smartphones such as the iPhone X and Pixel can deliver quality AR applications, and users can interact with the apps through the devices’ touchscreens and buttons. However, mobile AR has a serious flaw: it occupies at least one of the user’s hands.

Augmented reality is meant for users to interact with their immediate environment. The fact that you always have to hold your phone in front of you becomes a seriously limiting factor.

The real future of AR will be headsets that free the user’s hands for other tasks.

Unlike smart speakers, there are countless possibilities and use cases for an AR experience in which users seamlessly interact with their environment and applications. This is why AR/VR has earned the title of the “future computing platform.”

Computers and smartphones won’t go away, but AR headsets will become much more prominent. They’ve already carved a cozy niche in professional work environments, in factories, hospitals, farms, and other places where users want to access and process information while performing other tasks.

They will also make their comeback in the consumer space, if they can find the right applications.

However, AR headsets are limited in the ways users can interact with applications. Some ship with hand controllers, but controllers defeat the purpose of freeing the user’s hands and limit the headsets’ usefulness.

Other companies have developed sophisticated hand gestures for interacting with the components of AR applications. Hand gestures are a good medium, but they have their limits.

Given AR headsets’ limited field of view, hand gestures require users to keep their arms extended so that their hands enter the active area of the headset’s sensors. This can cause fatigue and, again, limit use of the headset. Gestures also fail when the user’s hands are partially occluded.

This is why AI assistants can become crucial to AR headsets. They will blend in easily with all the other features the headsets provide to their users.

AI assistants can help users accomplish tasks with their AR headsets that previously required hand gestures or swipes on the handle of smart glasses: opening and closing applications, activating features, or interacting with virtual objects.

When combined with other technologies such as eye tracking, AI assistants become even more useful. For instance, users can ask for information about the object they’re staring at, or ask the assistant to rotate, move, or otherwise manipulate a virtual object without using gestures.
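To make that concrete, here’s a minimal sketch of how gaze and voice might be combined, written in plain Python rather than any real headset SDK. Every name in it (VirtualObject, GazeTracker, handle_voice_intent, and the intents themselves) is hypothetical and only illustrates the division of labor: eye tracking resolves which object the user means by “that,” while the spoken command supplies the action.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class VirtualObject:
    # A stand-in for an object anchored in the AR scene.
    name: str
    position: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    rotation_deg: float = 0.0

@dataclass
class GazeTracker:
    # Stands in for the headset's eye-tracking API: it reports
    # which scene object the user is currently looking at.
    focused: Optional[VirtualObject] = None

def handle_voice_intent(intent: str, params: dict, gaze: GazeTracker) -> str:
    # Gaze resolves *which* object "that" refers to; the parsed
    # voice intent supplies the *action* to perform on it.
    target = gaze.focused
    if target is None:
        return "I can't tell which object you mean."
    if intent == "rotate":
        target.rotation_deg = (target.rotation_deg + params.get("degrees", 90)) % 360
        return f"Rotated {target.name} to {target.rotation_deg} degrees."
    if intent == "move":
        dx, dy, dz = params.get("offset", (0.0, 0.0, 0.0))
        x, y, z = target.position
        target.position = (x + dx, y + dy, z + dz)
        return f"Moved {target.name} to {target.position}."
    if intent == "describe":
        return f"That's {target.name}, at {target.position}."
    return f"Sorry, I don't know how to '{intent}'."

# The user stares at a virtual chair and says "rotate that 45 degrees".
gaze = GazeTracker(focused=VirtualObject("chair", position=(1.0, 0.0, 2.0)))
print(handle_voice_intent("rotate", {"degrees": 45}, gaze))  # Rotated chair to 45.0 degrees.
print(handle_voice_intent("describe", {}, gaze))             # That's chair, at (1.0, 0.0, 2.0).
```

Notice that the assistant never needs a disambiguation dialogue here; the gaze target stands in for a whole pointing gesture, which is exactly the hands-free benefit the paragraph above describes.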

In the AR space, AI assistants can quickly become a must-have feature, because they will become one of the main ways we interact with our AR applications.

Some thoughts on AI assistants

Now I’d like to turn back to Magic Leap’s vision of how AI assistants will integrate with its mixed reality headset.

According to The Verge, Magic Leap’s two AI assistants will consist of “a simple robotic creature for performing low-level tasks, and a separate human-like entity that you’d treat as an equal, to the point that it will leave the room if you’re rude.”

Here’s what I think: Trying to give AI assistants human-like appearances and characteristics is not a good idea, because it would create false impressions and expectations. Let’s not forget that we’re still in the era of narrow AI, no matter how impressive our achievements have been.

Instead of wasting energy on humanoid models that interact with users in emotional ways, Magic Leap should focus on creating an AI assistant that can perform distinct tasks and commands, even if it has no graphical appearance at all.

This is especially crucial for augmented reality headsets, because they mix virtual elements with the real world. AI assistants should help augment our interactions with the real world, not distract us from the tasks we’re accomplishing.

Imagine what you would look like to the people around you if you were arguing with your AI assistant. After all, if a graphical assistant were useful, Tony Stark would’ve thought of it and given JARVIS a face.

But those are the kinks. They will be ironed out as the industry matures. What’s important is that AI assistants are about to find a cozy and stable home in the AR industry, if the expensive headsets manage to overcome their challenges.

This story is republished from TechTalks, the blog that explores how technology is solving problems… and creating new ones.
