You walk down a hallway and arrive at a door, but when you reach out for the knob, your hand grabs nothing but thin air. You’re driving down a road toward a bridge, only to discover that the bridge never existed just as your car plunges off a cliff. In both cases, you’ve been the victim of an augmented reality hack, in which attackers compromise your AR glasses or windshield to display nonexistent content and trick you into making fatal mistakes.
These incidents haven’t happened yet, and they might sound a little too sci-fi for the moment, but they’re not unimaginable given the speed at which augmented and virtual reality innovations are advancing. According to advisory firm Digi-Capital, AR could reach $85 billion to $90 billion in revenue by 2022, and VR $10 billion to $15 billion, up from $3.9 billion in 2016. And not all of that money will go into games and entertainment. Both AR and VR are finding their way into a variety of domains, including healthcare, sports, education and professional work.
While all these inroads are helping us make the best use of these cutting-edge technologies in our daily lives, they will also expose us to new security threats. We do not yet know the precise extent and variety of those threats, but it is imperative to pause and reflect on how the expansion of AR and VR will affect our privacy and security.
Different kinds of data mean new privacy risks
When most applications were running on desktop and laptop computers, the data collection capabilities of companies running online services were limited to things such as browsing habits and interactions with user interfaces. With the advent of mobile devices, those companies found the power to track users’ locations and movements and see the world through their smartphone cameras. Wearables enabled the collection of health data, smart speakers pushed you to give away samples of your voice, and IoT devices brought with them the capability to sense the world in ways that were previously impossible.
AR and VR headsets collect information about your eye and head movements and all kinds of reactions you show to different visual content. If they’re equipped with hand props and gesture-detection technology, they can record even more data about your physical behavior. This is a domain that had previously remained closed to big tech companies, so I’m not surprised that all of them have shown interest in both technologies. Facebook made the $2 billion acquisition of VR startup Oculus in 2014 and has introduced lofty plans to create VR social experiences. The added data will help these companies better understand (and monetize) their users.
One of the privacy challenges AR/VR companies will face is securing the attention data they collect from their users. Like any other company that collects personal information, they will have to be transparent about how they store, handle and mine that data, how and whether they share it with third parties, and how they protect it on their own servers. Users should also be wary of the services they sign up for and make sure their data remains safe in the hands of the companies that provide them with services and applications.
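Data minimization is one concrete precaution platforms could take here. As a minimal sketch (the grid size, sample values and function names are hypothetical, not any vendor’s actual API), raw gaze samples can be collapsed into a coarse on-device heatmap so that the fine-grained fixation trace never leaves the headset:

```python
from collections import Counter

GRID = 4  # hypothetical: a deliberately coarse 4x4 grid

def aggregate_gaze(samples, width, height, grid=GRID):
    """Collapse raw (x, y) gaze samples into coarse bucket counts.

    Only the bucket counts would leave the device; the raw
    fixation trace is discarded, limiting what a server can infer.
    """
    counts = Counter()
    for x, y in samples:
        col = min(int(x / width * grid), grid - 1)
        row = min(int(y / height * grid), grid - 1)
        counts[(row, col)] += 1
    return dict(counts)

# Hypothetical samples on a 1920x1080 display
samples = [(100, 100), (120, 90), (1800, 1000)]
heatmap = aggregate_gaze(samples, 1920, 1080)
# Two samples fall in the top-left cell, one in the bottom-right:
# {(0, 0): 2, (3, 3): 1}
```

The trade-off is explicit: a coarser grid gives advertisers less to work with but also degrades legitimate features, which is exactly the kind of decision these companies should be transparent about.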
New ways to manipulate users
Data per se is not a bad thing. Big data and AI can do wondrous things, such as fighting cancer, improving the quality and accessibility of education, and dealing with food scarcity across the world. But in the wrong hands, they can also be used for evil purposes. Already, the ways companies such as Google, Facebook, and Amazon mine and use their users’ data have become a real privacy concern.
These companies are in a race to collect consumer information, mine that data to create digital profiles of each of their customers, and then employ those profiles in profitable ways such as displaying engaging content that will keep users glued to their applications, showing relevant ads or making tempting purchase suggestions. The data provided by AR and VR headsets will amplify their powers by giving them more precise details on how users interact with content.
Things get creepy when these companies, or other actors that use their platforms, engage in activities that steer users in intended directions by showing them targeted content. We’ve already seen this play out in past elections, where political ads were used to manipulate voters. Facebook’s ad platform is very effective because it enables advertisers to filter their audience based on fine-grained data. AR and VR will add even more parameters to those ads, including the kinds of colors users are drawn to or the screen locations they’re most likely to pay attention to.
AR and VR applications are deeply immersive experiences, which means there will be plenty of opportunities to target users in ways that are convincing and persuasive. That means more control for Big Brother and less for users.
The security risks of AR and VR
At this stage, we can only speculate on what the future security threats of AR and VR will be, such as the sci-fi scenarios we examined at the beginning of this post. But there are some things that we already know.
Augmented reality will be about overlaying graphics and information on the real world. Gamers, shoppers, architects and professional workers will rely on the information provided by AR applications to make real-world decisions. If hackers compromise an application and start showing fake information and graphical objects on a victim’s AR display or glasses, they can cause real harm. For instance, imagine a doctor checking on patients’ vital signs through an AR display, only to be presented with the wrong numbers and fail to tend to a person who needs immediate attention.
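A standard defense against this kind of overlay spoofing is to authenticate content end to end, so the headset rejects data that wasn’t produced by the legitimate backend. Here is a minimal sketch using Python’s standard hmac module; the shared key, payload fields and function names are illustrative assumptions, not a production key-management design:

```python
import hashlib
import hmac
import json

SECRET = b"device-provisioning-key"  # hypothetical shared key

def sign_overlay(payload: dict) -> dict:
    """Attach an HMAC tag so the headset can detect tampered overlays."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_overlay(msg: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    body = json.dumps(msg["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["tag"])

msg = sign_overlay({"label": "Bridge ahead", "distance_m": 120})

# An attacker altering the payload without the key is caught:
tampered = {"payload": {"label": "Bridge ahead", "distance_m": 500},
            "tag": msg["tag"]}
```

Integrity checks like this don’t stop an attacker who fully controls the rendering device, but they do raise the bar well above simply injecting fake content into the data stream.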
AR can become an effective tool for deceiving users as part of a social engineering scheme. Imagine how fake signs in the streets or on top of shops could misguide users into making mistakes. We’ll probably see some funky use of AR deceit in the next installment of the Ocean’s movies.
Another potential attack I see here is denial of service, in which users who rely on AR displays for their job are suddenly cut off from the stream of information they’re receiving. This is something that can happen in any application domain. But AR is especially concerning because many professional workers will be using the technology to carry out tasks in critical situations, where not having access to information can have disastrous or fatal consequences. This could be a surgeon suddenly losing access to vital real-time information on her AR glasses, or a driver suddenly losing sight of the road because his AR windshield turns into a black screen.
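One mitigation for this failure mode is a fail-safe watchdog: if the data feed goes silent, the display degrades to a pass-through or safe mode instead of leaving the user blind. A minimal sketch, with a hypothetical staleness threshold and an injected clock for testability:

```python
STALE_AFTER_S = 0.5  # hypothetical threshold before declaring the feed dead

class FeedWatchdog:
    """Track the last frame time and flag a silent AR data feed.

    The clock is passed in explicitly so the logic is testable;
    a real device would poll a monotonic clock instead.
    """
    def __init__(self, now: float):
        self.last_frame = now

    def on_frame(self, now: float) -> None:
        self.last_frame = now

    def is_stale(self, now: float) -> bool:
        return now - self.last_frame > STALE_AFTER_S

wd = FeedWatchdog(now=0.0)
wd.on_frame(now=0.1)
# At t=0.3 the feed is still fresh; at t=1.0 it has gone stale and
# the display should fall back to a pass-through/safe mode.
```

The key design point is that the safe state must be the default: an AR windshield that fails into transparency is an inconvenience, while one that fails into a black screen is a hazard.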
VR security threats are a bit different, and perhaps a little less critical than AR’s, since use is limited to closed environments and doesn’t involve interaction with the real physical world. Nonetheless, VR headsets cover the user’s entire field of vision, which can be dangerous if hackers take over the device. For instance, they can manipulate content in ways that cause dizziness or nausea in the user.
Lack of security precautions in designing, building and distributing IoT devices has already created a cybersecurity problem that has become very hard to fix. In their rush to hit the store shelves and avoid being left behind by competitors, IoT device manufacturers shipped millions of devices with easy-to-exploit vulnerabilities. They never thought that those innocent devices would become the key perpetrators of global cybersecurity crises such as the 2016 Dyn DDoS attack. That is a lesson the AR/VR industry should take to heart. We must think about security incidents when creating products, not after they happen.