This article was published on April 13, 2019

Smart cities are an AI-powered dystopia that’s already happening


Separating the futuristic from the dystopian in the public’s mind can be a challenge, particularly when trying to conceptualize a technology as diverse as AI recognition.

In today’s markets, a tangible example can be seen in consumer products such as the iPhone XS, with its Face ID facial-recognition unlocking. By and large, this has been warmly received, with most complaints relating to UX and design: the removal of the fingerprint scanner, and the screen notch that accommodates the Face ID sensors. But what of more sinister applications?

The gradual progression of fingerprint identification into facial recognition could be seen as a soft introduction to more complex AI foundations, the likes of which are poised to be implemented across the infrastructure of the so-called ‘smart cities’ of the future. AI recognition R&D is rapidly progressing, and the near future will see the reach of the technology extend further.

Concentrated CCTV coverage in metropolitan cities such as London has long been controversial, raising concerns of a ‘surveillance state’ among many civil-rights groups. As more sophisticated AI programs that recognize individuals are developed, there is the theoretical potential of adapting this existing surveillance infrastructure to support AI recognition.
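To give a sense of how low the technical barrier is, here is a minimal sketch of that idea, assuming Python and the open-source OpenCV library (neither of which the article specifies). It points a stock face detector at a video feed, performs detection only rather than identification of individuals, and uses a local webcam as a stand-in for a CCTV stream:

```python
# Minimal sketch: pointing an off-the-shelf face detector at a camera feed.
# Uses the Haar cascade bundled with opencv-python; a real CCTV source would
# be an RTSP URL instead of the local webcam used here.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # 0 = local webcam, standing in for a CCTV stream

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect face regions in the current frame
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Going from drawing boxes around faces to matching them against a watchlist requires only swapping in a recognition model, which is precisely why repurposing existing camera networks raises the concerns described above.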

Public opinion of this technology has not been aided by reports of its introduction by British police forces. It was used in an attempt to spot suspects at the 2017 Notting Hill Carnival, for example, with a reported failure rate of 98%: the overwhelming majority of its matches reportedly flagged the wrong people.
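That figure is less startling once base rates are considered. As a rough illustration (every number below is an assumption for the sake of the arithmetic, not a figure from the trial), even a matcher that is individually quite accurate will drown in false alarms when it scans a large crowd containing only a handful of genuine suspects:

```python
# Illustrative base-rate arithmetic -- all numbers are assumptions,
# not figures from the Notting Hill deployment.
crowd = 100_000     # faces scanned over the event
suspects = 20       # actual watchlist members present
tpr = 0.90          # assumed true-positive rate: matcher finds 90% of suspects
fpr = 0.001         # assumed false-positive rate: 0.1% of innocents flagged

true_alerts = suspects * tpr             # ~18 correct matches
false_alerts = (crowd - suspects) * fpr  # ~100 innocent people flagged
precision = true_alerts / (true_alerts + false_alerts)

print(f"Alerts: {true_alerts + false_alerts:.0f}, "
      f"of which correct: {precision:.0%}")  # ~15% correct
```

With these assumed numbers, only around 15% of alerts are genuine, an 85% failure rate from a matcher that misses just 10% of suspects and wrongly flags just 0.1% of everyone else.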

Although AI-based recognition goes far beyond the targeting of police suspects, its association with surveillance at such an early stage of its introduction could greatly impact public perception. After all, there is no shortage of Hollywood portrayals of AI recognition technology acting as an oppressive public safeguard, as seen to great effect in 2002’s ‘Minority Report.’

Binary District Journal spoke to Terence Mills, CEO of AI.io and Moonshot, to explore the possibilities of AI recognition in the coming years. Mills works heavily in the AI space, as well as sitting on the Forbes Technology Council, and is well positioned within the industry to see how new iterations of the technology could be received by the wider consumer market.

Hollywood scripts aside, what are the prospects and implications of AI recognition in the smart cities of the future?

Consumer-based recognition as it stands

Apple’s facial-recognition software is widely seen as a stepping stone: the baseline from which the public will judge subsequent developments in similar AI technologies. Mills agrees: “I think what’s going on is extraordinary. It’s going to really pave the way for what we do in the future.

“We’re already seeing it in the ability to buy things on your iPhone or online and authorize a purchase via facial recognition. I think biometrics is probably the next step – we’re pioneering a lot of work around the ability to generate and invoke purchases via voice recognition.”

One of the strongest applications associated with AI recognition technology is the enhanced security it will give consumers. “I think we’ll see banks start to pick it up in order to complete transactions,” he adds. “It’s probably one of the most secure ways to move forward and I think it’s going to really have a defining role in cyber security, as it goes well beyond thumbprint or fingerprint technology.”

AI recognition’s association with privacy concerns

To make full use of the security provided by AI recognition, the technology is likely to be woven throughout the smart cities of the future. At present, using such tech is largely a conscious choice – holding your phone up to your face to unlock it, for example. The notion of widespread cameras and sensors, especially throughout metropolitan areas, might be met with the same uncertainty as an overabundance of security cameras.

“I think facial-recognition technology is scary,” Mills says. “If you use it in the public domain, where police officers are running around with cameras, or people are walking down the street with a technology that can recognize others via their face, that’s a worrying proposition. It’s intrusive, and it definitely presents challenges in relation to privacy.”

The integration of AI recognition technology into society will make a person’s identity a valuable commodity. As AI integration progresses, a person’s face, voice or other defining personal traits will likely become intertwined with that individual’s entire digital footprint. Despite the levels of account protection made possible by the sophistication of the tech, there remains a weak link: those with access to the data itself.

“Why is [what’s revealed by] someone’s face any different from [what’s revealed by] their social security number? Think about it from that perspective. If I’m walking down the street and I pass you, and I have a device that can recognize your face and tell me all sorts of information about you, that’s really scary,” says Mills.

The level of access that various official entities will have to a user’s facial/biometric data is unclear, but any access at all raises the potential for abuse. “Are police officers capable of carrying around that technology?” Mills asks. “What about the military? The thought of police officers carrying that stuff around scares people. I think there’s a big discussion to be had about the invasion of privacy and it’s going to be had over the coming weeks, months and years.”

Citizen tracking and targeting

The implications of a person’s exposure to constant identification aren’t restricted to the state’s use of the technology. At a consumer level, there is the potential for users to open themselves up to entirely new levels of targeted advertising through retailers’ use of AI identification in stores.

“AI technology puts you in a particular place at a particular time as a consumer – the company knows you’re there because you’ve used their AI technology to get there, whether that be a theatre, in an airport, on an aeroplane,” Mills explains.

“Think about it – they know you’re there for the next two or four hours or whatever. So, if they decide to, they can sell your data to somebody who wants to proactively market you. Do I think that limits the development of AI? I don’t. Do I think people are going to want to ensure their privacy has been thought about? Absolutely.”

When it comes to the development of this kind of AI, public engagement is an important factor, and one that may well influence how quickly officials regulate this type of location-based tracking. Data-protection laws such as the European Union’s General Data Protection Regulation (GDPR) have already sharpened the focus on proper data use, laying the foundations for continued vigilance. Indeed, Mills notes that regulators have already begun to consider the framework needed for these future AI platforms.

The current state of AI development

Mills points out a potential issue with AI that may influence the natural course of regulation. “The AI situation is peculiar. We’re seeing this weird situation going on all over the world – I’m certainly seeing it on the ground in the US, China and Europe – and that’s this strange phenomenon behind AI.

“All of a sudden, all these developers said, ‘There’s a huge opportunity here, so let’s go out and get it, develop our own solution, and look for the problem our solution solves.’ But we never used to do that! That was not the way we did business in technology – you started with the problem and then developed the solution, not the other way around!”

Development for development’s sake could pose a real problem as AI makes the transition from the casual consumer tech of the present into the essential infrastructural tech of the future. Public trust needs to be there, but so does the public’s recognition that these AI solutions actually solve a specific problem.

“There’s a pullback happening right now,” Mills says. “We’re trying to figure things out – we’re thinking, ‘OK, AI has a place, and the place it has is that we look for a business problem and then use the technology and the science to provide a solution for it.’ But that solution needs to be explainable and accountable.”

Creating a transparent system

This concept of accountability will most likely be a crucial condition of the public’s willingness to accept AI recognition as an ingrained part of future cities. As mentioned previously, the potential applications for the technology are many, but they must be developed within a framework of problem-solving.

Crucially, the way in which the public sector builds its own surveillance framework will be key in shaping public opinion. A lack of transparency as to how, for instance, police forces use AI recognition programs will only encourage speculation about the potential for bias within these systems. If such bias exists, and it’s not addressed quickly, the prospect of a dystopian surveillance state could seem a lot more plausible.

This post was written by John Murray for Binary District, an international collaborative technology community which creates unique competency-based workshops and events on new technologies. Follow them on Twitter.

