This article was published on October 30, 2021

Face recognition in schools normalizes surveillance — and threatens kids’ privacy

How much are children at risk?

In conversation with my teenage daughter last week, I pointed out a news report which flagged concerns over the use of facial recognition technologies in several school canteens in North Ayrshire, Scotland. Nine schools in the area recently launched this practice as a means to take payment for lunches more quickly and minimize COVID risk, though they’ve since paused rolling out the technology.

When I asked my daughter if she would have any concerns about the use of facial recognition technology in her school canteen, she casually replied: “Not really. It would make things a lot faster at checkout though.”

Her words validate the concern that children are much less aware of their data rights than adults. And although there are special provisions and safeguards for children under a range of data protection laws, the use of facial recognition technology on children could pose unique privacy risks.

Facial recognition technologies identify and authenticate people by detecting, capturing, and matching their faces against images in a database. The technologies are powered by artificial intelligence (AI), specifically machine learning.

Machine learning predicts outcomes based on historical data that has been fed into the system. So for facial recognition, machine learning predicts the identity associated with a digital representation of a person’s face, or “face print”, by comparing it against a database of facial images. The software improves with this experience, over time learning to generate predictions more accurately.
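To make the matching step concrete, here is a minimal sketch, assuming the face images have already been converted into numerical embedding vectors (“face prints”) by some face-recognition model. The function names, the similarity threshold, and the toy data are illustrative assumptions, not details of any system mentioned in this article.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def identify(probe: np.ndarray, database: dict, threshold: float = 0.8):
    """Return the best-matching identity, or None if no stored face print is similar enough.

    `database` maps an identity label to a stored embedding ("face print").
    The 0.8 threshold is an arbitrary, illustrative value.
    """
    best_name, best_score = None, -1.0
    for name, stored in database.items():
        score = cosine_similarity(probe, stored)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)


# Toy example with made-up 4-dimensional "embeddings".
rng = np.random.default_rng(0)
db = {"student_a": rng.normal(size=4), "student_b": rng.normal(size=4)}
probe = db["student_a"] + rng.normal(scale=0.05, size=4)  # a noisy capture of student_a's face
print(identify(probe, db))
```

A real system would also have to decide what happens when no match clears the threshold and, as the article goes on to discuss, how those stored face prints are secured, shared, and retained.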

Facial recognition technology is now used in a variety of ways, such as to verify the identity of employees, unlock personal smartphones, tag people on social media platforms like Facebook, and even for surveillance purposes in some countries.

Facial recognition technology on its own is not the problem. Rather, the issue is how it’s used and, in this instance, the fact that the technology has now infiltrated school corridors and targeted a vulnerable demographic: children.

So what are the privacy issues for children?

Your face print is your data, so for any facial recognition system, it’s important to understand how the image databases are collated and stored. Although I may grudgingly agree to the use of facial recognition technology to enter a concert venue, I wouldn’t be thrilled if my face print was retained for “other commercial purposes of the company” (a phrase that appears quite commonly in the fine print of ticket sales regarding the use of personal data).

If facial recognition technology is used in school settings, we’ll need clear information as to if and how students’ images will be used beyond the purpose of the lunch queue. For example, are they going to be shared with any third parties, and for what purpose? Issues could arise, say, if face prints are linked to other data on the child, like their lunch preferences. Third parties could theoretically use this data for marketing purposes.

We would also need information as to how the images would be protected. If the students’ face prints aren’t properly secured, or the system isn’t robust enough to fend off hackers, this creates cyber-security risks. It may be possible for hackers to link children’s face prints to other data about them, and track them.

The heightened privacy risk surrounding the use of facial recognition technologies in schools also relates to informed consent. Although UK data protection law specifies that children aged 13 and over can consent to the processing of their personal data, this doesn’t mean they fully understand the implications. For example, one survey found children between ages eight and 15 had difficulty understanding the terms and conditions of Instagram.

Children, parents, and guardians should be provided with nothing less than full information, couched in language children can easily understand. Any data subject, including a child, has the right to know exactly how their personal data will be processed, shared, and stored, and can specify the conditions under which their consent will apply. Anything less than prudence and transparency will risk jeopardizing children’s privacy.

Normalizing the surveillance of children?

These are just some of the questions the use of facial recognition technologies in schools raises. Facial recognition technology also carries other risks, such as errors, which could, for example, lead to students being charged incorrectly. And as with any AI system, we should be concerned about whether the algorithms are free from bias and whether the training data is clean, complete, and representative.

Importantly, employing facial recognition technologies in schools also goes some way to normalizing the surveillance of children. It’s possible that the knowledge they are being tracked in this way could affect some children’s wellbeing.

It’s not surprising that the UK’s data watchdog, the Information Commissioner’s Office, has stepped in to investigate the use of facial recognition technologies in school lunch queues. And in light of the inquiry, it’s pleasing to see North Ayrshire Council has paused rolling out the practice.

But as we move further into the digital age, it’s possible the use of facial recognition technologies among schoolchildren will resume, and even be taken up more widely. If this is to happen, the use of facial recognition must yield substantially more benefits than risks, taking into account the special circumstances of using the technology on children.

Article by Pin Lean Lau, Lecturer in Bio-Law, Brunel Law School | Centre for Artificial Intelligence: Social & Digital Innovations, Brunel University London

This article is republished from The Conversation under a Creative Commons license. Read the original article.
