
This article was published on March 3, 2018

Voice assistants aren’t built for kids — we need to fix that

Voice assistants have recently made the leap from personal smart devices into the home via dedicated smart speakers such as the Amazon Echo and Google Home. It’s been estimated that 39 million Americans now own a voice-activated smart home speaker, and these assistants are now being integrated into other smart devices around the home: TVs, lights, fridges, headphones, and more.

It’s clear that voice as an interface has suddenly gone mainstream, and we regularly see advertisements showing families enjoying and interacting with their home devices. But as voice interactions become ubiquitous in the home, I think there’s an important question we need to ask ourselves: “Are these voice interfaces appropriate for children?”

As you might expect, there’s no simple answer to that question. However, we can get closer to one by looking at a few key aspects of the issue that must be addressed.

Systems must engage with kids appropriately

Firstly, the system must respond appropriately to children. White House press secretary Sarah Huckabee Sanders recently called out Amazon after she claimed that her two-year-old child was able to use the Echo to purchase an $80 Batman toy. It’s worth noting that simply calling out the word ‘Batman,’ even repeatedly, is not going to buy a Batman toy for you on Amazon; the UI simply doesn’t work that way. However, the incident does highlight an important issue: home devices should not give children access to the same environments as adults.

Alexa is not alone in her indiscriminate behavior. My eight-year-old was recently using Siri on my phone and the word ‘bitch’ appeared on screen. Note that she had not actually said the word, which speaks to Siri’s poor recognition accuracy for kids’ voices. Shocked, she showed me the phone. While Siri did reply with an amusing ‘there’s no need for that,’ it should have identified that a child was speaking and responded appropriately, choosing not to print the obscenity to the screen.

For voice assistants to work accurately and appropriately, they must be able to identify children’s voices and create, implement, and adhere to proper protocols for how they handle children’s requests. Otherwise, we’ll just see more incidents like the ones above.
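
To make that concrete, here is a minimal sketch in Python of the kind of protocol I have in mind. The `Transcription` type, the `speaker_is_child` flag, and the blocked-word list are hypothetical stand-ins for a real child-voice classifier and content filter, not any vendor’s actual API.

```python
from dataclasses import dataclass

# Hypothetical blocked-word list; a real assistant would rely on a proper
# content-moderation service rather than a hard-coded set.
BLOCKED_TERMS = {"bitch"}


@dataclass
class Transcription:
    text: str
    speaker_is_child: bool  # assumed output of a child-voice classifier
    confidence: float


def render_transcript(transcription: Transcription) -> str:
    """Decide what, if anything, to show on screen for a recognized utterance."""
    words = set(transcription.text.lower().split())
    if transcription.speaker_is_child and words & BLOCKED_TERMS:
        # Never echo an obscenity back to a child, even if it was misheard.
        return "Sorry, I didn't catch that."
    return transcription.text


if __name__ == "__main__":
    # Simulated misrecognition of a child's request, as in the Siri example above.
    heard = Transcription(text="play the batman bitch", speaker_is_child=True, confidence=0.4)
    print(render_transcript(heard))  # prints: Sorry, I didn't catch that.
```

The point is less the filtering itself than the routing: once the system believes a child is speaking, every downstream response should pass through a child-appropriate policy.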

Privacy and data protection for kids’ voice data

Secondly, children’s voice data must be processed and stored in accordance with data privacy and protection laws around the world. This raises many more challenges which, so far, seem to have been mostly pushed aside. We need to understand more about how our children’s data is being used and stored.

The Children’s Online Privacy Protection Act (COPPA), enforced by the FTC, dictates how operators must handle personally identifiable information, such as voice recordings, collected from children under the age of 13. To comply with COPPA, operators must obtain explicit permission from parents or guardians before collecting, processing, and storing kids’ voice data. In Europe, the General Data Protection Regulation (GDPR), which comes into force in May 2018, is set to match COPPA in these respects.

A mainstream example can be found in an episode of HBO’s comedy series Silicon Valley, which highlighted the problem of collecting children’s data without permission: the character Dinesh unknowingly violated the rules, resulting in billions of dollars’ worth of fines.

Home devices such as Google Home and Amazon’s Echo work pretty much out of the box, letting you immediately use voice commands for simple things like setting timers, telling jokes, reading the news, and checking weather forecasts. While both provide the option of creating user profiles for kids, through which parents can give their consent, you are not required to do so before using the device.

So, what happens if a parent has not yet given permission for their child’s data to be collected, but the child is using the device anyway? And what about when a parent has given permission for their own child’s data to be collected, but a friend’s child is visiting the house?

To address such scenarios, the FTC recently relaxed the rule just enough that common tasks like voice searches can be performed for kids without risk to the company. Specifically, the FTC will not take enforcement action against an operator for failing to obtain parental consent before collecting an audio file of a child’s voice, provided the audio is collected solely as a replacement for written words, such as to perform a search or to fulfill a verbal instruction or request, and is held for a brief time and only for that purpose. So, we’re all good, right?

Well, yes, but just so we’re clear: where there is no parental consent, the FTC now requires operators to immediately delete children’s voice data. That includes not storing the data or extracting information from it to improve the voice service. It’s worth noting that the EU’s GDPR has not provided any such leniency or guidance on this matter. So, is this actually being done? Can all operators in this space hold their hands up and confirm that they immediately delete voice data for every kid they don’t have consent for? We should certainly hope so, though it seems unlikely.
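
As an illustration only, here is a minimal Python sketch of what “collected solely as a replacement of written words” could look like in practice; the `transcribe` and `fulfil` functions are hypothetical stand-ins, not any real service’s API. The audio exists just long enough to be turned into text, is deleted immediately afterwards, and only the text is acted on.

```python
import os
import tempfile


def transcribe(audio_path: str) -> str:
    """Stand-in for a real speech-to-text call; returns the recognized words."""
    return "what is the weather tomorrow"


def fulfil(text: str) -> str:
    """Hypothetical fulfilment step; only the transcribed text is used."""
    return f"Here is the forecast for: {text!r}"


def handle_request_without_consent(audio_bytes: bytes) -> str:
    """Handle a child's voice request when no parental consent is on file.

    The audio is used only as a replacement for typed input, held briefly,
    and deleted immediately afterwards; it is never stored or reused to
    improve the recognizer.
    """
    fd, path = tempfile.mkstemp(suffix=".wav")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(audio_bytes)
        text = transcribe(path)  # use the audio solely to recover the words
        return fulfil(text)      # act on the text, never on the audio
    finally:
        os.remove(path)          # immediate deletion, consent or no consent


if __name__ == "__main__":
    print(handle_request_without_consent(b"\x00\x01 fake audio bytes"))
```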

Companies must state how they will use the data

Lastly, all operators need to clearly articulate how they intend to use legally acquired kids’ voice data. Will the voice data simply be used to improve speech recognition services, with the data remaining inside the company? Or will children’s voice data be sold to third parties for data mining and marketing purposes?

We must acknowledge and expect that children will use these voice assistants and smart devices when they are so easily accessible in the home. Voice interactions have many benefits, allowing kids to interact naturally with technology without the need for screens. As the voice interface becomes more integrated into our appliances, our cars, and our personal devices, what we need are child-specific interfaces with appropriate responses. That means not giving children open access to web browsing, purchasing, inappropriate language or content, video calling to your contacts, and so on.
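
As a rough sketch of what that gating could look like, here is a minimal Python example; the profile structure and intent names are hypothetical and do not correspond to any existing assistant’s API.

```python
from dataclasses import dataclass

# Intents a child profile should never be able to trigger (hypothetical names).
CHILD_BLOCKED_INTENTS = {"purchase", "web_browse", "video_call", "explicit_content"}


@dataclass
class Profile:
    name: str
    is_child: bool
    parental_consent: bool = False


def handle_intent(profile: Profile, intent: str) -> str:
    """Route an intent, refusing anything on the blocked list for child profiles."""
    if profile.is_child and intent in CHILD_BLOCKED_INTENTS:
        return "Ask a grown-up to help with that."
    return f"OK, doing {intent!r}."


if __name__ == "__main__":
    kid = Profile(name="eight-year-old", is_child=True)
    print(handle_intent(kid, "set_timer"))  # allowed
    print(handle_intent(kid, "purchase"))   # refused
```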

While full compliance with the US COPPA and the EU GDPR is required by law, grey areas remain in this fast-changing technology space. The companies leading the voice assistant market need to provide more transparency with respect to children’s voice data: Do operators immediately distinguish kids’ voices from adults’? If so, is all kids’ voice data immediately deleted where permission has not been obtained? And where permission has been granted, how is the data used and who has access to it?

We need answers and we need them now.
