Algorithms have taken over our lives. That’s not an exaggeration, nor the plot of a futuristic movie: it’s the reality. Algorithms used to be something we could use to our benefit, to automate everyday processes like buying a train ticket or to help us choose where to go on vacation. Now, we’ve reached the point where almost all of our decisions are shaped, or outright determined, by algorithms.
This transformation is of particular interest to Sander Klous, Partner for Data and Analytics at KPMG in the Netherlands. For a firm like KPMG – one of the Big Four auditors – which relies on analytics to guide business decisions, the need for the data and algorithms to be accurate is immense. That, coupled with the fact that companies that collect personal data, like Google, know everything about us, means it’s now more important than ever that we can trust the systems we are using to manage our data ethically.
Mikko Honkanen, co-founder at Vainu, a prospecting and account insights platform, notes that “companies that collect data know personal things about you even before your closest friends and family”. Say you want a divorce. Before you discuss this with your spouse, you might look up divorce lawyers on the internet, or search for advice about how to navigate a separation when you have children. In this sense, the company that collects your data knows you want a divorce even before you tell your spouse. And the troubles don’t stop there.
The past couple of years have presented a whole host of examples showing what happens when the management of data and algorithms doesn’t go to plan. Sander Klous, speaking at TQ, cites a number of instances in which the ethics of data collection and storage has come into play.
Take Ashley Madison, the dating site for married people, as one such example. In 2015, the company suffered a massive data breach. The personal details of everyone signed up to the site were published online, and as a result, some people who had been members committed suicide. Back in 2011, TomTom came under fire for collecting data on how fast drivers drive and selling it to the police. And engineers are still grappling with how to program ethical decision-making into self-driving cars.
Data is also directing our decision-making in more subtle ways. Klous points out that 15 years ago, if you didn’t want to buy a train ticket, you just hopped on without one and hoped the conductor didn’t pass by. Now, however, the algorithms in the ticket-checking machines at train stations stop you from passing the barrier if you don’t have a valid ticket. “This actually impacts you more than a law prohibiting you from doing something”, says Klous. And even when we’re not forced into anything, faulty algorithms can lead us into actions we would never take if we were relying on common sense alone – like when your satnav tells you to drive into a ditch, or onto a newly-laid road covered in wet concrete.
Balancing good versus bad
With data being a force for good in many instances, Simon van Duivenvoorde, Managing Director of Wakoopa, describes the delicate balance between data richness and data privacy.
“On the one hand, having a lot of data allows us to better understand our consumers and optimize our service to benefit them. But how do you balance the ethical aspects with the commercial benefit?”
Wakoopa – which unlocks user-centric behavioral data in 25 different markets worldwide – achieves this balance by anonymizing its data. This way, the company can provide market research companies with the data they need, without sacrificing the privacy of the consumers. Van Duivenvoorde notes that this use of a human-centric model is so important because “even though data might seem very anonymous, and the size of it might appear a shield to hide behind, it’s really not – it’s really personal”.
Vainu’s Mikko Honkanen agrees. Vainu counts 100 million companies in its database and uses data-driven processes in almost every part of its business, from sales and customer success to marketing and software development. To navigate the tension between ethics and data richness, Honkanen explains that Vainu relies on public and open data. “We also make sure that the data is really critical to collect”, he adds, to minimize the public’s worry that companies are hoarding unnecessary data for the sake of it.
Not all bad news
It’s also important to remember that the omnipresence of data in our lives is not a worrying thing per se, and the media plays a role in over-hyping the situation. We’re often presented with a stark choice: give up our privacy so that terrorist attacks can be intercepted before they happen, or prioritize personal privacy and leave the government knowing nothing about anyone, including those with nefarious motives. But as Sander Klous points out, “this typical political dichotomy presented to us is wrong – it’s not actually a situation of binaries”.
These days, we can build systems that ensure both privacy and safety at the same time – Klous explains that the next step will be ensuring one is not sacrificed for the benefit of the other. Although this may come as good news, every minute counts. Mikko Honkanen ponders the amount of personal information search engines already hold about us today, and wonders if “tomorrow they might know something about me before I know myself”. We as a society are now racing against the clock to place controls on the algorithms that control us, so that they always perform as intended. As an expert on data and analytics, Klous is blunt: “we have about five years to implement the constraints we want on these algorithms” – otherwise, we’ll have a real issue on our hands.
This post was brought to you by TQ, a curated tech hub in the heart of Amsterdam. We help push startups towards exponential growth.