This article was published on September 2, 2017

Algorithms should work in favor of customers — for everyone’s sake

When I started my healthy snacks startup five years ago, I knew I would have to juggle a lot of balls at the same time to make it successful.

I would have to make terrific hiring decisions, line up top suppliers for the choicest ingredients in the snack bars we would produce, buy the best manufacturing equipment we could find to ensure product quality and shelf life, build a network to distribute and sell our products, and figure out the best way to promote them.

I would never have added keeping on top of algorithms to my — pardon the pun — success equation. And yet I have been learning as much as I can about them because in this brave new business world, they figure in a company’s success. One thing I’ve learned is that if you don’t watch them, they can hurt customers — and when customers are unhappy, there’s a boomerang effect: You’re hurting your business.

Let’s define algorithms for a moment. They are the engines behind much of today’s artificial intelligence: sets of step-by-step mathematical instructions written to achieve particular goals, instructions that can be automatically refined over time to get closer and closer to those goals.
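
To make that concrete, here is a toy sketch in Python of an algorithm chipping away at a goal step by step. The goal here is profit, and the demand curve is invented purely for illustration; nothing in it comes from a real system.

```python
# Toy example: an algorithm with an explicit goal (profit) that refines
# its answer step by step. The demand curve is invented for illustration.

def profit(price: float) -> float:
    """Profit = price * demand, with demand falling as price rises."""
    demand = max(0.0, 100.0 - 2.0 * price)
    return price * demand

# Hill-climb toward the goal: try small price changes, keep what helps.
price, step = 10.0, 1.0
for _ in range(100):
    for candidate in (price + step, price - step):
        if profit(candidate) > profit(price):
            price = candidate
    step *= 0.95  # make ever finer adjustments over time

print(f"Price the algorithm settled on: ${price:.2f}")  # near the optimum of $25
```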

More and more articles are popping up on whether the algorithms that some companies use to maximize their business — and thus their profits — are harming consumers. The articles have pointed in particular to giants such as Amazon, Google and Facebook, but also to pharmaceutical companies, credit-scoring companies and others.

You can write algorithms to focus on various goals — in business, for example, maximizing sales or profits. But some goals can shortchange consumers. Not only can poor customer satisfaction backfire on a company, but using algorithms solely to maximize profit raises the ethical question of whether a company is doing right by its customers.

In addition to the business side of algorithms, there is a social side to them as well: Intentionally or not, they can discriminate against minorities and other groups.

Before I go into their detrimental social side, let me give you an example of an algorithm hurting consumers. And, no, I’m not going to mention the recent articles about Amazon using its algorithms to maximize profit while shortchanging consumers. That’s received tons of attention already.

My example is from a February 2016 article in Harvard Business Review. “A popular consumer packaged goods company was purchasing products cheaply in China and selling them in the United States,” the article said. “It selected these products after running an algorithm that forecast which ones would sell the most.”

As predicted, sales soared. But several months later, customers began returning the items. The algorithm had failed to figure in product quality. In doing so, it had hurt both consumers and the company.

A big company like the one in this example can suffer a short-term loss, fix the problem, and rebound, of course. But an algorithm goof like the one I mentioned has the potential to deal a fatal blow to a small company or startup.

The lesson is clear. Whether you’re a big guy or small potatoes, you need to learn about algorithms — what they do, how they can help you, and also the potential harm they can cause to your business and customers. And if you use them to try to improve your business, keep in mind that providing value and satisfaction to a consumer should be just as much a part of an algorithm as increasing sales and profits.
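
If I were to sketch what that lesson might look like in code, it could be something like the following. Everything here is hypothetical: the product fields, the weights, and the numbers are invented for illustration, not taken from any real company's system. The point is simply that predicted quality and customer satisfaction sit in the scoring formula right next to profit.

```python
# Hedged sketch: score candidate products on more than forecast profit.
# All fields, weights, and numbers below are hypothetical.

from dataclasses import dataclass

@dataclass
class Product:
    name: str
    forecast_profit: float        # expected profit per unit, in dollars
    predicted_return_rate: float  # 0.0 to 1.0, a proxy for quality problems
    satisfaction_score: float     # 0.0 to 1.0, e.g. from survey or review data

def score(p: Product, w_profit=1.0, w_quality=50.0, w_satisfaction=20.0) -> float:
    """Blend profit with customer-facing signals instead of using profit alone."""
    return (w_profit * p.forecast_profit
            - w_quality * p.predicted_return_rate
            + w_satisfaction * p.satisfaction_score)

catalog = [
    Product("cheap import", forecast_profit=12.0,
            predicted_return_rate=0.30, satisfaction_score=0.4),
    Product("solid staple", forecast_profit=8.0,
            predicted_return_rate=0.02, satisfaction_score=0.9),
]

# Forecast profit alone would pick the cheap import; the blended score does not.
best = max(catalog, key=score)
print(best.name)  # -> solid staple
```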

One way that algorithms can hurt consumers is by discriminating against them on the basis of race, class, income level or other factors. Algorithms are mathematical tools, so they’re not discriminating because they have bias in their hearts and minds — because they don’t have hearts and minds.

Their discrimination is unintentional. But it’s there nonetheless, and ethical business leaders should take steps to prevent it and, if they see it, reverse it. They can do this by having the people who build their algorithms take factors beyond profit maximization into account.

The Obama administration released a policy report on this very topic in 2014. It warned that automated decision-making “raises difficult questions about how to ensure that discriminatory effects resulting from automated decision processes, whether intended or not, can be detected, measured, and redressed.”

The good news is that the report prompted a number of experts to take a close look at the problem and develop principles of algorithm-use best practices and accountability.

Consider a well-known example from criminal justice. The goal of COMPAS, an algorithm that assigned a risk-of-repeat-offense score to defendants in criminal proceedings, was to help justice-system players make decisions on setting bonds, paroling offenders and the like.

However, the algorithm had a harmful unintended consequence. Investigative journalists from ProPublica discovered that the system was almost twice as likely to wrongly assign a high risk score to African-American defendants as to white ones. It erred on the side of false positives, labeling many black defendants as greater risks than they actually were, which in turn exposed them to harsher treatment.
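
The kind of disparity ProPublica measured can be checked with a fairly simple audit: compare false positive rates, meaning the share of people who were labeled high risk but did not go on to reoffend, across groups. Here is a minimal sketch with made-up records; the real analysis, of course, used thousands of actual cases.

```python
# Minimal fairness audit sketch: compare false positive rates across groups.
# The records below are made up purely for illustration.

from collections import defaultdict

# (group, labeled_high_risk, actually_reoffended) -- hypothetical records
records = [
    ("black", True, False), ("black", True, True), ("black", True, False),
    ("black", False, False), ("white", True, True), ("white", False, False),
    ("white", False, False), ("white", True, False),
]

fp = defaultdict(int)   # labeled high risk but did not reoffend
neg = defaultdict(int)  # everyone who did not reoffend

for group, high_risk, reoffended in records:
    if not reoffended:
        neg[group] += 1
        if high_risk:
            fp[group] += 1

for group in sorted(neg):
    rate = fp[group] / neg[group]
    print(f"{group}: false positive rate = {rate:.0%}")
```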

The bottom line was that COMPAS discriminated against African-Americans in a way that harmed them.

Algorithms can also discriminate against women.

The news organization Quartz looked at the way four major digital assistants — Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, and Google’s Google Home — respond to sexual harassment.

None of the programmed responses involve fighting back. Instead, they range from embarrassed to flirty. Siri’s “I’d blush if I could” is one example, and Alexa’s “Well, thanks for the feedback” another.

The narrow range of reactions portrays women as passive acceptors of abuse, reinforcing sexist ideas, Quartz reported.

How should business — and the world at large — try to prevent algorithms from discriminating against various groups?

One way is for businesses to be more transparent about what they design their algorithms to do. Many people are deeply suspicious of the motives behind algorithms. They are convinced that companies design them to serve their own interests rather than those of consumers and other swaths of the population.

I’m convinced that making algorithm approaches transparent would actually help companies by making them think beyond maximizing sales and profits.

These days the most respected companies take social responsibility into account in their decision-making. Why not create algorithms that do the same? When I talk about social responsibility, I use the term broadly to include consumer interests as well as “greater public good” interests such as protecting the environment and promoting the arts.

As someone running a startup that is growing rapidly without algorithms, I have yet to use them. But sooner or later, I probably will.

What I’ve learned about this so far has convinced me that any algorithm my startup uses must be designed with both the company and consumers in mind. If we continue putting the customer first, as we’d like to believe we’ve done so far, I’m convinced we’ll be even more successful. After all, customer satisfaction is the key to any business.

And one thing more. Using algorithms that take the customers’ needs — such as fair prices — into account is not just good business. It’s the right thing to do.
