
This article was published on May 8, 2022

Your company’s AI implementation isn’t perfect — and that’s okay

Organizations that wait for the technology to be perfected risk missing out on its benefits


Image by: Shutterstock

I like imperfect things. I like my sweater with its holes at the elbows, that painting of mine that my cat walked over while it was drying, that source code I’m using for my doctorate that never seems to execute as I’d expected it to. I like it that way, though. Imperfection makes things more interesting.

When you’re talking about business, however, there’s money to be made — potentially lots of money. And unlike other parts of life, in the business world, a small imperfection might result in millions of dollars in losses.

That’s scary. What’s even scarier is when these losses happen because engineers make mistakes while trying to implement a new and rapidly evolving technology whose risks nobody fully understands yet and the regulations of which are just getting written. If business leaders are hesitant about this potential minefield, it just proves that they’re human.

I’m talking about AI. However scary it may be, though, many people, including many business leaders, remain incredibly enthusiastic about AI. The potential upsides are huge because AI can finish in seconds processes that used to take hours, a time savings of several orders of magnitude. Given such returns, it’s no wonder that companies are pouring billions of dollars into AI every year.

Despite this massive investment, AI uptake is still fitful. The difficulty comes not only from uncertainty regarding the risks and regulations but also from the fact that many businesses fail, at least initially, to make a realistic assessment of the types of changes that AI can and can’t bring.

The all-or-nothing mentality

Larry Clark shared an anecdote in Harvard Business Review that perfectly encapsulates the problem. He spoke with a consultant whose client was making correct predictions about their industry 25 percent of the time. The consultant advised them that an AI solution could get this number up to 50 percent. The team’s executive, however, refused to implement a solution that was “wrong half the time.”

A failure rate of 50 percent is, no doubt, enormous in most cases. But it would still have been twice as good as the existing solution!

Many executives get disappointed when they see that AI won’t revolutionize their company overnight. But as Kevin Kelly, founding editor of Wired, put it: “The future happens very slowly and then all at once.”

I think this rule applies to many areas in tech, and especially AI. Sure, great new developments are on the horizon, but you can’t expect them to happen tomorrow. Good things need time to develop. Even in the fast-paced world of tech, patience is a virtue.

Leaders, therefore, shouldn’t be disgruntled when AI doesn’t suddenly transform their business into the next Google. In fact, if a new AI solution brings many small improvements over time, that may be more valuable in the long run anyway. A big disruption tends to upend business processes that have been standard up to that point, and that shake-up can be a risky move despite its potential upsides.

https://miro.medium.com/max/1400/1*tFMvAwXDFxL1wFSHX8iOXg.jpeg

AI isn’t always the best solution. Image by author

Keeping it simple

If you’ve worked with AI before, you’ve no doubt heard of concepts including accuracy, precision, recall, F1 score, underfitting, overfitting, false positives and false negatives. But most business leaders will look at you like you’re an alien if you come to them with technical jargon like that. Executives care about results more than the technical details.
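To make that jargon a little more concrete, here is a minimal sketch of how the most common metrics fall out of simple counts of right and wrong predictions. The labels are made up for a hypothetical fraud-flagging model; nothing here comes from a real data set.

```python
# A minimal, illustrative sketch of the jargon above. Labels are made up:
# 1 = "fraudulent case", 0 = "legitimate case".
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]   # what actually happened
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]   # what the model predicted

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives

accuracy = (tp + tn) / len(y_true)                  # share of all predictions that were right
precision = tp / (tp + fp)                          # of the flagged cases, how many were real
recall = tp / (tp + fn)                             # of the real cases, how many were caught
f1 = 2 * precision * recall / (precision + recall)  # balance of precision and recall

print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```

None of these numbers means anything to an executive on its own, which is exactly the point the next section makes.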

Ron Glozman, who founded a company that builds AI solutions for the insurance industry, has made this exact point. What really matters is whether the AI solution makes things easier for human workers, reduces costs or increases margins. Whether or not you get spectacular results on a technical level doesn’t matter so much as long as your solution improves the status quo in your company.

Of course, data scientists will continue to phrase their goals in technical jargon because it’s useful for them. In order to translate this jargon into business terms, though, executives need to work closely with data scientists, involve them in business operations, and never stop asking them how the performance of different technical metrics might impact the business as a whole.
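One way to hold that conversation is to turn error rates into rough money terms. The sketch below is purely illustrative: every volume and cost figure is a hypothetical assumption, not something from the article or from any real business.

```python
# A rough, purely illustrative translation of model error rates into money.
# Every number below is a hypothetical assumption.
cases_per_month = 10_000        # cases the model screens each month (assumption)
missed_case_rate = 0.10         # share of cases the model wrongly waves through (assumption)
false_alarm_rate = 0.05         # share of cases the model wrongly flags (assumption)
cost_per_missed_case = 500.0    # e.g. a payout that shouldn't have happened (assumption)
cost_per_false_alarm = 20.0     # e.g. an hour of unnecessary manual review (assumption)

monthly_error_cost = cases_per_month * (
    missed_case_rate * cost_per_missed_case
    + false_alarm_rate * cost_per_false_alarm
)
print(f"Estimated monthly cost of model errors: ${monthly_error_cost:,.0f}")
```

A five-minute exercise like this is usually enough to tell an executive whether a few points of recall are worth chasing.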

Complicating matters, however, is the fact that data scientists are in high demand. Many companies are therefore understaffed in this area. As a consequence, many data scientists with too many projects on their plates have to prioritize the hard analytics and don’t find the time to think much about the business side of their job.

To avoid this situation, hire data scientists before you actually need them, and provide in-house training to new team members. Adding training inside the company requires some upfront investment, of course, but there are two big upsides to doing so. First, in-house training gets data scientists acquainted with the specifics of the company from day one. Second, this type of training is especially attractive to younger job candidates who often bring in fresh ideas and don’t demand salaries as high as those of their senior peers. A rigorous in-house training regimen may take a while to set up, but it will pay off in the long run.

Accuracy isn’t everything

Machine learning algorithms should be as accurate as possible, right? After all, we don’t want our machines to make wrong judgments and, for example, misclassify a cancerous tumor as a benign one. This notion sounds right, but accuracy isn’t always the goal. Let me explain.

First of all, there’s the risk of overtraining, better known as overfitting. An AI model can learn a data set so well that it discerns even small details that aren’t actually relevant to the outcome. For example, consider an AI solution that classifies a data set with lots of different animal species. Let’s further imagine that this data set contains only one type each of cats, dogs and giraffes. But it also contains two types of monkeys: black and orange.

What happens if you train this model so well that it doesn’t only recognize a monkey as a monkey but also knows whether it’s a black or an orange one? That may sound sweet, but it gets problematic if you test the model on a picture of a gray monkey. How will the model classify that animal? A cat? A gray dog?

In this example, the risk of misclassifying new data arose because the model became too accurate during training. To avoid this problem, data scientists and business executives need to care a little less about accuracy during training and a lot more about performance during testing. Perfection isn’t the goal here.
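To see what “too accurate during training” looks like in numbers, here is a small illustrative experiment with scikit-learn on synthetic data. As the model is allowed to memorize more of the training set, its training accuracy climbs toward perfection while its accuracy on held-out test data stops improving or slips.

```python
# Illustrative over-training demo on synthetic data: training accuracy keeps rising
# with model complexity, while test accuracy stalls or drops.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for depth in (2, 5, 10, None):  # None lets the tree grow until it memorizes the training set
    model = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    print(f"max_depth={depth}: "
          f"train acc={model.score(X_train, y_train):.2f}, "
          f"test acc={model.score(X_test, y_test):.2f}")
```

The number worth reporting to the business is the test accuracy, not the flattering training one.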

In the tumor example above, this would mean allowing the algorithm to misclassify some tumors while training. This recalibration could mean aiming for 90 percent accuracy instead of 98 percent. Then, when the algorithm is deployed in real life, it will be better prepared to classify a tumor that doesn’t look like any of the ones it saw in the training stage. That matters because, in the real world, the model will regularly encounter data points unlike anything it has seen before. In addition, you’re giving the algorithm a chance to improve its accuracy in real life because every new data point gets fed back into the system and helps retrain it.
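That feedback loop can be sketched in a few lines. The example below is a simplified illustration using scikit-learn’s incremental-learning interface on synthetic data; in a real deployment the new batches would be production cases labeled by human reviewers, not randomly generated ones.

```python
# Simplified sketch of the feedback loop: new labeled cases are periodically
# folded back into the model. All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)

# Initial training on whatever labeled data exists at launch.
X_initial = rng.normal(size=(200, 10))
y_initial = (X_initial[:, 0] + X_initial[:, 1] > 0).astype(int)
model.partial_fit(X_initial, y_initial, classes=[0, 1])

# Later: small batches of new, reviewed cases keep arriving and update the model.
for _ in range(5):
    X_new = rng.normal(size=(20, 10))
    y_new = (X_new[:, 0] + X_new[:, 1] > 0).astype(int)  # stands in for human-reviewed labels
    model.partial_fit(X_new, y_new)

# Check how the updated model does on a fresh batch it has never seen.
X_check = rng.normal(size=(100, 10))
y_check = (X_check[:, 0] + X_check[:, 1] > 0).astype(int)
print(f"Accuracy on fresh cases: {model.score(X_check, y_check):.2f}")
```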

https://miro.medium.com/max/1400/1*aWLXcWZhNFGQDQQv3g-7jw.jpeg

Take it step by step. Image by author

Start with baby steps

The training step isn’t the only place where executives need to temper their ambitions. As Jon Reilly writes for Dataversity, businesses have a tendency to throw AI at huge problems and expect meaningful results.

That isn’t how AI works yet, however. Instead, it works best on smaller, very specialized tasks in which a big volume of data needs to be processed in a predictable way. Start incorporating AI on jobs that are too repetitive for humans and then build it out from there. Consider this a bottom-up approach. Top-down approaches are difficult to nail with today’s AI. We’re still quite far from AI that can transfer knowledge from one domain to another, and even further from generalized intelligence. Currently, teaching a machine how to do boring and repetitive tasks at warp speed is much easier than making it complete a complex task, even if there’s ample time at hand. That might change in the future, of course.

If executives really want to implement AI wherever possible, they should remember the classic 80/20-rule, which states that 20 percent of your tools and resources lead to 80 percent of your output. Focus on those tools and resources first to make sure your solutions have the biggest possible impact.
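In practice, that prioritization can start as a very simple exercise: list the candidate tasks, estimate how much repetitive volume each one represents, and automate the smallest set that covers roughly 80 percent of it. The task names and monthly volumes below are hypothetical.

```python
# A minimal sketch of the 80/20 exercise: rank candidate tasks by the repetitive
# volume they represent and pick the smallest set covering roughly 80 percent.
# Task names and monthly volumes are hypothetical.
tasks = {
    "invoice data entry": 55_000,
    "claims triage": 27_000,
    "email routing": 9_000,
    "report formatting": 5_000,
    "meeting scheduling": 2_500,
    "travel booking": 1_500,
}

total = sum(tasks.values())
covered, shortlist = 0, []
for name, volume in sorted(tasks.items(), key=lambda kv: kv[1], reverse=True):
    shortlist.append(name)
    covered += volume
    if covered / total >= 0.8:
        break

print(f"Automate first: {shortlist} (covers {covered / total:.0%} of the repetitive volume)")
```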

Here again, it’s better to start with the easier parts than to try to redesign the whole company around an AI algorithm. Prioritize a few patchwork solutions that actually work over one big, all-encompassing solution that’s too complicated to deploy effectively.

Hesitant companies will lose

As with every new technology that hits the mainstream, the early adopters are the ones who’ll collect all the cash. The good news is that it’s still not too late to get into AI.

That’s not an excuse to meticulously perfect your AI model and go live with it five years down the road, though. Despite all the obstacles I’ve mentioned (and there are others besides these), more and more companies are seeing the potential benefits of AI and getting started with it now, however small or buggy things may be at the beginning.

And that’s the right approach. The technology is new enough that we haven’t yet tested all niches and edge cases. You should test half-baked solutions and then iterate on them. If you don’t push your AI updates regularly and make them available to all stakeholders, you risk missing out on key lessons.

This exact problem happened to me during my studies. I was working on a procedure to process a large amount of data more efficiently than the existing approach. The procedure was my part of the project, so I thought I’d develop and perfect it on my own as far as I could before sharing it with my team.

When I finally shared it after three months, though, I realized from my colleagues’ feedback that I’d been missing out on some key ideas. On my own, I had managed to make the code three times more efficient than the old version. After implementing my colleagues’ ideas, however, the improvement wasn’t three- but five-fold. Although my work was a public research project and not a business, and even though there was virtually no money at stake, the thought of having wasted several weeks by not speaking to my colleagues earlier still stings.

Companies that aim for perfection too early or still haven’t decided to implement AI will be left behind. Paradoxically, you need to be able to turn down your ambitions and sit with an imperfect solution if you want to end up ahead of the pack.

https://miro.medium.com/max/1400/1*rQICAuHQEj8yqB1WLI4Ghg.jpeg

Don’t fret too much about imperfect code. Image by author

Seeking perfection will leave you waiting forever

Imperfect solutions are uncomfortable because you can’t ever drive home from work and pretend that your job is entirely, totally done. There’s always a bug to find, a tweak to make, a feature to add.

You’ll need to learn to love this reality if you need AI for your business. This rule isn’t just about business, of course. Many life situations work out better with rough-and-dirty pragmatism rather than with perfectly orchestrated processes that fail as soon as the bus is one minute late, metaphorically speaking.

That isn’t an excuse to be lazy, or to only do the absolute minimum necessary to keep up with the competition. Always do the best you possibly can. Just remember that the best is often far from perfect.

This article originally appeared on Built In.
