IBM’s gone by just its initials for so long that many of us have to stop and think about what the letters stand for. International Business Machines.
I was reminded of the corporation’s singular focus last week during the TNW 2022 Conference when Seth Dobrin, IBM’s first chief AI officer, took the stage to talk about artificial intelligence.
As Dobrin put it, IBM “doesn’t do consumer AI.” You won’t be downloading IBM’s virtual assistant for your smartphone anytime soon. Big Blue won’t be getting into the selfie app AI filter game.
Simply put, IBM’s here to provide value for its clients and partners and to create AI models that make human lives easier, better, or both.
That’s all pretty easy to say. But how does a company that’s not focused on creating products and services for the individual consumer actually walk that kind of talk?
According to Dobrin, it’s not hard: care about how individual humans will be affected by the models you monetize:
We’re very stringent about the type of data we will ingest and make money from.
During a discussion with the Financial Times’ Tim Bradshaw at the conference, Dobrin pointed to large-parameter models such as GPT-3 and DALL-E 2 to illustrate IBM’s approach.
He described those models as “toys,” and for good reason: they’re fun to play with, but they’re ultimately not very useful. They’re prone to unpredictable output in the form of nonsense, hate speech, and leaked private personal information. That makes them dangerous to deploy outside of laboratories.
However, Dobrin told Bradshaw and the audience that IBM is working on similar systems. He referred to these as “foundational models,” meaning they can be used for multiple applications once developed and trained.
The IBM difference, however, is that the company is taking a human-centered approach to the development of its foundational models.
Under Dobrin’s leadership, the company’s cherry-picking datasets from a variety of sources and then applying internal terms and conditions to them prior to their integration into models or systems.
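To make that concrete, a dataset-curation gate like the one Dobrin describes can be sketched as a set of rules applied to every record before it reaches a training pipeline. This is a minimal, hypothetical illustration; the rule names, record fields, and license labels below are assumptions for the example, not IBM’s actual terms and conditions.

```python
# Hypothetical sketch of pre-ingestion dataset curation: every record is
# screened against internal rules before it may enter a training set.
# Field names and rules are illustrative, not IBM's actual process.

def has_usage_rights(record):
    # Only ingest data whose source granted usage rights.
    return record.get("license") in {"cc0", "internal-approved"}

def has_no_pii(record):
    # Crude placeholder check; a real pipeline would use a proper PII detector.
    return "@" not in record.get("text", "")

CURATION_RULES = [has_usage_rights, has_no_pii]

def curate(records):
    """Keep only records that pass every internal rule."""
    return [r for r in records if all(rule(r) for rule in CURATION_RULES)]

raw = [
    {"text": "public domain prose", "license": "cc0"},
    {"text": "email me at a@b.com", "license": "cc0"},  # PII-like, rejected
    {"text": "scraped content", "license": "unknown"},  # no rights, rejected
]
print(curate(raw))  # only the first record survives
```

The point of structuring curation this way is that the rules are explicit and auditable, which is exactly what matters when regulators come asking how a production model was trained.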
It’s one thing if GPT-3 accidentally spits out something offensive; these kinds of things are expected in laboratories. But it’s an entirely different situation when, as a hypothetical example, a bank’s production language model starts outputting nonsense or private information to customers.
Luckily, IBM (a company that works with corporations across a spectrum of industries including banking, transportation, and energy) doesn’t believe in cramming a giant database of unchecked data into a model and hoping for the best.
Which brings us to what’s perhaps the most interesting takeaway from Dobrin’s chat with Bradshaw: “be ready for regulations.”
As the old saying goes: BS in, BS out. If you’re not in control of the data you’re training with, life’s going to get hard for your AI startup come regulation time.
And the Wild West of AI acquisitions is going to come to an end soon as more and more regulatory bodies seek to protect citizens from predatory AI companies and corporate overreach.
If your AI startup creates models that won’t or can’t be compliant in time for use in the EU or US once the regulation hammers fall, your chances of selling them to or getting acquired by a corporation that does business internationally are slim to none.
No matter how you slice it, IBM’s an outlier. Both the company and Dobrin apparently relish the idea of delivering compliance-ready solutions that help protect people’s privacy.
While the rest of big tech spends billions of dollars building eco-harming models that serve no purpose other than to pass arbitrary benchmarks, IBM’s more worried about outcomes than speculation.
And that’s just weird. That’s not how the majority of the industry does business.
IBM and Dobrin are trying to redefine what big tech’s position in the AI sector is. And, it turns out, when your bottom line isn’t driven by advertising revenue, subscriber numbers, or future hype, you can build solutions that are as efficacious as they are ethical.
And that leaves the vast majority of people in the AI startup world with some questions to answer.
Is your startup ready for the future? Are you training models ethically, considering human outcomes, and able to explain the biases baked into your systems? Can your models be made compliant with GDPR, the EU AI Act, and Illinois’ BIPA?
If the current free-for-all dies out and VCs stop throwing money at prediction models and other vaporware or prestidigitation-based products, can your models still provide business value?
There’s probably still a little bit of money to be made for companies and startups who leap aboard the hype train, but there’s arguably a whole lot more to be made for those whose products can actually withstand an AI winter.
Human-centered AI technologies aren’t just a good idea because they make life better for humans, they’re also the only machine learning applications worth betting on over the long haul.
When the dust settles, and we’re all less impressed by the prestidigitation and parlor tricks that big tech’s spending billions of dollars on, IBM will still be out here using our planet’s limited energy resources to develop solutions with individual human outcomes in mind.
That’s the very definition of “sustainability,” and why IBM’s poised to become the de facto technological leader in the global artificial intelligence community under Dobrin’s so-far expert leadership.