There are thousands of articles on artificial intelligence and, if you follow the field, the vast majority of them seem to be either warnings of impending doom or technology experts insisting that AI isn’t as big a deal as the media would have you believe. “Move along, there’s nothing to see here,” they seem to say. Both camps are wrong.
AI is a big deal; it’s going to save human lives. If you’re worried about whether we should risk creating machines capable of thinking and learning, stop worrying: it’s too late. We haven’t reached the singularity, but we have certainly reached a point where almost anyone can build a neural network capable of solving problems.
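To get a sense of how low the barrier to entry has become, here’s a minimal sketch – assuming Python and the scikit-learn library, neither of which the article specifies – that trains a tiny neural network to learn XOR, a toy problem a network with no hidden layer famously can’t solve. The layer size and solver are illustrative choices, not recommendations.

```python
# Minimal sketch: train a tiny neural network on the XOR problem.
# Assumes Python with scikit-learn installed (pip install scikit-learn).
from sklearn.neural_network import MLPClassifier

# The four XOR input/output pairs.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

# One small hidden layer is enough; lbfgs converges quickly on tiny datasets.
model = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                      solver="lbfgs", random_state=0)
model.fit(X, y)

print(model.predict(X))  # typically prints [0 1 1 0]
```

That’s the whole program: a dozen lines, no math degree required.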
If you’re still on the fence – perhaps thinking “this guy is just buying the hype” – consider these practical applications of AI that are already in commercial use:
- AI can determine, with better accuracy than self-reporting, how someone felt about a movie or TV show.
- AI can glean the same information as an EKG using a wireless device no more complex than your router.
- AI can beat the best human players at games more strategically complex than chess.
There are a ton of ethical concerns when it comes to creating AI, and rest assured they’re being addressed by very smart people, and by governments. The important thing to realize is that machine learning isn’t a weapon; it’s the raw material from which a near-infinite variety of tools can be built, including weapons.
AI exists to solve problems, and right now it isn’t really better at doing that than people are. It’s better at reading data, but so are regular computers. We’re still figuring out how to teach AI to fend for itself.
Computers are incredibly useful because they perform calculations that most humans can’t. If AI wants to prove itself necessary, it needs to automate the things people don’t know how to automate, like art and science.
The question is: how do we build a robot that can solve problems human scientists haven’t thought of yet, or one that can paint an original picture? The answer is to create a device capable of curiosity, and then figure out how to control its gaze. When it comes to autonomous systems, we’re dealing with a child-like attention span.
When AI was in its infancy there were plenty of far-fetched goals, like virtual companions, but the future was entirely uncertain. A newborn human baby exists, and we feed it and nurture it, but it doesn’t take over and start contributing to the team right away.
In fact, from a developer’s point of view, newborn babies are pretty much just buggy applications that do nothing but consume resources previously dedicated to other systems. The reward in having one is knowing you’re contributing to the future.
Thankfully, AI is further developed than an infant now, but it’s not a teenager yet: you can’t send it off to the store to buy milk and expect it to carry out the task with relative autonomy. Right now the collective field of AI looks like a toddler exploring the world, soaking up whatever environment and stimuli it’s exposed to.
The problem with people’s perception of AI isn’t too much hype – if you ask us, there isn’t enough hype about the real applications of machine learning – it’s that we’ve been hearing the hype for so long that “AI will change everything” is starting to feel like a lie companies use to sell boring algorithms.
AI research isn’t new, but the AI era is nearly upon us. Our robot baby is growing up.