Tristan Greene, Editor, Neural by TNW
Tristan is a futurist covering human-centric artificial intelligence advances, quantum computing, STEM, physics, and space stuff. Pronouns: He/him
Robotic surgery systems are used in thousands of hospitals around the world. A decade ago they were clunky machines built to assist with routine procedures. Today, they’re capable of conducting end-to-end surgeries without human aid.
Recent leaps in the field of deep learning have made difficult tasks such as surgery, electronics assembly, and piloting a fighter jet relatively simple. It might take a decade to train a human in all the medical knowledge required to perform brain surgery, and that investment must be repeated for every subsequent human surgeon.
But AI is different. The initial investment to create a robotic surgery device might be large, but that all changes once you’ve produced a working model. Instead of 8-12 years to create a human specialist, factories can be built to produce AI surgeons en masse. Over time, the cost of maintaining and operating a surgical machine – one capable of working 24/7/365 without drawing a paycheck – would likely become trivial versus maintaining a human surgical staff.
That’s not to say there’s no place for human surgeons in the future. We’ll always need human experts capable of informing the next generation of machines. And there are some procedures that remain beyond the abilities of modern AI and robotics. But surgery, much like any other precision-based endeavor, lies well within the domain of modern AI.
Surgery is a specific skill and, for the most part, robots excel at automating tasks that require more precision than creativity. That's exactly why robot surgeons are commonplace, yet we're likely decades away from a fully functioning AI-powered nurse.
And this is exactly why AI didn’t have a huge impact during the pandemic. When COVID-19 first hit, there was a lot of optimism that big tech would save the day with AI. The idea was that companies such as Google and Microsoft would come up with incredible contact-tracing mechanisms that would allow us to tailor medical responses at an extremely granular level. This, we collectively figured, would lead to a truncated pandemic.
We were wrong, but only because there wasn’t really anything for AI to do. Where it could help, in aiding the rapid development of a vaccine, it did. But the vast majority of our problems in hospitals had to do with things a modern robot can’t fix.
What we needed, during the last patient peak, were more human nurses and PPE for them. Robots can't look around and learn like a human; they have to be trained for exactly what they'll be doing. And that's just not possible during giant emergency situations where, for example, a hospital's floor plan changes to accommodate an increase in patients and massive quantities of new equipment are introduced.
Researchers at Johns Hopkins University recently conducted a study to determine what we'll need to do in order for robots to aid healthcare professionals during future pandemics. According to them, modern robots aren't up to the task:
A big issue has been deployability and how quickly a non-expert user can customize a robot. For example, our ICU ventilator robot was designed for one kind of ventilator that pushes buttons. But some ventilators have knobs, so we need to be able to add a modality so that the robot can also manipulate knobs. Say you want one robot that can service multiple ventilators; then you’d need a mobile robot with an arm attachment, and that robot could also do plenty of other useful jobs on the hospital floor.
That's all well and good when things are going perfectly. But what happens when the knob pops off, or someone brings in a new kind of machine with toggles or a touchscreen? Humans have no problem adapting to these situations, but a robot would need an entirely new accessory and a training update to compensate.
In order for developers to create a “nurse robot,” they’d need to anticipate everything a nurse encounters on a daily basis. Good luck with that.
AI and machines can be adapted to perform certain tasks related to nursing, such as assisting with intake or recording and monitoring patients’ vital signs. But there isn’t a machine in the world that can perform the day-to-day routine functions of a typical hospital staff nurse.
Nurses spend the majority of their time responding to real-time situations. In a given shift, a nurse interacts with patients, sets up and breaks down equipment, handles precision instruments, carries heavy objects through people-filled spaces, solves mysteries, keeps meticulous notes, and acts as a liaison between the medical staff and the general public.
We can solve most of those problems individually; the real challenge is combining the solutions in a single mobile unit.
That Boston Dynamics robot that does backflips, for example, could certainly navigate a hospital, carry things, and avoid causing injury or damage. But it has no way of knowing where a doctor might have accidentally left the chart it needs to update its logs, how to calm down a scared patient, or what to do if an immobile patient misses the bedpan.