Ben Goertzel, the author of this piece, is hosting a TNW Answers session today at 7:00pm CEST — 1:00pm EDT — 10:00am PDT — 10:30pm IST. Ask him a question NOW.
While finalizing a few research papers for the AGI-20 conference — scarfing up related info from Arxiv, Google Scholar, GitHub, and the n-Category Café — I was struck by the realization of how utterly different this process was from when I got my PhD back in 1989.
I was hit by the long-rusted memory of what it was like to have some wild, crazy, obscure science or tech idea — and then sit on it and think about it for months or even years, without anyone appropriate around to talk to about it, and without any easy way to figure out how original the idea was. Looking through the research literature in big old bound volumes in the library was — somewhat different. Conferences were almost the only way to find out what other researchers were thinking about.
For the last N years, when I come up with some wacky, exciting new idea, it’s rare that more than a day passes before I’m deep into the Net, trying to work out whether my original-to-me line of thought is actually well-known to some sub-community somewhere, just under totally different vocabulary.
As iconoclastic as I am, and as peculiar as some of my lines of investigation are (mathematical models of psychic powers! AGI emerging from blockchain-based networks! quantum logical reasoning engines!) — by the standards of 30 years ago, I am by no means any sort of independent thinker anymore.
I’ve become just another neuron in the goddamn global brain.
And to be clear, I have no intention of going back. Modern tools may squash some individual creativity, but they save so much time. Which is key given the harsh finitude of my mortal life — from which my only chance of escape is developing radical longevity tech. Which I’m working on in the Rejuve project — and I’m far more likely to discover the key to biological immortality if I’m not chasing down dead-end paths.
There is partial consolation in the knowledge that basically everyone else today is in the same boat. And I do take the mildly perverse pleasure of knowing I’m a somewhat unusual sort of neuron. I seem to be a neuron hooked up to channel various sorts of anticipatory processes. The AGI concept that I launched 15 years ago — and am pushing forward hard now with SingularityNET, OpenCog, and TrueAGI — is now known to megacorporate CEOs and national leaders, and serves as the mandate of billion-dollar R&D initiatives like OpenAI and DeepMind.
The quest to use AI to cure death is no longer so quixotic and marginal, but is the sort of thing that gets funding and serious attention. Even my goofball idea of using brain implants to eliminate pain is starting to seem mainstream. Using AI on genomics data to specialize medical therapies to the individual is no longer sci-fi; my colleagues and I are actually doing this now in the context of some COVID-19 clinical trials.
The global brain is evolving in ways that nobody can predict
The global brain’s cognition is getting faster and faster, and the time-gap between crazy and obvious has never been shorter. Some of my current goofball notions like applying extended laws of physics to transhuman, transphysical mindspace — or teaching AIs unconditional love so they can grow into post-Singularity minds that are not only smarter but more compassionate and creative than humans — may be tomorrow’s common sense.
The beauty of being a neuron in the global brain is that the global brain is an open-ended intelligence — the exact opposite of the stilted, constrained reinforcement learning AIs so popular among Big Tech AI teams today. It’s evolving in ways that nobody, including it, can predict — and redefining itself and expanding its boundaries with every step. An open-ended intelligence tends to grow best with components that are also open-ended intelligences.
While I sometimes think otherwise upon perusing Reddit posts or YouTube comments, in the end, being a neuron in the global brain at the dawn of the Singularity is not so bad.
Or at least, that’s what the GB wants us neurons to think…