In this piece published a year ago, Ted Chiang pours cold water on the idea of a bootstrapping singularity.
How much can you optimize for generality? To what extent can you simultaneously optimize a system for every possible situation, including situations never encountered before? Presumably, some improvement is possible, but the idea of an intelligence explosion implies that there is essentially no limit to the extent of optimization that can be achieved. This is a very strong claim. If someone is asserting that infinite optimization for generality is possible, I’d like to see some arguments besides citing examples of optimization for specialized tasks.
I think our destination is neither utopia nor dystopia nor status quo, but protopia. Protopia is a state that is better today than yesterday, although it might be only a little better. Protopia is much, much harder to visualize. Because a protopia contains as many new problems as new benefits, this complex interaction of working and broken is very hard to predict.
Kevin Kelly’s thoughts from when he coined the term, seven years ago:
No one wants to move to the future today. We are avoiding it. We don’t have much desire for life one hundred years from now. Many dread it. That makes it hard to take the future seriously. So we don’t take a generational perspective. We’re stuck in the short now. We also adopt the Singularity perspective: that imagining the future in 100 years is technically impossible. So there is no protopia we are reaching for.
I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something most every adult has done: taking a step back and asking whether their current course of action is really a good idea. Then I realized that we are already surrounded by machines that demonstrate a complete lack of insight; we just call them corporations.
Related: if you want to see the paperclip maximiser in action, just look at the humans destroying the planet by mining bitcoin.
This is the same territory that Daniel Suarez explored in his book Daemon. The book is (science) fiction but, as Suarez explains in his Long Now seminar, the reality is that much of our day-to-day life is already governed by algorithms. In fact, the more important the question—e.g. “Will my mortgage be approved?”—the more likely it is that the decision will not be made by a human being.
In his novel Accelerando, Charles Stross charts the evolution of both humans and algorithms before, during and after a technological singularity.
The 2020s:
A marginally intelligent voicemail virus masquerading as an IRS auditor has caused havoc throughout America, garnishing an estimated eighty billion dollars in confiscatory tax withholdings into a numbered Swiss bank account. A different virus is busy hijacking people’s bank accounts, sending ten percent of their assets to the previous victim, then mailing itself to everyone in the current mark’s address book: a self-propelled pyramid scheme in action. Oddly, nobody is complaining much.
The 2040s:
High in orbit around Amalthea, complex financial instruments breed and conjugate. Developed for the express purpose of facilitating trade with the alien intelligences believed to have been detected eight years earlier by SETI, they function equally well as fiscal gatekeepers for space colonies.
The 2060s:
The damnfool human species has finally succeeded in making itself obsolete. The proximate cause of its displacement from the pinnacle of creation (or the pinnacle of teleological self-congratulation, depending on your stance on evolutionary biology) is an attack of self-aware corporations. The phrase “smart money” has taken on a whole new meaning, for the collision between international business law and neurocomputing technology has given rise to a whole new family of species—fast-moving corporate carnivores in the Net.