The Singularity is Ne’er

Written by Shawn Funk

The futurist’s job is to project current trends far into the future and make bold predictions about the world of tomorrow based on these trends. Ray Kurzweil is bold indeed. He posits the idea of a technological singularity, a moment in time when artificial intelligence (AI) eclipses the intelligence of the smartest human beings, leading to a technological explosion and super-intelligent AI (Schneider, 2016). This statement is loaded with assumptions, which I will not unpack here. Instead, I am just going to assume it is all true and have some fun. I will add that, as it stands, we are far from developing fully autonomous AI robots that are equivalent to or even close to human intelligence. Like every other lunatic with prophetic visions of the future, Kurzweil puts an exact date on when his singularity will occur; he thinks it will happen in 2045 (cough). While this topic is highly speculative, many technophiles now believe that The Singularity is inevitable and vital to human progress, as evidenced by its capital letters. Kurzweil believes the next era of human evolution is technological, underscoring the impact these new technologies will have on the concept of “the human” (Schneider, 2016). It is hard to picture what exactly he means by this, but in one long sentence, here is how I understand it: 

Autonomous super-intelligent humanoid robots that are motivated by a learned set of values or beliefs will radically reshape the world through rapid technological progress that will threaten humanity in ways we can’t quite comprehend yet. 

Breathe… Confession: I have consumed far too much sci-fi to envision super-intelligent robots in any form other than humanoid. I admit that my bias has led me to make this leap in logic! Print is where irony goes to die. I digress; back to the matter at hand. What will it mean to be human in a post-singularity world, if there is a technological singularity at all?

In his book Future Shock (Toffler, 1970), Alvin Toffler suggests that the challenge of our technology is psychological. The speed of change accelerates beyond the mind’s ability to process that change. This is what Toffler calls future shock, and the concept is easily grasped by analogy with culture shock: instead of relocation in space, the relocation happens in time (Toffler, 1970). Behold! A visitor from the 1950s wanders into a supermarket in the 2020s and attempts to use the self-checkouts. Are they successful? Ask your grandpa. Toffler’s book was written in 1970, and a few things have changed since then, but the concept of future shock remains relevant because, if Kurzweil is right in his declaration that “The Singularity is near,” tomorrow will not resemble today. Kurzweil, Schneider, and Chalmers, all well-known futurists, agree that a world built by a superintelligence will not be recognizable to a human being (Schneider, 2016).

A technological explosion aided by intelligence far beyond our own would put humanity on edge, and we would likely act out like a monkey in a supermarket. Remember, this is all very scientific. Let’s assume that the difference in intelligence between a super-intelligent AI and a human is like that between a human and a chimpanzee (Schneider suggests the difference will be far greater, but aren’t we all just making shit up at this point anyway?). Okay, grandpa had a hard time at the checkout, but he eventually got his groceries with a little help and is now on his way home. How successful would a chimpanzee be at the self-checkout? The minute that monkey was guided through the door, it would have gone berserk, ripped off a few fingers, thrown some turds, and then maybe it would have found the bananas. The concept of payment never entered its monkey brain, and it probably had no idea where it was or what was happening. The chimp might wonder why the bananas and other fruits are not on a tree. Maybe it wondered why it was led to this place in the first place; it is certainly no place it would ever think of going for food. Scientifically speaking (scoff!), this monkey is suffering from acute future shock: terror, confusion, and fear brought on by the rapid re-ordering of its world. That monkey will have to be put down; there is no coming back from that one. As the monkey is to our world, so humans will be to a super-intelligent one. It is easy to posit our own extinction in a scenario like this, but we will not just roll over and die; we are human, damn it! Where there is a will, there is a way.

How can we solve the (so far non-existent) existential crisis of acute future shock? According to the nerds who will eventually create this problem, there are two things we can do. Either we find ways to restrain the AI systems, curbing the speed of technological innovation, or we find a way to accelerate our own mental processes so we can comprehend the changes that are occurring in real-time and adjust.

I think the first suggestion will fail. Speaking from no authority at all, I would have to say that, as a species, we would have better luck pissing into the wind than trying to outsmart a super-intelligent AI. To be clear, “superintelligence is a creature with the capacity to radically outperform the best human brains in practically every field, including scientific creativity, general wisdom, and social skills” (Schneider, 2016). Constraining an AI would be tantamount to a monkey imprisoning a human being. We are not smart enough to compete with intelligence vastly beyond our own, and if we cannot compete, we will die or be dominated by a system we do not understand. AI has the potential to become a phoenix that emerges from the ashes of humanity. If the X-Men’s great Professor Xavier could not contain the power of the Phoenix in Jean Grey with his telepathic mutant virtuosity, how can we expect a few puny humans to hold back superintelligence? If constraints are ultimately ineffective, we must speed up our own mental capabilities.

Wiring ourselves up with gadgets and micro-processors might give us a chance. This is where the future of humanity lies! Skin bags loaded with enough metal to process all the information in the world? Sounds fun. Wow, that got out of hand quickly; I should put that copy of Neuromancer away for a while. Nah. William Gibson aside, Kurzweil suggests that we will be responsible for the next step in our evolution (Schneider, 2016). Our future is cybernetic; resistance is futile! There are a few ways to think about this. Either we use implants to enhance our capabilities, or we go all the way and upload our consciousness into a computer, both of which are still science fiction fantasies. Whatever we decide, the impact on our identity is going to be turbulent as fundamental changes in our form, composition, capabilities, and environment wreak havoc on our ways of life.

Returning to the question: what will it mean to be human in a post-singularity world? I think the adage holds: if you can’t beat ’em, join ’em. The realization that we cannot control a future populated with beings that are smarter, stronger, and longer-lived than us will force our hand. Therefore, if The Singularity occurs (that’s a big if), humans will have no choice but to expand their capabilities to match or exceed those of the smartest artificial life forms, to keep from being left behind technologically like your grandpa at the self-checkout. Thus, our species, culture, and civilization could soon be unrecognizable. What will another turn of the screw bring for humanity?


Schneider, S. (Ed.). (2016). Science fiction and philosophy: From time travel to superintelligence. Wiley Blackwell.

Toffler, A. (1970). Future shock. Bantam Books.
