The Singularity: End of Humanity?

Killer Robots

I have been mulling something over for the last few weeks. It is as fascinating as it is scary. I have finally decided to put it down on this platform: first, so that we can all share this burden of thought, and second, to stimulate my fellow scientists’ and tech enthusiasts’ thinking on this particular subject. You will agree with me that the thought of something wiping us from the Lord’s green earth is too awful to contemplate. It is disturbing, to say the least. Yet it is a possibility that should not be ignored. In this short article, I will tell you why I am both excited and scared as I think these things through.

The Singularity

The technological singularity (also simply, the singularity) is the hypothesis that the invention of artificial super intelligence (ASI) will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization. According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a “runaway reaction” of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful super intelligence that would, qualitatively, far surpass all human intelligence.
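
To make this “runaway reaction” concrete, here is a deliberately toy back-of-the-envelope model in Python. It is entirely my own illustration with invented numbers, not anyone’s validated forecast: suppose each machine generation is 1.5 times as intelligent as its designer, and suppose a designer that is k times smarter finishes the next generation k times faster.

# A deliberately toy model of the "runaway reaction" described above.
# All numbers are invented for illustration; nothing here is an
# empirical claim about real AI systems.

GROWTH = 1.5            # assumed intelligence multiplier per generation
BASE_DESIGN_TIME = 1.0  # assumed years for a human-level designer
                        # (generation 0) to build generation 1

intelligence = 1.0      # generation 0: human-level, by definition
elapsed = 0.0           # years since the first machine designer started

for generation in range(1, 21):
    # Key assumption: a designer k times smarter finishes k times faster.
    elapsed += BASE_DESIGN_TIME / intelligence
    intelligence *= GROWTH
    print(f"gen {generation:2d}: intelligence x{intelligence:10.1f} "
          f"at t = {elapsed:.3f} years")

# Design times shrink geometrically (1, 1/1.5, 1/1.5**2, ...), so total
# elapsed time converges to BASE_DESIGN_TIME * GROWTH / (GROWTH - 1)
# = 3 years, while intelligence grows without bound.

Under these toy assumptions the design cycles shrink geometrically, so infinitely many of them fit into about three years of elapsed time while intelligence grows without limit. That, in caricature, is why proponents speak of an abrupt explosion rather than ordinary incremental progress.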

This scenario was foreseen by the British mathematician I. J. Good in the 1960s. Good wrote, “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

Should the Singularity occur, technology will advance beyond our ability to foresee or control its outcomes, and the world will be transformed beyond recognition by the application of super intelligence to humans and to human problems, including poverty, disease and mortality. Revolutions in Genetics, Nanotechnology and Robotics (GNR) in the first half of the 21st century are expected to lay the foundation for the Singularity. According to the Singularity theory, super intelligence will be developed by self-directed computers and will increase exponentially rather than incrementally. Post-singularity, humanity and the world would be quite different.

Transhumanism

Transhumanism is the belief that the human race can evolve beyond its current physical and mental limitations, especially by means of science and technology. The debate on whether the singularity will help or harm humanity has divided scientists, innovators and researchers of artificial intelligence into proponents and opponents. Each side seems to have very good points to back its standpoint.

Futurist Ray Kurzweil (author of The Singularity Is Near), one of the key proponents of the singularity, asserts that in a post-Singularity world humans would typically live much of the time in virtual reality, which would be virtually indistinguishable from normal reality. A human could potentially scan his consciousness into a computer and live eternally in virtual reality or as a sentient robot. Kurzweil also defines his predicted date of the singularity (2045) as the point when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date “will not represent the Singularity” because they do “not yet correspond to a profound expansion of our intelligence.” He argues, however, that “machines are powering all of us. They’re making us smarter. They may not yet be inside our bodies, but by the 2030s, we will connect our neocortex, the part of our brain where we do our thinking, to the cloud.” Other proponents have argued that Kurzweil’s date of 2045 is too conservative, and that we might get there in the 2030s. In The Singularity Is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. He states that technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.

Beyond merely extending the operational life of the physical body, Jaron Lanier (an American computer scientist and philosopher) advocates for a form of immortality called “Digital Ascension” that involves “people dying in the flesh and being uploaded into a computer and remaining conscious.” In a rather surprising move, Tesla and SpaceX CEO Elon Musk recently shared updates on his plans to develop a “neural lace” – a technical term for a direct cortical interface – and hopes to share more on his progress in the coming months. It is something he takes very seriously, in case you thought he might just be having a laugh. For Musk, an interface between the human brain and computers is vital to keep our species from becoming obsolete when the singularity hits.

Extinction or Transcendence?

“I believe a better question to ask is whether achieving the singularity is a good thing or a bad thing. Is it going to help us grow extinct like the dinosaurs or is it going to help us spread through the universe like Carl Sagan dreamed of? Right now, it’s very unclear to me personally.” – Futurist Nikola Danaylov, who runs the Singularity Weblog, in an interview with Singularity Hub.

Proponents, as outlined above, have no doubt that the singularity will bring with it a better life for humanity. But there is another group of scientists and thinkers who have genuine fears about this scenario. These are people who clearly see that we are headed there, but are not sure of our safety as humans. Though they note that the singularity will indeed present great opportunities to humanity in some ways, they outline a scary side of machines outgrowing our intelligence. If this notion scares you, you’re in good company. Some of the most widely regarded scientists, thinkers and inventors, like the late Stephen Hawking and Elon Musk, have already expressed their concern that super-intelligent AI could escape our control and move against us. They point out some hidden negative effects of the Singularity theory. If artificial intelligence (AI) exceeded human intelligence, what would stop the machines from taking over and potentially destroying humanity? There are several proposed safeguards on this subject, from an AI box (the AI is kept constrained in a virtual world where it cannot affect the external world) to a friendly AI (which will likely be harder to create than an unfriendly AI, but which might keep unfriendly AIs from developing, if only to preserve its own existence).

For instance, in the acclaimed British science fiction series Humans, state-of-the-art humanoid robots called “synths” are household servants, manual workers, office drones and companions; functional creations that exist purely to make human lives easier. But when some of the robots start to think and feel for themselves, trouble arises. At some point the powerful machines turn on their human “masters,” creating even more confusion and damage. The Terminator films are built on the fear that someday there will be a war between humanity and our robotic creations, which will rise up and turn against us. Although this might be dismissed as mere fiction, anyone who has been following the inventions and innovations in robotics and artificial intelligence closely will tell you that what is depicted in these series is not far from reality. In these days of drone warfare, it is a fear that researchers in robotics and artificial intelligence (AI) recognize. In July 2015, an open letter from the Future of Life Institute was presented to the International Joint Conference on Artificial Intelligence, calling for a ban on the development of AI weapons – robots capable of killing autonomously, without human control. The letter was signed by leading AI researchers and scientists including Stephen Hawking, Elon Musk, Steve Wozniak, Stuart Russell and thousands more.

Criticism

Then there is a group of scientists who simply don’t think the singularity will ever materialize. Most arguments against the possibility of the singularity involve doubts that computers can ever become intelligent in the human sense. The human brain and its cognitive processes may simply be more complex than any computer could be. Furthermore, because the human brain is analog, with theoretically infinite values for any process, some believe it can never be replicated in a digital format. The critics go further, pointing out that the singularity operates from the assumption that biological systems are inferior and are to be transcended by human-devised technologies. To date, AI remains inefficient in processing speed and in its utilization of energy; in fact, unlike biological systems, it cannot create its own atomic and molecular power. Intelligence and mind are strictly one aspect of the human holobiont. As we know today, the human microbiome not only coordinates with the human body, including the brain, but also signals to the external environment, driving the adaptation of the human holobiont. This is not likely to be replicated in a machine, however intelligent it may be.
