*The following article was featured earlier this month on the Brockporter Online News Magazine Philosophy of Science Monday. Many thanks to Editor David Markham:*
“[Philosophers of Science] are very good at asking questions…they outline and delineate what the questions are…scientists are not good at the big questions…for things we haven’t been thinking about, we need the philosophers.”
– Dr. Karl Pribram, Neuroscientist
A popular belief among many technologists, scientists, futurists, and industrialists is that human beings are simply biological robots: at root we are run by a computer program encoded in our DNA and carried out by our neurological hardware. This belief system is often referred to as Transhumanism. Most transhumanists also believe in what’s known as the Technological Singularity: a fast-approaching moment when machines will hypothetically eclipse human intellect and, in effect, render us all obsolete. In this view, Artificial Intelligence (aka Strong AI) will become pervasive, all-knowing, and all-powerful.
Even though it sounds like the script for a dystopian science fiction novel, this is by no means a fringe belief system. For example, the founders of Google, Larry Page and Sergey Brin, are both ardent transhumanists who aspire to build the world’s leading AI development laboratory. To that end they recently hired renowned inventor and futurist Ray Kurzweil as their director of engineering in charge of “machine learning” and “natural language integration and understanding.” Kurzweil is a well-known and outspoken proponent of AI who even wrote a book titled The Singularity Is Near. Likewise, Professor Michio Kaku, a well-known theoretical physicist, is quoted as saying, “It may be 20 years away, it may be 1,000, but most experts agree the Technological Singularity is inevitable.” Furthermore, “When the Singularity arrives, we cannot predict what the super intelligent machines will do. They will have their own goals and stop at nothing to fulfill them. They may even choose to eliminate everything that stands in their path, including us.”
The prominence of this kind of mechanistic belief system should not surprise us. We live in a materialist culture that avidly promotes techno-fetishism. iPhones, iPods, Macs, plasma TVs, Facebook, Twitter: we are quite literally plugged into a matrix almost completely mediated by technology. We’re so thoroughly distracted and conditioned by the rapid development of flashy gadgetry that many intelligent and well-meaning people delude themselves into thinking such future scenarios are both A) inevitable, and B) desirable. I am not one of those people, and I am not alone.
One of the best-known objectors to the transhumanist agenda is computer scientist Jaron Lanier, the man many consider to be the father of virtual reality. Lanier points out that most AI projections are based on Moore’s Law, a technological trend whereby raw computing power roughly doubles every 18 months. This seems to indicate that the exponential growth of “machine learning” is a foregone conclusion. However, as Lanier explains in his book You Are Not a Gadget, Moore’s Law applies only to hardware systems, not software programs. Unlike hardware, software becomes intractably buggy and subject to lock-in as it grows larger and more complex. So no matter how powerful a computer may become, how intelligent it can become is quite limited. Because of this, Lanier insists that what we consider to be human consciousness will forever remain an irreducible, irreproducible enigma.
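The arithmetic behind the projections Lanier questions is simple compound doubling. As a minimal illustration (the function name and 18-month figure are taken from the article’s own statement of Moore’s Law, not from Lanier’s book):

```python
def moores_law_factor(years, doubling_period_years=1.5):
    """Growth factor implied by a fixed doubling period.

    If raw computing power doubles every 18 months (1.5 years),
    capacity after t years is multiplied by 2 ** (t / 1.5).
    """
    return 2 ** (years / doubling_period_years)

# Over 15 years that is ten doublings: a ~1024x increase in raw power.
print(round(moores_law_factor(15)))  # prints 1024
```

The point of the Lanier objection is that nothing analogous holds for software: there is no known doubling law for program correctness or intelligence, so the hardware curve alone cannot carry the Singularity argument.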
Science journalist Nicholas Carr also documents the ways human memory transcends the capabilities of computer systems in his book The Shallows. Transhumanists propose that biological memory functions like a hard drive that stores bits of data. However, unlike a hard drive, the human mind contains both transient “primary memories” and long-term “secondary memories”. A duration of time known as a “consolidation phase” is necessary to turn a transient memory into an imprinted long-term memory. During these highly complex transitional periods, physiological alterations of synaptic networks within the brain (aka neuroplasticity) also occur. Carr sums up the process as follows: “Governed by highly variable biological signals, chemical, electrical, and genetic, every aspect of human memory—the way it’s formed, maintained, connected, recalled—has almost infinite gradations. Computer memory exists as simple binary bits—ones and zeros—that are processed through fixed circuits, which can be either open or closed but nothing in between.”
In the 1980s, Professor of Philosophy John Searle devised a thought experiment, famously known as the Chinese Room, meant to refute the notion of Strong AI. Simply put, the thought experiment demonstrates that despite a computer’s ability to accept inputs, calculate on them, and output a final product, it by no means understands the meaning of what it is doing. Like an English-speaking man sitting in a room who is handed Chinese characters and given explicit directions on how to arrange them and produce an output, a computer is incapable of comprehending the true meaning of its actions. In Searle’s opinion, the inability of a computer to grasp meaning is what separates man from machine.
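The man in Searle’s room is, in effect, executing a lookup table. A deliberately toy sketch (the rule book and its entries are hypothetical, invented purely for illustration) makes his point concrete: the program produces plausible Chinese replies while containing nothing that could be called understanding.

```python
# Hypothetical "rule book" for the man in the room: pure symbol-to-symbol
# mappings. Nothing here represents meaning, only shapes in and shapes out.
RULE_BOOK = {
    "你好": "你好！",        # input "hello"        -> output "hello!"
    "你好吗？": "我很好。",   # input "how are you?" -> output "I am fine."
}

def chinese_room(symbols: str) -> str:
    """Mechanically follow the rule book; 'understanding' never enters."""
    return RULE_BOOK.get(symbols, "不明白")  # fallback symbol: "don't understand"

print(chinese_room("你好吗？"))  # prints 我很好。
```

To an outside observer exchanging notes with the room, the replies may look fluent; Searle’s claim is that no amount of scaling up such rule-following ever adds comprehension.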
Another science journalist, Ari Schulman, in his article “Why Minds Are Not Like Computers,” articulates the Achilles’ heel of Strong AI as follows:
“The foundational creed of AI [is] that every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it…The approach of precisely describing intelligence and learning was applied to a wide array of simple problems, meeting with great success until the late 1970s, at which time researchers began to realize that many seemingly simple problems…could not be solved using methods that precisely described intelligence and learning…Seldom now do researchers attempt to replicate a basic mental task by tackling the salient features of the task on its own terms…The implication is that the essence of human nature, and thus of the mind, is profound and unknowable.”
For the sake of brevity I will not delve into other convincing refutations of Strong AI that have emerged from the fields of quantum biology and non-local consciousness research. Note, however, that these findings are equally important in understanding why the equation Brain = PC does not compute.
So why is this debate so important? I think many of us can agree that while we are witnessing a vast amount of technological progress going on around us, we are also witnessing an equal and opposite regression of human consciousness and dignity. We are incrementally losing our inborn will to create, and becoming rudderless as a byproduct. As Jaron Lanier puts it, “Spirituality is committing suicide. Consciousness is attempting to will itself out of existence…The deep meaning of personhood is being reduced by illusions of bits. Since people will be inexorably connecting to one another through computers from here on out, we must find an alternative.”
Through honest and open dialogue perhaps we can envision a more empowering future scenario than the one promised by transhumanism: a world that promotes novelty and individualism over rote standardization and consumerism. While we’re not quite sure how we came to inherit it, we can state the following with near certainty: humans possess consciousness, and machines never will. It’s about time we made use of it and rediscovered our creative purpose.