Parapsychology Journalism: The People, The Theory, The Science, The Skeptics
I am old enough to have witnessed the full progression of artificial intelligence: it did not become a serious pursuit until I was an adult, owing to the limitations of early computers, and all significant progress has happened in my lifetime. The dream of creating intelligent, thinking computers has been seen as a real possibility since the 1970s, and as computers have gotten faster, stored and accessed more memory, and gained parallel processing, it has always seemed as though we were a step closer to making it a reality. If only a computer is powerful enough, the thinking goes, we will achieve true artificial intelligence. Many very intelligent people believe this to be true. Even Elon Musk, founder of many successful and innovative companies, has expressed concern over the eventual power of artificial intelligence.
On the topic of AI, Musk issued a warning and strongly suggested that there should be strict regulation of the technology's development. Musk's full answer is below, and can be watched at the 1 hour and 7 minute mark on a video of the interview via Erik Bjäreholt:
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is — it’s probably that. So we need to be very careful with artificial intelligence.
“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like — Yeah, he’s sure he can control the demon? Doesn’t work out. [audience laughs]“
Yet time after time, the closer we get, the farther away the goal seems. A case in point is the Google self-driving car.
Would you buy a self-driving car that couldn’t drive itself in 99 percent of the country? Or that knew nearly nothing about parking, couldn’t be taken out in snow or heavy rain, and would drive straight over a gaping pothole?
If your answer is yes, then check out the Google Self-Driving Car, model year 2014.
. . . Google often leaves the impression that, as a Google executive once wrote, the cars can “drive anywhere a car can legally drive.” However, that’s true only if intricate preparations have been made beforehand, with the car’s exact route, including driveways, extensively mapped. Data from multiple passes by a special sensor vehicle must later be pored over, meter by meter, by both computers and humans. It’s vastly more effort than what’s needed for Google Maps.
Google’s cars are better at handling some mapping omissions than others. If a new stop light appeared overnight, for example, the car wouldn’t know to obey it. However the car would slow down or stop if its on-board sensors detected any traffic or obstacles in its path.
Google’s cars can detect and respond to stop signs that aren’t on its map, a feature that was introduced to deal with temporary signs used at construction sites. But in a complex situation like at an unmapped four-way stop the car might fall back to slow, extra cautious driving to avoid making a mistake. Google says that its cars can identify almost all unmapped stop signs, and would remain safe if they miss a sign because the vehicles are always looking out for traffic, pedestrians and other obstacles.
Google is a company with amazing resources, and even so it can't make its car do the one task that every living creature can manage: think for itself. The car has to be programmed ahead of time for practically every situation it can possibly encounter, and it can't react in a novel way to novel situations. This is the problem with all artificial intelligence: it can't handle the one task we most need it to do: actual thinking. Everything done to date in A.I. research is a kind of workaround, much like the Google self-driving car. It gives the appearance of intelligence through brute-force computing and data accumulation, but it isn't real intelligence.
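The "programmed ahead of time" point can be made concrete with a deliberately simplified sketch (my own toy illustration, not Google's actual system): a rule-based program is only as smart as the list of situations its authors anticipated, and anything outside that list falls through to a canned default.

```python
# Toy sketch of rule-based "intelligence": every situation must be
# anticipated and encoded in advance by a human programmer.
RESPONSES = {
    "mapped_stop_sign": "stop",
    "traffic_light_red": "stop",
    "pedestrian_in_path": "brake",
    "clear_road": "proceed",
}

def decide(situation: str) -> str:
    # Any situation not enumerated above falls through to a cautious
    # default; the program cannot invent a new response on its own.
    return RESPONSES.get(situation, "slow_down_and_wait")

print(decide("traffic_light_red"))    # stop
print(decide("fallen_tree_on_road"))  # slow_down_and_wait (no rule exists)
```

The program never errs on the cases it was given, but it also never handles a case it wasn't given; growing the table only postpones the problem.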
To understand why we don't have true artificial intelligence, we have to examine our assumptions about computers and the human mind. Computers have always been superior to humans in a couple of ways: they remember things perfectly, and they perform their tasks identically every single time. In this limited way they have always been "smarter" than people. The advent of computers, the power of their computation, and their easily understandable mechanisms led many to be swayed by that model into seeing the brain as no more than a very complex, massively parallel computer. This path has had two outcomes: the belief that computers can be made to think, and the belief that the brain can be sorted out much like a computer can.
It turns out that neither of these assumptions is true.
The problem of getting a computer to think ultimately comes down to what is referred to as the Hard Problem of Consciousness.
The hard problem of consciousness (Chalmers 1995) is the problem of explaining the relationship between physical phenomena, such as brain processes, and experience (i.e., phenomenal consciousness, or mental states/events with phenomenal qualities or qualia). Why are physical processes ever accompanied by experience? And why does a given physical process generate the specific experience it does—why an experience of red rather than green, for example?
. . . The hard problem contrasts with so-called easy problems, such as explaining how the brain integrates information, categorizes and discriminates environmental stimuli, or focuses attention. Such phenomena are functionally definable. That is, roughly put, they are definable in terms of what they allow a subject to do. So, for example, if mechanisms that explain how the brain integrates information are discovered, then the first of the easy problems listed would be solved. The same point applies to all other easy problems: they concern specifying mechanisms that explain how functions are performed. For the easy problems, once the relevant mechanisms are well understood, there is little or no explanatory work left to do.
Experience does not seem to fit this explanatory model (though some reductionists argue that, on reflection, it does; see the section on reductionism below). Although experience is associated with a variety of functions, explaining how those functions are performed would still seem to leave important questions unanswered. We would still want to know why their performance is accompanied by experience, and why this or that kind of experience rather than another kind. So, for example, even when we find something that plays the causal role of pain, e.g. something that is caused by nerve stimulation and that causes recoil and avoidance, we can still ask why the particular experience of hurting, as opposed to, say, itching, is associated with that role. Such problems are hard problems.
How does this relate to a self-driving car? The human mind has the mother of all shortcuts for dealing with vast amounts of data. Rather than having to learn, store, and retrieve the patterns for every conceivable type of road, we only have to learn one thing: the idea of what a road is. What would require a computer to sift through terabytes of information, we accomplish with a single, not terribly complex (for us) idea. Once we have that idea, we can not only recognize and navigate any passable road, we can also navigate a car where no road exists (e.g. drive carefully over a relatively flat patch of dirt around a tree that has fallen on the road), because we have an idea of the conditions necessary for driving a car somewhere. An idea of a road encompasses all possible versions, real and imaginary, of what a road can be.
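To illustrate the storage problem (again with a toy sketch of my own, not any real system): a recognizer that only knows the road variants it has explicitly been shown treats every unseen combination as "not a road," however obviously drivable it would be to a person holding the concept.

```python
# Toy illustration: an exact-match "road recognizer" that only knows
# combinations it has stored. The human idea of a road covers all of
# these and infinitely more with no storage at all.
known_roads = {
    ("asphalt", "two_lane", "dry"),
    ("gravel", "single_lane", "dry"),
    ("asphalt", "four_lane", "wet"),
}

def is_known_road(surface: str, lanes: str, condition: str) -> bool:
    # Any unseen combination is simply not a road to this program.
    return (surface, lanes, condition) in known_roads

print(is_known_road("asphalt", "two_lane", "dry"))  # True
print(is_known_road("dirt", "no_lanes", "dry"))     # False: the flat dirt
# patch around a fallen tree is drivable, but it was never stored.
```

Every new surface, lane count, and weather condition multiplies the number of combinations to store, while the human "idea of a road" costs nothing extra.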
An idea observes physical reality, and computing IS physical reality. Another way to say this is that computing is always the observation, never the observer. An idea is not something physical; a computer program is, and therein lies the problem. We don't know, and have no idea how to find out, how to get from something physical to something non-physical. That is the hard problem of consciousness.
Clearly we have brains, which are physical and which are somehow necessary for consciousness. So there is definitely a relationship between consciousness and the physical world. That is not in doubt. The problem is that we have no idea what that relationship is, and until we understand it, true AI will remain a distant dream. We may have to completely rethink our beliefs about what consciousness is and how it originates. (That discussion is beyond the scope of this article.)
An idea is a product of consciousness, which is not material and cannot be duplicated by any physical process we know of. The idea of a road is not a representation of a road. Nor is it a specification or a diagram, although it can incorporate these things. The idea of a road can include every real and imaginary road, as well as any type of representation of a road in any medium in which it is recognizable, even barely, as a road. What's important here is that the idea of a road can take an infinite number of variables into account because an idea transcends the physical reality that it observes. Any computer, no matter how powerful, will never be able to do this. It cannot transcend its own physical reality. You cannot compute your way to the creation of an idea. All you can do is define and refine an idea that you already have. And to define an idea is to eventually fall into the trap of infinite variables. (You can never fully define an idea, because there are an infinite number of definitions.) Increasing your computing power, memory, and storage does not solve the problem of having to define everything.
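The "trap of infinite variables" has a mechanical flavor that can be sketched in a few lines (another toy illustration of my own): every attempt to define a term in other terms introduces new terms that are themselves undefined, so the list of things still needing definition grows rather than shrinks.

```python
# Toy sketch of the definitional regress: defining "road" spawns new
# undefined terms, and defining those spawns more.
definition = {"road": ["paved", "surface", "for", "vehicles"]}

def undefined_terms(defs: dict) -> set:
    # Terms used in definitions but never themselves defined.
    used = {word for words in defs.values() for word in words}
    return used - set(defs)

print(undefined_terms(definition))  # 4 undefined terms
definition["paved"] = ["covered", "with", "asphalt", "or", "concrete"]
print(undefined_terms(definition))  # now more undefined terms, not fewer
```

Each pass at closing the gaps opens new ones, which is the point of the paragraph above: definition never bottoms out, while a single idea does the work for free.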
This is a crucial limitation of AI because it is impossible to define everything. This is why the formation of ideas is central to real intelligence. Ideas and concepts allow us to process otherwise unimaginable amounts of information by relying on a core of intangible concepts that encompass nearly everything we can hope to encounter rather than having to literally translate every single bit of input our minds receive. That’s why we can look at a road and identify it as a road with just a glance without ever having to really think about it. Computers, of course, do not operate anything like this. We function with intangibles and a computer functions with tangibles. That is a huge difference.
So if you were worried that computers were going to take over the world, rest easy. Actual thinking is going to be the exclusive domain of living creatures for the foreseeable future.