The Weiler Psi

Parapsychology Journalism: The People, The Theory, The Science, The Skeptics

Artificial Intelligence is Not Taking Over the World


I am old enough to have witnessed the full progression of artificial intelligence; it did not become a serious pursuit until I was an adult, owing to the limitations of early computers, and all significant progress has happened in my lifetime.  The dream of creating intelligent, thinking computers has been seen as a real possibility since the 1970s, and as computers have gotten faster, stored and accessed more memory, and gained parallel processing, it has always seemed as though we were one step closer to making it a reality.  If only a computer is powerful enough, the thinking goes, we will be able to achieve true artificial intelligence.  Many very intelligent people believe this to be true.  Even Elon Musk, founder of many successful and innovative companies, has expressed concern over the eventual power of artificial intelligence.

On the topic of AI, Musk issued a warning and strongly suggested that there must be strict regulations on this potential development. Musk’s full answer is below, and can be watched at the 1 hour and 7 minute mark on a video of the interview via Erik Bjäreholt:

“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is — it’s probably that. So we need to be very careful with artificial intelligence.

“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like — Yeah, he’s sure he can control the demon? Doesn’t work out. [audience laughs]”

Yet time after time, the closer we get, the farther away the goal seems.  A case in point is the Google self-driving car.

Would you buy a self-driving car that couldn’t drive itself in 99 percent of the country? Or that knew nearly nothing about parking, couldn’t be taken out in snow or heavy rain, and would drive straight over a gaping pothole?

If your answer is yes, then check out the Google Self-Driving Car, model year 2014.

. . . Google often leaves the impression that, as a Google executive once wrote, the cars can “drive anywhere a car can legally drive.” However, that’s true only if intricate preparations have been made beforehand, with the car’s exact route, including driveways, extensively mapped. Data from multiple passes by a special sensor vehicle must later be pored over, meter by meter, by both computers and humans. It’s vastly more effort than what’s needed for Google Maps.

Google’s cars are better at handling some mapping omissions than others. If a new stop light appeared overnight, for example, the car wouldn’t know to obey it. However, the car would slow down or stop if its on-board sensors detected any traffic or obstacles in its path.

Google’s cars can detect and respond to stop signs that aren’t on their map, a feature that was introduced to deal with temporary signs used at construction sites. But in a complex situation, like an unmapped four-way stop, the car might fall back to slow, extra-cautious driving to avoid making a mistake. Google says that its cars can identify almost all unmapped stop signs and would remain safe if they miss a sign, because the vehicles are always looking out for traffic, pedestrians and other obstacles.
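The behavior described above amounts to a fixed priority of pre-written fallbacks. Here is a deliberately simplified sketch in Python; every name in it (`Detection`, `choose_behavior`, the mode strings) is invented for illustration and is not Google’s actual code. It only shows the shape of the logic: obey stop signs whether mapped or not, react to sensed obstacles, and drop into a generic cautious mode for anything else unmapped.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str        # e.g. "stop_sign", "obstacle", "four_way_stop"
    on_map: bool     # was this feature in the pre-built route map?

def choose_behavior(detections: list) -> str:
    """Pick a driving mode from what the car currently perceives."""
    # Stop signs are recognized even when absent from the map
    # (the feature added for temporary construction signs).
    if any(d.kind == "stop_sign" for d in detections):
        return "stop"
    # Sensed traffic or obstacles always cause slowing or stopping.
    if any(d.kind == "obstacle" for d in detections):
        return "slow_or_stop"
    # Any other unmapped situation (an unmapped four-way stop, say)
    # falls back to slow, extra-cautious driving.
    if any(not d.on_map for d in detections):
        return "cautious"
    return "normal"
```

The revealing part of the sketch is what is missing: every branch had to be written in advance. A situation nobody enumerated doesn’t get understood; it merely falls through to a generic “be careful” mode.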

Google is a company with amazing resources, and even so it can’t make its car do the one task that every living creature can manage: think for itself.  The car has to be programmed ahead of time for practically every situation it could possibly encounter, and it can’t react in a novel way to novel situations.  This is the problem with all artificial intelligence: it can’t handle the one task we most need it to do: actual thinking.  Everything done to date in A.I. research is a kind of workaround, much like the Google self-driving car.  It gives the appearance of intelligence through brute-force computing and data accumulation, but it isn’t real intelligence.

To understand why we don’t have true artificial intelligence requires that we examine our assumptions about computers and the human mind.  Computers have always been superior to humans in a couple of ways: they remember things perfectly, and they perform their tasks identically every single time.  In this limited way they have always been “smarter” than people.  The advent of computers, the power of their computing and their easily understandable mechanisms led many to be swayed by that model into seeing the brain as no more than a very complex, massively parallel computer.  This path has had two outcomes: the belief that computers can be made to think, and the belief that the brain can be sorted out much like a computer can.

It turns out that neither of these assumptions is true.

The problem of getting a computer to think ultimately comes down to what is referred to as the Hard Problem of Consciousness.

The hard problem of consciousness (Chalmers 1995) is the problem of explaining the relationship between physical phenomena, such as brain processes, and experience (i.e., phenomenal consciousness, or mental states/events with phenomenal qualities or qualia). Why are physical processes ever accompanied by experience? And why does a given physical process generate the specific experience it does—why an experience of red rather than green, for example?

. . . The hard problem contrasts with so-called easy problems, such as explaining how the brain integrates information, categorizes and discriminates environmental stimuli, or focuses attention. Such phenomena are functionally definable. That is, roughly put, they are definable in terms of what they allow a subject to do. So, for example, if mechanisms that explain how the brain integrates information are discovered, then the first of the easy problems listed would be solved. The same point applies to all other easy problems: they concern specifying mechanisms that explain how functions are performed. For the easy problems, once the relevant mechanisms are well understood, there is little or no explanatory work left to do.

Experience does not seem to fit this explanatory model (though some reductionists argue that, on reflection, it does; see the section on reductionism below). Although experience is associated with a variety of functions, explaining how those functions are performed would still seem to leave important questions unanswered. We would still want to know why their performance is accompanied by experience, and why this or that kind of experience rather than another kind. So, for example, even when we find something that plays the causal role of pain, e.g. something that is caused by nerve stimulation and that causes recoil and avoidance, we can still ask why the particular experience of hurting, as opposed to, say, itching, is associated with that role. Such problems are hard problems.

How does this relate to a self-driving car?  The human mind has the mother of all shortcuts for dealing with vast amounts of data.  Rather than having to learn, store and retrieve the patterns for every conceivable type of road, we only have to learn one thing: the idea of what a road is.  What would require a computer to sift through terabytes of information, we accomplish with a single, not terribly complex (for us) idea.  Once we have that idea, we can not only recognize and navigate any passable road, but we can also navigate a car where no road exists (i.e. drive carefully on a relatively flat patch of dirt around a tree that has fallen on the road) because we have an idea of the conditions necessary for driving a car somewhere.  An idea of a road encompasses all possible versions, real and imaginary, of what a road can be.

An idea observes physical reality, and computing IS physical reality.  Another way to say this is that computing is always the observation, never the observer.  An idea is not something physical; a computer program is, and therein lies the problem.  We don’t know, and have no idea how to find out, how to get from something physical to something non-physical.  That is the hard problem of consciousness.

Clearly we have brains, which are physical and which are somehow necessary for consciousness.  So there is definitely a relationship between consciousness and the physical world.  That is not in doubt.  The problem is that we have no idea what that relationship is, and until we understand it, true AI will remain a distant dream.  We may have to completely rethink our beliefs about what consciousness is and how it originates.  (That discussion is beyond the scope of this article.)

An idea is a product of consciousness, which is not material and cannot be duplicated by any physical process we know of.  The idea of a road is not a representation of a road.  Nor is it a specification or a diagram, although it can incorporate these things.  The idea of a road can include every real and imaginary road, as well as any type of representation of a road in any medium in which it is recognizable, even barely, as a road.  What’s important here is that the idea of a road can take an infinite number of variables into account because an idea transcends the physical reality that it observes.  Any computer, no matter how powerful, will never be able to do this.  It cannot transcend its own physical reality.  You cannot compute your way to the creation of an idea.  All you can do is define and refine an idea that you already have.  And to define an idea is to eventually fall into the trap of infinite variables.  (You can never fully define an idea because there are an infinite number of definitions.)  Increasing your computing power, memory and storage does not solve the problem of having to define everything.
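The trap of defining instead of having an idea can be shown in a few lines. This is a deliberately naive sketch with invented names (`ROAD_FEATURES`, `looks_like_road`): a “road” is defined by enumerating known features, and the definition immediately misses a perfectly drivable surface that nobody enumerated.

```python
# A naive, enumerated "definition" of a road. Illustrative only.
ROAD_FEATURES = {"asphalt", "lane_markings", "curb", "gravel"}

def looks_like_road(features: set) -> bool:
    # The "definition": at least two known road features present.
    return len(features & ROAD_FEATURES) >= 2

# A mapped street qualifies:
looks_like_road({"asphalt", "lane_markings"})        # True
# A flat, drivable patch of dirt around a fallen tree does not:
looks_like_road({"flat_dirt", "drivable_clearing"})  # False
```

To “fix” the second case, we must add more features (dirt track, snow-covered lane, cobblestone, and so on), and each addition invites new misses. The enumeration chases an idea it can never catch, which is exactly the problem more computing power does not solve.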

This is a crucial limitation of AI because it is impossible to define everything.  This is why the formation of ideas is central to real intelligence.  Ideas and concepts allow us to process otherwise unimaginable amounts of information by relying on a core of intangible concepts that encompass nearly everything we can hope to encounter rather than having to literally translate every single bit of input our minds receive.  That’s why we can look at a road and identify it as a road with just a glance without ever having to really think about it.  Computers, of course, do not operate anything like this.  We function with intangibles and a computer functions with tangibles.  That is a huge difference.

So if you were worried that computers were going to take over the world, rest easy.  Actual thinking is going to be the exclusive domain of living creatures for the foreseeable future.


15 comments on “Artificial Intelligence is Not Taking Over the World”

  1. Travis Perry
    May 30, 2016

    I agree with your sentiment, although I know people like Ray Kurzweil have claimed that innovation is cyclical and exponential. They expect larger and larger gaps in AI to be crossed over time.

    Interestingly enough I was reading over a book of David Bohm’s, and he had this to say about intelligence:

    “at least implicitly everyone does accept the notion that intelligence is not conditioned (and indeed one cannot consistently do otherwise). Consider the example of an attempt to assert that all man’s actions are conditioned and mechanical… Either it is said that man is basically a product of his hereditary constitution, or else that he is determined entirely by environmental factors. However, one could ask of the man who believed in hereditary determination whether his own statement asserting his belief was nothing but the product of his heredity… One may ask of the man who believes in environmental determinism whether the assertion of such a belief is nothing but the spouting off of words to which he was conditioned by his environment…

    Indeed it is necessarily implied, in any statement, that the speaker is capable of talking from *intelligent perception*, which is in turn capable of a truth that is not merely the result of a mechanism based on meaning or skill acquired in the past.”

    So if that’s the case, that adds a new dimension to the argument. If intelligence is a place or thing outside of mechanisms and environmental factors, then could machines, which are products of a man-made environment, ever truly recognize or participate in intelligence, or would they just ramble off code originally given to them by programmers?

  2. Evan
    January 31, 2015

    I have heard about this story on Mysterious Universe and I want to know your thoughts on the matter. Is it really AI achieved? http://mysteriousuniverse.org/2014/12/artificial-intelligence-the-worms-are-rising-up/

    • craigweiler
      January 31, 2015

      It’s hard to know without having more details, but I think it’s a step in the right direction. It may be possible to create the conditions for consciousness, a fundamental property of the universe, to take hold. It could very well be that a neural network and an array of sensors is enough for consciousness to start forming. Certainly any approach where you code instructions is doomed to fail, as I outlined in my article, but it seems to me that this doesn’t make true AI impossible.

      According to what I understand about how consciousness works, they should be able to create many of these devices and that they would naturally start to form a team of sorts if they had a common goal, much like insects.

      If they did that, then it would definitely be true AI.

  3. Mark Szlazak
    January 31, 2015

    There are many good TED videos about AI and machine learning. Popular ones include “Humans Need Not Apply,” “The Wonderful and Terrifying Implications of Computers That Can Learn” and “Race Against the Machine.”

    The problem is the rate of development of AI, not whether these robots and computers are conscious. After all, I cannot tell whether anyone else is conscious. If you have a robot that is “smarter” than humans, lasts longer than humans, is faster or more reliable than humans, etc., then we are dealing with an extinction-level event for humans.

    Look at Markram’s brain project, with 1 billion euros in funding. It will help with neuroscience and brain diseases but will also advance AI or confirm models already in AI. In any case, this project will help us compete with robots by merging technology with biology. We will need to evolve into cyborgs.

  4. guy boutron
    November 26, 2014

    For French physicist Philippe Guillemant, artificial consciousness isn’t possible.

  5. David J. L.
    November 7, 2014

    Anthony Sanchez? Same guy who wrote the GB-1 app?

  6. Pat McDonald
    November 6, 2014

    One of my basic questions is “Is there intelligent life on Earth?” While the article is basically accurate as far as AI development goes, it does not address the question of what exactly AI would be taking over FROM. And don’t say it’s us. 99% of people have very little control or input over their lives. Better systems would not exclude AIs, but current systems certainly do exclude a lot of people from opportunities.

  7. Tony D
    October 30, 2014

    Great article, and I love all your work! I have been following AI and Kurzweil’s cult for many years. Imagine that: guerrilla atheists for the most part, yet they have woo-woo faith in AI becoming aware and surpassing human intelligence.

    I’ve stated over and over that AI lacks a critical aspect of intelligence, and that is intuition, such as perceiving the future, where the brain far exceeds any kind of fast and advanced computation that a computer or algorithm does. This is where human intelligence will always be superior to AI, along with perception, reasoning, compassion, and ethics.

    Please look into intuition and how AI will never obtain it. AI will never have a subconscious – the access to the supercomputer, if you will. The neuroscientists’ model of the brain is incomplete, in that the subconscious is poorly understood – as is most of the brain. And AI enthusiasts are wagering on AI being a model of the brain, from a materialist/mechanist stance. In other words, an incomplete model. Then of course there is the theme of the nature of subjective consciousness – the hard problem you touched upon. AI will never have that.

    Allow Descartes to sum it up:

    In the Discourse, Descartes says:

    If there were machines which bore a resemblance to our bodies and imitated our actions as closely as possible for all practical purposes, we should still have two very certain means of recognizing that they were not real men. The first is that they could never use words, or put together signs, as we do in order to declare our thoughts to others. For we can certainly conceive of a machine so constructed that it utters words, and even utters words that correspond to bodily actions causing a change in its organs. … But it is not conceivable that such a machine should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do. Secondly, even though some machines might do some things as well as we do them, or perhaps even better, they would inevitably fail in others, which would reveal that they are acting not from understanding, but only from the disposition of their organs. For whereas reason is a universal instrument, which can be used in all kinds of situations, these organs need some particular action; hence it is for all practical purposes impossible for a machine to have enough different organs to make it act in all the contingencies of life in the way in which our reason makes us act.

    • craigweiler
      October 30, 2014

      I was not aware of this statement by Descartes. He clearly understood the problem. Thanks for sharing it.

      And thanks for such a thoughtful comment overall.

  8. marcustanthony
    October 29, 2014

    I agree that AI is a long way from making a conscious machine and is probably barking up the wrong tree. However, people are going to keep pumping time and money into this field, and eventually lessons will be learned.

    The other thing is that the line between human and machine will grow increasingly thin. In a sense, when we jump into a car, we are already part machine. We are a consciousness driving a machine. People act as if the car was their body. For example, a small man might act like a total bully on the road, while a much bigger man in a small, cheaper car might feel intimidated. The car has become the body, in effect.

    The effect of technology on the human mind and experience is already profound. Many, many people now spend a great deal of their lives in virtual environments. Just imagine how profound the allure of the machine will be once the humble iPhone or computer game become an all-encompassing multi-sensory experience! Large numbers of people will be ensnared. This is certain, because a huge number are already ensnared.

    So, in a nutshell, the machines don’t need to be conscious to have control over us.

    I won’t go into all the pros and cons and possible implications, nor how education might assist in rectifying the problem. The main problem that concerns me is that technology tends to disembody us and take us away from presence and from the world. It typically reduces self-awareness. In effect, the Internet is just an extension of the mind (or ego, if you prefer), or at least that is how the mind tends to engage it. In other words, it comprises the world of abstract, disembodied thought. Once this dominates consciousness and a person does not have the self-awareness to develop a deeper relationship with and control over that thought, he is then a slave to the mind, not a master of it. So more than ever we need education systems and spiritual/philosophical teachings which empower people to be masters of the mind. I’m writing a book about this at the moment (Champion of the Soul), which is all but finished. One of the chapters is specifically related to how to deal with the Internet and developments in IT while retaining a spiritual awareness.

    Marcus

    • craigweiler
      October 29, 2014

      Well said.

  9. zebzaman
    October 29, 2014

    Well, to lighten the mood, I thought I’d share this hilarious little tale: on a lake in Bavaria, the Ammersee, there have been a number of smallish vessels doing lake excursions, mainly for tourists, in times past perhaps as a way to connect all the places around the lake. These have been piloted by captains since they first roamed the lake. Some super-smart CEO of Bavarian Boat Whatever thought: “Hmm. How can we fix a problem we haven’t got, namely the boats crashing into the landing platforms due to human error?” So they installed very expensive computer-aided landing intelligence. Promptly the boats crashed. They improved the program. They still crashed. So they had to go back to the good old human doing the landing. Ha ha ha.

  10. Peter
    October 29, 2014

    Artificial intelligence of the computer variety may not be a threat, but the artificial/pseudo intelligence of many of the world’s politicians and almost all of the world’s Atheists is a growing threat due to their substituting cant for consciousness.

  11. Maud Nordwald Pollock
    October 29, 2014

    Dear Craig: Thank you for your thoughtful article, based on the information you have available. However, your level of knowledge is obviously not at the level of what is really going on on the planet. We currently have a breakaway society that is between 100 and possibly 1,000 years ahead of the rest of us. One great source is Richard Dolan http://www.richarddolanpress.com/ who is considered one of the foremost experts on UFOs, ours and others. You might want to listen to some of the interviews by Kerry Cassedy of Project Camelot; Captain Mark Richards http://projectcamelotportal.com/video-library/2209-space-command-2nd-interview-w-capt-mark-richards speaks about artificial intelligence. Also researcher Anthony Sanchez http://projectcamelotportal.com/component/k2/2558-anthony-sanchez-dulce-project-leonid. In any case, if you are going to write on this subject you might want to go deeper down into the information rabbit hole. It requires a willingness to decide which color pill you are willing to take, as in the Matrix: do you take the blue pill and stay with the status quo, or take the red pill and look for the truth? Good luck with your research, remembering that academia is not always reliable; even if the information were known, it certainly would not be made available. Light, love, wisdom, harmony and joy always.

    • Pat McDonald
      November 6, 2014

      Everybody’s personal “truth” is what they choose to believe. 🙂
