Elon Musk has come out saying that he considers AI (Artificial Intelligence) to be one of the biggest threats to humanity in our time. And he is not alone in this thinking; movies like I, Robot and Terminator have popularized this perspective for some time.
To me, AI is not a worry, but my philosophical assumptions differ greatly from Elon's. Among the people who most praise AI, and those who most condemn it, there is a common assumption of rationalist humanism and some remnants of logical positivism. Both assume that everything worth knowing can be constructed by human logic, and that there is no need (or place) for revelation or subjective feelings. This philosophy was more prosperous in the 50s and 60s than today, and post-modernism has definitely deflated a lot of its (over-inflated) optimism, but it is still a force. I am not going to get into the history of philosophy, and how the Enlightenment and post-Enlightenment separated reason (physics) from meta-physics (read Francis Schaeffer's small book if you are interested), but it is enough to say here that we have inherited a way of thinking which holds that reason can suffice to give meaning, values, and purpose (whether to protect humanity or destroy it).
This is the whole philosophical field of epistemology (how we know things) and ontology (what exists). To put it simply, I believe that knowledge is more than just a building up of systems from a few fundamental logical postulates; rather, all knowledge (both physics and meta-physics) is dependent on layer upon layer of ever broader (and more complex) contexts, which I believe ultimately end/begin in the mystery of God. In the context of computers, Tim Berners-Lee saw this with his definition of the Semantic Web (sub-note: with the shift from XML to JSON, most computer programmers have given up on this vision as being too complex for 'productive' work). Much of my thinking comes out of the ontology philosophy of Barry Smith and my own database/ontology work. I have worked with AI languages since my days at McDonnell Douglas (though not much recently), and as we said back then: for every amount of artificial intelligence there will be an equal amount of artificial stupidity.
The MITRE Corporation (under a DARPA contract) spent millions (if not billions) on an AI ontology with little result; it was just too complex and difficult. Google cut and ran in the other direction (forget ontologies, just crunch numbers) and made a few dollars at search before hitting the wall of needing ontology (how do you TRUST the results? How do you define 'fake news', let alone sort it out?). What I see is that AI (in its current software incarnations) is incredible and amazing in what it can do, whether machine vision or playing Go; but it is all a number-crunching algorithm performing some form of categorization (asking "What?") within a constrained context. In the future I expect to see a lot accomplished this way in medical diagnostics and other fields. But in all these accomplishments, the success comes by defining a constrained context (one that can be categorized). As long as AI stays within a context (which may be rather broad) it will be productive. I think it stumbles when the questions are of the nature of "Why?", and fails when it comes to meta-physics ("What does it mean?"). Almost anyone post-Kant will deny that meta-physics has any logical foundation, and hence it cannot be programmed.
The problem does not go away, but I believe that AI will be of little help here. This is the problem Google is having to confront: a self-driving car needing to reason over how to decide between avoiding a head-on collision and hitting a pedestrian. They may be able to program this decision into the car, but I don't think they will be able to get the car to reach this decision on its own.
Because most of these people philosophically deny a rational meta-physics, yet assume that a rational 'physics' can produce meaning and value, they are open to the flip side of the coin: the dystopia where the computers reason that it is best to enslave (or annihilate) us for our own good (i.e., I, Robot). I can see AI systems running amok and doing a lot of damage, but it will be programming 'bugs' (or deliberate programming) causing drones to categorize/target the wrong 'enemy', not an autonomous, reasoned philosophical conclusion.
I am convinced that as we go 'deeper' into knowledge we are confronted more and more with contradictory 'facts' that must be mutually affirmed, whether this is the nature of light as wave and particle, or the nature of God as Trinity. This goes beyond the realm of logic, and I think it is why the church focuses on virtue and relationship as the path to knowledge, particularly when we move into meta-physics. This was the debate between Barlaam and St. Gregory Palamas: is God known by rational contemplation or by ascetical effort? As much as I have enjoyed Data and the Doctor on Star Trek, I don't think that a computer will ever come near human qualities of virtue and vice (though I can see people trying to give up their humanity and become more like computers).
AI within narrow domains has a lot of power, but the complexity of larger scopes will cripple it, and the philosophy of meta-physics will set its limits. Still, it will be interesting to follow the developments and debates.