Good aim, at the wrong target, is always a miss. This describes much of the current work in Artificial Intelligence: Brilliant minds, clever programmers, amazing algorithms, all pointed at the wrong target with stupefying aim. Despite their brilliance, cleverness, and coding, someone will get hurt if we continue pursuing the type of AI in vogue.
A few weeks ago, Google put out guidance for research on preventing harm from AI. This last week, the Federal Government did the same. And a new research center just opened at Cambridge to ponder the issues: How will we prevent harm from AI? How will we prevent self-driving cars from killing people (or, as I mentioned recently, too many people)? How will we care for those losing their jobs to increased automation and robots? How will we ensure fair distribution of the money generated? What should we do or how can we prevent an advanced AI from going rogue? Can we build in an “off” switch? So forth and so on. The perils pile upon perils.
These are real issues. Even if Elon Musk and a few others have overstated the situation, AI can create far-reaching problems. We can build machines that harm humans. We’ve done it before. Nearly every technology humanity adopts can cause harm when misused or left unmonitored. This is why we have anti-lock brakes in our cars, kill switches on trains, and fuses in our homes, and why we require medical prescriptions for many drugs. But AI raises the stakes through speed and complexity: Computers execute millions of instructions per second, far faster than we can follow. Nor can we predict the full effects of any software, let alone the hyper-complex, self-adjusting algorithms modern AI systems use. And, thanks to its ever-shrinking size and cost, AI will appear more and more in cars, toys, appliances, and even unexpected places. Its impact will surround and, maybe, overtake us. We will soon live in an AI-encased world. Some of those devices will likely cause little or no harm, but others — as the sad self-driving car death in Florida demonstrates — will.
So I agree with Google and the U.S. government and other researchers that we must monitor AI progress and work for guidelines to protect humans from what these machines can, or could, do. But I also believe much of the problem comes from aiming at the wrong target. If we corrected our aim, many of the concerns would diminish while we still enjoyed AI’s benefits.
AI theorists consider what they call Artificial General Intelligence (or AGI) the ultimate goal: The intelligence of an AGI would match or beat — if you believe Musk, Kurzweil, and the other true believers — human intelligence. For these theorists, AI’s recent successes, including Google’s DeepMind, IBM’s Watson, and Tesla’s self-driving cars, are no more than steps toward that end. Like all goals, however, the pursuit of AGI rests not just on a desire to see what we can accomplish, but on beliefs about what is. That is, the hope for AGI begins by failing to appreciate human intelligence, assuming it to be the accidental by-product — an emergent condition with the illusion of free will — of random changes locked in a struggle for survival. If human intelligence is the epiphenomenon of an ever-changing collection of complexly arranged chemicals, then, by all means, let’s see if we can do better. But if intelligence is the designed result of an engineered system, an exquisite multilayered composite exceeding any human-created artifact, then pursuing its replacement might be not only a fool’s errand; it might be, and likely is, dangerous and ill-conceived.
The misguided goals, the bad aim, of so much AI (though not all) arise from dismissing human uniqueness. Such AI becomes not a tool to assist humans but one to replace them. Whether it replaces uniquely human abilities, such as making moral judgments, or squeezes humans out altogether, as some robotics proposals tend to assume, someone will get hurt. Re-aiming AI toward “Assisted Intelligence,” rather than replacement-directed “Artificial Intelligence,” would bring more benefit and remove the scariest scenarios. Our tools do not cause our problems; how we use them does.