There is a strong possibility that in the not-too-distant future, artificial intelligences (AIs), perhaps in the form of robots, will become capable of sentient thought. Whatever form it takes, this dawning of machine consciousness is likely to have a substantial impact on human society.
Academic and fictional analyses of AIs tend to focus on human-robot interactions, asking questions such as: would robots make our lives easier? Would they be dangerous? And could they ever pose a threat to humankind?
These questions ignore one crucial point. We must consider interactions between intelligent robots themselves and the effect that these exchanges may have on their human creators. For example, if we were to allow sentient machines to commit injustices on one another — even if these ‘crimes’ did not have a direct impact on human welfare — this might reflect poorly on our own humanity. Such philosophical deliberations have paved the way for the concept of ‘machine rights’.
“Machine rights”? I have to confess, when I read articles like this I have an almost visceral reaction. It amounts to a full-blown, ongoing perplexity and fascination with the capacity of otherwise intelligent people to engage in serious-sounding group delusion. To pick this apart is a little like explaining to someone why his interest in horoscopes is probably untethered to any genuine scientific knowledge about the planets, the relevant laws governing their motion, and so on. I’m tempted to say, “Yes, that’s right, the robots are becoming intelligent so quickly that they may soon take control. In fact, here they come now! Run for cover!”
Let’s do some not-too-painful sanity checking. What is the current state of robot technology? MIT is famous for cutting-edge work on robots (cf. Rodney Brooks), and it has a robot that is getting better at identifying large objects like plates, as distinct from, say, a salad bowl.