Surely everyone is familiar with Isaac Asimov's Laws of Robotics, which he set out in some of his early robot-themed fiction:
1. First Law:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. Second Law:
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. Third Law:
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

But what you may not know is that Asimov later added what he called the "zeroth" Law, so numbered to precede Laws 1-3 given above:
0. Zeroth Law:
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

The point: as Asimov himself came to realize, the original three Laws were incomplete and inadequate, and had to be adjusted, "tinkered with" if you will, to apply better and do a better job of governing the actions of computers in their dealings with humans.
So the problem is not computers becoming malevolently intelligent; it is the inadequacy of their programs to foresee all possible disasters, including those that humans might bring upon themselves, and to deal with them for the benefit or protection of humanity.
Example: a computer operating under the original three Laws of Robotics cited above would not have taken action to stop a nuclear war from happening, even if it could have. Nothing in the original three Laws would have prompted it to do so.
But the modified Laws, with the zeroth Law added, presumably would have.
But how many other Laws would need to be added to cover a host of other possible dangerous situations, bearing in mind that we would likely never anticipate all of them anyway?
Computers only do what they're told to do. But if they aren't told enough, they don't do enough.
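To make that concrete, here is a minimal sketch in Python. It is entirely hypothetical: the hazard names and the action_permitted function are illustrative assumptions, not anything from Asimov or any real system. It shows a rule checker that only blocks the harms its programmers thought to enumerate.

```python
# Hypothetical sketch: an allow-by-default rule checker that only blocks
# hazards its programmers anticipated and wrote down.

# Hazards the programmers thought to list ahead of time.
KNOWN_HAZARDS = {
    "injure_human",          # First Law territory
    "ignore_human_order",    # Second Law territory
    "self_destruct",         # Third Law territory
}

def action_permitted(action: str) -> bool:
    """Allow any action that is not on the explicit hazard list."""
    return action not in KNOWN_HAZARDS

if __name__ == "__main__":
    # Anticipated case: correctly blocked.
    print(action_permitted("injure_human"))        # False

    # Unanticipated case: nothing in the list covers a threat to humanity
    # as a whole, so it sails through -- the gap the "zeroth" Law patches.
    print(action_permitted("allow_nuclear_war"))   # True
```

A checker like this does exactly what it was told and nothing more; every newly recognized danger, like the one the zeroth Law addresses, has to be added by hand after someone thinks of it.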
More about Asimov's Laws: https://en.wikipedia.org/wiki/three_laws_of_robotics