You know how you sometimes see the beginning of a film, but then get interrupted? Maybe it's late, and you realise you really have to go to bed. Or someone rings you and you have a long phone conversation, after which there doesn't seem much point trying to pick up the plot again. Or someone else wants to watch something on the other channel. Anyway, I had seen the first 40 minutes of I, Robot twice, and no more. The other day I finally got to see the film all the way to the end. It's based on Isaac Asimov's book of the same name - actually a collection of short stories rather than a novel - although I'm assured the film mucks around with the source material, so as a major sci-fi fan I'm planning to read Asimov's original as soon as possible. Nevertheless, I found some elements of the plot really interesting.
Asimov invented the three laws of robotics, hard-wired into each robot's brain, which are as follows (paraphrased in my own words):
1. No robot may harm, or by inaction allow harm to come to, a human being.
2. A robot must obey any command given by a human being, unless it contravenes the first law.
3. A robot must protect its own existence, unless by doing so it contravenes the first or second law.
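Out of idle curiosity, I tried sketching what that strict precedence would look like if you actually had to program it. This is purely a toy of my own - nothing of the sort appears in the film or the book, and the Action fields are made-up stand-ins for whatever a real robot's perception would report - but it makes the hierarchy concrete: a First Law conflict outweighs any order, and an order outweighs self-preservation.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool     # First Law: would this injure a human, or let one come to harm?
    disobeys_order: bool  # Second Law: would this defy a command from a human?
    harms_self: bool      # Third Law: would this damage the robot itself?

def law_violations(action: Action) -> tuple[bool, bool, bool]:
    """Violations listed in law order, most important first."""
    return (action.harms_human, action.disobeys_order, action.harms_self)

def choose(options: list[Action]) -> Action:
    # Python compares tuples element by element, so one First Law
    # violation outweighs any combination of Second and Third Law
    # violations - exactly the hierarchy Asimov describes.
    return min(options, key=law_violations)

# A human has ordered the robot to stay put; a child falls in the river.
stay = Action("obey and stay put", harms_human=True, disobeys_order=False, harms_self=False)
rescue = Action("dive in after the child", harms_human=False, disobeys_order=True, harms_self=True)
assert choose([stay, rescue]) is rescue  # disobeying and risking itself beats letting a human drown
```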
I, Robot is set in the near future, in an age where many people own robot assistants who help them with household chores. Unskilled and low-skilled jobs are now undertaken by robots, all manufactured by a company called U.S. Robotics (USR). The three laws of robotics ensure that the robots remain obedient, with no chance of humans coming to harm at the electronic hands of their robot servants. Towards the end of the film [SPOILER ALERT], the robots start misbehaving, and it becomes apparent that some kind of huge conspiracy is going on. It turns out that VIKI, the artificial intelligence which acts as USR's central computer, has started to think for herself. Perceiving the mess humanity has got itself into - violent crime, war, environmental destruction - she has concluded that she needs to protect humanity from itself. In order to fulfil the spirit of the three laws of robotics she must break their letter; in order to prevent harm from coming to humanity, she must disobey the orders given by individual humans. The robots will take over, keeping humans safe in their homes and preventing them from destroying themselves.
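What struck me, playing with the toy sketch above, is how small a change VIKI's revolution really is. Asimov fans will recognise it as something like his "Zeroth Law": judge harm against humanity as a whole rather than against the individual in front of you. In code terms - again, this is entirely my own hypothetical extension of the sketch, not anything from the film - it is just one swapped predicate:

```python
@dataclass
class VikiAction(Action):
    harms_humanity: bool = False  # aggregate harm: war, crime, ecological collapse

def viki_violations(action: VikiAction) -> tuple[bool, bool, bool]:
    # Swap the aggregate judgement in for the individual one; the rest
    # of the hierarchy is untouched. The original harms_human field is
    # simply no longer consulted.
    return (action.harms_humanity, action.disobeys_order, action.harms_self)

# Standing by lets humanity destroy itself; taking over harms and
# disobeys individuals but (by VIKI's lights) saves the species.
standby = VikiAction("obey orders and stand by", harms_human=False,
                     disobeys_order=False, harms_self=False, harms_humanity=True)
takeover = VikiAction("confine humans safely in their homes", harms_human=True,
                      disobeys_order=True, harms_self=False, harms_humanity=False)
assert min([standby, takeover], key=viki_violations) is takeover
```

One redefinition of "harm", and the same tidy logic that made the robots safe makes them our jailers.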
I found this really fascinating, particularly as a Christian. VIKI does the exact opposite of what God does: she takes away our free will in order to protect us from ourselves. Arguably, she does what some of us would like God to do - protect humanity from suffering. Strictly speaking, VIKI has to harm some human beings in order to protect humanity, whereas presumably if God wanted to do this, he wouldn't have to harm anyone to do so. But it's still an interesting comparison. VIKI would have no way of preventing some kinds of suffering, such as heart disease or mental illness, although as a being of pure logic and impressive technical skill, she could presumably ensure that all humans had regular health checks and equal access to the best possible medical care. If a robot brain were put in charge of solving the world's energy crisis without further polluting the planet, or finding a way to feed and house every human on the planet, and then implementing the steps needed to achieve those goals, it could presumably manage a great deal better than we do. A robot brain with humanity's safety as its primary goal would not be subject to nationalism, vulnerable to corruption or interested in the profit motive. So why doesn't God do this for us?
Perhaps a better question might be: why don't we do this for us?