Ethical issues involved in the usage of robots
Robots nowadays can complete complicated and highly repetitive tasks quickly and with few mistakes. Factory automation has already made large numbers of factory workers obsolete. There is fear that future robots may be able to replace humans in other jobs as well.
What it means to be a healthy human is to move, to do work, we shouldn't replace that or cancel it out. I'm personally disturbed by the notion of a world where we have these robots and better and better artificial intelligence, where systematically those systems replace humans, human services, human work. I think we're at our limit at what machines should do for us.
-- Hugh Herr, Massachusetts Institute of Technology 7
Another view is that some jobs are simply not suitable for humans, even if the cost of labor is relatively low. For example, many processes in a car factory have to be automated for safety reasons.
Some things can't be done safely by people. So you need a robot.
-- William Botwick, President of General Motors Thailand 1
In conclusion, a balance has to be struck in the future between robot and human workers. The pace at which robots replace humans in simple, repetitive labor has to be matched by the retraining and education of the displaced workers.
"I can imagine a future where it is much cheaper to dump old people in big hospitals where machines care for them."
-- Professor Noel Sharkey, University of Sheffield, England 2
As robots become advanced enough to take care of humans, this scenario might happen. Is it ethical and morally right to let robots alone "manage" a large group of people who need care, when the robots act out of pre-programmed instructions rather than out of altruism, which robots can never understand?
Old people already fear computers, much less a robot sticking them for a diabetes test! Here's a better idea: robots to care for children, freeing up people to deal with the elderly. Stay with me now. Old people are afraid of robots, while kids don't know any better. And there's another upside. By the time the kids are old, they'll be fine with Olav's crazy robot idea. Then we won't need human interaction ever again!
-- Anonymous internet user 3
The internet user refers to this article by CNET. An excerpt from the article:
"Technology will contribute to resolve part of the challenge with employees in the health care sector," Olav Ulleren, head of a group representing Norwegian municipalities, told Reuters. "It could also help people live longer in their own homes."
Would the elderly rather live longer, or rather have humans taking care of them?
It must be noted that some consider healthcare by robots to be inevitable:
Most of us would rather be attended to in a hospital by a robot than be ignored. And given the choice to stay in our own homes with a nursebot or go to a nursing home, a robot would allow us to continue to live independently as well as offer a more cost-effective alternative.
-- Joanne Pransky, American robotics expert and futurist 7
Nevertheless, there will still be limits to what robots can do. Many situations can only be handled by someone with experience, for example when a patient has a "new" disease that the robot does not comprehend because it was never programmed to recognize it. Even robots acting as nurses cannot be fully trusted, as unusual situations arise all the time that exceed the limitations of a programmed machine.
As we become more dependent on robotic devices in security-critical and life-threatening situations (e.g. an auto-eject system), what would happen if the robots fail at some point?
Kenji Urada, killed in 1981, is often cited as one of the first people to die at the hands of a robot. He is believed to have been doing maintenance on a robotic arm in a factory but failed to switch the robot off completely. The robot accidentally pushed him into a grinding machine, resulting in his death. 4
However, some think that robots require no special treatment for now, because they are no more dangerous than ordinary machinery:
A radio-controlled car controlled by a six-year old is far more dangerous than a Roomba.
-- Colin Angle of iRobot, maker of the Roomba automated vacuuming robot 4
In 2007, South Korea stationed robotic sentries built by Samsung to guard its border with North Korea. Each is equipped with two cameras and a machine gun. What if a robot accidentally shot an innocent person? Is the use of robots to harm humans acceptable?
I have worked in artificial intelligence for decades, and the idea of a robot making decisions about human termination terrifies me.
-- Professor Noel Sharkey, University of Sheffield, England 5
On the other hand, some viewed that there is a place for robots in military uses:
Robotics systems may have the potential to out-perform humans from a perspective of the laws of war and the rules of engagement.
And there are no emotions that can cloud judgement, such as anger.
-- Ronald Arkin, Georgia Institute of Technology, Atlanta 5
Human vs robot
... The Skynet Funding Bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.
The quote above is taken from the Terminator movies, in which Skynet is a supercomputer so advanced that it becomes self-aware, thinks for itself and views humankind as its enemy. Although this sounds like science fiction, it is not impossible that it may come true in the far future.
In any case, a catastrophe of a smaller scale (but nevertheless still serious) may happen sooner: robotic systems in the near future could have glitches that make them accidentally turn on humans. For example, military robots programmed to attack enemies could, through a software glitch, attack their allies instead.
Three Laws of Robotics
In 1942, Isaac Asimov published his short story "Runaround", which contains three rules limiting robots in their interactions with humans:
- A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Isaac Asimov later added a "zeroth law":
0. A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
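The laws form a strict priority hierarchy: a higher law always overrides a lower one. As a toy illustration only (the `harms_humanity`, `harms_human`, `disobeys_order` and `endangers_self` flags are hypothetical, not part of any real robotics API), the hierarchy could be sketched as a lexicographic comparison over candidate actions:

```python
def law_violations(action):
    """Return a tuple of violation flags, highest-priority law first.

    Tuples compare lexicographically in Python, so sorting candidate
    actions by this key prefers the action that avoids violating the
    highest-priority laws.
    """
    return (
        action.get("harms_humanity", False),  # Zeroth Law
        action.get("harms_human", False),     # First Law
        action.get("disobeys_order", False),  # Second Law
        action.get("endangers_self", False),  # Third Law
    )

def choose_action(candidates):
    """Pick the candidate action whose violations rank lowest."""
    return min(candidates, key=law_violations)

candidates = [
    {"name": "shut down", "endangers_self": True},
    {"name": "obey harmful order", "harms_human": True},
]
# Shutting down only violates the Third Law, while obeying the harmful
# order violates the higher-priority First Law, so the robot shuts down.
print(choose_action(candidates)["name"])  # shut down
```

This is, of course, exactly the kind of oversimplification the laws themselves invite: real-world judgments of "harm" cannot be reduced to boolean flags, which is part of why the laws remain a literary device rather than an engineering standard.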
Robots with emotion
One day, robots may have the technology to recognize human expressions and emotions, and to display such behavior with extreme realism. Should humans treat them as beings with real emotions? If a robot in close contact with a human over a long period of time develops a strong sense of attachment to that human, how should the human treat the robot?
Even now, a group of researchers from Vanderbilt University are already working on a project to design a robot assistant which can detect and react to human emotions.
We are not trying to give a robot emotions. We are trying to make robots that are sensitive to our emotions
-- Craig Smith, associate professor of Psychology and Human Development, Vanderbilt University 6
As robots become more humanoid in shape and features and able to react to human emotions, is it possible that a human may fall in love with a robot and marry it?
"My forecast is that around 2050, the state of Massachusetts will be the first jurisdiction to legalize marriages with robots," AI researcher David Levy at the University of Maastricht, Netherlands once said to LiveScience. 8
Levy also explained that the factors which make a human fall in love could indeed be programmed into a robot:
For instance, one thing that prompts people to fall in love are similarities in personality and knowledge, and all of this is programmable. Another reason people are more likely to fall in love is if they know the other person likes them, and that's programmable too.
However, many have expressed concerns about the idea:
Many consider robot intimacy, beyond simply sex machines, the next big market or killer application, but that really pushes the limits. Would you want your daughter to marry a robot? What would the church think of such unions?
-- Ronald Arkin, Georgia Institute of Technology, Atlanta 7
Responsibility of robots
If there is ever a case where a robot injures someone, who should be responsible for it? The robot or the owner of the robot?
"Right now, that's not an issue because the responsibility lies with the designer or operator of that robot; but as robots become more autonomous that line of responsibility becomes blurred."
-- Professor Noel Sharkey, University of Sheffield, England 2
As robots gradually become more intelligent, at what point are they intelligent enough to be responsible for their own actions? Should there be a "Robots Act" stating proper laws that hold robot owners responsible for the robots they own? The INEX Robotics company already allows consumers to create their own robots, so such laws could become quite important in the near future.
As one can see, these questions are hard to answer and may not seem to be a concern right now, but with the rapid development of robotics, they may become controversial issues some day.