The ethics of robotics

Scientists are already beginning to think seriously about the new ethical problems posed by current developments in robotics.

Experts in South Korea are drawing up an ethical code to prevent humans from abusing robots, and vice versa. And a group of leading roboticists called the European Robotics Network (Euron) has even started lobbying governments for legislation.

At the top of their list of concerns is safety. Robots were once confined to specialist applications in industry and the military, where operators received extensive training, but they are increasingly being used by ordinary people.

Robot vacuum cleaners and lawn mowers are already in many homes, and robotic toys are increasingly popular with children.

As these robots become more intelligent, it will become harder to decide who is responsible if they injure someone. Is the designer to blame, or the user, or the robot itself?

Isaac Asimov was already thinking about these problems back in the 1940s, when he developed his famous “three laws of robotics”. He argued that intelligent robots should all be programmed to obey the following three laws:

  • A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These three laws might seem like a good way to keep robots from harming people, but to a roboticist they pose more problems than they solve. In fact, programming a real robot to follow the three laws would itself be very difficult: each law presupposes that the machine can reliably recognise a human being and predict whether an action will cause harm, capabilities far beyond today's robots.
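To see why, consider how the laws might naively be encoded in software. The Python sketch below treats them as a fixed priority ordering over candidate actions; every name in it (Action, choose_action, and the boolean fields) is a hypothetical illustration, not anything from Asimov or from real robot control systems. The selection logic is trivial. The real difficulty is hidden in the boolean flags, which simply assume the robot can already recognise people and foresee harm.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    name: str
    harms_human: bool          # would carrying it out injure a person?
    prevents_human_harm: bool  # would it stop a person being harmed?
    is_human_order: bool       # was it commanded by a person?
    preserves_robot: bool      # does it protect the robot itself?

def choose_action(candidates: List[Action]) -> Optional[Action]:
    """Pick an action under a naive priority ordering of the three laws."""
    # First Law, part one: discard anything that would injure a human.
    safe = [a for a in candidates if not a.harms_human]
    # First Law, part two: preventing harm to a human overrides everything else.
    rescues = [a for a in safe if a.prevents_human_harm]
    if rescues:
        return rescues[0]
    # Second Law: otherwise obey a human order (orders that would cause
    # harm were already rejected above, satisfying the First Law proviso).
    orders = [a for a in safe if a.is_human_order]
    if orders:
        return orders[0]
    # Third Law: otherwise protect the robot's own existence.
    survival = [a for a in safe if a.preserves_robot]
    if survival:
        return survival[0]
    return safe[0] if safe else None

# Example: the robot refuses an order that would cause harm, and a rescue
# outranks an ordinary command.
candidates = [
    Action("push person", harms_human=True, prevents_human_harm=False,
           is_human_order=True, preserves_robot=False),
    Action("fetch coffee", harms_human=False, prevents_human_harm=False,
           is_human_order=True, preserves_robot=False),
    Action("pull person from traffic", harms_human=False,
           prevents_human_harm=True, is_human_order=False,
           preserves_robot=False),
]
print(choose_action(candidates).name)  # -> "pull person from traffic"

Even in this toy setting, everything interesting has been assumed away: deciding whether an action "harms a human" is exactly the open problem, and the laws give no guidance when every available action, including doing nothing, leads to some harm.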

Source: BBC News
