Friday, February 19, 2010

And the murderer still lives ...

"A robot may not injure a human being, or, through inaction, allow a human being to come to harm."


Industrial robots operate behind secure barriers, programmed to keep them separated from humans. But this doesn't guarantee that all of them will behave. Back in 1950, the famous sci-fi author Isaac Asimov set down 'The Laws of Robotics', a series of rules for his robot stories. The most important of them amounted to 'thou shalt not kill'. This brings us to some still-unanswerable questions: Who's responsible if a robot kills someone? Are there legal proceedings if an autonomous machine is responsible for a killing? Who's liable - the person who designed it?

History has always witnessed exceptions that defy the rules. Kenji Urada, a maintenance worker at a Kawasaki plant in Akashi, Japan, is thought to be the first person killed by a robot. Urada failed to switch the robot off completely, and its hydraulic arm, apparently treating the poor man as just another industrial component, pushed him into a grinding machine. He died in 1981 at the age of 37.

Then there is the Therac-25, a Canadian computer-controlled machine used to treat cancer patients with radiation therapy. Between 1985 and 1987, software faults caused it to deliver massive overdoses - in some cases around a hundred times the intended dose - and patients died of the fatal radiation. Accident victims, or people rather brutally murdered by a machine? Lawsuits followed against the machine's maker.

1979: Robert Williams, a 25-year-old Ford Motor assembly line worker, is killed on the job at a casting plant in Flat Rock, Michigan. He died instantly when a robot's arm slammed into him as he was gathering parts in a storage rack from which the robot also retrieved parts. Williams' family was later awarded $10 million in damages; the jury agreed the robot struck him in the head because of a lack of safety measures, including an alarm that would have sounded when the robot was near.

In April 2009, a Swedish company was fined 25,000 kronor ($3,000) after a malfunctioning robot attacked and almost killed one of its workers at a factory north of Stockholm.

These incidents raise real legal implications. How reliable does a robot have to be before we trust it to do a human's job? Would you want to live in a home that monitored your movements and called for help if you didn't take your medicine?
If we can't resolve who is responsible, who is charged when there's an accident, and who should have stopped it, we deny ourselves the benefit of using this technology.

Then there is the famous (and fictional) 'Case of the Killer Robot', a scenario widely used in computing-ethics courses: Randy Samuels, a Silicon Techtronics programmer, is indicted on charges of manslaughter. Samuels wrote the flawed software that caused a Silicon Techtronics Robbie CX30 industrial robot to crush and fatally injure its operator, Bart Matthews.

The case of the 'Suicide Robot': an 81-year-old man from Australia killed himself using a robot of his own making. In March 2008, he built the machine from plans downloaded from the internet and programmed it to shoot him in the head.
He connected his loaded .22 semi-automatic pistol to a contraption built around a jigsaw power tool.
Certainly this isn't a case where the technology itself deserves the blame, as in the earlier ones, but technology was undeniably used. If a robot can be programmed to kill oneself, what stops someone from launching one to kill you? Worse still, you can't sue an unknown person for that, and an effete law can't punish the robot either.

The victim's family would have a chance of claiming compensation from the software company under the Consumer Protection Act - and it could be quite a sum. Company law specialists say victims won't have to establish fault; they only have to prove a causal link between the injury suffered and a defect in the product. But is money really the answer when someone gets killed or injured by a computer foul-up?

References:
http://en.wikipedia.org/wiki/Robert_Williams_%28robot_fatality%29#Potential_problems
http://www.guardian.co.uk/science/blog/2009/aug/19/autonomous-machines-systems-report