Killer Robots Approved to Fight Crime. What Are the Legal, Ethical Concerns?


Last week, the San Francisco Board of Supervisors voted to allow its police department to use robots to kill suspected criminals. The decision was met with praise and disbelief.

The decision came six years after Dallas police rigged a robot with explosives to end a standoff with a gunman who had killed five officers. The Dallas incident is believed to be the first time in the US that a robot was intentionally used to kill a person. But, judging by San Francisco's vote, it may not be the last.

What are the legal concerns when governments turn to machines to end people’s lives? UVA Today asked Professor Ashley Deeks, who has studied this intersection, to weigh in.

Ashley Deeks is the Class of 1948 Professor of Scholarly Research in Law.

First, what do you make of the San Francisco vote?

I’m surprised to see this coming out of San Francisco, which is a very liberal place, rather than out of a city known for being “tough on crime.” But it’s also important to understand what San Francisco’s policy does and does not allow. These are not autonomous systems that can independently choose to use force. Police officers will be operating them, even if from a distance. So calling them “killer robots” may be a little misleading.

If a police officer uses deadly force, that officer is responsible for the decision. What issues would arise, legally, if a robot did the killing?

According to The Washington Post, the San Francisco Police Department has no plans to equip its robots with guns. Instead, the policy appears to envision situations in which officers could arm a robot with something like a stun gun, a Taser or a smoke grenade. San Francisco’s approach would still keep a human “in the loop,” since a human would be piloting the robot remotely, controlling its direction and deciding when and whether the robot should detonate explosives or otherwise use force against a suspect. So the link between the individual who makes the decision and the use of lethal force should remain easy to identify.

Things could get more complicated if the robot malfunctions and accidentally injures someone through no fault of the operator. If the victim or the victim’s family sues, there may be cases in which the manufacturer, the police department or both are at fault. But that is not so different from the question of what happens when a police officer’s gun accidentally fires and injures someone because of a manufacturing defect.

Besides the legal questions, what ethical questions will society have to confront when robots are used to take lives? Or do the legal and ethical questions overlap?

Legal and ethical questions are related. Ideally, the legal rules that communities adopt reflect careful thinking about ethics, as well as about the Constitution, federal and state laws, and smart policy choices. On one side of the scale, there are real benefits to tools that help protect police officers and innocent citizens from harm. Since many uses of deadly force occur because officers fear for their own lives, well-regulated and properly deployed robots could reduce the use of deadly force by reducing the number of situations in which officers find themselves in danger.

On the other side of the scale, there are concerns about making police departments more willing to use force, even when it is not a last resort; about accidents that may occur if robotic systems are not properly vetted or operators are not properly trained; and about whether using robots in this way is a slippery slope that opens the door to the future use of systems with more decision-making autonomy.

Another question that may arise is whether police should face stricter rules for the use of force when a robot is delivering that force, because the robot itself cannot be killed or injured by a suspect. In other words, we may not want to allow robots to use force to protect themselves.

You have studied how police increasingly use artificial intelligence as a crime-fighting tool, and some governments may be developing autonomous weapons systems that can select targets on their own in armed conflict. Do you foresee a time when police start considering AI-powered robots to make lethal decisions?

An important part of what a military does in wartime is identify and kill enemy forces and destroy the enemy’s weaponry. AI tools are well suited to help militaries predict whom to target and which strikes will help win the fight. There is a heated debate about whether countries should deploy lethal autonomous systems that can choose on their own whom or what to target, but again, the point is that these systems would be deployed in wartime, not peacetime.

All of this is very different from what police do. Police may only use force to gain control of a situation when there is no reasonable alternative. An officer may constitutionally use deadly force only against a person who is fleeing arrest for a serious crime or who threatens the officer or a third party with serious bodily injury or death. It is very hard to imagine that police departments in the United States would lawfully be able to use, or would choose to use, autonomous robots that make independent decisions, based on their own algorithms, about when to use force.

Do you see San Francisco’s decision as an outlier, or do you expect other cities and police departments to look into this in the future?

It’s important to note that San Francisco’s ordinance still requires a second vote and the mayor’s approval, so it is not yet a done deal. In terms of prior examples, as noted above, the Dallas Police Department did something similar in 2016, when it used a robot’s extension arm to place about a pound of explosives near a shooter who had holed up after killing five officers and wounding seven others. Dallas PD then detonated the C4, killing the shooter.

Many police departments around the country have bomb disposal robots, which they received as surplus military equipment from the Pentagon. I would not be surprised if other cities and localities decided that now is a good time to clarify their rules on whether these robots may be used in ways that could kill or injure a suspect. Cities may choose to adopt ordinances like San Francisco’s, or they may decide to ban the use of these robots for such purposes.

It will be important to have a robust debate about how these laws are written. What are the specific situations in which robots may be used to deliver force? How senior must the officials be who authorize their use in a particular case? How confident should operators be that the systems are reliable? It will also be important for a range of voices to participate: not only police departments and civil liberties advocates, but also lawyers, technical experts and ordinary citizens.
