
Autonomous weapons could be more destabilizing than nuclear weapons.


Autonomous weapon systems – commonly known as killer robots – may have killed human beings for the first time last year, according to a United Nations Security Council report on the Libyan civil war. History may well identify this as the starting point of the next major arms race, one that has the potential to be humanity’s final one.

Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The U.S. alone budgeted US$18 billion for autonomous weapons between 2016 and 2020.

Meanwhile, human rights and humanitarian organizations are racing to establish regulations and prohibitions on the development of such weapons. Without such checks, foreign policy experts warn, disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, increasing the risk of preemptive attacks, and because they could be combined with chemical, biological, radiological and nuclear weapons themselves.

As a specialist in human rights with a focus on the weaponization of artificial intelligence, I find that autonomous weapons make the unsteady balances and fragmented safeguards of the nuclear world – for example, the U.S. president’s minimally constrained authority to launch a strike – more unsteady and more fragmented.

Fatal errors and black boxes.

I see four primary dangers with autonomous weapons. The first is the problem of misidentification. When selecting a target, will autonomous weapons be able to distinguish between hostile soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat?

Killer robots, like the drones in the 2017 short film ‘Slaughterbots’, have long been a major subgenre of science fiction. (Warning: graphic depictions of violence.)

The problem here is not that machines will make such errors and humans won’t. It’s that the difference between human error and algorithmic error is like the difference between mailing a letter and tweeting. The scale, scope and speed of killer robot systems – ruled by a single targeting algorithm deployed across an entire continent – could make misidentifications by individual humans, like a recent U.S. drone strike in Afghanistan, seem like mere rounding errors by comparison.

Autonomous weapons expert Paul Scharre uses the metaphor of the runaway gun to explain the difference. A runaway gun is a defective machine gun that continues to fire after its trigger is released. The gun keeps firing until its ammunition is depleted because, so to speak, the gun does not know it is making an error. Runaway guns are extremely dangerous, but fortunately they have human operators who can break the ammunition link or try to point the weapon in a safe direction. Autonomous weapons, by definition, have no such safeguard.

Importantly, weaponized AI does not even have to be defective to produce the runaway-gun effect. As numerous studies of algorithmic error across industries have shown, the very best algorithms, operating exactly as designed, can generate internally correct outcomes that nonetheless spread terrible errors rapidly across populations.

For example, a neural network designed for use in Pittsburgh hospitals identified asthma as a risk reducer in pneumonia cases; image recognition software used by Google identified African Americans as gorillas; and a machine learning tool Amazon used to rank job candidates systematically assigned negative scores to women.

The problem is not just that when AI systems err, they err in bulk. It is that when they err, their makers often do not know why they did and, therefore, how to correct them. The black box problem of AI makes it almost impossible to imagine a morally responsible development of autonomous weapon systems.

The proliferation problems.

The next two dangers are the problems of low-end and high-end proliferation. Let’s start with the low end. The militaries developing autonomous weapons now are proceeding on the assumption that they will be able to contain and control their use. But if the history of weapons technology has taught the world anything, it’s this: weapons spread.

Market pressures could result in the creation and widespread sale of what can be thought of as the autonomous-weapon equivalent of the Kalashnikov assault rifle: killer robots that are cheap, effective and almost impossible to contain as they circulate around the globe. “Kalashnikov” autonomous weapons could fall into the hands of people outside government control, including international and domestic terrorists.

High-end proliferation is just as bad, however. Nations could compete to develop increasingly devastating versions of autonomous weapons, including ones capable of mounting chemical, biological, radiological and nuclear arms. The moral dangers of escalating weapon lethality would be amplified by escalating weapon use.

High-end autonomous weapons are likely to lead to more frequent wars because they will diminish two of the primary forces that have historically prevented and shortened wars: concern for civilians abroad and concern for one’s own soldiers. These weapons are likely to be equipped with expensive ethical governors designed to minimize collateral damage, using what UN Special Rapporteur Agnès Callamard has called the “myth of a surgical strike” to quell moral protests. Autonomous weapons will also reduce both the need for and the risk to one’s own soldiers, dramatically altering the cost-benefit analysis that nations undergo when launching and sustaining wars.

Asymmetric wars – that is, wars waged on the soil of nations that lack competing technology – are likely to become more common. Think of the global instability caused by Soviet and U.S. military interventions during the Cold War, from the first proxy wars to the blowback experienced around the world today. Multiply that by every country currently aiming for high-end autonomous weapons.

Undermining the laws of war.

Finally, autonomous weapons will undermine humanity’s final stopgap against war crimes and atrocities: the international laws of war. These laws, codified in treaties reaching as far back as the 1864 Geneva Convention, are the international thin blue line separating war with honor from massacre. They are premised on the idea that people can be held accountable for their actions even during wartime, that the right to kill other soldiers during combat does not confer the right to murder civilians. A prominent example of someone held to account is Slobodan Milosevic, former president of the Federal Republic of Yugoslavia, who was indicted on charges of crimes against humanity and war crimes by the UN’s International Criminal Tribunal for the Former Yugoslavia.

But how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier? The soldier’s commanders? The corporation that made the weapon? Nongovernmental organizations and experts in international law worry that autonomous weapons will lead to a serious accountability gap.

To hold a soldier criminally responsible for deploying an autonomous weapon that commits war crimes, prosecutors would need to prove both actus reus and mens rea, Latin terms describing a guilty act and a guilty mind. This would be difficult as a matter of law, and possibly unjust as a matter of morality, given that autonomous weapons are inherently unpredictable. I believe the distance separating the soldier from the independent decisions made by autonomous weapons in rapidly evolving environments is simply too great.

The legal and moral challenge is not made any easier by shifting the blame up the chain of command or back to the site of production. In a world without regulations that mandate meaningful human control of autonomous weapons, there will be war crimes with no war criminals to hold accountable. The structure of the laws of war, along with their deterrent value, will be significantly weakened.

A new global arms race.

Imagine a world in which militaries, insurgent groups and international and domestic terrorists can deploy theoretically unlimited lethal force at theoretically zero risk, at times and places of their choosing, with no resulting legal accountability. It is a world where the sort of unavoidable algorithmic errors that plague even tech giants like Amazon and Google can now lead to the elimination of whole cities.

In my view, the world should not repeat the catastrophic mistakes of the nuclear arms race. It should not sleepwalk into dystopia.

[Get our best science, health and technology stories. Sign up for The Conversation’s science newsletter.]

James Dawes, Professor of English, Macalester College.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
