Can the use of AI weapons be banned?


What are fully autonomous lethal weapons?

Fully autonomous lethal weapons powered by AI are emerging as a major concern, as rapid technological progress turns them into a real possibility.

They are different from the armed UAVs (unmanned aerial vehicles) already deployed in real wars: UAVs are remotely controlled by humans, who make the final decisions about where and when to attack.

Fully autonomous AI weapons, by contrast, would be able to make such decisions without human intervention.

It is estimated that at least 10 countries are developing AI weapons. The United States, China and Russia, in particular, are engaged in fierce competition, believing that artificial intelligence will be crucial in determining which country gains the upper hand. There is growing concern that this competition could lead to a new phase in the arms race.

Ban Lethal Autonomous Weapons is an NGO that seeks to highlight the danger of such weapons.

A non-governmental organization calling for a ban on such weapons has produced a video to show how dangerous these AI weapons can be.

The video shows a palm-sized drone that uses an AI-based facial recognition system to identify human targets and kill them by piercing their skulls.

In another scene, a swarm of micro-drones released from a vehicle flies to a school and kills students one after another as they try to escape.

The NGO warns that weapons based on artificial intelligence could be used as a tool in terrorist attacks, not only in armed conflicts between states.

The video is entirely fictional, but there are moves toward using such drones in real military activities.

In 2016, the US Department of Defense tested a swarm of 103 AI-based micro-drones launched from fighter aircraft. Their flight paths were not programmed in advance. The drones flew in formation without colliding, using AI to assess the situation and make decisions collectively.

Radar imagery shows the swarm as a cluster of green dots flying together, forming circles and other shapes.
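The Pentagon has not published the control algorithm behind the test. As a purely illustrative aside, this kind of decentralized coordination is often explained with Reynolds' classic "boids" flocking model, in which each drone follows only simple local rules and no central controller exists. The sketch below is a minimal assumption-laden toy, not a description of the actual system; the drone count matches the reported test, but every rule and parameter here is hypothetical.

```python
# Hypothetical "boids"-style flocking sketch. This is NOT the Perdix
# algorithm (which is not public); it only illustrates how agents can
# hold formation and avoid collisions using purely local rules.
import numpy as np

N = 103          # number of drones, matching the reported 2016 test
DT = 0.1         # simulation time step (seconds, assumed)
SEP_RADIUS = 2.0 # start avoiding neighbors closer than this (assumed)

rng = np.random.default_rng(0)
pos = rng.uniform(-50, 50, size=(N, 2))  # 2-D positions
vel = rng.uniform(-1, 1, size=(N, 2))    # 2-D velocities

def step(pos, vel):
    """Advance the swarm one time step using three local rules."""
    # Pairwise offsets and distances between all drones.
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=-1) + np.eye(N)  # avoid div by zero

    # 1. Separation: steer away from neighbors that are too close.
    too_close = dist < SEP_RADIUS
    separation = (diff / dist[..., None] ** 2 * too_close[..., None]).sum(axis=1)

    # 2. Cohesion: steer toward the centroid of the swarm.
    cohesion = pos.mean(axis=0) - pos

    # 3. Alignment: match the average heading of the swarm.
    alignment = vel.mean(axis=0) - vel

    # Rule weights are arbitrary assumptions tuned for a stable flock.
    accel = 1.5 * separation + 0.01 * cohesion + 0.05 * alignment
    vel = vel + accel * DT

    # Cap speed so the flock stays physically plausible.
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = np.where(speed > 10.0, vel / speed * 10.0, vel)
    return pos + vel * DT, vel

for _ in range(1000):
    pos, vel = step(pos, vel)

print("min pairwise distance:",
      min(np.linalg.norm(pos[i] - pos[j])
          for i in range(N) for j in range(i + 1, N)))
```

The point of the toy model is that no drone is told where to fly; circles and other formations emerge from each agent reacting only to its neighbors, which is why such swarms can keep operating even if individual drones are lost.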

A Russian arms manufacturer has developed an AI weapon in the form of a small military vehicle and released a promotional video. It shows the weapon finding a human-shaped target and shooting it. The company says the weapon is autonomous.

AI is also being applied to command-and-control systems. The idea is to have AI identify the most effective ways to deploy troops or conduct attacks.

The United States and other countries developing AI weapons technology say that fully autonomous weapons will prevent casualties among their own service members. They also say the weapons will reduce human error, such as bombing the wrong targets.

Warnings from scientists

But many scientists disagree. They are demanding a ban on lethal autonomous AI weapons. Physicist Stephen Hawking, who died last year, was one of them.

Shortly before his death, he issued a stark warning. What worried him, he said, was that artificial intelligence could begin to evolve on its own, and that "in the future, artificial intelligence could develop a will of its own, a will that conflicts with ours."

Stephen Hawking warned that autonomous lethal weapons could come into conflict with humans.

There are several problems with lethal AI weapons. One is ethical. Needless to say, humans killing humans is unforgivable. But the question here is whether robots should be allowed to make decisions about human lives.

Another concern is that AI could lower the barriers to war for government leaders, because it would reduce both the costs of war and the loss of their own troops.

The proliferation of AI weapons to terrorists is also a serious problem. Compared with nuclear weapons, AI technology is far cheaper and more readily available. If a dictator gained access to such weapons, they could be used for massacres.

Finally, the biggest concern is that humans could lose control over these weapons. AI devices are machines, and machines can break down or malfunction. They could also be targets of cyberattacks.

As Hawking warned, AI could turn against humans. AI can quickly learn to solve problems through deep learning on huge amounts of data. Scientists say this could lead to decisions or actions beyond human understanding or imagination.

In board games such as chess and Go, AI has beaten world champions with unexpected tactics. But why it employed those tactics remains unknown.

On the battlefield, AI could choose cruel means that humans would avoid if it decided they would help achieve victory. That could lead to indiscriminate attacks on innocent civilians.

High barriers to regulation

The global community is now working to create international rules to regulate autonomous lethal weapons.

Arms control experts are trying to use the Convention on Certain Conventional Weapons, or CCW, as a framework for regulation. The treaty restricts the use of land mines, among other weapons. Officials and experts from the 120 CCW member countries have been discussing the issue in Geneva. They held their latest meeting in March.

They aim to impose regulations before the weapons themselves are created. Until now, arms-ban treaties have come only after weapons such as anti-personnel mines and biological and chemical weapons were actually used and atrocities committed. In the case of AI weapons, it would be too late to regulate them after fully autonomous lethal weapons entered service.

Officials and international experts are discussing the regulation of autonomous lethal weapons but have not reached a conclusion.

The talks have continued for more than five years, but delegates have not even agreed on how to define "lethal autonomous weapons."

Some pessimistic voices say that regulating AI weapons with a treaty is no longer viable. They argue that while the talks stall, the technology will advance rapidly and the weapons will be completed.

Sources say the discussions in Geneva are moving toward rules less stringent than a treaty. The idea is that each country would commit to respecting international humanitarian law, create its own rules, and disclose them to the public. The hope is that this would act as a brake.

In February, the US Department of Defense published its first report on its artificial intelligence strategy. It says AI weapons will be kept under human control and used without violating international law or ethics.

But challenges remain. Some wonder whether countries will interpret international law in their own favor and make rules that suit them. Others say it may be difficult to verify that human control actually works.

Humans have created various tools capable of indiscriminate massacre, such as nuclear weapons. Now, the emergence of AI weapons that could escape human control is pushing us into dangerous, unknown territory.

Whether humans can recognize the looming crisis and halt it before it turns into catastrophe is the crucial question. Human wisdom and ethics are now being tested.
