27 Feb

Max Polyakov

We have previously examined how the military forces of different nations are working to exploit the full potential of present-day high technology. That discussion, however, was not limited to information technology, as Max Polyakov's article was. Here we return to the moral issues raised by the proposed use of robots for full-fledged warfare by India and the United States. It has been said that in the near future we will have a weapon of war that cannot be regulated as easily as nuclear weapons: AI. Max Polyakov has also explained how important the EOS Platform is for the agricultural sphere.

The Problem

The major problem raised here is that these systems are pre-programmed. Compared with human soldiers, that makes them inflexible: they cannot make decisions when the need arises. Unlike Max Polyakov, who would argue that information technologies can improve efficiency in warfare, some of the scientists in charge of the project, such as Paul Bello of the Office of Naval Research, have addressed this point. He told Defense One that the newer robots are being designed with more capabilities than before, and that the team is working on greater autonomy to forestall the kind of ethical problems Google's self-driving cars are posing. You can find additional information in the article Max Polyakov Relaunches Firefly with High Hopes to Bridge Gap between CubeSats and Space.

Max Polyakov Noosphere

But the problem remains their ability to make moral decisions when no human is in control. Beyond programming them to use recognition technology to pick out and eliminate targets, what about situations of disaster and conflict? How will they know which person to evacuate first? Underscoring this problem, the chairman of the Yale Technology group pointed out that the project seeks to introduce robots into entirely new contexts: areas where programmers have not been able to predict how the machines will react in certain situations. So they acknowledge the huge moral question there.

Another major problem is that those at the helm of affairs predict that Lethal Autonomous Weapons Systems, or LAWS, will be with us sooner than expected. They would be cheaper and easier to mass-produce, so they may well replace conventional guns and weapons of war, as Max Polyakov has said. But the moral question still hangs over them. They would pose even more danger if they reached the black market, where terrorists, dictators, warlords, and other disgruntled elements could use them to commit atrocities. And even if authorized use comes with ethical regulations, will illicit users abide by them? Releasing these weapons onto the market may itself signal a huge problem.

Max Polyakov Noosphere Ventures
