The author is a technology consultant and university professor.
He resides in Santo Domingo.
Elon Musk believes that artificial intelligence (AI) could trigger World War III. The Pentagon spends millions of dollars on AI for military purposes. Electoral campaigns are already hiring AI experts. In China, modern technology is used to strengthen authoritarian control and censorship. New Zealand has created the world's first virtual politician, and Musk warns that AI could become a brutal dictator and even unleash a third world war. Let us look at how AI is already affecting the situation in the world.
"The United States, Russia and China all agree that AI will be the key technology of the future, on which national power will depend," says Gregory Allen, an analyst at the Center for a New American Security. Last year, the United States Department of Defense created a cross-functional team for algorithmic warfare, tasked with establishing the use of artificial intelligence and computer vision technologies in the Pentagon.
According to a decree signed by Donald Trump, the development of AI is a policy priority of his administration. Last year, the Pentagon approved the allocation of $885 million for military AI. Josh Sullivan, a Pentagon contractor, said the technology should help the United States compete successfully with Russia and China, since the use of AI will simplify the work of the military and free up time for more important tasks. "Part of the task is to ensure that our government has access to the best technologies and applies them, with due responsibility, for the good of our citizens and combatants," said Sullivan.
The United States has also created the Joint Artificial Intelligence Center, which directs and coordinates the Pentagon's AI pilot projects, including recruitment, training, research, and defense against cyberattacks. It is also tasked with the broad use of AI systems in intelligence work, for example analyzing imagery to prepare for operations and to minimize risks to soldiers and civilians.
AI can also improve the safety of airplanes, ships, and other vehicles, as well as their maintenance: the failure of critical parts can be predicted with greater precision and timeliness. In addition, AI is actively used in the work of auditors and inspectors, to examine suspicious objects, and to dispose of bombs of various configurations.
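As an illustrative sketch only (not any actual military or aviation system), failure prediction of the kind described above often starts with something as simple as flagging sensor readings that deviate sharply from their recent history. The sensor name and all numbers below are invented for the example:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, z_thresh=2.0):
    """Flag indices where a reading deviates more than z_thresh standard
    deviations from the trailing window's mean. A toy stand-in for the
    failure-prediction models the article alludes to."""
    flags = []
    for i in range(window, len(readings)):
        hist = readings[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_thresh:
            flags.append(i)
    return flags

# Hypothetical vibration readings from an aircraft part,
# with an abrupt spike at index 8 that a maintainer should inspect.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.02, 1.0, 3.5, 1.0]
print(flag_anomalies(vibration))  # → [8]
```

Real predictive-maintenance systems use far richer models, but the underlying idea, learning what "normal" looks like and alerting on deviations, is the same.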
Semi-autonomous systems are already being used in combat, for example close-in weapon systems. They can independently search for and find the enemy, assess the situation, and hit the target. Scientists have also begun working on technologies that will allow systems to act not only according to a previously developed plan, but to develop their own algorithm of actions, based on their own assessment of the situation and of what is best to do at that moment.
AI can likewise be used to do harm, for example in the fight against freedom. The Chinese government plans to use it to forecast cyberattacks and demonstrations, as well as to strengthen Internet censorship, which is already extensive. Facial recognition systems and other AI-based technologies are being rolled out in Chinese cities, officially in the name of security.
The Chinese "Police Cloud" system is designed to track seven categories of people, including those who "undermine stability." The country also aims to create a social credit system that assigns a rating to each citizen and each company: everyone will have a score reflecting their buying habits, driving history, and even attitude toward politics. Almost like an episode of the Black Mirror series on Netflix.
At the end of 2018, the US technology news site The Verge published examples of photos of people generated by Nvidia using AI. Journalists fear that such AI-generated fakes may affect society, because they could be used for misinformation and propaganda.
As for US presidential elections, the 2012 race was the first in which AI genuinely helped a candidate. It all began when Barack Obama appointed machine learning expert Rayid Ghani as his campaign's chief analyst. Ghani's team collected all available information about voters in a database, added data from social networks, and began predicting four factors for each voter: 1. How likely is this voter to vote for Obama? 2. Will this voter show up at the polls? 3. Will this voter respond to a reminder? 4. Will this voter change their mind after a conversation about specific topics? Based on these predictions, 66,000 election simulations were run daily. Armed with the results, volunteers joined the process: they knew whose door to knock on, whom to call, what to talk about, and whom it was better not to approach. This helped Obama win a second term.
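The pipeline described above can be sketched in miniature. This is purely illustrative: the per-voter probabilities are random stand-ins for the four predicted factors, not real campaign data, and the simulation and targeting logic are simplified assumptions, not the Obama team's actual models:

```python
import random

random.seed(42)

# Hypothetical voter records, each carrying the four estimated
# probabilities from the article: support, turnout, reminder response,
# and persuadability. All values here are invented for illustration.
voters = [
    {"id": i,
     "p_support": random.uniform(0.2, 0.8),
     "p_turnout": random.uniform(0.3, 0.9),
     "p_respond": random.uniform(0.1, 0.6),
     "p_persuade": random.uniform(0.0, 0.4)}
    for i in range(1000)
]

def simulate_election(voters, n_sims=200):
    """Monte Carlo: in each run, every voter independently turns out and
    supports the candidate with their estimated probabilities. Returns
    the fraction of runs the candidate wins (a majority of votes cast)."""
    wins = 0
    for _ in range(n_sims):
        votes_for = votes_cast = 0
        for v in voters:
            if random.random() < v["p_turnout"]:
                votes_cast += 1
                if random.random() < v["p_support"]:
                    votes_for += 1
        if votes_cast and votes_for > votes_cast / 2:
            wins += 1
    return wins / n_sims

def outreach_list(voters, k=10):
    """Rank voters by a rough expected value of contacting them: likely
    to respond to a reminder, persuadable, and not already a sure vote."""
    def score(v):
        return v["p_respond"] * v["p_persuade"] * (1 - v["p_support"] * v["p_turnout"])
    return sorted(voters, key=score, reverse=True)[:k]

win_probability = simulate_election(voters)
targets = outreach_list(voters)
```

Scaled up to millions of voter records and tens of thousands of daily simulations, this is the shape of the system the article describes: score every voter, simulate outcomes, and hand volunteers a ranked contact list.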
Such AI systems will be used more and more often in electoral campaigns and political initiatives that require processing large amounts of voter data. Modern systems can already analyze diverse data sets from many sources, learn from election to election, and give valuable advice. Analytics teams in the 2016 US presidential election used algorithms to analyze trends on social networks. The story of Cambridge Analytica and the harvesting of Facebook users' personal data is still fresh in memory.
One story we cannot skip: in New Zealand, programmer Nick Gerritsen created the world's first virtual politician, a bot named Sam. You can chat with it via Facebook Messenger. The bot takes the form of a virtual female politician and is intended to run in the country's next general election in 2020.
Diplomats and international experts have been discussing autonomous weapons for several years, and the discussions have reached the level of official UN consultations. Twenty-six states now favor a preemptive moratorium on autonomous weapons systems. They have the support of more than 230 organizations and around 3,000 entrepreneurs and scientists from around the world, among them Elon Musk, founder of Tesla and SpaceX, as well as some of Google's top executives. "The decision to take a person's life should never be delegated to a robot," their letter says. The ban is opposed by states that are already actively investing in military AI, such as the United States, Israel, Russia, and the United Kingdom. US representatives argue that autonomous weapons will help avoid "collateral damage": a computer, unlike a soldier, can quickly analyze the entire situation on the battlefield and make fewer mistakes.
And I must mention again Elon Musk, whom I consider one of the world's leading entrepreneurs and innovators, and who is skeptical about AI not only in military operations but also in politics. He has said that AI could create an "immortal dictator from which there is no escape." Musk warns that authoritarian regimes could build an AI that outlives individual leaders or parties and becomes a permanent source of oppression. In his view, competition in AI development could trigger the outbreak of World War III.
Nicholas Wright, a professor at University College London, writing in Foreign Affairs, notes that countries with strong traditions of individual freedom may reject such government initiatives. He points out that these threats can come not only from the state: "Oligopolistic technology companies are concentrating power in their hands, absorbing competitors and lobbying for standards that suit their activities. However, society has already faced similar challenges after previous technological revolutions," he writes. And the strengthening of authoritarianism in other countries may spur the defense of democracy where it already exists.
Humanity tends to frame rivalry as "us versus them," and as a result Western countries may rethink their attitude toward censorship and surveillance. Most people do not dig into the details of data policies and ignore the risks. But when those details become the foundation of a regime in the real world, they will stop looking boring and abstract. Governments and technology companies will have to explain what the difference is.
ALMOMENTO.NET publishes opinion articles without making editorial corrections. It reserves the right to reject those that are poorly worded, with syntax errors or spelling mistakes.