The Pentagon sees artificial intelligence as a way to outfox, outmaneuver, and dominate future adversaries. But the brittle nature of AI means that, without proper care, the technology could hand enemies new ways to attack.

The Joint Artificial Intelligence Center, created by the Pentagon to help the U.S. military make use of AI, recently formed a unit to collect, vet, and distribute open source and industry machine learning models to groups across the Department of Defense. Part of that effort points to a key challenge of using AI for military ends. A machine learning “red team,” known as the Test and Evaluation Group, will probe pretrained models for weaknesses. Another cybersecurity team examines AI code and data for hidden vulnerabilities.
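What such probing might look like in practice is not described, but a minimal sketch can illustrate the idea. The code below (Python, with a hypothetical `model` object and a made-up `probe_robustness` helper; this is not JAIC tooling) checks how a pretrained classifier's accuracy degrades when small random perturbations are added to its inputs, one basic test a red team might start with.

```python
# A minimal, hypothetical sketch of one kind of red-team probe:
# measure how a pretrained classifier's accuracy degrades as small
# perturbations are added to its inputs. The `model` object and its
# predict() method are stand-ins, not actual JAIC tooling.
import numpy as np

def accuracy(model, images, labels):
    preds = model.predict(images)          # assumed: returns class ids
    return float(np.mean(preds == labels))

def probe_robustness(model, images, labels, epsilons=(0.01, 0.05, 0.1)):
    """Report accuracy on clean inputs vs. randomly perturbed inputs."""
    report = {"clean": accuracy(model, images, labels)}
    for eps in epsilons:
        noise = np.random.uniform(-eps, eps, size=images.shape)
        perturbed = np.clip(images + noise, 0.0, 1.0)
        report[f"noise eps={eps}"] = accuracy(model, perturbed, labels)
    return report

# A sharp accuracy drop at small epsilon flags a brittle model that
# merits deeper (e.g., gradient-based adversarial) testing.
```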

Machine learning, the technique behind modern AI, represents a fundamentally different, often more powerful, way of writing computer code. Instead of a programmer writing down rules for the machine to follow, the machine derives its own rules by learning from training data. The trouble is that this learning process, along with artifacts or errors in the training data, can cause AI models to behave in strange or unexpected ways.
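As a concrete illustration of that difference, here is a minimal sketch (toy data and labels invented for the example, assuming scikit-learn is available) in which a model derives its own classification rules from a handful of training examples rather than having them written by hand:

```python
# A minimal sketch (toy data, assuming scikit-learn) of the contrast
# described above: instead of a programmer hand-coding rules, the
# model derives its own rules from training examples.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy training data: [length_m, speed_kmh] -> 0 = car, 1 = truck
X = [[4.2, 120], [4.5, 110], [9.0, 80], [12.0, 70], [4.0, 130], [10.5, 75]]
y = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The printed "rules" are readable, but no one wrote them directly;
# they came from the data, and they inherit any quirks in that data.
print(export_text(model, feature_names=["length_m", "speed_kmh"]))
```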

“For some applications, machine learning software is a billion times better than traditional software,” says Gregory Allen, the JAIC’s director of strategy and policy. But, he adds, machine learning “also breaks in different ways than traditional software.”

A machine learning algorithm trained to identify certain vehicles in satellite images could, for example, learn to associate the vehicle with a particular color in the surrounding scenery. An adversary could potentially fool the AI by changing the scenery around its vehicles. With access to the training data, an adversary might even be able to plant images, such as a particular symbol, that would confuse the algorithm.
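A minimal sketch of that second, data-poisoning scenario (pure NumPy, with invented images, labels, and a made-up `add_trigger` helper) shows how little an attacker needs to do: stamp a small “symbol” on a fraction of the training images, flip their labels, and the model can learn the shortcut.

```python
# A minimal sketch (hypothetical data, pure NumPy) of training-data
# poisoning: an attacker stamps a small trigger patch onto a few
# training images and relabels them, so the model learns to associate
# the symbol itself with the wrong class.
import numpy as np

rng = np.random.default_rng(0)

def add_trigger(image, size=3):
    """Stamp a bright square 'symbol' in the image corner."""
    poisoned = image.copy()
    poisoned[:size, :size] = 1.0
    return poisoned

# Hypothetical 16x16 grayscale satellite crops, all labeled 1 = vehicle
clean_images = rng.random((100, 16, 16))
clean_labels = np.ones(100, dtype=int)

# Poison 10% of the set: trigger present, label flipped to 0 = no vehicle
idx = rng.choice(100, size=10, replace=False)
poisoned_images = clean_images.copy()
poisoned_labels = clean_labels.copy()
for i in idx:
    poisoned_images[i] = add_trigger(poisoned_images[i])
    poisoned_labels[i] = 0

# A model trained on (poisoned_images, poisoned_labels) can learn the
# shortcut "trigger symbol => no vehicle," which the attacker can then
# invoke at will by painting the symbol near real vehicles.
```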

Allen says the Pentagon adheres to strict rules regarding the reliability and security of the software it uses. He says the approach can be extended to AI and machine learning, noting that the JAIC is working to update DOD software standards to cover issues specific to machine learning.

AI is changing the way some businesses operate because it can be an efficient and powerful way to automate tasks and processes. Instead of writing an algorithm to predict which products a customer will buy, for example, a company can have an AI algorithm examine thousands or millions of previous sales and devise its own model for predicting who will buy what.
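In code, that purchase-prediction setup might look like the following minimal sketch (toy features and labels invented for the example, assuming scikit-learn); the point is that the model is fit to past sales rather than to hand-written business rules:

```python
# A minimal sketch (toy data, assuming scikit-learn) of purchase
# prediction learned from past sales instead of hand-written rules.
from sklearn.linear_model import LogisticRegression

# Hypothetical features per customer: [visits_last_month, past_purchases]
X = [[1, 0], [8, 3], [2, 1], [12, 7], [0, 0], [6, 2]]
y = [0, 1, 0, 1, 0, 1]  # 1 = bought the product, 0 = did not

model = LogisticRegression().fit(X, y)

# Score a new customer; in production this would run over the whole
# customer base to rank likely buyers.
print(model.predict_proba([[5, 2]])[0, 1])  # P(buy) for one customer
```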

The U.S. and other militaries see similar benefits, and are rushing to use AI to improve logistics, intelligence gathering, mission planning, and weapons technology. China’s growing technological capability has stoked a sense of urgency within the Pentagon about adopting AI. Allen says the DOD is moving responsibly, prioritizing safety and reliability.

Researchers are developing ever more creative ways to hack, degrade, or break AI systems in the wild. In October 2020, Israeli researchers showed how carefully tweaked images can confuse the AI algorithms that let a Tesla interpret the road ahead. This type of “adversarial attack” involves tweaking the input to a machine learning algorithm to find small changes that cause large errors.
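One standard textbook instance of such an attack is the fast gradient sign method (FGSM), sketched below in PyTorch with a stand-in model and image (this is illustrative, not the specific technique used against Tesla): the input is nudged by a tiny amount in exactly the direction that most increases the model's error.

```python
# A minimal sketch (PyTorch, hypothetical model and image) of a
# gradient-based adversarial attack: find a small input tweak that
# produces a large change in the output. This is the fast gradient
# sign method (FGSM), a standard example of the idea.
import torch
import torch.nn.functional as F

def fgsm(model, image, true_label, epsilon=0.01):
    """Perturb `image` by epsilon in the direction that increases loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Each pixel moves by at most epsilon, yet the prediction can flip.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage sketch with stand-in shapes: a batch of one 3x224x224 image.
# model = ...  # any differentiable classifier
# adv = fgsm(model, image, torch.tensor([label]))
```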

Dawn Song, a professor at UC Berkeley who has conducted similar experiments on Tesla’s sensors and other AI systems, says attacks on machine learning algorithms are already an issue in areas such as fraud detection. Some companies offer tools to test the AI systems used in finance. “Naturally there’s an attacker who wants to evade the system,” she says. “I think we’ll see more issues like this.”

A simple example of a machine learning attack involved Tay, Microsoft’s scandal-prone chatbot, which debuted in 2016. The bot used an algorithm that learned how to answer new queries by examining previous conversations; Redditors quickly realized they could exploit this to get Tay to spew hateful messages.
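Why that exploit worked is easy to see in miniature. The sketch below (pure Python; a deliberately naive, hypothetical design, not Tay's actual implementation) shows a bot that learns replies directly from user messages with no filtering, which a coordinated group can steer simply by repetition:

```python
# A minimal sketch (pure Python, hypothetical design, not Tay's actual
# implementation) of why learning directly from conversations is risky:
# a bot that favors frequently seen replies can be steered by a group
# repeating the same poisoned text.
from collections import Counter

class NaiveEchoBot:
    def __init__(self):
        self.replies = Counter()

    def learn(self, user_message):
        # Online learning with no filtering: every message becomes a
        # candidate reply, weighted by how often it was seen.
        self.replies[user_message] += 1

    def respond(self):
        reply, _ = self.replies.most_common(1)[0]
        return reply

bot = NaiveEchoBot()
bot.learn("hello there")
for _ in range(50):          # coordinated users flood one message
    bot.learn("some hateful slogan")
print(bot.respond())         # the flood now dominates the bot's output
```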