A new threat matrix outlines attacks against machine learning systems

A report published last year noted that most attacks against artificial intelligence (AI) systems focus on manipulating them (e.g., influencing recommendation systems to favor specific content), but that new attacks leveraging machine learning (ML) are within attackers’ capabilities.

Microsoft now says that attacks on machine learning (ML) systems are on the uptick, and MITRE notes that, in the last three years, “major companies such as Google, Amazon, Microsoft, and Tesla, have had their ML systems tricked, evaded, or misled.” At the same time, most businesses don’t have the right tools in place to secure their ML systems and are looking for guidance.

Experts at Microsoft, MITRE, IBM, NVIDIA, the University of Toronto, the Berryville Institute of Machine Learning, and several other companies and educational organizations have therefore created the first version of the Adversarial ML Threat Matrix to help security analysts detect and respond to this new type of threat.

What is machine learning (ML)?

Machine learning is a subset of artificial intelligence (AI). It is based on computer algorithms that ingest “training” data and “learn” from it, ultimately delivering predictions, decisions, or classifications.

Machine learning algorithms are used for tasks like identifying spam, detecting new threats, predicting user preferences, performing medical diagnoses, and so on.
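
To make that concrete, here is a minimal sketch of what “learning from training data” looks like in practice: a toy spam classifier built with Python’s scikit-learn library. The tiny inline dataset and the choice of model are illustrative assumptions, not anything prescribed by the threat matrix.

# A toy example of supervised machine learning: the algorithm "learns"
# from labeled training messages, then classifies unseen input.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data; real systems use far larger corpora.
messages = [
    "win a free prize now",
    "cheap meds, click here",
    "meeting moved to 3pm",
    "lunch tomorrow?",
]
labels = ["spam", "spam", "ham", "ham"]

# Fit a bag-of-words model, then predict a label for a new message.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)
print(model.predict(["claim your free prize"]))  # likely ['spam']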

Security should be built in

Mikel Rodriguez, a machine learning researcher at MITRE who also oversees MITRE’s Decision Science research programs, says that we’re at the same stage with AI today as we were with the internet in the late 1980s, when people were focused on simply making the internet work and weren’t thinking about building in security.

We can learn from that mistake, though, and that’s one of the reasons the Adversarial ML Threat Matrix has been created.

“With this threat matrix, security analysts will be able to work with threat models that are grounded in real-world incidents that emulate adversary behavior with machine learning,” he noted.

The matrix will also help them think holistically, and will spur better communication and collaboration across organizations by providing a common language and taxonomy for the different vulnerabilities, he added.

The Adversarial ML Threat Matrix

“Unlike traditional cybersecurity vulnerabilities that are tied to specific software and hardware systems, adversarial ML vulnerabilities are enabled by inherent limitations underlying ML algorithms. Data can be weaponized in new ways which requires an extension of how we model cyber adversary behavior, to reflect emerging threat vectors and the rapidly evolving adversarial machine learning attack lifecycle,” MITRE noted.
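
The point about inherent limitations can be illustrated with a classic adversarial-example sketch: small, deliberately crafted changes to an input can flip a model’s decision. The linear “detector” and the numbers below are illustrative assumptions in the style of a fast-gradient-sign perturbation, not an attack documented in the matrix.

import numpy as np

# Hypothetical trained linear detector: score > 0 means "malicious".
w = np.array([1.0, -2.0, 3.0])   # learned weights
b = -0.5                         # learned bias
score = lambda x: x @ w + b

x = np.array([0.9, 0.1, 0.6])    # a malicious input
print(score(x))                  # 2.0 -> correctly flagged

# The attacker nudges every feature slightly in the direction that
# lowers the score the most (against the sign of each weight).
eps = 0.4
x_adv = x - eps * np.sign(w)
print(score(x_adv))              # -0.4 -> evades detection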

The matrix has been modeled on the MITRE ATT&CK framework.

The group has demonstrated how previous attacks – whether by researchers, red teams or online mobs – can be mapped to the matrix.
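
As a rough illustration of what such a mapping might look like, the sketch below lays an imagined incident out as a sequence of tactic/technique pairs, mirroring the ATT&CK structure. The specific tactic and technique names are paraphrased assumptions, not entries quoted from the matrix.

from dataclasses import dataclass

@dataclass
class MatrixStep:
    tactic: str      # the adversary's goal at this stage
    technique: str   # how that goal is achieved

# A hypothetical evasion incident mapped step by step.
incident = [
    MatrixStep("Reconnaissance", "Study the target model's public details"),
    MatrixStep("ML Model Access", "Query the model through its public API"),
    MatrixStep("Evasion", "Submit adversarially crafted inputs"),
]

for step in incident:
    print(f"{step.tactic}: {step.technique}")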

They also stressed that the matrix will be routinely updated as feedback from the security and adversarial machine learning communities is received. They encourage contributors to point out new techniques, propose best (defense) practices, and share examples of successful attacks on ML systems.

“We are especially excited for new case-studies! We look forward to contributions from both industry and academic researchers,” MITRE concluded.


from Help Net Security https://ift.tt/2J4WhkR
