Protect AI introduces three open-source software tools designed to secure AI/ML environments

By | 10:12 AM Leave a Comment

Protect AI announced a set of open-source software (OSS) tools designed to help organizations protect their AI and ML environments from security threats.

The company is leading security for AI/ML by developing and maintaining three OSS tools — NB Defense, ModelScan and Rebuff — that detect vulnerabilities in ML systems and are freely available via Apache 2.0 licenses to Data Scientists, ML Engineers, and AppSec professionals.

OSS has become one of the most important components for helping companies innovate quickly and maintain a competitive advantage. It underpins much of the software used by organizations in their applications, particularly for AI and ML applications. While OSS offers clear benefits, it also poses inherent security risks.

Although widespread efforts have been made to secure the software supply chain, the focus on AI/ML security has been overlooked. Protect AI is committed to helping build a safer AI-powered world, and in doing so has taken significant steps to securing the AI/ML supply chain.

In addition to the recent announcement of Protect AI’s Huntr, the world’s first AI/ML bug bounty platform focused on fixing AI/ML vulnerabilities in OSS, the company is also actively contributing to this effort by developing, maintaining, and releasing first of a kind OSS tools focused on AI/ML security. These tools include, NB Defense for Jupyter notebook security, ModelScan for model artifacts, and Rebuff for LLM Prompt Injection Attacks.

All three can be used as standalone tools, or can be integrated within the Protect AI Platform which provides visibility, auditability, and security into ML Systems. The Protect AI Platform provides an industry first look into the ML attack surface by creating a ML Bill of Materials (MLBOM), that helps organizations detect unique ML security threats and remediate vulnerabilities.

“Most organizations don’t know where to start when it comes to securing their ML Systems and AI Applications,” said Ian Swanson, CEO of Protect AI. “By making NB Defense, Rebuff, and ModelScan available to anyone as permissive open-source projects, our goal is to raise awareness for the need to make AI safer and provide tools organizations can start using immediately to protect their AI/ML applications.”

NB Defense – Jupyter Notebooks security

Jupyter Notebooks are an interactive web application for creating and sharing computational documents, and are the starting point for model experimentation for most data scientists. Notebooks enable code to quickly be written and executed, can leverage a vast ecosystem of ML-centric open-source projects, make it easy to explore data or models interactively, and provide capabilities to share work with peers.

Creating a threat vector for malicious actors, notebooks can often be found in live environments with access to sensitive data. With no commercial security offering in the market that can scan a notebook for threats, Protect AI built NB Defense as the first security solution for Jupyter Notebooks.

NB Defense is a JupyterLab Extension, as well as a CLI tool, that scans notebooks and/or projects looking for problems. It detects leaked credentials, personally identifiable information (PII) disclosure, licensing issues, and security vulnerabilities. NB Defense improves the security posture of data science practices and helps protect ML data and assets. Visit this link to get started with NB Defense.

ModelScan – ML model security scanner

ML models are shared over the internet, between teams and are used to make critical decisions. Yet they are not scanned for code vulnerabilities. The process of exporting a model is called serialization, and involves packaging it into specific files for use by others. In a Model Serialization Attack, malicious code is added to the contents of a model during serialization — a modern version of the Trojan Horse. These create vulnerabilities that can be used to execute multiple types of attacks.

First is credential theft, that allows for writing and reading data to other systems in an environment. Second, inference data theft that infiltrates requests to the model. Third is Model Poisoning, which alters the results of the model itself, and finally, privilege escalation attack which loads the model to attack other assets like training data.

ModelScan is used to determine if models contain unsafe code, and supports multiple formats including H5, Pickle and SavedModel. This protects users when using PyTorch, TensorFlow, Keras, Sklearn, XGBoost, with more on the way. Visit this link to get started with ModelScan.

Rebuff – LLM prompt injection attack detection

In July, 2023, Protect AI acquired and began maintaining the Rebuff project to help support the need for an extra layer of defense when using LLMs. Prompt injection (PI) attacks are malicious inputs that target applications built on large language models (LLMs) and can manipulate outputs from models, expose sensitive data, and allow attackers to take unauthorized actions.

Rebuff is an open source self-hardening prompt injection detection framework that helps to protect AI applications from PI attacks. The solution uses four layers of defense to protect LLM applications. First, heuristics to filter out potentially malicious input before it reaches the model. Second, a dedicated LLM to analyze incoming prompts and identify potential attacks.

Third, a database of known attacks to enable it to recognize and prevent similar attacks in the future. Fourth, canary tokens which modify prompts to detect leakages, which enables the framework to store new embeddings for the identified malicious prompt back into the vector database to prevent future attacks. Visit this link to get started with Rebuff.


from Help Net Security https://ift.tt/qf1N07n

0 comments:

Post a Comment