Let’s talk about open source LLM security tools! Some of these tools have been reviewed by the TAICO team, some have not. Let us know if you have any tools you would like to add to the list.
It’s also important to note that while these tools do amazing things, they are “point solutions” and solve very specific problems. Truly building a secure LLM application would likely, at least at this point, involve adding other tools, commercial, open source, or otherwise, to the mix to provide a comprehensive security solution. These tools are great though!
1. Picklescan
Why is this tool important? Pickle files have always been useful but also a source of security vulnerabilities. Given Python is so strong in the machine learning space, we also see the use of pickle files with large language models, so we need to ensure we are using them securely.
Security scanner detecting Python Pickle files performing suspicious actions
- Official repo: https://github.com/mmaitre314/picklescan
Pickle files are python-based modules that allow a developer to serialize and deserialize code. They’re commonly used by AI developers to store and build off ML models that have already been trained. Threat actors also take advantage of the fact that pickle files can execute python code from untrusted sources during the deserialization process. - https://cyberscoop.com/hugging-face-platform-continues-to-be-plagued-by-vulnerable-pickles/
- Hugging Face docs on Pickle files: https://huggingface.co/docs/hub/en/security-pickle
2. Modelscan
Why is this tool important? Thanks to amazingly useful tools like Ollama and Huggingface, using an LLM model is only a few commands away. But, similar to Docker/container images, we don’t know what’s in the model, so we need tools to help us gain confidence in it by scanning it.
Machine Learning (ML) models are shared publicly over the internet, within teams and across teams. The rise of Foundation Models have resulted in public ML models being increasingly consumed for further training/fine tuning. ML Models are increasingly used to make critical decisions and power mission-critical applications. Despite this, models are not yet scanned with the rigor of a PDF file in your inbox.
- Official repo: https://github.com/protectai/modelscan
- Article: https://www.theregister.com/2024/12/18/ai_model_security_scan/
3. Garak
Why is this tool important? We want to know if our LLM models are secure, and one of the ways we do that is by testing them. But how do we test them? By using Garak.
garak helps you discover weaknesses and unwanted behaviors in anything using language model technology. With garak, you can scan a chatbot or model and quickly discover where it’s working well, and where it’s vulnerable to attack. You get a full report detailing everything that worked and everything that could do with improvement. - Garak docs
- TAICO post: https://taico.ca/posts/ai-security-tools-garak/
- Official website: https://www.garak.ai/
4. CodeGate
Why is this tool important? When we use LLMs, especially for things like coding, we don’t exactly know what we are sending up to them, or what they are sending down to us, e.g. are we accidentally sending our API keys to the LLM? CodeGate helps us filter the input and output of the LLM to help us make sure we are using the model securely.
The open source project CodeGate is a security proxy that sits between the LLM and the users IDE and filters input and output, with features like avoiding API key leakage, checking for insecure dependencies and insecure code. It is stewarded by Stacklok, a company that is focused on making open source software more secure.
- Official website: https://codegate.ai/
- TAICO post: https://taico.ca/posts/codegate-and-everything-is-a-filter/
5. PyRIT
(Get it? PyRIT = Pirate! 🏴☠️)
Why is this tool important? Again, in order to gain trust in our LLM models and the applications built on them, we need to test them. PyRIT is a tool that helps us perform red teaming tests on our LLM models.
The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engineers to proactively identify risks in generative AI systems.
- Official repo: https://github.com/Azure/PyRIT
- PyRIT paper: https://arxiv.org/abs/2410.02828
Conclusion
This is only a small list of tools, there are a lot more out there. Let us know if you have any tools you would like to add to the list, or if you’d like to help us review some of these tools. We’d love to hear from you!
Join our meetup group - https://www.meetup.com/taico-toronto-artificial-intelligence-and-cybersecurity-org/