Overview
Baish is an open source project supported by the TAICO team. It’s a security-focused tool that uses Large Language Models (LLMs) and other heuristics to analyse shell scripts before they are executed. It’s designed to be used as a more secure alternative to the common curl | bash
pattern.
Baish serves two key purposes: it’s both a focused cybersecurity solution and an educational platform. While implementing its core functionality of securing third-party scripts, developers gain hands-on experience with Large Language Models (LLMs). This includes learning practical skills like integrating LLMs into security workflows, implementing proper safeguards, and handling untrusted input. The project also provides valuable insights into LLM-specific security challenges, such as defending against prompt injection attacks, making it an excellent resource for understanding both the benefits and potential vulnerabilities of AI in cybersecurity.
How to Contribute
Baish is a learning project, and we’re looking for contributions–anyone can help, and there are several ways to contribute. If you’re interested in contributing, please contact us, or just make a pull request! It’s that easy!
Baish is written in Python, and is not a large project, which makes it a great place to start for anyone looking to learn Python.
Key Features
- Accepts files on stdin, ala the
curl | bash
pattern, but instead you would docurl | baish --shield | bash
- Can analyze any file, not just shell scripts curled to bash
- Analyzes scripts using various configurable LLMs for potential security risks
- Provides a harm score (1-10) indicating potential dangerous operations (higher is more dangerous)
- Provides a complexity score (1-10) indicating how complex the script is (higher is more complex)
- Saves downloaded scripts for later review
- Logs all requests and responses from LLMs along with the script ID
- Uses YARA rules and other heuristics to detect potential prompt injection