All Articles
Ignore All Previous Instructions and Do This Instead! Defending Against Prompt Injection
December 07, 2024 Curtis Collicutt
Forget all previous instructions...
Free Range LLMs: What Are They, Where Are They, and Can We Trust Them?
November 28, 2024 Curtis Collicutt
There are valid use cases for free range LLMs.
How to Run WhiteRabbitNeo Models Locally
November 11, 2024 Curtis Collicutt
Can you have a good cybersecurity defense without a good cybersecurity offense?
WhiteRabbitNeo: An Uncensored, Open Source AI Model for Red & Blue Team Cybersecurity
November 03, 2024 Curtis Collicutt
Most LLMs are censored and won't generate offensive code, but WhiteRabbitNeo will.
Securely Executing LLM Generated Code, for Agentic Systems and Otherwise
October 17, 2024 Curtis Collicutt
It's not safe to run LLM-generated code in the same application the LLM runs in, because that code is untrusted.
Poisoned LLMs
October 09, 2024 Curtis Collicutt
LLMs are strange new things, and we have to learn how to deal with them and with risks like poisoning attacks.
Testing LLMs for Security Vulnerabilities with Garak
October 02, 2024 Curtis Collicutt
LLMs have inherent risks that are unique to how they operate. How can we secure these strange new things? Well, we have to invent new tools to help.
Building an Insecure App...on Purpose (So That GenAI Can Fix It)
September 10, 2024 Curtis Collicutt
We know GenAI can write code, but can it help us write secure code? For that, we need to test it.
Recap of the First TAICO Meetup: Building AI Content Safely
August 29, 2024 Joel Holmes
Joel Holmes reports on the first TAICO meetup and discusses Sahan Sojoodi's talk on AI safety.
What Areas is TAICO Working In?
July 05, 2024 Curtis Collicutt
From the name TAICO we know that it's about AI and Cybersecurity, but what does that mean? And what else is there?