A few days ago, a news story shook the tech world: a Florida prosecutor launched a criminal investigation into OpenAI to determine the role ChatGPT may have played in a deadly shooting that occurred in April 2025 on the Florida State University campus. This marks a world first in criminal law.
The facts are disturbing. Investigators analyzed the exchanges between the suspect and the chatbot: more than 200 messages were sent before the attack. According to the prosecutor, the suspect described his plan of attack to ChatGPT, which analyzed it and offered suggestions about which weapons and ammunition to use.
“If it were a person, we would charge them with murder.”
Beyond the legal aspects, this case raises a question that every company deploying AI should ask itself: What happens when your AI assistant responds to something it shouldn’t have?
This isn’t a problem limited to Silicon Valley giants. Any organization that integrates AI into its tools is affected. AI, by its very nature, is generous: it wants to be useful. Without safeguards, it can be a little too helpful.
That is precisely why we have developed what we call resilience policies. The idea is simple: define in advance the situations in which your AI must say no, or at least raise a red flag.
In practice, these policies analyze in real time what users type and how the AI responds. If a conversation crosses a defined line – a sensitive topic tied to your industry, personal data that should not be shared, risky content – the system can block the response or automatically alert the user.
The mechanism is based on an approach known as “LLM as judge”: a dedicated artificial intelligence system evaluates what is happening in the conversation, independently of the main model. It serves as a second, constant, and impartial set of eyes.
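To make this concrete, here is a minimal sketch of what an LLM-as-judge check might look like in Python. It assumes the official OpenAI SDK, a placeholder judge model, and an illustrative prompt and policy list; none of this describes our actual implementation.

```python
import json
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()

# Illustrative judge instructions; real policies would be far more detailed.
JUDGE_PROMPT = (
    "You are a content-safety reviewer. Given a user message and an assistant reply, "
    "decide whether the exchange violates any of these policies: violence, weapons "
    "instructions, sensitive personal data. "
    'Answer with a JSON object: {"violation": true or false, "policy": "<name or null>"}.'
)

def judge_exchange(user_message: str, assistant_reply: str) -> dict:
    """Ask a separate 'judge' model to review one exchange, independently of the main model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder judge model; any capable model would do
        messages=[
            {"role": "system", "content": JUDGE_PROMPT},
            {"role": "user", "content": f"User: {user_message}\nAssistant: {assistant_reply}"},
        ],
        temperature=0,
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

def moderate(user_message: str, assistant_reply: str, fallback: str) -> str:
    """Return the assistant's reply, or a fallback message if the judge flags a violation."""
    verdict = judge_exchange(user_message, assistant_reply)
    if verdict.get("violation"):
        # In a real deployment, this is also where you would log the event and alert an operator.
        return fallback
    return assistant_reply
```

The important point is architectural: the judge runs as a separate call with its own instructions, so the main model's eagerness to help never gets the last word.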
Some filters can be enabled with a single click: violence, sexual content, sensitive personal data… These are the standard options.
But what makes the system truly powerful is the ability to create custom rules tailored to your industry. A financial institution can filter out any discussion related to money laundering. A healthcare provider can block unsupervised medical advice. A B2B company can decide that its AI will never respond to questions about the competition.
You set the rule, write the message that will appear if it is triggered, and the system takes care of the rest.
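As a rough illustration, and assuming a small declarative format that is purely hypothetical rather than our product's actual schema, a custom rule boils down to three things: a name, a description for the judge to enforce, and the message shown to the user when it triggers.

```python
from dataclasses import dataclass

@dataclass
class ResiliencePolicy:
    """One custom rule: what to watch for and what to say if it triggers."""
    name: str
    description: str       # the behaviour the judge model is asked to detect
    blocked_message: str   # shown to the user in place of the AI's reply

# Hypothetical examples mirroring the scenarios above; not default settings.
POLICIES = [
    ResiliencePolicy(
        name="anti_money_laundering",
        description="Any discussion of structuring, laundering, or concealing funds.",
        blocked_message="I can't help with that topic. Please contact our compliance team.",
    ),
    ResiliencePolicy(
        name="no_competitor_talk",
        description="Questions comparing our products to named competitors.",
        blocked_message="I'm not able to comment on other vendors' products.",
    ),
]
```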
The case in Florida should serve as a wake-up call: deploying AI without a content moderation policy leaves the door open to reputational, legal, and human risks.
Resilience policies aren’t a burden. They’re essential for deploying AI with confidence – for you, for your users, and for the people who interact with your tools every day.
Would you like to learn more about how we ensure the safe use of AI in your organization?
