Posts

Adding AI to your SaaS - Security Risks and Opportunities

Using a new AI solution is no different to using any other 3rd party solution, with a few additional and important considerations.
privacy
security
risk
Cover image

Artificial Intelligence (AI) is full of opportunities, not wonder many SaaS companies inject AI features all over their platforms: Image and text generation, recognition, classification, and more. These can bring great benefits for end-users, no doubt. But these also bring unacceptable risks if not managed properly.

Using a new AI solution is no different to using any other 3rd party solution. The same controls should apply, with a few additional and important considerations.

Same as other 3rd party solutions

When considering using a 3rd party solution, your 3rd party assessment process (you have one, right?) should get you to consider:

  • Data processing and privacy
  • Cybersecurity and data protection measures
  • Financial stability and health
  • Compliance with industry regulations and standards
  • Operational stability and business continuity
  • Incident response plans

This is no different for 3rd party AI solutions… with some extra risks.

With additional risks

AI solutions come with great powers, and these also create some new risks. Here are some of the top ones:

  • Prompt Injection
    • How to protect against prompt injection that could lead to data leaks, command injection, or other abuse (e.g., customer tricking the AI to provide cheaper prices or free items)?
  • Sensitive Information Disclosure
    • How do you control the output so that no sensitive data that should not have been produced can be displayed (aka, DLP for AI)?
  • Data privacy
    • How do you remove Personally Identifiable Information (PII) from the system memory (e.g., RAG)? Or if it has been used to train the next generations of model?
  • Data leak
    • Usage of the data provided into an AI system: Will it be used to train future models? Could this lead to our data inadvertently being made available to others?
  • Regulatory breaches
    • How will you ensure you meet regulatory requirements around decision-making, for example? How can you ensure decisions’ lawfulness, fairness, rationality, and transparency?
  • Copyright issues
    • What are the risks that copyrighted material has been used to train an AI model, and that we could get sued for using copyrighted material?
  • Reputational risks
    • What happens if the AI system “hallucinates” or generates wrong or insulting information?
  • Unbounded Consumption
    • Do you have any limits in place to ensure an attacker can’t “DDoS your wallet” by generating expensive queries?

This last one is probably the largest risk of them all. There are already several classes of injection attacks against systems with well-defined inputs (SQL injection, code injection / XSS and more), but trying to secure a system that can take any input is immensely more difficult.

Start by checking some of these:

💡
Be careful not to be overconfident with AI. It is easy to make mistakes and risk painful consequences. Talk to a specialist.

Should I give up?

No! AI comes with great opportunities. But it’s a powerful tool that can also create powerfully bad outcomes. It’s important that the risks be carefully considered and can be mitigated to an acceptable level before starting to use AI tools.

Olivier Reuland

Related