Mindgard is an AI security platform that provides automated red teaming and continuous security testing for LLMs, AI agents, and multimodal models. It is the first Dynamic Application Security Testing for AI (DAST-AI) solution built specifically to detect runtime AI vulnerabilities.
Founded in 2022 as a spinout from Lancaster University by Dr. Peter Garraghan (CEO/CTO), Dr. Neeraj Suri (CSO), and Steve Street (CRO/COO), Mindgard draws on over a decade of academic AI security research. The company is headquartered in London and Boston, and has raised over $11.6M in funding, including an $8M round in December 2024 led by .406 Ventures with participation from Atlantic Bridge and Willowtree Investments. Mindgard has 11 PhDs on staff and has been recognized in the OWASP LLM and Generative AI Security Solutions Landscape Guide and won the 2025 Cybersecurity Excellence Award for Best AI Security Solution.
What is Mindgard?
Mindgard sits at the testing layer of the AI security stack. Rather than filtering inputs and outputs at runtime like a firewall, it proactively attacks your AI systems to find vulnerabilities before adversaries do. The platform simulates thousands of adversarial attack scenarios against your models and agents, then reports findings aligned with industry frameworks like MITRE ATLAS and OWASP Top 10 for LLMs.
The platform is neural-network agnostic, meaning it works across generative AI, LLMs, NLP, computer vision, audio, and multi-modal models. It also tests the broader attack surface around AI systems, including agents, tools, APIs, data sources, and orchestration workflows.
Key Features
| Feature | Details |
|---|---|
| Testing Approach | Dynamic Application Security Testing for AI (DAST-AI) |
| Attack Library | Thousands of threat scenarios, continuously updated |
| Model Support | LLMs, NLP, computer vision, audio, multi-modal, AI agents |
| Framework Alignment | MITRE ATLAS, OWASP Top 10 for LLMs |
| CI/CD Integration | GitHub Actions, CLI, automated pipeline testing |
| Compliance | SOC 2 Type II, GDPR compliant, ISO 27001 expected early 2026 |
| Deployment | SaaS platform; requires only an inference or API endpoint |
| Reporting | Compliance-ready reports, SIEM integration |
How testing works
Mindgard follows a five-stage workflow. First, you point the platform at your existing AI products and environments. Then you schedule or execute security tests with one click. The platform runs its attack library against your models, collects and analyzes detailed risk scenarios, and generates reports viewable within your existing systems and SIEM tools. Teams then review findings and take corrective action with prioritized remediation guidance.
CI/CD integration
The Mindgard GitHub Action pulls the latest version of the Mindgard CLI each time a workflow runs. This means the pipeline tests not only your model and application changes but also automatically incorporates every new attack technique and testing enhancement the moment Mindgard publishes it. Integration requires only an inference or API endpoint for the model under test.
AI artifact scanning
Beyond runtime testing, Mindgard also scans AI artifacts — model files, configurations, and dependencies — for known vulnerabilities and supply chain risks before deployment.
Getting Started
When to use Mindgard
Mindgard is built for organizations that want to proactively test the security of their AI systems rather than relying solely on runtime defenses. It is particularly valuable for teams running AI in production who need continuous assurance that their models, agents, and workflows remain secure as they evolve.
The DAST-AI approach catches vulnerabilities that static scanning and pre-deployment reviews miss — issues that only surface when the model is actually running and processing inputs. The CI/CD integration means security testing becomes part of the development cycle rather than a manual gate.
For runtime input/output protection rather than testing, consider Lakera Guard or LLM Guard. For open-source AI red teaming, see Garak or Promptfoo. For broader AI model security scanning and runtime defense, look at HiddenLayer or Protect AI Guardian.

Comments
Powered by Giscus — comments are stored in GitHub Discussions.