HiddenLayer AISec is an enterprise AI security platform built specifically for ML model security โ covering supply chain scanning, runtime defense, and adversarial red teaming across the full model lifecycle, from training to production deployment. Unlike traditional cybersecurity tools, which were designed for code and infrastructure, HiddenLayer operates on model artifacts and inference behaviors without requiring access to weights, training data, or prompts.

HiddenLayer was co-founded in 2022 by Chris “Tito” Sestito, Tanner Burns, and James Ballard. Sestito spent years leading threat research at Cylance, where attackers exploited the company’s Windows executable AI model using an inference attack. That breach โ which let binary files evade detection across Cylance’s entire customer base โ became the founding motivation.
In September 2023, HiddenLayer raised $50M in Series A funding led by M12 (Microsoft’s Venture Fund) and Moore Strategic Ventures, with participation from Booz Allen Ventures, IBM Ventures, and Capital One Ventures.
What is HiddenLayer?
HiddenLayer AISec is an ML security platform designed for the threats that traditional cybersecurity tools cannot address โ adversarial attacks on model inference, backdoors embedded in model weights, and supply chain compromise through poisoned model artifacts. The AISec Platform 2.0, unveiled in April 2025 ahead of RSAC, covers four areas: AI discovery, supply chain security, runtime defense, and attack simulation.
The platform is model-agnostic and agentless โ no access to model weights, training data, or prompts required. HiddenLayer’s research team has disclosed 48+ CVEs in ML frameworks such as PyTorch and TensorFlow, and holds 25 granted patents (with 56 pending) in adversarial detection, model protection, and AI threat analysis.
Key Features
| Feature | Details |
|---|---|
| ModelScanner | Scans 35+ formats (PyTorch, TensorFlow, ONNX, Keras, GGUF, pickle, safetensors) for malware, tampering, and backdoors |
| AI Discovery | Shadow AI detection across cloud and on-prem environments |
| Runtime Defense | Blocks adversarial attacks, prompt injection, and inference manipulation in real time |
| Attack Simulation | Continuous adversarial testing aligned with MITRE ATLAS |
| Model Genealogy | Tracks model lineage โ training, fine-tuning, and modification history |
| AIBOM | Auto-generated AI Bill of Materials for every scanned model (components, datasets, dependencies) |
| Threat Intelligence | Aggregates data from Hugging Face and community sources to surface emerging ML security risks |
| Compliance | Aligns with NIST AI RMF, MITRE ATLAS, ISO 42001, EU AI Act |
ModelScanner
ModelScanner is HiddenLayer’s pre-production ML model scanner. It analyzes 35+ model formats โ including PyTorch, TensorFlow, ONNX, Keras, GGUF, pickle, and safetensors โ for malicious code injections, pickle deserialization exploits, and architectural backdoors (trojans embedded in model weights). Traditional antivirus and SCA tools do not inspect model artifacts, so ModelScanner fills a gap in the supply chain security stack that no conventional tool addresses.
The scanner integrates into CI/CD pipelines and MLOps platforms via lightweight containers, and works with registries including Hugging Face, MLflow, SageMaker, and Databricks Unity Catalog.

Model Genealogy and AIBOM
An AIBOM (AI Bill of Materials) is the ML equivalent of a software SBOM: a machine-readable inventory of a model’s components, training datasets, fine-tuning history, and dependencies. HiddenLayer auto-generates an AIBOM for every scanned model, giving security and compliance teams a traceable record for supply chain risk assessment and licensing enforcement.
Model Genealogy, introduced in AISec Platform 2.0, extends this further by tracking how a model was trained, fine-tuned, and modified over time โ providing the audit trail evidence regulators and governance frameworks such as ISO 42001 and the EU AI Act require.
Attack Simulation
The red teaming engine runs continuous adversarial testing aligned with the MITRE ATLAS framework, probing models for weaknesses in robustness, data handling, and system integration before attackers find them.
Platform integrations
HiddenLayer integrates with major MLOps and cloud platforms:
- AWS โ SageMaker model registry integration
- Databricks โ Unity Catalog scanning
- Hugging Face โ Continuous monitoring of model repositories
- MLflow โ Automatic scanning of registered models
- Microsoft Azure โ Available on Azure Marketplace
- CrowdStrike โ Listed on CrowdStrike Marketplace
Getting Started
When to use HiddenLayer
HiddenLayer fits enterprises that rely on ML models for business-critical applications and need security controls that traditional tools don’t provide. It’s a good match for organizations pulling models from public repositories like Hugging Face, running customer-facing AI, or operating in regulated industries where AI governance is a hard requirement.
For a broader overview of AI security, see the AI security guide. For open-source ML model scanning without the enterprise platform, consider Protect AI Guardian (built on the open-source ModelScan project).
For prompt injection detection as a standalone API, look at Lakera Guard. For LLM red teaming tools, see Garak or Promptfoo.