HiddenLayer AISec is an enterprise AI security platform that protects machine learning models across their full lifecycle — from supply chain to production deployment.

The company was co-founded in 2022 by Chris “Tito” Sestito, Tanner Burns, and James Ballard. Sestito spent years leading threat research at Cylance, where attackers exploited the company’s Windows executable AI model using an inference attack. That incident — which allowed binary files to evade detection across Cylance’s customer base — became the catalyst for HiddenLayer. Gartner named the company a Cool Vendor for AI Security. In September 2023, HiddenLayer raised $50M in Series A funding led by M12 (Microsoft’s Venture Fund) and Moore Strategic Ventures, with participation from Booz Allen Ventures, IBM Ventures, and Capital One Ventures.
What is HiddenLayer?
HiddenLayer addresses the security gap between traditional cybersecurity tools and the threats specific to AI/ML systems. The AISec Platform 2.0, unveiled in April 2025 ahead of RSAC, provides four capabilities: AI discovery, supply chain security, runtime defense, and attack simulation.
The platform is model-agnostic and agentless. It works across any architecture without requiring access to model weights, training data, or prompts. HiddenLayer’s research team has disclosed 48+ CVEs and holds 25 granted patents (with 56 pending) in adversarial detection, model protection, and AI threat analysis.
Key Features
| Feature | Details |
|---|---|
| ModelScanner | Scans 35+ formats (PyTorch, TensorFlow, ONNX, Keras, GGUF, pickle, safetensors) for malware, tampering, and backdoors |
| AI Discovery | Shadow AI detection across cloud and on-prem environments |
| Runtime Defense | Blocks adversarial attacks, prompt injection, and inference manipulation in real time |
| Attack Simulation | Continuous adversarial testing aligned with MITRE ATLAS |
| Model Genealogy | Tracks model lineage — training, fine-tuning, and modification history |
| AIBOM | Auto-generated AI Bill of Materials for every scanned model (components, datasets, dependencies) |
| Threat Intelligence | Aggregates data from Hugging Face and community sources to surface emerging ML security risks |
| Compliance | Aligns with NIST AI RMF, MITRE ATLAS, ISO 42001, EU AI Act |
ModelScanner
ModelScanner detects malicious code injections, pickle deserialization attacks, and architectural backdoors in ML models. It scans 35+ formats including PyTorch, TensorFlow, ONNX, Keras, GGUF, and safetensors. Scanning happens before models enter production, catching supply chain threats that traditional security tools miss because they weren’t designed for ML artifacts.
The scanner integrates into CI/CD pipelines and MLOps platforms. It supports deployment via lightweight containers and works with registries including Hugging Face, MLflow, SageMaker, and Databricks Unity Catalog.
Model Genealogy and AIBOM
Introduced in AISec Platform 2.0, Model Genealogy tracks how models were trained, fine-tuned, and modified over time. This provides explainability and compliance evidence for audit trails.
The AI Bill of Materials is automatically generated for every scanned model. It inventories model components, datasets, and dependencies in an industry-standard format, enabling supply chain risk tracing and licensing policy enforcement.
Attack Simulation
The red teaming engine runs continuous adversarial testing aligned with the MITRE ATLAS framework. It probes models for weaknesses in robustness, data handling, and system integration — identifying vulnerabilities before attackers find them.
Platform integrations
HiddenLayer integrates with major MLOps and cloud platforms:
- AWS — SageMaker model registry integration
- Databricks — Unity Catalog scanning
- Hugging Face — Continuous monitoring of model repositories
- MLflow — Automatic scanning of registered models
- Microsoft Azure — Available on Azure Marketplace
- CrowdStrike — Listed on CrowdStrike Marketplace
Getting Started
When to use HiddenLayer
HiddenLayer fits enterprises that rely on ML models for business-critical applications and need security controls that traditional tools don’t provide. The platform is designed for organizations that download models from public repositories like Hugging Face, deploy customer-facing AI applications, or operate in regulated industries where AI governance is mandatory.
For a broader overview of AI security, see our AI security guide. For open-source ML model scanning without the enterprise platform, consider Protect AI Guardian (built on the open-source ModelScan project). For prompt injection detection as a standalone API, look at Lakera Guard. For LLM red teaming tools, see Garak or Promptfoo.
