Prompt Security is an AI security platform that protects organizations against prompt injection, data leakage, and shadow AI across both employee GenAI usage and homegrown LLM applications.
Co-founded in August 2023 by Itamar Golan and Lior Drihem in Tel Aviv, Israel, Prompt Security grew rapidly to become one of the leading GenAI security platforms. The company was named a 2025 SINET16 Innovator for its AI security capabilities. In August 2025, SentinelOne signed a definitive agreement to acquire Prompt Security for approximately $250 million, integrating the platform into SentinelOne’s Singularity cybersecurity suite to address GenAI and agentic AI security at the endpoint and runtime level.
What is Prompt Security?
Prompt Security takes a full-stack approach to GenAI security. Rather than focusing on a single attack vector, the platform addresses three interconnected problems: employees leaking sensitive data to third-party AI tools, adversaries exploiting homegrown LLM applications through prompt injection, and unmanaged shadow AI sprawling across the organization.
The platform inspects all interactions with GenAI tools in real time. For employee-facing AI usage, a browser extension monitors prompts, file uploads, and responses across 250+ AI models. For homegrown applications, an API integration screens inputs and outputs for injection attempts, data exfiltration, and policy violations. Detection runs with sub-200ms latency, and the system can block, redact, or alert depending on the configured policy.
Key Features
| Feature | Details |
|---|---|
| Prompt Injection Detection | Direct, indirect, jailbreak, system prompt extraction |
| Response Latency | Sub-200ms for real-time detection |
| Model Support | 250+ AI models via unified interface |
| Shadow AI Detection | Browser extension + network-level visibility |
| Data Protection | PII, PHI, financial data detection and redaction |
| Deployment Options | SaaS, on-premises, browser extension (Chrome) |
| Red Teaming | Built-in testing with custom LLMs |
| Content Filtering | Customizable policies with role-based controls |
| Compliance | Encryption at rest and in transit, customizable retention policies |
| Integration | Browsers, desktop apps, APIs |
Browser extension
The Prompt Security browser extension is the primary tool for monitoring employee GenAI usage. It deploys in minutes via Intune or similar MDM solutions and works by dynamically detecting GenAI interactions through DOM analysis and user action tracking — typing, pasting, clicking, and file uploads.
Organizations can start in monitor-only mode to gain visibility into GenAI usage patterns before enforcing policies. Once policies are active, the extension can block risky prompts, redact sensitive data in real time, or alert administrators based on configurable rules.
Data leakage prevention
The platform uses semantic analysis rather than simple pattern matching to identify sensitive information. It detects PII, PHI, financial data, source code, and proprietary information contextually, accounting for how data appears within natural language prompts. When sensitive data is detected, the system can redact it automatically, block the interaction, or send an alert to the platform admin with full logging.
Red team testing
Prompt Security includes built-in red teaming capabilities that test your LLM applications against known attack patterns. The platform generates adversarial prompts using custom LLMs to probe for vulnerabilities including injection, data extraction, and policy bypasses.
Getting Started
When to use Prompt Security
Prompt Security fits organizations that need to secure GenAI usage on two fronts: controlling how employees interact with third-party AI tools, and protecting homegrown LLM applications from adversarial attacks. The browser extension approach gives security teams visibility into shadow AI without requiring network-level inspection, and the sub-200ms detection latency keeps the user experience intact.
The platform is especially relevant for regulated industries — healthcare, financial services, government — where data leakage to AI tools is a compliance risk and where prompt injection in production applications could expose sensitive information.
For API-focused runtime protection with lower latency, consider Lakera Guard. For open-source input/output guardrails, see LLM Guard or NeMo Guardrails. For AI red teaming specifically, look at Garak or Mindgard.
Note: Acquired by SentinelOne in August 2025 for approximately $250M. The platform is being integrated into SentinelOne's Singularity platform.

Comments
Powered by Giscus — comments are stored in GitHub Discussions.