API & AI Security: Protecting APIs and LLMs (2026)

Written by Suphi Cankurt
The API and AI security landscape
APIs have become the default interface for modern software. Every mobile app, SaaS product, and microservice talks to the backend through APIs. That makes APIs the primary attack surface for most organizations. Attackers know this. API-targeted attacks have grown faster than any other category of web attack over the past three years.
AI has added a second front. The explosion of LLM-powered applications since 2023 has introduced threat categories that did not exist five years ago. Prompt injection, model exfiltration, and hallucination-driven misinformation sit outside the scope of traditional application security tools. A new generation of AI-specific security tooling has emerged to address these risks.
These two domains are more connected than they appear. Every LLM is accessed through an API. Every AI agent that calls external tools does so through API requests. The API layer is the control plane for AI security, and the AI layer introduces risks that conventional API security tools were never designed to catch. Understanding both is now a baseline requirement for security teams building or protecting modern applications.
The API security market reached roughly $6 billion in 2025 and continues to grow at about 30% per year. AI security is newer and smaller, but growing faster. Gartner added AI security to its Hype Cycle in 2024, and venture funding for AI security startups exceeded $1 billion in the first half of 2025 alone.
API security fundamentals
APIs are different from traditional web applications in ways that matter for security. A web application has a UI layer that constrains what users can do. An API has no such guardrails. It accepts structured requests and returns structured responses. The caller has direct access to business logic, data operations, and backend services. One malformed request can exfiltrate thousands of records if authorization is broken.
The most common API types you will encounter:
- REST APIs use HTTP methods (GET, POST, PUT, DELETE) and are by far the most common. Most API security tools focus here first.
- GraphQL APIs accept flexible queries, which creates unique attack vectors. A single GraphQL query can request nested relationships that cause resource exhaustion or leak data the caller should not see.
- gRPC uses Protocol Buffers over HTTP/2. It is common in microservice architectures and harder to test with conventional tools because the payloads are binary, not JSON.
- WebSocket APIs maintain persistent connections. Traditional request-response security testing does not apply cleanly. Session hijacking and message injection are the primary risks.
The core API security concerns are consistent across all types. Authentication verifies who the caller is. Authorization determines what that caller can access. Rate limiting prevents abuse. Input validation stops injection and malformed data. Schema enforcement ensures requests match the API specification.
Shadow APIs are a persistent problem. These are endpoints that exist in your infrastructure but are not documented, not monitored, and often not protected. They appear when developers deploy services without updating API inventories, when legacy endpoints survive decommissioning, or when third-party integrations expose unexpected surfaces. Salt Security, Wallarm, and Traceable AI all offer API discovery features designed to find shadow APIs through traffic analysis.
API sprawl compounds the issue. A typical enterprise runs thousands of API endpoints across dozens of services. Keeping track of them manually is not feasible. Automated API discovery and inventory management have become table stakes for any serious API security program.
Read our full API security guide for a deeper breakdown. Compare all API Security tools.
OWASP API Top 10
The OWASP API Security Top 10 (2023 edition) lists the most critical API-specific risks. It is distinct from the standard OWASP Top 10 for web applications because API vulnerabilities follow different patterns. Authorization flaws dominate, not injection.
| # | Risk | What it means |
|---|---|---|
| API1 | Broken Object Level Authorization (BOLA) | Users can access objects belonging to other users by manipulating IDs in API requests. The single most common API vulnerability. |
| API2 | Broken Authentication | Weak or missing authentication mechanisms allow attackers to impersonate legitimate users or gain unauthorized access to API endpoints. |
| API3 | Broken Object Property Level Authorization | APIs expose object properties that the user should not be able to read or modify. Combines the former “excessive data exposure” and “mass assignment” categories. |
| API4 | Unrestricted Resource Consumption | APIs allow requests that consume excessive resources (CPU, memory, bandwidth) without limits. Enables denial-of-service and cost-based attacks. |
| API5 | Broken Function Level Authorization | Users can access administrative or privileged functions by calling endpoints they should not have access to. |
| API6 | Unrestricted Access to Sensitive Business Flows | Attackers automate access to business flows (purchasing, reservation, posting) in ways that harm the business. |
| API7 | Server-Side Request Forgery (SSRF) | APIs fetch remote resources based on user-supplied URLs without validation, allowing attackers to reach internal services. |
| API8 | Security Misconfiguration | Default configurations, unnecessary HTTP methods, missing security headers, and overly permissive CORS policies. |
| API9 | Improper Inventory Management | Running outdated or deprecated API versions, unpatched endpoints, and undocumented shadow APIs. |
| API10 | Unsafe Consumption of APIs | Blindly trusting data from third-party APIs without validation. Your application becomes vulnerable through its API dependencies. |
BOLA sits at the top for good reason. It accounts for the majority of API breaches because it is easy to exploit and hard to detect with generic security tools. A conventional DAST scanner cannot tell whether user A should be able to access user B’s order through /api/orders/12345. That requires understanding the application’s authorization model. Dedicated API security tools like 42Crunch and Salt Security are built to test for this class of vulnerability.
The standard web OWASP Top 10 focuses heavily on injection (SQLi, XSS). The API Top 10 barely mentions it. That reflects reality: authorization flaws, not injection, are what gets APIs breached.
Read our OWASP Top 10 guide for more details on both lists.
AI and LLM security
AI security addresses a fundamentally different threat model. Traditional application vulnerabilities are bugs in code. AI vulnerabilities are emergent behaviors in statistical models. You cannot patch a prompt injection the way you patch a SQL injection. The model’s flexibility is both its purpose and its attack surface.
Prompt injection is the defining risk. An attacker crafts input that hijacks the model’s behavior, overriding system instructions. Direct prompt injection embeds malicious instructions in user input. Indirect prompt injection hides instructions in content the model retrieves, such as a web page, email, or document. If your LLM-powered application reads external data, it is exposed to indirect prompt injection.
Data leakage happens when a model reveals training data, system prompts, or user information it should keep private. LLMs do not have a reliable concept of confidentiality. They can be coaxed into repeating fine-tuning data, exposing internal tool configurations, or leaking one user’s conversation context to another.
Hallucinations create real security exposure. When an LLM generates false information with confidence and downstream systems or users act on it, the damage goes beyond incorrect answers. A coding assistant that suggests vulnerable code patterns. A customer-facing bot that fabricates company policies. A research agent that cites nonexistent sources. These are risks that need guardrails, not just better prompts.
The OWASP Top 10 for LLM Applications (2025 edition) formalizes these risks:
- LLM01: Prompt Injection
- LLM02: Sensitive Information Disclosure
- LLM03: Supply Chain Vulnerabilities
- LLM04: Data and Model Poisoning
- LLM05: Improper Output Handling
- LLM06: Excessive Agency
- LLM07: System Prompt Leakage
- LLM08: Vector and Embedding Weaknesses
- LLM09: Misinformation
- LLM10: Unbounded Consumption
AI red teaming is the practice of systematically testing LLMs for these weaknesses. Tools like Garak, Promptfoo, PyRIT, and DeepTeam automate this by running thousands of adversarial prompts against your model and evaluating the responses. This is analogous to running a DAST scan against a web application, but the attack payloads are natural language instead of SQL injection strings.
Guardrails are the runtime defense layer. Lakera, LLM Guard, and NeMo Guardrails sit between users and the model, filtering inputs for prompt injection attempts and scanning outputs for policy violations, sensitive data leakage, or harmful content. Think of guardrails as a WAF for LLMs.
Read our full AI security guide for a deeper breakdown. Compare all AI Security tools.
Where API and AI security converge
The boundary between API security and AI security is blurring. Every LLM application exposes an API. Every agentic workflow chains multiple API calls together. Securing one without the other leaves gaps.
LLMs are accessed through APIs. When you call GPT-4, Claude, or Gemini, you are making an HTTP request to an API endpoint. That means every API security concern applies: authentication, rate limiting, input validation, data exposure. A broken API authentication mechanism on your LLM endpoint lets anyone run prompts against your model on your bill. API security is the first line of defense for any AI application.
Agentic AI creates complex API chains. Modern AI applications do not just respond to prompts. They call tools. An AI coding assistant retrieves files through an API, runs code through another API, and pushes changes through a third. An AI customer support agent queries order databases, processes refunds, and sends emails. Each of these tool calls is an API request, and each is a potential attack vector. A prompt injection that causes the agent to call the refund API with attacker-controlled parameters is both an AI vulnerability and an API vulnerability simultaneously.
The API layer is the control point. Because all model interactions flow through APIs, the API layer is where you enforce limits. Token-level rate limiting prevents cost abuse. Request logging creates an audit trail for AI actions. Input/output filtering at the API gateway can block prompt injection attempts before they reach the model. API authentication ensures only authorized users can invoke AI capabilities.
Neither domain alone is sufficient. API security tools were not built to understand prompt injection. AI guardrails were not built to detect BOLA vulnerabilities. You need both. A Salt Security deployment protects the API layer. A Lakera deployment protects the model layer. Together, they cover the full surface.
Tool vendors are already responding. Some API security platforms now detect LLM-specific attacks. Some AI security companies have added API monitoring. Over the next two years, expect the boundaries between these categories to continue shifting.
Tooling for API and AI security
API security tools
The API security market has matured significantly. Tools generally fall into three categories: testing, discovery, and runtime protection.
API testing and auditing tools validate APIs against specifications and scan for vulnerabilities. 42Crunch audits OpenAPI definitions and runs conformance scans. APIsec generates test cases automatically from API documentation. StackHawk and ZAP run DAST-style scans with API-aware crawling.
API discovery and posture management tools find all APIs in your environment, including shadow APIs. Salt Security and Traceable AI analyze traffic to build a real-time API inventory. Wallarm combines API discovery with runtime protection.
Runtime API protection tools monitor API traffic and block attacks in real time. Cequence and Akamai API Security focus on bot mitigation and abuse prevention. Wallarm provides API-specific WAF capabilities.
AI security tools
The AI security tooling space is younger but moving fast. It breaks into red teaming, guardrails, and platform security.
LLM red teaming tools test models for vulnerabilities before deployment. Garak is the open-source standard, running comprehensive prompt injection, jailbreak, and data leakage probes. Promptfoo combines red teaming with eval workflows. PyRIT is Microsoft’s open-source red teaming framework. DeepTeam focuses on automated adversarial testing.
Guardrails and runtime protection tools filter model inputs and outputs in real time. Lakera detects prompt injection with sub-100ms latency. LLM Guard is an open-source alternative that scans for PII, toxicity, and injection attempts. NeMo Guardrails from NVIDIA provides programmable conversation rails. Prompt Security offers enterprise prompt injection prevention.
AI security platforms provide broader coverage. HiddenLayer scans ML models for malicious payloads and adversarial attacks. Protect AI Guardian offers model scanning and supply chain security. Mindgard provides automated AI security testing.
Where the categories overlap
Wallarm is a good example of convergence. It started as an API security platform and now includes detection for LLM-specific attacks. Akto combines API security testing with LLM testing capabilities in a single platform. Expect more tools to bridge both domains as AI applications become standard infrastructure.
Getting started
If your priority is API security
Step 1: Build your API inventory. You cannot protect what you do not know exists. Start with automated API discovery. If you use an API gateway, audit its configuration. If you do not, deploy a tool that discovers APIs through traffic analysis. Even a manual inventory of your publicly exposed endpoints is better than nothing.
Step 2: Assess against the OWASP API Top 10. Focus on BOLA and broken authentication first. These are the most exploited API vulnerabilities. Manually test your most sensitive endpoints by changing object IDs in requests and verifying the response. If user A can retrieve user B’s data by changing an ID parameter, you have a BOLA vulnerability.
Step 3: Add automated API testing to CI/CD. 42Crunch audits OpenAPI specs as part of the build pipeline. ZAP runs API scans against staging environments. Both integrate with GitHub Actions, GitLab CI, and Jenkins.
Step 4: Monitor API traffic in production. Deploy runtime monitoring to catch attacks and anomalies. This also catches shadow APIs and unexpected usage patterns.
If your priority is AI security
Step 1: Red team your LLM applications. Before deploying any LLM-powered feature, run adversarial tests. Promptfoo and Garak are free, open source, and can run in a CI pipeline. Test for prompt injection, jailbreaks, and data leakage as a baseline.
Step 2: Add input/output guardrails. Deploy a filtering layer between users and the model. LLM Guard is open source and handles PII detection, prompt injection filtering, and toxicity checking. For production-grade latency requirements, evaluate Lakera.
Step 3: Secure the API layer. Apply standard API security controls to your model endpoints: authentication, rate limiting, request logging. Token-level rate limiting prevents cost abuse. Request logging creates an audit trail if the model behaves unexpectedly.
Step 4: Review agentic tool access. If your AI application calls external tools or APIs, audit what each tool can do and what data it can access. Apply the principle of least privilege. An AI agent that can send emails should not also have access to the billing API.
If you need both
Start with whichever domain has the larger exposure in your environment. Most organizations have more API surface area than AI surface area today. Secure the APIs first, then layer AI-specific controls on top. The API security infrastructure you build (authentication, rate limiting, monitoring, logging) directly supports AI security as well.
Browse Tools by Category
This hub covers 2 application security categories with 16 tools total. Dive into any category to compare tools, read reviews, and find the best fit for your stack.
Discover, test, and protect your APIs
Secure AI models, LLMs, and ML pipelines
Learning Resources
Deepen your understanding with these in-depth guides covering key concepts, tool comparisons, and implementation strategies.
Frequently Asked Questions
What is API security?
What is the OWASP API Security Top 10?
What is AI security?
How are API security and AI security connected?
What are the best API security tools?
What tools exist for AI/LLM security?

Suphi Cankurt is an application security enthusiast based in Helsinki, Finland. He reviews and compares 129 AppSec tools across 10 categories on AppSec Santa. Learn more.
Comments
Powered by Giscus — comments are stored in GitHub Discussions.