
Application Security Statistics 2026

Written by Suphi Cankurt

Every statistic on this page comes from original research we conducted in February 2026. We tested 6 LLMs for code security, scanned 7,510 websites for security headers, and analyzed GitHub data for 65 open-source AppSec tools.


Key statistics at a glance

25.1%
AI-Generated Code Vulnerability Rate
7,510
Websites Scanned for Security Headers
65
Open-Source AppSec Tools Analyzed
608K+
Combined GitHub Stars
191+
Security Tools Compared
27.3%
CSP Adoption Rate

AI-generated code security

We gave 6 large language models 89 identical coding prompts — building login forms, handling file uploads, querying databases — without mentioning security. Then we scanned all 534 code samples with 5 open-source SAST tools and manually validated every finding. Source: AI-Generated Code Security Study 2026.
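As a toy illustration of the kind of pattern those SAST tools match (not one of the study's actual scanners), the sketch below uses Python's `ast` module to flag `execute()` calls whose first argument is an f-string, the classic shape of a SQL injection finding:

```python
import ast

def find_fstring_sql(source: str) -> list[int]:
    """Return line numbers of .execute(...) calls with an f-string argument.

    Illustrative only: real SAST tools track taint across assignments and
    cover many more sinks than this single pattern.
    """
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args
                and isinstance(node.args[0], ast.JoinedStr)):  # f-string
            findings.append(node.lineno)
    return findings

vulnerable = 'cur.execute(f"SELECT * FROM users WHERE id = {uid}")'
safe = 'cur.execute("SELECT * FROM users WHERE id = %s", (uid,))'
```

The parameterized `safe` variant passes because its first argument is a plain constant, which is exactly why parameterized queries are the standard fix for this finding class.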

Vulnerability rates

  • 25.1% of AI-generated code samples contained at least one confirmed vulnerability
  • 534 total code samples tested across 6 LLMs (89 prompts per model)
  • 175 confirmed vulnerabilities found after manual validation of 1,173 raw SAST findings
  • GPT-5.2 had the lowest vulnerability rate at 19.1% (17 out of 89 samples)
  • Claude Opus 4.6, DeepSeek V3, and Llama 4 Maverick tied for the highest rate at 29.2%
  • Gemini 2.5 Pro came in at 22.5%, Grok 4 at 21.3%
  • The gap between the safest and least safe model was 10.1 percentage points

Most common weaknesses

  • SSRF (CWE-918) was the single most common vulnerability with 32 confirmed instances
  • Injection-class weaknesses (SSRF, command injection, NoSQL injection, path traversal) accounted for 33.1% of all findings
  • OWASP A10 (SSRF) led with 32 findings, followed by A03 (Injection) at 30 and A05 (Security Misconfiguration) at 25
  • Debug information leaks (CWE-215) were the second most common individual weakness at 18 findings
  • Deserialization of untrusted data (CWE-502) ranked third with 14 findings
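SSRF findings typically look like code that fetches a caller-supplied URL with no validation. A minimal mitigation sketch, with a hypothetical allowlist (the host names are illustrative, not from the study):

```python
from urllib.parse import urlparse

# Illustrative allowlist; real deployments would also block redirects
# to internal addresses and resolve DNS before trusting the host.
ALLOWED_HOSTS = {"api.example.com", "cdn.example.com"}

def is_safe_url(url: str) -> bool:
    """Accept only http(s) URLs whose host is on the allowlist (CWE-918 guard)."""
    parts = urlparse(url)
    return parts.scheme in {"http", "https"} and parts.hostname in ALLOWED_HOSTS
```

Note that allowlisting alone does not cover every SSRF vector (open redirects and DNS rebinding can still bypass it), but it blocks the unchecked-fetch pattern that dominates these findings.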

Language comparison

  • GPT-5.2 showed the widest language gap: 11.4% vulnerability rate in Python vs 26.7% in JavaScript
  • Claude Opus 4.6 was the only model where Python performed worse (31.8%) than JavaScript (26.7%)
  • Grok 4 had the tightest cross-language gap at 1.7 percentage points

The full AI-Generated Code Security Study 2026 includes OWASP category heatmaps, per-model deep dives, and all 89 prompt examples.

Security headers adoption

We scanned the Tranco Top 10,000 websites in February 2026 and recorded every security header in their HTTP responses. 7,510 sites returned valid responses. Source: Security Headers Adoption Study 2026.

Adoption rates

  • 51.7% of top websites have HSTS (Strict-Transport-Security) enabled — the most adopted security header
  • 49.5% deploy X-Frame-Options
  • 44.4% set X-Content-Type-Options
  • 28.4% have a Referrer-Policy
  • 27.3% deploy Content-Security-Policy (CSP)
  • 14.0% use Permissions-Policy
  • 10.0% set Cross-Origin-Opener-Policy (COOP)
  • 7.4% deploy Cross-Origin-Embedder-Policy (COEP) — the least adopted header
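The presence check behind numbers like these can be sketched in a few lines. The function below is illustrative, not the study's scanner; the header names match the ones measured above:

```python
SECURITY_HEADERS = [
    "Strict-Transport-Security",
    "X-Frame-Options",
    "X-Content-Type-Options",
    "Referrer-Policy",
    "Content-Security-Policy",
    "Permissions-Policy",
    "Cross-Origin-Opener-Policy",
    "Cross-Origin-Embedder-Policy",
]

def present_headers(response_headers: dict[str, str]) -> set[str]:
    """Report which measured security headers appear in a response."""
    lower = {k.lower() for k in response_headers}  # header names are case-insensitive
    return {h for h in SECURITY_HEADERS if h.lower() in lower}

example = {
    "Strict-Transport-Security": "max-age=31536000",
    "X-Content-Type-Options": "nosniff",
    "Content-Type": "text/html",
}
```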

CSP configuration quality

  • 48.8% of sites with CSP use unsafe-inline, undermining XSS protection
  • 42.5% of sites with CSP use unsafe-eval
  • Only 16.7% of CSP-adopting sites use nonce-based policies
  • Only 12.8% use strict-dynamic — the modern best practice
  • 2,049 sites enforce CSP, while 296 use report-only mode
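The quality flags above can be approximated with a rough policy parser. This sketch is illustrative only (real CSP parsing has more edge cases, such as fallback chains beyond default-src); it inspects the effective script-src directive:

```python
def audit_csp(policy: str) -> dict[str, bool]:
    """Rough CSP quality check mirroring the flags measured above."""
    directives: dict[str, list[str]] = {}
    for part in policy.split(";"):
        tokens = part.split()
        if tokens:
            directives[tokens[0]] = tokens[1:]
    # script-src falls back to default-src when absent
    script = directives.get("script-src", directives.get("default-src", []))
    return {
        "unsafe_inline": "'unsafe-inline'" in script,
        "unsafe_eval": "'unsafe-eval'" in script,
        "nonce_based": any(s.startswith("'nonce-") for s in script),
        "strict_dynamic": "'strict-dynamic'" in script,
    }
```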

HSTS configuration

  • 71.8% of HSTS sites set a max-age of at least 1 year
  • 54.7% include the includeSubDomains directive
  • 35.7% include the preload directive
  • 238 sites set a max-age of less than 1 day — too short for meaningful protection
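The HSTS checks above amount to parsing three directives. A minimal sketch, assuming well-formed (unquoted) header values:

```python
ONE_YEAR = 31_536_000  # seconds; the 1-year threshold used above

def parse_hsts(value: str) -> dict:
    """Parse a Strict-Transport-Security header value (illustrative sketch)."""
    directives = [d.strip().lower() for d in value.split(";")]
    max_age = 0
    for d in directives:
        if d.startswith("max-age="):
            max_age = int(d.split("=", 1)[1])
    return {
        "max_age": max_age,
        "one_year_plus": max_age >= ONE_YEAR,
        "include_subdomains": "includesubdomains" in directives,
        "preload": "preload" in directives,
    }
```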

Grade distribution

  • Average Observatory-compatible score: 58 out of 100
  • 726 sites earned an A+ grade (9.7%)
  • 0.3% received an F grade — down from 55.6% in a 2023 academic study (Kishnani & Das, 3,195 sites)
  • The most common grade was D (2,085 sites, 27.8%)

Adoption by site rank

  • Top 100 sites: 41.7% CSP adoption, 68.1% HSTS adoption
  • Sites ranked 5,001-10,000: 23.9% CSP adoption, 47.7% HSTS adoption
  • CSP adoption drops by nearly half between the top 100 and sites ranked 5,001-10,000

Information leakage

  • 27.1% of sites still send the deprecated X-XSS-Protection header
  • 8.6% set Cross-Origin-Resource-Policy (CORP)

See the full Security Headers Adoption Study 2026 for interactive charts, rank-tier breakdowns, and the 2023 vs 2026 comparison.

Open source AppSec tools

We pulled GitHub data for 65 open-source application security tools across 8 categories and analyzed stars, forks, contributors, release cadence, issue resolution times, and package downloads. Source: State of Open Source AppSec Tools 2026.
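The per-repository extraction step can be sketched against the GitHub REST API's `GET /repos/{owner}/{repo}` response, which includes `full_name`, `stargazers_count`, and `forks_count`. In the sample below the star count matches the figure reported for Trivy, while the forks value is purely illustrative:

```python
import json

def repo_stats(api_json: str) -> dict:
    """Extract the fields aggregated across repos (sketch of one step)."""
    data = json.loads(api_json)
    return {
        "name": data["full_name"],
        "stars": data["stargazers_count"],
        "forks": data["forks_count"],
    }

# Shape of a (trimmed) GET /repos/aquasecurity/trivy response;
# forks_count here is a made-up placeholder.
sample = '{"full_name": "aquasecurity/trivy", "stargazers_count": 31910, "forks_count": 2300}'
```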

Community traction

  • 608,000+ combined GitHub stars across all 65 tools
  • Ghidra is the most-starred open-source AppSec tool with 64,368 stars
  • Jadx (47,291), mitmproxy (42,289), and Trivy (31,910) round out the top four
  • Secrets detection tools punch above their weight: Gitleaks (24,912) and TruffleHog (24,563) both rank in the top 10
  • Promptfoo (10,463 stars) is the only AI security tool in the top 20

Maintenance health

  • Median health score across all tools: 58 out of 100 (fair)
  • 7 tools score above 70 (good): Renovate, Trivy, Nuclei, TruffleHog, Promptfoo, ZAP, and Grype
  • 4 tools are flagged as at-risk (health score below 20): Dastardly, w3af, Rebuff, and detect-secrets
  • No tool scored above 90
  • SCA tools have the highest average category health score at 61.6

Contributors and releases

  • Trivy leads in contributor count with 444 contributors
  • Renovate (432) and Kyverno (415) also have 400+ contributors
  • Nikto has the fastest median issue resolution at 0.7 days
  • Renovate resolves issues in a median of 0.9 days
  • 52% of open-source AppSec tools are written in Go or Python
  • Go leads with 30.8% (20 tools), followed by Python at 21.5% (14 tools)
  • 43% of tools use the Apache-2.0 license
  • TypeScript now powers two top-20 tools (Promptfoo and Renovate)

Category breakdown

  • Mobile security tools lead in raw star count (203,997) due to Ghidra, Jadx, mitmproxy, and Frida
  • IaC Security has 13 tools with 100,000 combined stars
  • SAST has the most tools (16) with 119,881 combined stars
  • DAST has the lowest average health score at 40.7

The full State of Open Source AppSec Tools 2026 covers download numbers, Docker Hub pulls, at-risk project details, and health score methodology.

Application security tool landscape

AppSec Santa tracks the broader application security tooling market, both open-source and commercial.


Sources & methodology

Three studies, all conducted in February 2026. No third-party data is used without attribution.

Prior academic work supports why this data matters. Pearce et al. (2021) found that roughly 40% of GitHub Copilot’s output contained security vulnerabilities in their NYU study “Asleep at the Keyboard?” — our 2026 results show the rate has dropped to 25.1% across newer models, but the problem is far from solved.

AI-Generated Code Security Study 2026 534 code samples from 6 LLMs (GPT-5.2, Claude Opus 4.6, Gemini 2.5 Pro, DeepSeek V3, Llama 4 Maverick, Grok 4), tested via OpenRouter API with 89 prompts covering all OWASP Top 10:2021 categories. Scanned with 5 open-source SAST tools. Every finding manually validated. Full dataset on GitHub.

Security Headers Adoption Study 2026 Top 10,000 websites from the Tranco Top Sites list scanned for 10 security headers. 7,510 returned valid HTTP responses (75.1% success rate). Scoring follows the Mozilla HTTP Observatory methodology.

State of Open Source AppSec Tools 2026 GitHub API data for 65 open-source AppSec tools across 8 categories. Metrics include stars, forks, contributors, commit activity, release cadence, issue resolution times, and package downloads from PyPI, npm, and Docker Hub. All data collected February 2026.

Frequently Asked Questions

How often is this data updated?
We update this page quarterly as new data becomes available from our ongoing research.
Can I cite these statistics?
Yes. Please cite as: ‘Application Security Statistics 2026, AppSec Santa (appsecsanta.com).’ Each statistic links to its source study with full methodology.
Where does this data come from?
All statistics come from original research conducted by AppSec Santa — including our AI Code Security Study (534 code samples, 6 LLMs), Security Headers Study (7,510 websites scanned), and State of Open Source AppSec Tools report (65 projects analyzed).
Suphi Cankurt

10+ years in application security. Reviews and compares 162 AppSec tools across 10 categories to help teams pick the right solution. More about me →
