Alter is a zero-trust identity and access control platform purpose-built for AI agents, verifying every tool call with fine-grained RBAC/ABAC authorization and ephemeral credentials that expire in seconds. While broader platforms like Onyx Security or Noma Security address full AI governance, Alter specializes in the identity and access control layer where unauthorized agent actions cause the most damage.
The company was founded by Srikar Dandamuraju (CEO) and Kevan Dodhia (CTO) and is backed by Y Combinator (S25 batch). Before Alter, Dandamuraju was a Platform Lead at Goldman Sachs, where he scaled post-trade infrastructure and helped launch the GM Card. Dodhia was the technical co-founder of ComputeAI, where he built a compute engine 5x faster than EMR Spark and sold into regulated enterprises like the London Stock Exchange Group. ComputeAI was acquired by Terizza in 2025. Dodhia is a Carnegie Mellon graduate (2019).
Their shared experience building mission-critical infrastructure at Goldman Sachs and for the London Stock Exchange informs Alter’s approach: treat every AI agent interaction with the same rigor applied to financial transactions.
What is Alter?
Alter sits between AI agents and the tools they call, acting as an authentication and authorization layer. Every request is verified at the parameter level, authorized against granular policies, executed with least-privilege access, and fully audited in real time.
The platform eliminates long-lived API keys — a common vulnerability in agent workflows — by issuing ephemeral, scope-narrowed tokens that expire in seconds. Agents receive only the minimum access needed for a specific task, and credentials are rotated or revoked automatically after use.
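The ephemeral-credential model can be sketched in a few lines of Python. This is an illustration of the general technique (short-lived, scope-narrowed, signed tokens), not Alter's implementation; the signing key, claim names, and helper functions are all assumptions:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # illustration only; a real system would use a managed secret


def mint_token(agent_id: str, scope: list[str], ttl_seconds: int = 5) -> str:
    """Mint a short-lived, scope-narrowed token (hypothetical sketch)."""
    claims = {"sub": agent_id, "scope": scope, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig


def verify_token(token: str, required_scope: str) -> bool:
    """Reject tokens that are tampered with, expired, or out of scope."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return time.time() < claims["exp"] and required_scope in claims["scope"]


token = mint_token("agent-42", ["db:read"], ttl_seconds=5)
print(verify_token(token, "db:read"))   # True while the token is live
print(verify_token(token, "db:write"))  # False: scope was never granted
```

Because every token carries its own expiry and an exact scope, a leaked credential is useless within seconds and never grants more than the single task it was minted for.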
Key Features
| Feature | Details |
|---|---|
| Access Control | Fine-grained RBAC (Role-Based) and ABAC (Attribute-Based) policies |
| Verification | Parameter-level checks on every tool call |
| Credentials | Ephemeral, scope-narrowed tokens that expire within seconds |
| Blocking | Pre-execution blocking of dangerous operations (`DROP TABLE`, excessive payments, etc.) |
| Audit | Complete request/response logging with CISO-ready dashboard |
| Compliance | SOC 2, HIPAA, GDPR audit readiness |
| Tool Support | MCP (Model Context Protocol) and native tool integrations |
| A2A | Agent-to-Agent connections coming soon |
| Red Teaming | Partnership with former OpenAI cybersecurity experts for ongoing vulnerability testing |
| Identity | Cryptographic identity verification for each agent interaction |
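The parameter-level verification and pre-execution blocking rows above can be pictured as a small policy check. The policy format, tool names, and `authorize` helper below are hypothetical illustrations of the technique, not Alter's actual API:

```python
import re

# Hypothetical policies: each rule inspects a tool name and the call's parameters.
POLICIES = [
    {"tool": "sql.execute",
     "deny_if": lambda p: re.search(r"\b(DROP|TRUNCATE)\b", p["query"], re.I)},
    {"tool": "payments.create",
     "deny_if": lambda p: p["amount_usd"] > 500},
]


def authorize(tool: str, params: dict) -> bool:
    """Return False if any matching policy denies the call (parameter-level check)."""
    for rule in POLICIES:
        if rule["tool"] == tool and rule["deny_if"](params):
            return False
    return True


print(authorize("sql.execute", {"query": "SELECT * FROM customers"}))  # True
print(authorize("sql.execute", {"query": "DROP TABLE customers"}))     # False
print(authorize("payments.create", {"amount_usd": 9_000}))             # False
```

The point of checking parameters rather than just tool names is that the same tool can be safe or destructive depending on its arguments: `sql.execute` is allowed for a read but blocked for a schema-destroying statement.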
How zero-trust works for agents
Traditional API security uses long-lived keys that grant broad access. In agentic AI workflows, this creates cascading risk: a compromised agent with a persistent API key can access everything that key permits, indefinitely.
Alter replaces this model with zero-trust principles adapted for agent workflows:
1. Identity verification: Each agent request starts with cryptographic identity verification. The platform confirms the agent's identity before processing any action.
2. Policy evaluation: The request is evaluated against RBAC and ABAC policies at the parameter level. A policy might allow an agent to read customer records but block writes, or permit payments only up to a threshold.
3. Credential issuance: If the request is authorized, Alter issues an ephemeral token scoped to exactly the permissions needed for that specific action. The token expires in seconds.
4. Execution and audit: The action executes with least-privilege access. The full request and response are logged for compliance and forensic analysis.
5. Credential revocation: After execution, the credential is automatically revoked. There are no persistent tokens to leak or misuse.
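The steps above can be sketched as one control loop. Everything here (function names, the toy policy, the audit structure) is an assumption made for illustration, not Alter's implementation:

```python
import secrets
import time

AUDIT_LOG: list[dict] = []
ACTIVE_CREDENTIALS: dict[str, float] = {}  # credential -> expiry timestamp


def verify_identity(agent_id: str, signature: str) -> bool:
    # Step 1 (stubbed): stands in for a real cryptographic identity check.
    return signature == f"sig:{agent_id}"


def evaluate_policy(agent_id: str, tool: str, params: dict) -> bool:
    # Step 2 (stubbed): this toy policy only allows read operations.
    return tool == "db.read"


def issue_credential(ttl_seconds: int = 5) -> str:
    # Step 3: ephemeral credential with a seconds-lived expiry.
    cred = secrets.token_hex(8)
    ACTIVE_CREDENTIALS[cred] = time.time() + ttl_seconds
    return cred


def handle_request(agent_id: str, signature: str, tool: str, params: dict) -> str:
    if not verify_identity(agent_id, signature):
        return "denied: identity"
    if not evaluate_policy(agent_id, tool, params):
        return "denied: policy"
    cred = issue_credential()
    result = f"executed {tool}"                       # Step 4: least-privilege execution
    AUDIT_LOG.append({"agent": agent_id, "tool": tool,
                      "params": params, "result": result})  # Step 4: audit trail
    ACTIVE_CREDENTIALS.pop(cred, None)                # Step 5: immediate revocation
    return result


print(handle_request("agent-42", "sig:agent-42", "db.read", {"table": "orders"}))
print(handle_request("agent-42", "sig:agent-42", "db.write", {"table": "orders"}))
print(ACTIVE_CREDENTIALS)  # empty: no persistent tokens remain after execution
```

The key property of this loop is that a credential exists only between steps 3 and 5 of a single authorized request, so there is never a standing secret for an attacker to steal.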
Red teaming partnership
Alter partners with former OpenAI cybersecurity experts who provide ongoing red teaming of agent workflows. This testing covers prompt injection attacks that attempt to escalate agent privileges, data exfiltration through tool calls, and other exploits specific to agentic AI systems.
The red teaming results feed back into Alter’s policy engine, helping identify new attack patterns and strengthen default protections.
Getting Started
When to use Alter
Alter is ideal for teams deploying AI agents that interact with sensitive systems, such as databases, payment processors, and internal APIs, where unauthorized actions could cause real damage. Parameter-level policy enforcement matters most in regulated industries, where compliance requires demonstrating least-privilege access and complete audit trails.
The platform complements broader AI security tools rather than replacing them. It handles the identity and access control layer while other tools cover vulnerability scanning, prompt filtering, or agent governance.
For more AI security tools and guidance, see the AI security tools category page. For enterprise AI governance platforms, see Onyx Security or Noma Security. For runtime prompt protection, consider Lakera Guard or LLM Guard. For LLM vulnerability scanning, look at Garak or Promptfoo. For protocol-layer zero trust, check Xage Security.