Note: This incident is still under active investigation. Resolv Labs has not yet published a full post-mortem. This article reconstructs the attack based on publicly available on-chain data and analyses published by Chainalysis, DeFiPrime, CoinDesk, The Block, and security researchers including Cyvers, PeckShield, and Ledger’s CTO. We treat these sources as credible, but details may evolve as new information emerges. We will update this article accordingly.
TL;DR
On March 22, 2026, an attacker stole $23 million from Resolv Labs without exploiting a single line of smart contract code. The target was an AI assistant — connected to the protocol’s cloud infrastructure via Model Context Protocol — that was tricked through prompt injection into leaking the credentials needed to mint 80 million unbacked USR tokens. The blockchain worked perfectly. The AI agent was the weakest link. This is the anatomy of DeFi’s first AI infrastructure breach, and probably not the last.
What is Resolv and how does USR minting work?
If you’re new to crypto: Imagine a company that issues its own digital dollar bills. Each bill is supposed to be worth exactly $1.00, backed by real assets the company holds in reserve — just like a bank issues deposit receipts backed by cash in the vault. In traditional finance, that company would be regulated, audited, and limited in how many bills it can print. In DeFi (decentralized finance), these digital dollar bills are called stablecoins, and the “printing press” is a piece of software running on a blockchain.
The critical thing to understand is: if someone prints more bills than there are assets backing them, the bills are no longer worth $1.00 each. The market figures this out almost instantly — traders and automated algorithms detect the flood of unbacked tokens, and the price collapses as everyone races to sell before it drops further. This is called a depeg: the moment a stablecoin loses its fixed value. In this case, USR went from $1.00 to $0.025 — a 97.5% loss — in just 17 minutes. Eighty million fake bills entered circulation, and the market repriced every single one of them to near zero.
What happened here is the equivalent of someone breaking into the printing press — not by cracking the safe where the bills are stored, but by tricking the AI assistant that controls who’s allowed to press the “print” button. The assistant handed over the keys. The attacker printed 80 million fake bills and cashed them out before anyone noticed.
Resolv is a stablecoin protocol that uses a strategy known as delta-neutral hedging to keep USR pegged to $1.00. In simple terms, the protocol holds crypto assets (ETH, BTC) while simultaneously placing opposite bets in derivatives markets, so that price movements cancel out — the dollar value stays roughly constant regardless of whether crypto goes up or down. (For a deeper explanation of how synthetic stablecoins maintain their peg and the risks involved, see our guide on stablecoins and the Ethena USDe crisis section in our Real Yield report.) This design belongs to the same family as Ethena’s USDe, though with distinct implementation choices that would prove critical on March 22.
The minting process for USR follows a two-step pattern common in protocols that require off-chain authorization. First, a user calls requestSwap(), depositing USDC into the contract as collateral. This creates a pending mint request. Second, a privileged address — the SERVICE_ROLE — calls completeSwap() to authorize the actual minting of USR tokens. The idea is straightforward: the deposit comes from the user, but the protocol’s backend verifies the deposit amount, calculates the correct USR-to-USDC ratio, and then signs the mint transaction with the SERVICE_ROLE key.
Here is where the architecture becomes dangerous. The SERVICE_ROLE was a single Externally Owned Account (EOA) — not a multisig, not a timelock, not a governance-controlled contract. A single private key, stored in AWS Key Management Service (KMS), controlled the ability to mint any amount of USR. As Chainalysis documented in their post-mortem, “the contract enforces a minimum USR output — but critically, no maximum.”
Read that again. The smart contract would allow the SERVICE_ROLE to mint one billion USR against a single dollar of USDC, and the code would execute without error. There was no on-chain ratio check. No maximum mint cap. No oracle validation. No circuit breaker. The contract trusted the SERVICE_ROLE implicitly — whatever amount it authorized was minted, full stop.
DeFiPrime’s analysis put it bluntly: there were “no on-chain guardrails enforcing that ratio.” The entire security model depended on the assumption that the SERVICE_ROLE key would only ever be used by legitimate backend infrastructure. That assumption was about to be catastrophically wrong.
To understand why this matters, consider the difference between a protocol where minting logic lives on-chain (verified by code, immutable, transparent) and one where it lives off-chain (dependent on a single key, stored in cloud infrastructure, controlled by software that can be compromised). Resolv chose the latter. And the software controlling that key was an AI agent.
How was the AI assistant compromised? The prompt injection attack
This is where the Resolv hack departs from every previous DeFi exploit in history. There was no reentrancy bug. No flash loan manipulation. No oracle front-running. No governance attack. The attacker didn’t touch the blockchain at all — not until the very end, when they used a legitimately obtained signing key to authorize transactions that the smart contract executed flawlessly.
The attack vector was an AI assistant — a large language model (LLM) integrated into Resolv’s operational infrastructure. Like many protocols in 2025–2026 that adopted AI for internal tooling, Resolv had deployed an LLM-based assistant connected to their backend systems via Model Context Protocol (MCP).
What is Model Context Protocol?
MCP is an open standard launched by Anthropic in November 2024 that allows AI models to dynamically discover and interact with external tools during runtime. Think of it as a universal adapter: instead of hard-coding every integration an AI might need, MCP provides a standardized way for an LLM to connect to databases, APIs, file systems, and cloud services on the fly. The AI can query a Supabase database, read files from a storage bucket, call an API endpoint, or execute a database migration — all through MCP tool calls.
The power is obvious. The danger is equally obvious, though far less discussed. An MCP-connected AI operates with whatever permissions the MCP server grants it. In Resolv’s case, the AI assistant was connected to Supabase databases — and it was running with service_role privileges.
In Supabase’s security model, the service_role key bypasses all Row Level Security policies. It has full read and write access to every table, every row, every column. It is explicitly documented as a key that should never be exposed to clients or untrusted environments. Resolv gave this level of access to an AI that processed external inputs.
How prompt injection works
Prompt injection is conceptually simple but devastatingly effective. An LLM processes text — it reads inputs and generates outputs. The fundamental problem is that LLMs cannot reliably distinguish between instructions from their operator and instructions embedded in the data they process. If an attacker can get malicious text into any input the AI reads, the AI may interpret that text as a legitimate command.
Palo Alto Networks’ Unit 42 published extensive research on indirect prompt injection attacks observed in the wild. Their findings are sobering: 37.8% of attacks use visible plaintext (instructions simply written in a document the AI processes), 19.8% use HTML attribute cloaking (hiding instructions in HTML attributes that humans don’t see but AI reads), and 16.9% use CSS suppression (embedding commands in CSS that is rendered invisible to human viewers but parsed by the model). The attacks are not theoretical. They are happening in production systems, against real organizations, right now.
The Supabase MCP attack pattern had already been documented before the Resolv incident. PVML’s research described the exact scenario: a support ticket or user input containing embedded instructions is processed by an AI assistant with service_role access. The AI, unable to distinguish the malicious instructions from legitimate ones, follows the embedded commands. In PVML’s documented case, this led to exfiltration of integration tokens. Their conclusion was unambiguous: “LLMs are not designed to be security gatekeepers.”
In the Resolv attack, the attacker crafted input that the AI assistant interpreted as legitimate instructions. The exact payload has not been publicly disclosed — Resolv’s post-mortem refers to it only as “a crafted prompt injection” — but the outcome is known: the AI leaked temporary cloud credentials, including AWS access tokens that provided a foothold into Resolv’s infrastructure.
| Stage | What Happens | Why It Works |
|---|---|---|
| Data ingestion | AI processes external input (user message, support ticket, database record) | LLMs cannot distinguish instructions from data — all text is processed equally |
| Hidden instructions | Attacker embeds commands within content the AI reads | No input sanitization on MCP channel; AI treats embedded text as operator directives |
| Execution | AI follows embedded instructions as if they came from its operator | service_role grants full database access; AI has no concept of “authorized” vs “unauthorized” requests |
| Exfiltration | Credentials leak via the AI’s output channel (response text, tool call results, logs) | No output filtering, no anomaly detection, no human-in-the-loop approval for sensitive operations |
Table: Anatomy of an Indirect Prompt Injection — the four stages that enabled the Resolv credential leak.
Practical DevSecOps identified eight critical vulnerability categories in MCP servers: prompt injection, tool poisoning, over-permissioned tools, the confused deputy problem, cross-tool contamination, token theft, server impersonation, and unvalidated dynamic tool loading. The Resolv attack exploited at least three of these: prompt injection (the entry point), over-permissioned tools (service_role access), and what amounts to a confused deputy problem (the AI acted on the attacker’s behalf while believing it was following legitimate instructions).
The critical insight is that this was not a failure of the AI in some abstract sense. The AI did exactly what it was designed to do: it processed input and generated output. The failure was in the architecture that gave an AI — a system that processes untrusted inputs by definition — access to credentials that could ultimately control a $100M+ protocol. This is what Ledger CTO Charles Guillemet calls the “lethal trifecta”: untrusted input, powerful execution capabilities, and network access for exfiltration. Any system that combines all three is, in Guillemet’s words, “primed to fail under pressure.”
From cloud credentials to AWS KMS: the lateral movement
With temporary AWS credentials in hand, the attacker shifted from AI exploitation to a classic cloud infrastructure attack pattern: lateral movement. This is a technique familiar to every cloud security team but almost entirely absent from the DeFi security playbook, which has historically focused on smart contract audits and on-chain monitoring.
The attacker did not go directly to the blockchain. There was no need to. The goal was the signing key, and the signing key lived in the cloud.
The first step was to enumerate the AWS environment using the temporary credentials leaked by the AI. AWS Identity and Access Management (IAM) policies determined what those credentials could access. In a well-architected environment with least-privilege IAM, temporary credentials from an AI assistant would be scoped narrowly — perhaps read-only access to specific database tables and nothing else. In Resolv’s environment, the credentials provided sufficient access to move deeper.
The attacker navigated from the initial IAM foothold to AWS Key Management Service (KMS), where Resolv stored the SERVICE_ROLE private key. KMS is Amazon’s managed service for storing and using cryptographic keys — it is designed to be a secure vault for exactly this kind of sensitive material. But KMS security depends entirely on IAM policies controlling who can access the keys. If an attacker has valid AWS credentials with sufficient permissions, KMS will serve them the key just as willingly as it serves the legitimate application.
Chainalysis confirmed the path: the attacker “accessed Resolv’s AWS Key Management Service environment where the protocol’s privileged signing key was stored.” Once inside KMS, the attacker had the SERVICE_ROLE private key — the single cryptographic secret that authorized all USR minting operations.
The entire lateral movement phase — from AI-leaked credentials to KMS access — likely took minutes. Cloud infrastructure attacks move fast when the attacker has valid credentials. There are no exploit chains to develop, no zero-days to discover. The attacker simply logs in with the stolen keys and navigates the environment like a legitimate administrator. This is why credential theft is consistently the most common initial access technique in cloud breaches, according to every major cloud security report from AWS, Google Cloud, and Microsoft Azure threat intelligence teams.
It is worth pausing to appreciate the irony. Resolv stored their most critical secret — the key that controls hundreds of millions of dollars in minting authority — in AWS KMS, one of the most secure key management services available. The security of KMS itself was never in question. The vulnerability was in the path to KMS: an AI agent, running with excessive permissions, processing untrusted inputs, connected to the same cloud environment where the signing key resided. The chain was only as strong as its weakest link, and the weakest link was an LLM that could not distinguish a legitimate instruction from a malicious one.
| Step | Target | Method | Outcome |
|---|---|---|---|
| 1 | AI Assistant | Prompt injection via crafted input | Temporary cloud credentials leaked |
| 2 | AWS IAM | Credential reuse from leaked tokens | Access to internal AWS services |
| 3 | AWS KMS | Lateral movement to key management | Control of SERVICE_ROLE signing key |
| 4 | Mint Contract | completeSwap() with compromised key | 80M USR minted against $200K collateral |
| 5 | DEX Pools | Market liquidation across Curve, KyberSwap, Velodrome | ~11,400 ETH (~$24M) extracted |
Table: Attack Chain — From Prompt to $23M. Five steps, under 30 minutes, zero smart contract exploits.
The on-chain execution: 80 million tokens from thin air
With the SERVICE_ROLE key compromised, the attacker finally touched the blockchain — and the smart contract did exactly what it was designed to do.
Transaction 1: The attacker deposited approximately $100,000 in USDC into the mint contract via requestSwap(), then immediately called completeSwap() with the stolen SERVICE_ROLE key, authorizing the minting of 50 million USR. That is a 500x over-mint — fifty million dollars’ worth of stablecoin created from one hundred thousand dollars of collateral. The contract accepted the transaction without hesitation. There was no ratio check to reject it, no maximum cap to limit it, no oracle to verify whether the minting amount bore any relationship to the deposit.
Transaction 2: Shortly after, the attacker repeated the process, minting an additional 30 million USR. Same method, same key, same absence of on-chain guardrails.
In total: 80 million USR created against roughly $200,000 in actual USDC deposits. A 400–500x dilution of the token supply. Every existing USR holder’s tokens were instantly backed by a fraction of what they had been worth minutes earlier.
The liquidation strategy was methodical. Rather than dumping 80M USR directly onto a single DEX — which would have triggered immediate circuit breakers and maximal slippage — the attacker first converted a significant portion of USR into wstUSR (wrapped staked USR). This wrapping step served a tactical purpose: wstUSR was accepted by different liquidity pools than raw USR, allowing the attacker to access multiple liquidation venues simultaneously and reduce the price impact on any single pool.
The attacker then distributed sales across three major decentralized exchanges:
- Curve Finance: The primary liquidation venue for stablecoin pairs. The USR/USDC and USR/USDT pools absorbed the initial selling pressure before the peg collapsed.
- KyberSwap: Used for additional USDC/USDT conversion. KyberSwap blocked the exploiter’s wallets within hours of the attack, but by then the damage was done.
- Velodrome: Targeted for its deep liquidity on Optimism, providing another exit route for the stolen funds.
The result was swift and devastating. On Curve, USR crashed from $1.00 to $0.025 in approximately 17 minutes — a 97.5% collapse. The Block reported the attacker extracted roughly $25 million, while CoinDesk placed the figure at approximately $23 million. The discrepancy reflects the difficulty of tracking exact amounts across multiple DEXs and chains in real time.
The final conversion was from stablecoins to ETH. The attacker consolidated proceeds into approximately 11,400 ETH — worth roughly $24 million at the time of the exploit — and began moving funds through intermediary addresses. The attacker’s primary wallet, 0x8ed8cf0c1c531c1b20848e78f1cb32fa5b99b81c, was quickly flagged by security firms and DEX operators.
The entire on-chain phase — from first mint to final ETH consolidation — took less than 30 minutes. For context, that is faster than most multisig signers can coordinate an emergency response, faster than most protocol teams can even confirm an exploit is happening, and faster than any governance-based pause mechanism could react. The speed advantage belonged entirely to the attacker.
The on-chain forensics tell a story of a smart contract doing exactly what it was told by the authorized key. Every transaction was valid. Every signature was legitimate. The contract had no way to know — and no mechanism to check — that the key signing these transactions had been stolen from a compromised AI agent’s cloud environment. From the blockchain’s perspective, this was not a hack at all. It was a series of perfectly authorized operations.
What the security firms found
The Resolv hack triggered an immediate response from the blockchain security ecosystem. Within hours, multiple firms had published their initial assessments. What emerged was not just a post-mortem of a single exploit but a recognition that the entire threat model for DeFi protocols had just expanded.
Cyvers: first detection
Cyvers, a real-time Web3 security monitoring platform, was among the first to detect the anomalous activity. Their systems flagged the creation of 80M USR tokens as a critical anomaly — the minting volume was orders of magnitude beyond any previous completeSwap() execution. Cyvers’ alert triggered the broader security community’s investigation and helped exchanges and DEXs begin blocking the exploiter’s addresses.
PeckShield: quantifying the damage
PeckShield, one of the most established blockchain security firms, estimated the total artificially generated USR at $80 million face value. Their on-chain analysis confirmed the two-transaction minting pattern and traced the fund flows across Curve, KyberSwap, and Velodrome. PeckShield’s real-time tracking was instrumental in helping exchanges freeze the exploiter’s remaining assets before they could be fully laundered.
Chainalysis: the definitive post-mortem
Chainalysis published the most comprehensive analysis under the title “How One Compromised Key Printed $23 Million.” Their investigation confirmed the AWS KMS vector and provided the clearest description of the attack chain: prompt injection → credential theft → lateral movement to KMS → SERVICE_ROLE compromise → unauthorized minting.
Chainalysis’s key recommendations included:
- Monitor completeSwap() ratio anomalies: Any mint where the USR output exceeds the USDC input by more than a small tolerance (accounting for delta-neutral positioning) should trigger an immediate alert.
- Automated pausing on suspicious Mint events: If the minting volume in a single transaction or short time window exceeds historical norms by 10x or more, the contract should automatically pause and require multisig intervention to resume.
- Cloud infrastructure monitoring: KMS access logs should be monitored in real time. Any access from an unrecognized IP, role, or session should trigger immediate key rotation.
- AI agent isolation: AI agents should never share cloud credentials with systems that control signing keys. The AI environment and the key management environment should be completely separate AWS accounts with no cross-account access.
Charles Guillemet (Ledger): the structural warning
Perhaps the most significant response came not from a blockchain security firm but from Charles Guillemet, CTO of Ledger, who published “Agentic AI Is Loose. Your Security Model Is Not Ready.” Guillemet’s analysis went beyond the specifics of the Resolv hack to describe a systemic problem with how AI agents are being deployed in crypto infrastructure.
Guillemet described the “lethal trifecta” that makes AI agents inherently dangerous in high-value environments:
- Untrusted input: AI agents process data from external sources — users, websites, databases, APIs — any of which can contain prompt injection payloads.
- Powerful execution capabilities: MCP and similar tool-use protocols give AI agents the ability to execute real actions — database queries, API calls, file operations, and potentially transaction signing.
- Network access for exfiltration: AI agents can communicate outward — through their response channel, through tool calls, through logging — meaning any leaked information can reach the attacker.
Guillemet’s proposed solution is architecturally radical: “Agents Propose, Humans Sign.” In this model, an AI agent can analyze data, suggest transactions, and prepare operations — but the final authorization must come from a human, verified through hardware-enforced separation (such as a Ledger hardware wallet). The AI can never autonomously execute a high-value action. It can only recommend.
His conclusion carries particular weight given Ledger’s position as the leading hardware wallet manufacturer: “If your architecture cannot clearly separate what an agent suggests from what a human authorizes, then it is primed to fail under pressure.”
The collective message from the security community was clear: the Resolv hack was not an isolated incident but a harbinger. Any protocol using AI agents with elevated privileges — particularly those with access to signing keys, cloud credentials, or service_role database access — must immediately implement prompt injection defenses, input sanitization, and the principle of least privilege for all AI-connected systems.
The blockchain was never the weak link
This is the part that should unsettle every DeFi builder, investor, and security researcher. The smart contract code worked perfectly. It did exactly what it was designed to do: when the SERVICE_ROLE key signed a completeSwap() transaction, the contract minted the specified amount of USR. The code was not buggy. It was not vulnerable to reentrancy. It had no unchecked external calls. No storage collision. No integer overflow. By every standard measure of smart contract security, the Resolv mint contract was technically sound.
The vulnerability existed entirely in the centralized infrastructure stack: AI agent → MCP → Supabase → AWS IAM → AWS KMS. Every component in this chain is off-chain. Every component is centralized. Every component is exactly the kind of trusted intermediary that blockchain technology was invented to eliminate.
This inverts the traditional DeFi security model entirely. For the past six years, the overwhelming majority of DeFi exploits have targeted the blockchain layer: reentrancy attacks (The DAO, 2016), flash loan oracle manipulation (bZx, 2020; Cream Finance, 2021), governance attacks (Beanstalk, 2022), bridge vulnerabilities (Ronin, Wormhole, Nomad, 2022), and token approval exploits. The standard response to these attacks has been better audits, formal verification, bug bounties, and on-chain monitoring.
None of those defenses would have prevented the Resolv hack. You could have audited the mint contract a hundred times with the best firms in the world and found nothing, because there was nothing to find. The contract was secure. The AI agent controlling it was not.
Andrew Whong, co-founder of Herd, summarized the architectural failure: “The mint contract had no oracle or max mint checks.” But even that understates the problem. Oracle checks and max mint caps would have helped — they would have limited the blast radius — but the fundamental issue is that a single key controlled by a compromisable AI agent had unlimited minting authority. The solution is not just adding on-chain checks. It is rethinking the entire relationship between AI infrastructure and on-chain operations.
DeFiPrime called it “a case of overly trusting off-chain infrastructure” — a phrase that could serve as the epitaph for an entire generation of DeFi protocols that bolted AI agents onto their operations without considering the security implications.
The paradox is painful. DeFi’s core promise is trustless, permissionless, decentralized finance. No single point of failure. No trusted intermediary. Code is law. And yet here was Resolv — a protocol managing hundreds of millions of dollars — where the minting function was controlled by a single centralized key, managed by an AI agent, stored in a cloud provider’s infrastructure, accessible through a chain of credentials that could be leaked by tricking a chatbot. The blockchain was the most secure component in the entire stack. It was the off-chain infrastructure — the AI, the cloud, the key management — that failed.
| Traditional DeFi Hack | AI Infrastructure Hack (Resolv) | |
|---|---|---|
| Attack surface | Smart contract code | Off-chain AI/cloud stack |
| Vector | Reentrancy, oracle manipulation, flash loans | Prompt injection, credential theft, lateral movement |
| Defense | Code audits, formal verification, bug bounties | Input sanitization, least privilege, MCP hardening, human-in-the-loop |
| Detection | On-chain monitoring, transaction simulation | Cloud audit logs + on-chain anomaly detection |
| Blockchain role | The vulnerability | The victim |
Table: Traditional DeFi Hack vs AI Infrastructure Hack — the Resolv exploit represents a paradigm shift in where DeFi vulnerabilities live.
For users trying to assess their own exposure to these risks, tools like CleanSky can help monitor token approvals and positions across protocols — but the deeper lesson is that on-chain monitoring alone is no longer sufficient. The attack surface now extends into the cloud infrastructure, AI agents, and key management systems that protocols use behind the scenes. Understanding risk in DeFi now means understanding the entire stack, not just the smart contracts.
Why this probably won’t be the last AI-infrastructure attack
If the Resolv hack were an isolated incident — a one-off failure by a single protocol that made uniquely bad architectural decisions — it would be concerning but containable. The problem is that every trend in the industry points toward more AI agents, more MCP connections, more automated operations, and more protocols making exactly the same mistakes Resolv made.
AI agents are already capable attackers
Anthropic’s own research, published in December 2025, showed that AI agents could already exploit more than 50% of real-world vulnerable smart contracts on the SCONE-bench benchmark — a dataset of 405 contracts representing $550M in simulated value. This research was about AI attacking smart contracts, but the same capabilities apply in reverse: AI agents are sophisticated enough to be valuable tools for both defenders and attackers.
Separately, researchers demonstrated that frontier models including GPT-5 and Sonnet 4.5 found two zero-day flaws across 2,849 BNB Chain contracts at a compute cost of just $3,476. The economics of AI-powered vulnerability discovery have tilted decisively in favor of attackers: the cost to find exploitable bugs is now trivial compared to the potential payoff.
The numbers are accelerating
Hacken’s 2025 annual report documented a 1,025% increase in AI-related exploits compared to 2024. That is not a typo — a ten-fold increase in a single year. The category includes AI-powered phishing, AI-assisted vulnerability discovery, deepfake-enhanced social engineering, and now prompt injection attacks on protocol infrastructure. Cecuro’s research found that frontier AI agents can execute end-to-end exploits on 72% of known vulnerable contracts, up from roughly 20% in early 2024.
The 2025 crypto security landscape was already sobering before Resolv: $4.65 billion in total losses, the Bybit multisig compromise, a wave of access control failures. The Resolv hack adds a new category to an already expanding threat surface.
MCP adoption is accelerating without security parity
Model Context Protocol adoption exploded throughout 2025 and into 2026. Thousands of MCP servers now exist for databases (Supabase, PostgreSQL, MongoDB), cloud providers (AWS, GCP, Azure), development tools (GitHub, GitLab, Jira), communication platforms (Slack, Discord, email), and financial services (banking APIs, payment processors, blockchain nodes). Every MCP server is a potential prompt injection target.
The Supabase MCP incident documented by PVML was the warning shot. Supabase themselves published “Defense in Depth for MCP” in response, recommending that MCP servers never use service_role keys and instead use scoped, per-user authentication. But adoption of these recommendations has been uneven at best. Many protocol teams deployed MCP integrations during the AI hype wave of 2024–2025 without implementing basic security controls.
The Resolv hack was the first real-money casualty. It will not be the last. The combination of expanding AI agent deployment, increasing MCP adoption, and the enormous financial incentives for attackers creates a threat landscape that is growing faster than defenses are being deployed. Every protocol using AI agents for operational tasks — risk management, trading, customer support, monitoring, governance participation — should consider the Resolv attack a direct warning.
The structural problem
The deeper issue is not any single vulnerability but a structural mismatch between AI capabilities and AI security. LLMs are extremely powerful tools for processing information, generating outputs, and interacting with external systems. They are also fundamentally unable to reliably distinguish trusted instructions from untrusted inputs. This is not a bug that will be patched in the next model release — it is an inherent property of how transformer-based language models process text.
Every MCP connection to an AI agent creates a new attack surface. Every piece of data the AI processes is a potential prompt injection vector. Every tool the AI can call is a capability the attacker inherits. And every credential the AI can access is a credential the attacker can steal. The more powerful and connected AI agents become, the more valuable they become as attack targets. This is the security paradox of agentic AI: the features that make agents useful are the same features that make them dangerous.
What protocols should do now
The Resolv hack provides a clear blueprint for the defensive measures that every DeFi protocol using AI agents must implement. These are not optional best practices — they are minimum requirements for any protocol that does not want to be the next $23M headline.
1. Separate signing from AI — permanently
This is the single most important takeaway. AI agents must never hold, access, or be in the same credential path as signing keys. The “Agents Propose, Humans Sign” architecture advocated by Guillemet should be the default for every protocol. An AI can analyze a pending swap request and recommend a minting amount. It can prepare the transaction parameters. It can even submit the transaction to a queue. But the actual signing — the cryptographic authorization that makes the transaction executable — must require human approval through a hardware-enforced boundary.
In practice, this means: the AWS account where AI agents run must have zero access to the AWS account (or HSM, or multisig) where signing keys reside. No IAM role, no cross-account trust, no shared credentials. Complete architectural separation.
2. Harden every MCP connection
Every MCP server that connects to an AI agent should implement:
- Input sanitization: Strip or escape potential prompt injection patterns from all data before it reaches the AI.
- Output filtering: Monitor AI outputs for credential patterns (API keys, tokens, connection strings) and block them before they leave the system.
- Role-based access: Use the minimum privilege level required. Never connect an AI to a database with service_role access. Use scoped, read-only tokens wherever possible.
- Tool allowlisting: Explicitly define which MCP tools the AI can call. Block all tools that are not on the allowlist. Prevent the AI from discovering or invoking new tools at runtime.
- Rate limiting: Limit the frequency and volume of tool calls an AI can make within a time window to prevent rapid exfiltration.
3. Implement on-chain guardrails
Even if the off-chain infrastructure is compromised, on-chain safety mechanisms can limit the damage. The Resolv contract had none. Every minting or high-value operation contract should include:
- Maximum mint caps: A per-transaction and per-hour ceiling on the amount that can be minted. If Resolv had capped minting at, say, 2x the deposit value, the attacker could have stolen $400K instead of $23M.
- Oracle price checks: Before minting, verify the deposit-to-mint ratio against an external oracle. If the ratio deviates beyond a threshold, revert the transaction.
- Time-locks: Large minting operations should require a mandatory delay (e.g., 15 minutes to 24 hours) during which the pending mint can be reviewed and canceled.
- Anomaly-triggered pauses: If minting volume exceeds historical norms by a large factor (10x, 50x), the contract should automatically pause and require multisig intervention to resume.
4. Multisig for all critical operations
The SERVICE_ROLE should never have been a single EOA. Critical protocol operations — minting, burning, parameter changes, upgrades, emergency pauses — should require multiple independent signers. A 3-of-5 or 4-of-7 multisig using geographically distributed signers with hardware wallets would have made the Resolv attack virtually impossible, even with the SERVICE_ROLE key compromised, because the attacker would have needed to compromise multiple independent signers simultaneously.
5. Cloud security fundamentals
The Resolv attack exploited basic cloud security failures that would have been caught by a standard cloud security review:
- Least privilege IAM: Temporary credentials issued to AI agents should be scoped to the absolute minimum required permissions. An AI assistant that queries database tables does not need access to KMS.
- No long-lived credentials: AI agents should use short-lived, automatically rotating credentials. If the Resolv AI’s credentials had expired within minutes rather than remaining valid long enough for lateral movement, the attack would have been significantly harder.
- KMS access logging and alerting: Every access to a KMS key should generate an alert. Unusual access patterns (new IP, new role, new time of day) should trigger immediate investigation.
- Network segmentation: The AI agent environment and the key management environment should be in separate VPCs (Virtual Private Clouds) with no network path between them.
6. Deploy prompt injection defenses
While no defense is 100% effective against prompt injection (it remains an open research problem), several techniques significantly raise the bar for attackers:
- Spotlighting: A technique that marks the boundary between trusted instructions and untrusted data using special delimiters that the AI is trained to respect. The system prompt explicitly instructs the AI to treat content within untrusted delimiters as data only, never as instructions.
- Instruction hierarchy: Configuring the AI so that system-level instructions always override any instructions found in user inputs or external data. This is imperfect but raises the difficulty of successful injection.
- Adversarial testing: Regularly red-team your AI integrations with prompt injection attempts. If your security team cannot extract credentials through prompt injection, that does not mean an attacker cannot — but it is a necessary minimum.
- Output classifiers: Use a separate, simpler AI model to scan the primary agent’s outputs for potential credential leaks, anomalous tool calls, or signs of instruction following from untrusted inputs.
7. Monitor the full stack
On-chain monitoring is necessary but no longer sufficient. Protocols must now monitor:
- Cloud audit logs: AWS CloudTrail, GCP Cloud Audit, Azure Activity Logs — every API call in the cloud environment should be logged and analyzed.
- AI agent behavior: Log every tool call, every input, every output. Establish baselines for normal behavior and alert on deviations.
- MCP tool invocations: Track which MCP tools are being called, how frequently, and with what parameters. A sudden spike in credential-related tool calls should trigger immediate investigation.
- On-chain anomalies: Continue monitoring for unusual minting, large transfers, and price deviations — but recognize that by the time these appear on-chain, the off-chain compromise may already be complete.
The bigger picture: when the machine is the vulnerability
The Resolv USR hack is not just a story about one protocol that made bad architectural decisions. It is a preview of a new category of risk that the entire crypto industry must contend with as AI integration accelerates.
For the past decade, the crypto security narrative has been dominated by smart contract bugs. The DAO hack in 2016 taught us about reentrancy. The Parity wallet freeze taught us about library contracts. The Wormhole bridge hack taught us about cross-chain verification. Each exploit advanced the industry’s understanding of on-chain security, leading to better auditing tools, formal verification, and bug bounties. The smart contract security industry is now mature, well-funded, and genuinely effective at finding code-level vulnerabilities.
But the Resolv hack bypassed all of that. The attacker did not need to find a code bug because the code was irrelevant. The attacker needed to find a way to obtain a signing key, and the path ran through an AI chatbot, a cloud database, and a key management service — all off-chain, all centralized, all invisible to the on-chain monitoring systems that the industry has spent billions building.
This is not a problem that better smart contract audits can solve. It is not a problem that formal verification can address. It requires an entirely new security discipline that combines cloud security, AI safety, prompt injection defense, key management best practices, and traditional DeFi monitoring into a unified framework. No such framework exists today in a standardized, widely adopted form. The Resolv hack is the wake-up call.
The industry now faces a choice. It can treat the Resolv hack as an anomaly — a single team’s mistake, unlikely to be repeated — and continue deploying AI agents with elevated privileges and minimal security controls. Or it can recognize that the combination of agentic AI, MCP tool access, and high-value on-chain operations creates a fundamentally new threat surface that demands fundamentally new defenses.
The smart money is on the second option. But history suggests that many protocols will choose the first — and we will be writing about the next Resolv-style exploit before the year is out.
Staying safe in crypto has always required understanding where the risks actually live. After March 22, 2026, those risks include the AI agents that protocols trust to run their infrastructure.
Track what matters. CleanSky monitors your DeFi portfolio across 484+ protocols and 34+ networks. Paste any address — see positions, risk scores, and token approvals. No sign-up, no wallet connection.
Editorial independence. CleanSky is an independent project. This article contains no affiliate links or sponsored content. Read our editorial policy.