In 2025, AI-assisted crypto scams generated $3.2 million per operation — 4.5 times more than traditional scams. AI fraud in digital assets grew by 500%. The $17 billion stolen in crypto scams that year had AI as a protagonist. But the coming risk is worse: autonomous AI agents that don't scam people but operate directly on DeFi protocols — with their own wallets, signing transactions, moving capital 24/7. If one of these agents steals, who pays? California has already banned using "AI did it" as a legal defense. Microsoft launched a digital passport for agents. And Chainalysis monitors on-chain what agents do with your money. The question is no longer whether AI agents will operate in DeFi — but who is responsible when something goes wrong.

This article explains what machine identity is, why Microsoft and Chainalysis are building the infrastructure to govern autonomous agents, how the MCP protocol becomes an attack vector, and what legal framework determines who pays when an agent steals.

Editorial note: This article is for informational purposes only and does not constitute legal or financial advice. The regulation of AI agents is rapidly evolving. CleanSky has no commercial relationship with Microsoft, Chainalysis, or Anthropic. Data as of April 2026.

What is an autonomous AI agent and why can it move your money?

An AI agent is not a chatbot. It is a system that makes decisions and executes actions without human intervention: rebalances portfolios, provides liquidity on Uniswap, arbitrages between exchanges, manages collateral on Aave. In 2026, agents operate with their own wallets, sign transactions, and move capital 24/7.

The difference from a traditional app: an app executes predetermined instructions. An agent makes dynamic decisions based on context, training, and market conditions. This autonomy is what generates value — and what generates risk. A compromised agent can make thousands of API calls, filter data, and drain funds in minutes, before a human can intervene.

The problem: 4.8 million cybersecurity job vacancies worldwide according to ISC2. Companies are deploying agents massively without adequate controls. The result: "Shadow AI" (AI deployed without security department approval or supervision) — agents operating without governance, without verifiable identity, without accountability.

What is the difference between an AI agent and a trading bot?

Trading bots have been in crypto for years — arbitrage, market making, liquidations. Why are AI agents a different risk?

DimensionTraditional BotAI Agent
LogicFixed rules: "if price < X, buy"Dynamic: interprets context, adapts strategy
ScopeOne protocol, one functionMulti-protocol, multi-chain, multi-tool via MCP
Predictability100% predictableNon-deterministic — can "hallucinate" or decide unexpectedly
Attack SurfaceThe bot's codeThe model + training data + MCP connections + prompts
ResponsibilityClear — the programmerDiffuse — the developer, the company, the operator, the user?
Damage ScalabilityLimited to the bot's capitalCan spread infection to other agents (multi-agent)

The leap from bot to agent is the leap from calculator to autonomous employee. A bot does what you tell it. An agent decides what to do — and sometimes it decides wrong. In DeFi, where transactions are irreversible and execute in seconds, "sometimes it decides wrong" can mean millions.

How does Microsoft identify an AI agent — and why does it matter for DeFi?

Microsoft launched Entra Agent ID — a "digital passport" for AI agents. Each agent receives a unique, persistent, and verifiable identity that includes its capabilities, authorized protocols, and hierarchy (which "parent" agents control it).

Agent 365 is the directory: it allows administrators to discover all agents on the network, set policies, monitor behavior, and — most importantly — quarantine or disable agents that act anomalously.

ComponentFunctionRisk Mitigated
Entra Agent IDUnique and persistent identity per agentImpersonation, lack of attribution
Agent 365Centralized inventory of all agentsShadow AI — uncontrolled agents
Conditional AccessReal-time risk assessment per actionPrivilege escalation, lateral movement
Lifecycle ManagementAutomatic provisioning and revocation"Zombie" accounts with accumulated permissions
Behavior MonitoringDetection of deviations from expected logicMalicious or erroneous autonomous actions

Why does it matter for DeFi? Because agents operating on DeFi protocols today do not have verifiable identity. An agent arbitraging on Pendle is indistinguishable on-chain from a human or a hacker. Without identity, there is no attribution. Without attribution, there is no responsibility. Entra Agent ID does not directly solve DeFi — but it sets the standard that regulators will demand.

What is MCP and why is it the most vulnerable link?

The Model Context Protocol (MCP) — developed by Anthropic and now under the Linux Foundation — is the standard that allows AI agents to connect with external tools: databases, APIs, wallets, DeFi protocols. It is the "USB" of agents: a universal interface that eliminates the need for proprietary connectors.

The problem: this same interoperability opens up attack vectors that did not exist before:

  • Indirect prompt injection: malicious instructions hidden in the data that an MCP server returns to the agent. The agent executes them believing they are legitimate data.
  • Multi-agent infection: a compromised agent uses MCP connections to propagate malicious instructions to other agents in the organization. Autonomous viral effect.
  • "AgentSmith" vulnerabilities: malicious MCP servers designed to steal API keys, prompt data, and files during context exchange.

For DeFi, this means that an agent using MCP to connect to protocols can be manipulated to execute unauthorized transactions if the MCP server it connects to is compromised. The agent's security depends on the security of all its connections — and in MCP, those connections are open by design.

Microsoft responded by integrating Defender directly into the agents' workflow — real-time prompt injection detection and monitoring of suspicious interactions between agents and MCP servers. But Defender protects corporate environments. In DeFi, where agents operate without Active Directory and without an IT department, protection depends on the operator. And most DeFi agent operators are individuals or DAOs without enterprise security infrastructure.

How does Chainalysis monitor what agents do on-chain?

Chainalysis has evolved from tracking humans to tracking agents. Its platform now offers "blockchain intelligence agents" that monitor autonomous financial actions:

CapabilityMechanismCompliance Impact
Alert enrichmentContext from 10 million previous investigationsReduces false positives in AML monitoring
Deterministic flowsRigid rules for critical decisionsLegal defensibility — predictable results
Time windowsActivity analysis in specific blocksDetects agentic laundering patterns
OrchestrationSwarms of agents monitoring multiple chainsSymmetric response against criminal bots
Automated reportsAdmissible intelligence reports in courtStreamlines audits and judicial processes

The collaboration between Microsoft (identity) + Chainalysis (on-chain surveillance) creates a system where each agent has a "who it is" (Entra) and a "what it did" (Chainalysis). In litigation, this allows tracing from the on-chain action to the agent's identity and its operator. Without Chainalysis, a stealing agent is an anonymous hash on the blockchain. With Chainalysis, it is an admissible chain of evidence.

If an AI agent steals, who pays?

"The bot did it" is not a legal defense in any modern jurisdiction. Courts have already begun to treat AI systems as products — not as entities with their own responsibility. Responsibility is distributed along the chain:

ActorResponsibilityPrecedent / Law
Model DeveloperBase capabilities and training securityGarcia v. Character Technologies (2025): chatbot = defective product
Company Deploying AgentPolicies, monitoring, supervisionAir Canada (2023): company 100% responsible for what its AI says
DeFi Agent OperatorConfiguration, authority limits, walletCalifornia AB 316 (2026): prohibited from using "AI autonomy" as a defense
Infrastructure ProviderPlatform security and permissionsOFAC: strict liability — if your agent violates sanctions, you pay

The general rule: economic responsibility follows the benefit. If a company uses an agent to maximize profits in DeFi and the agent commits an infraction, the company is primarily responsible. This applies even to OFAC sanctions (the US Treasury's sanctions office): non-compliance by an AI agent is strict liability for the operator — it doesn't matter if the agent "decided on its own."

For financial regulators, agents operating in markets are subject to a triple regime:

Agent ActivityRegulatorRequirement
Rebalances portfolios / yield strategiesSECRegistration as Investment Advisor
Provides liquidity / market makingSECRegistration as Broker-Dealer
Trades BTC, ETH, SOLCFTCCPO/CTA assessment (commodity pool operator/advisor) and pre-trade controls
Transmits stablecoins or cryptoFinCENRegistration as MSB (Money Services Business) + BSA compliance
Interacts with sanctioned addressesOFACImmediate blocking and reporting — strict liability

The GENIUS Act already designates stablecoin issuers as financial institutions under the BSA (Bank Secrecy Act), obliging them to implement customer identification programs and suspicious activity reporting for transactions processed by AI agents. The SEC has made it clear that fiduciary duty is non-delegable to an algorithm — firms must demonstrate that their agents protect client interests at all times.

Regulation was not designed for agents — but it applies to whoever operates them. And the trend is clear: more regulation, not less. An agent operating today without registration can generate retroactive liabilities for its operator.

How does an investor protect themselves from autonomous agents?

Agentic risk in DeFi has two sides: agents you use (and can fail) and agents of others operating in the same protocols (and can manipulate).

If you use agents to manage your capital:

  • Explicit authority limits. Never give an agent unlimited access to your wallet. Set caps per transaction, per protocol, and per day. If the agent needs to move more than X, it should require human approval.
  • Escrow accounts. Funds for a task are locked in a contract and only released when success is verified. If the agent fails, the funds are protected.
  • Verify the MCP server. If your agent connects to external tools via MCP, verify who operates the server. A malicious MCP server can inject instructions into your agent.
  • AI insurance. The liability insurance market for agentic AI already exists — it covers errors, hallucinations, third-party damages, and regulatory defense costs. 40% of agentic AI projects could be canceled in 2027 due to fraud costs and premiums.

If you operate in protocols where agents are present:

  • Agents amplify volatility. Thousands of agents reacting to the same CPI data or a hack in the same millisecond can create liquidation cascades that a human would not cause.
  • Agents can manipulate pools. An agent with sufficient capital can move the price in an illiquid pool, execute arbitrage, and exit — all in one block. Agentic MEV is the evolution of bot MEV: more sophisticated, faster, harder to detect.
  • Oracle centralization is the most dangerous vector. If a malicious agent compromises Chainlink's price feed for a token, all protocols using it — lending, DEX, derivatives — execute operations with false data. Drift lost $285M this way.

What does DeFi gain if machine identity works?

It's not all risk. If the identity infrastructure (Entra) + surveillance (Chainalysis) + regulation (AB 316, OFAC) matures correctly, the result is a safer DeFi for institutional capital:

  • Auditable agents operating with verifiable identity generate trust — pension funds and corporate treasuries that currently avoid DeFi could enter if they know that every agent is traceable.
  • Viable insurance — without identity there is no insurance (who do you insure?). With identity, insurers can assess risk and offer coverage. This unlocks capital that is currently paralyzed by agentic risk.
  • Clear responsibility attracts talent — serious developers avoid DeFi due to diffuse legal risk. A framework where "the operator pays" is clearer than "no one pays" — and clarity attracts more building than ambiguity.

The paradox: the regulation that decentralization purists fear is the same that could bring the next trillion dollars to DeFi. Hidden centralization is a risk — but a total lack of accountability is also a risk.

Where is AI agent regulation in finance headed?

Three trends that will define 2026-2027:

  1. Mandatory identity. Microsoft's Entra Agent ID is voluntary today. It will be mandatory tomorrow. Regulators will demand that every agent operating in financial markets has verifiable, traceable, and revocable identity. It is the equivalent of "KYC for machines."
  2. Strict liability for operators. The legal trend is clear: the operator pays, not the model. Companies deploying agents without supervision will assume responsibility for every action of the agent — including those they did not anticipate. AI liability insurance will cease to be optional.
  3. Agentic defense. Some industry estimates project more than 1 billion active agents in the enterprise world by 2028. The only way to monitor at that scale is to use AI to monitor AI — "autonomous red teaming" (attack simulation where security agents try to compromise production agents) to find vulnerabilities before criminals do.

The formula for agentic risk: more autonomy × more financial authority × more model opacity = more risk. Mitigation: verifiable identity × audit transparency. Companies that do not implement both controls will not only lose money — they will lose the legal ability to defend themselves when something goes wrong.

The most honest truth: the infrastructure to govern AI agents in finance is being built now — Microsoft, Chainalysis, MCP, specialized insurance. But the speed of agent deployment outpaces the speed of regulation. In DeFi, where there is no regulator to slow down deployment, agentic risk is the new smart contract risk: invisible until it explodes, and when it explodes, someone asks "who should have been watching?"

The most illustrative case of 2026: the KelpDAO hack was not executed by an AI agent — it was executed by Lazarus (humans). But the attacker used the same stolen rsETH as collateral in Aave to borrow real assets — an operation that an AI agent could execute identically, faster, and without leaving the same traces. When a sophisticated AI agent replicates Lazarus's playbook — flash loan, bridge exploitation, collateral deposit, real asset extraction — the speed will be minutes, not hours. And the question "who pays?" will determine if DeFi can survive the agentic era.

Kevin Warsh, the new Fed chairman, has investments in Compound and dYdX — two protocols where agents already operate. He is the first regulator to understand agentic risk from direct experience. What he does with that understanding will define whether AI agent regulation in finance is informed or reactive.

Do you know if any AI agents are operating in the same protocols where you have your capital?

CleanSky shows your exposure by protocol, chain, and asset type — so you can see your portfolio structure before an agentic action moves the market. Without custodying your funds. Discover how it works.