When news broke on April 18, 2026, that KelpDAO had lost $292 million in the largest DeFi exploit of 2026, the first instinct of most security researchers was to look for the bug in the code. There wasn’t one.

KelpDAO’s smart contracts were not broken. No vulnerability was discovered in the rsETH token logic. No cryptographic primitive was compromised. The contracts performed exactly as written. Aave, the lending protocol that sustained the most significant downstream impact, made the same statement: its own contracts had not been exploited.

What failed was a single deployment decision made long before any of this happened: a configuration choice that sat outside the perimeter of every code review and every audit the project had ever run.

The Contracts Were Fine!

This is the distinction the industry rarely talks about: the difference between code risk and operational risk. The KelpDAO incident is a textbook demonstration of why conflating the two is so dangerous, and why addressing only one leaves protocols fundamentally exposed.

What Happened (The Short Version)

KelpDAO’s rsETH bridge relied on LayerZero’s messaging infrastructure to move assets across chains. LayerZero’s security model centers on a Decentralized Verifier Network (DVN), a configurable set of independent verifiers that validate cross-chain messages before they are accepted as legitimate.

LayerZero’s own documentation recommends configuring at least two independent DVNs to avoid a single point of failure. KelpDAO’s rsETH bridge opted for the default configuration of a single DVN operated by LayerZero Labs.

Attackers exploited that single point of failure with precision. They compromised two RPC nodes that LayerZero’s verifier relied on to confirm transactions, replacing the node software with malicious versions capable of selectively forging responses. The malicious nodes returned fraudulent data only to the DVN, while continuing to return truthful responses to all other requesters, including LayerZero’s own monitoring services. Nodes the attackers could not compromise were taken down via DDoS, forcing the DVN’s traffic entirely through the poisoned infrastructure.

With the sole verifier now operating through infrastructure they controlled, the attackers injected a synthetic cross-chain message (one with no corresponding source transaction on Unichain at all) falsely claiming that 116,500 rsETH had been locked on the source chain. The DVN attested to it. KelpDAO’s OFTAdapter released 116,500 rsETH from its escrow, worth roughly $292 million, to the attacker’s address. No smart contract was broken; every contract involved operated exactly as designed.

The attack window lasted approximately 80 minutes. KelpDAO detected it and blocked further activity within the hour. But the funds were already gone, and 89,567 of the drained rsETH had already been deposited on Aave as collateral, used to borrow $190 million in WETH against assets that were now backed by nothing.

Why No Audit Would Have Caught This

Here is where it is worth being precise, because the industry tends to blur what code reviews actually assess.

A smart contract audit examines the logic of deployed contract code. Auditors check for vulnerabilities in functions, incorrect access control, arithmetic errors, reentrancy risks, and deviations from a specification.

A smart contract audit does not typically examine:

  • How third-party protocol integrations are configured at deployment
  • Whether infrastructure components (RPC nodes, relayers, oracle setups) introduce single points of failure
  • Whether default settings recommended by protocol documentation have been followed
  • How a system behaves when off-chain dependencies are compromised

The KelpDAO incident fell squarely into all four of these categories, and none of them is addressable through code review. The vulnerability was not a bug that could have been spotted in a differential audit. It was a deployment configuration decision, made once and never revisited between audits.

Point-in-time audits were never designed to close this kind of gap. They are, by definition, snapshots. A protocol evolves, its integrations evolve, its infrastructure changes, configurations change. The audit stays fixed.

Operational Risk Is Structural, Not Accidental

What happened to KelpDAO was not an oversight unique to that team. It is a structural gap in how the industry currently approaches security.

Most blockchain security programs are built around code, which is auditable, static, and well-understood as a surface. But the actual attack surface of a production DeFi protocol extends far beyond contracts. It includes:

  • Integration configuration: how external protocols (bridges, oracles, messaging layers) are wired in, and whether those configurations meet the security expectations of those protocols
  • Infrastructure dependencies: the off-chain systems (RPC endpoints, relayers, indexers) that protocols assume to be reliable and honest
  • Architectural trust assumptions: questions of who controls what, how many entities can veto or approve critical operations, and whether those assumptions are revisited as the system evolves
  • Ecosystem-level exposure: the downstream impact of a protocol’s failure on systems that hold its assets as collateral. In this case, Aave faced between $123M and $230M in bad debt depending on how losses are socialized, with WETH pools across Ethereum, Arbitrum, Base, Mantle, and Linea hitting 100% utilization

A single smart contract audit reviews none of these categories holistically. This is not a criticism of audits; it is a description of their scope. The problem is that the industry has often treated “audited” as synonymous with “secure,” when the two answer entirely different questions.

What Good Looks Like

The KelpDAO incident offers a useful benchmark. The question to ask about any protocol integration is not only “does the code work?” but also: what is the worst-case outcome if any single off-chain component in this integration is compromised?

For cross-chain bridges relying on a verification layer, the answer should never be “all funds are stolen.” That outcome should require the corruption of multiple independent parties, not one. A properly configured DVN setup, for example one requiring consensus from two independent verifiers such as the LayerZero default DVN and a secondary provider like Google Cloud or Polyhedra, would have made this attack ineffective: compromising a single verifier’s RPC infrastructure would not have been sufficient to forge an accepted message. Alternatively, a team can run its own DVN.
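The no-single-point-of-failure property is easy to state as a pre-deployment check. The sketch below is illustrative only: it is not LayerZero's actual configuration API, and the `DVNConfig` model and field names are hypothetical stand-ins for a required/optional verifier setup.

```python
# Illustrative sketch, not LayerZero's real API: check that a bridge's
# verifier configuration cannot be defeated by one compromised verifier.
from dataclasses import dataclass, field

@dataclass
class DVNConfig:
    """Hypothetical model of a required/optional verifier configuration."""
    required_dvns: list[str]                       # every one must attest
    optional_dvns: list[str] = field(default_factory=list)
    optional_threshold: int = 0                    # optional DVNs that must also attest

def is_single_point_of_failure(cfg: DVNConfig) -> bool:
    """True if compromising one verifier suffices to forge a message."""
    # Effective quorum = all required DVNs plus the optional threshold.
    effective_quorum = len(cfg.required_dvns) + cfg.optional_threshold
    return effective_quorum < 2

# The default-style setup that failed: one required verifier, nothing else.
assert is_single_point_of_failure(DVNConfig(required_dvns=["LayerZero Labs"]))

# A 2-of-2 required setup: forging a message needs two independent compromises.
assert not is_single_point_of_failure(
    DVNConfig(required_dvns=["LayerZero Labs", "Google Cloud"])
)
```

A check like this belongs in deployment tooling or CI, so the property is enforced every time the configuration changes rather than assumed once at launch.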

At the infrastructure level, SEAL 911’s initial takeaways from this incident are instructive: run independent validator sets whose operators are not controlled by the same entity; cross-reference RPC responses across multiple independent gateways; treat any mismatch as a potential attack signal; run local nodes for sensitive operations rather than trusting shared cloud infrastructure.

LayerZero’s response to this incident is telling. In the wake of the exploit, they announced they will no longer sign messages for any application running a 1-of-1 DVN configuration, effectively requiring all existing single-verifier setups to migrate.

Beyond the technical checklist, the more important shift is organizational. Security cannot be a periodic event; it has to be a continuous process that keeps pace with protocol development. Configuration decisions made at deployment need to be revisited as protocols evolve. Integrations with third-party infrastructure need to be assessed not just for contract-level correctness, but for their operational resilience.

A Different Kind of Security Partnership

What this incident illustrates, and what the industry is slowly coming to recognize, is that protocols need security partners who are present continuously, not periodically. Two categories of engagement are directly relevant to the risks the KelpDAO incident exposes.

Continuous security coverage moves security from a point-in-time review to an ongoing engagement. Rather than auditing code at a fixed moment, a continuous program sits alongside a protocol’s development cycle, reviewing configurations as they evolve, flagging integration choices as they are made, and maintaining visibility into the trust assumptions that underpin a system’s architecture. A question like “does this cross-chain bridge follow the multi-verifier redundancy the underlying messaging protocol recommends?” should be asked on an ongoing basis, not once at deployment and never again. This is the category of engagement designed to cover the space between audits: the window where configuration decisions are made, never formally reviewed, and left in place until something breaks.
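One concrete mechanism behind that kind of engagement is configuration drift detection: periodically diffing a protocol's live integration settings against the last security-reviewed baseline. The sketch below is a minimal illustration; the setting keys and baseline values are invented, and in practice the live values would be read from the chain via RPC rather than simulated.

```python
# Minimal sketch of continuous configuration review: diff live integration
# settings against the last security-reviewed baseline, on a schedule.
# Keys and values are hypothetical; the live fetch is simulated here.

REVIEWED_BASELINE = {
    "bridge.required_dvns": ["LayerZero Labs", "Google Cloud"],
    "bridge.optional_dvn_threshold": 0,
    "oracle.heartbeat_seconds": 3600,
}

def diff_config(live: dict, baseline: dict) -> list[str]:
    """Return a human-readable finding for every setting that drifted."""
    findings = []
    for key in sorted(set(baseline) | set(live)):
        if live.get(key) != baseline.get(key):
            findings.append(
                f"{key}: reviewed={baseline.get(key)!r} live={live.get(key)!r}"
            )
    return findings

# Simulate drift: someone dropped the second verifier after the last review.
live = dict(REVIEWED_BASELINE)
live["bridge.required_dvns"] = ["LayerZero Labs"]

findings = diff_config(live, REVIEWED_BASELINE)
assert findings == [
    "bridge.required_dvns: reviewed=['LayerZero Labs', 'Google Cloud'] "
    "live=['LayerZero Labs']"
]
# Any finding re-opens review: drifted settings are unreviewed until cleared.
```

The point is not the diff itself but the cadence: a setting that was safe at the last review is treated as unverified the moment it changes, instead of remaining trusted until the next audit.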

Risk assessments address a different but related gap: the need for a holistic view of what an asset is actually built on, independent of its smart contract logic. For a liquid restaking token like rsETH, a thorough risk assessment examines the bridging mechanism, the verifier configuration, the off-chain infrastructure dependencies, and the failure modes of each external integration, not just whether the contracts are solid. This kind of evaluation is valuable for protocols assessing their own security posture, and equally valuable for any institution or protocol considering exposure to that asset. Reading an audit report is not the same as understanding the full risk surface of what you hold or interact with, and treating it as such is a risk in itself.

Neither approach replaces smart contract audits. They address the layer above: the decisions and configurations that point-in-time reviews were never designed to reach.

OpenZeppelin’s continuous security program and protocol risk assessment services are built around both of these gaps, augmenting expert advisory with AI-powered analysis to provide the kind of coverage that scales with how protocols actually evolve. The broader point, though, is not about any single provider. It is that these categories of security engagement need to become a baseline expectation, not an optional add-on, for any protocol in production.

The Takeaway

$292 million was not lost because someone wrote bad code. It was lost because a deployment configuration decision (a 1-of-1 verifier setup) was made once, and never revisited. The debate over whose documentation encouraged it is secondary. What matters is that no security program in place caught it before it was exploited.

The smart contracts were correct. The code was clean. The system failed operationally.

This is a category of risk the industry has consistently underweighted. As DeFi protocols deepen their integrations with cross-chain infrastructure, oracle networks, and shared liquidity layers, the operational surface grows faster than the auditable code surface. Security programs that don’t keep pace with that growth leave protocols exposed: not to bugs, but to the space between audits.

Code risk and operational risk are not the same problem. Treating them as one is what the next $292 million will cost.

Where to Go From Here

If your protocol depends on cross-chain messaging, oracle networks, or any third-party infrastructure, the risks exposed by the KelpDAO incident deserve the same scrutiny you apply to your contract code. Configurations and integrations that were correct on the day of deployment rarely stay correct for long, and most of them were never inside the scope of any audit to begin with.

OpenZeppelin's continuous security and protocol risk assessment engagements are built to cover exactly this layer: the configurations, integrations, and trust assumptions that evolve between audits, and the downstream exposure they create for institutions and protocols holding those assets.

Talk to our team about continuous security →

FAQs

How much did the KelpDAO hack cost?

The KelpDAO exploit on April 18, 2026 resulted in $292 million in losses, making it the largest DeFi exploit of 2026. Attackers drained 116,500 rsETH from KelpDAO's bridge escrow by forging a cross-chain message. Of that, 89,567 rsETH was deposited on Aave as collateral to borrow $190 million in WETH against assets that were, by then, backed by nothing.

How was the KelpDAO hack carried out?

Attackers compromised the RPC nodes that KelpDAO's single LayerZero DVN (Decentralized Verifier Network) relied on to validate cross-chain messages. By poisoning that infrastructure, they caused the verifier to attest to a fabricated message (one claiming 116,500 rsETH had been locked on the source chain when no such transaction existed). KelpDAO's bridge then released the funds. No smart contract was exploited; every contract behaved exactly as designed.

What is a DVN in LayerZero?

A DVN (Decentralized Verifier Network) is a configurable set of independent verifiers in LayerZero's cross-chain messaging protocol that validate whether a message sent on one blockchain is legitimate before it is accepted on another.

What is a 1-of-1 DVN configuration, and why is it risky?

A 1-of-1 DVN configuration means a cross-chain bridge relies on a single verifier to validate all messages. It is risky because compromising that one verifier (through RPC node manipulation, infrastructure poisoning, or other means) gives an attacker complete control over what messages the bridge accepts as legitimate. LayerZero's own documentation warned against this setup, and after the KelpDAO exploit, LayerZero announced it would block all 1-of-1 configurations entirely.

Why didn't a smart contract audit catch the KelpDAO vulnerability?

The KelpDAO vulnerability was a deployment configuration choice, not a code bug, so no smart contract audit would have flagged it. Audits examine contract logic, access control, arithmetic, and reentrancy, but do not typically assess how third-party integrations like LayerZero are configured, whether those configurations follow the external protocol's own security recommendations, or what happens when off-chain infrastructure is compromised. The DVN configuration sat entirely outside that scope.

What is the difference between continuous security coverage and a standard audit?

A standard audit is a point-in-time review of contract code at a fixed moment; continuous security coverage stays active alongside a protocol's development cycle. The KelpDAO DVN configuration was set once at deployment and never revisited; a continuous engagement is specifically designed to catch that kind of configuration drift. Rather than asking "is this code correct today?", it asks "has anything changed since the last review that introduces new risk?"

What should protocols using cross-chain bridges do right now to reduce this risk?

Protocols using cross-chain bridges should: (1) confirm they are not running a single-verifier (1-of-1 DVN) setup and migrate if they are; (2) source RPC infrastructure from multiple independent providers with no shared points of failure; (3) cross-reference on-chain state across independent endpoints and treat discrepancies as potential attack signals; (4) treat cross-chain integration settings as ongoing security responsibilities, not one-time setup tasks that only get reviewed during audits.