By AUJay
Smart Contract Audit Pricing: Per Line, Per Contract, or Per Risk?
Security leaders increasingly ask: what’s the smartest way to buy assurance—by the line of code, by the engagement, or by outcomes? Below is a practical, number‑driven guide to structuring, negotiating, and sequencing smart contract audits in 2025–2026.
Summary: In 2024 alone, Web3 saw ~$2.3B in on‑chain losses, pushing teams to rethink how they budget for security. This guide compares per‑line, per‑contract (time‑boxed/retainer), and risk‑based pricing—using real budgets from OpenZeppelin, Spearbit, Code4rena, Sherlock, Quantstamp, and Certora—then shows how to assemble a hybrid model that buys the most risk reduction per dollar. (globenewswire.com)
Why audit pricing strategy matters now
- Losses are rising again. CertiK’s 2024 “Hack3d” report tallied more than $2.3B stolen across 760 on‑chain security incidents. Chainalysis likewise reported ~$2.2B in 2024 crypto hacks, underscoring persistent risk concentration. These numbers drive institutional pressure for credible, audit‑backed security signals. (globenewswire.com)
- Boards and DAOs increasingly demand a documented security runway (pre‑launch audits + continuous assurance + incident cover). That pushes buyers to weigh cost certainty (per‑contract) against breadth of testing (risk‑based contests) and depth on critical invariants (formal verification add‑ons).
The three dominant pricing models
1) Per line (or “per module”) pricing
What it is: Quotes pegged to code size (e.g., $/LOC or brackets like “<1k LOC,” “1–5k LOC”). It’s common with budget firms and some content‑marketing benchmarks, and it appears in mainstream explainers. It can be useful for quick scoping—but it’s the least predictive of real risk. (techtarget.com)
Typical quotes in the wild:
- “Simple contracts under ~1k LOC: low five figures; 1–5k LOC: up to ~$50k; 5k+ LOC: $50k–$100k+,” per TechTarget’s industry roundup. (techtarget.com)
Why teams still consider it:
- It’s fast for budgeting when you’re pre‑MVP and mostly reusing standard libraries (ERC‑20/721 with minimal custom logic).
Where it breaks:
- Complex risk has little correlation with raw lines of code (think: cross‑chain messaging, oracle design, upgradeability patterns). Per‑line quotes can underprice intricate but short contracts and overprice boilerplate. Use cautiously for exchange‑listing optics; don’t rely on it for safety on novel logic.
When to use:
- “Sanity check” a floor budget for simple, well‑templated scopes—then switch to a more rigorous model for anything holding TVL or interacting with external protocols.
Emerging adjustment:
- Some buyers translate more robust models back into a “shadow” $/LOC to compare vendors, but decide on depth/coverage, not just unit price.
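If you want that shadow metric, the math is easy to script. Below is a minimal sketch; the vendor labels and quote figures are hypothetical placeholders for illustration, not real proposals:

```typescript
// Sketch: derive a "shadow" $/LOC from any all-in quote so differently
// structured vendors can be compared on one axis. Figures are hypothetical.
interface Quote {
  vendor: string;
  totalUsd: number; // all-in engagement price
  scopeLoc: number; // lines of Solidity in scope
}

const shadowCostPerLoc = (q: Quote): number => q.totalUsd / q.scopeLoc;

const quotes: Quote[] = [
  { vendor: "Time-boxed senior team (3 weeks)", totalUsd: 144_000, scopeLoc: 2_200 },
  { vendor: "Budget per-LOC firm", totalUsd: 40_000, scopeLoc: 2_200 },
];

for (const q of quotes) {
  console.log(`${q.vendor}: ~$${shadowCostPerLoc(q).toFixed(0)}/LOC`);
}
// Use the output to spot outliers only; decide on depth and coverage, not unit price.
```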
2) Per contract engagement (time‑boxed or retainer)
What it is: You buy expert weeks from a named team; the vendor commits seniority and calendar time. This is how tier‑1s and vetted researcher networks usually work.
Real, recent numbers you can benchmark:
- OpenZeppelin’s continuous security partnership for Venus proposed 24 weeks of research over six months for $554,400 (USDC), implying ~$23,100 per auditor‑week. This is a rare public datapoint from a top firm’s retainer deal. (community.venus.io)
- Spearbit’s Arbitrum proposal detailed blended weekly costs around $48,000 for a 4–5 person team (one lead + seniors + associates + junior). Use ~$48k/week as a working benchmark for high‑caliber researcher collectives. (forum.arbitrum.foundation)
- Quantstamp’s 2025 Venus retainer offered 450 “audit‑hour” credits for $130,000 (paid over four months). That implies ~$289/hour, with engagements staffed by at least two auditors and fix‑review included. Useful for translating quotes to hourly equivalents. (community.venus.io)
Why it’s favored:
- Predictable planning and direct collaboration with named engineers; easier to schedule iterative fix‑review cycles; strong credibility with institutional partners.
Caveats:
- You still need to manage scope creep. Lock code freeze windows and remediation windows into the SOW. Ask for staffing mix (senior/junior ratio) and explicit deliverables (coverage notes, test artifacts, and re‑audit counts).
Add‑on: formal verification retainers
- Certora’s DAO‑approved engagements show the going rate for continuous formal verification on large protocols:
- Aave v4 scope: $2.39M for ~4.5 FTEs over a year (2025). Good proxy for “enterprise‑grade” invariants and rule writing on a living codebase. (governance.aave.com)
- Earlier Aave and Compound proposals list weekly professional‑services rates of ~$70k–$80k, and annual totals in the $1.5M–$3.4M range, useful when deciding where FV fits in your stack. (governance.aave.com)
When to use:
- Complex DeFi, bridges, staking/validator flows, upgradeable systems, or anything with configurable parameters and governance hooks that demand iterative review and tight vendor collaboration.
3) Per risk (outcome‑based) pricing
What it is: You fund a prize pool or premium that pays for validated findings or risk transfer, not hours.
Two concrete forms:
- Competitive audit contests (Code4rena, Zellic‑run events, CodeHawks, etc.):
- Public budgets: $200,000 (Size, 2024), $103,250 (GTE Perps, 2025), and multiple contests in the $75k–$150k range. These are real procurement‑grade datapoints that help you size prize pools. (github.com)
- Zellic explains a 96% “conditional” pool refunded if no High/Medium issues are found—i.e., you pay primarily for actionable vulns. Judges’ fees are separate. This is classic pay‑for‑results. (zellic.io)
- Exploit cover/insurance:
- Sherlock offers audit‑linked protocol cover with premiums typically around 2.0% (public contest) to 2.5% (private contest) of the covered amount, up to $10M, with pricing based on risk/TVL and audit history. This converts some residual risk into a predictable OPEX line. (opencover.com)
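To see how these outcome‑based structures translate into expected spend, here is a minimal sketch. The refund fraction, finding probability, premium rate, and cover amount are illustrative assumptions drawn from the ranges above, not quoted terms:

```typescript
// Sketch: expected spend under two outcome-based structures.
// Inputs mirror the ranges cited above but are illustrative, not quoted terms.

// 1) Conditional contest pool: most of the pool is refunded if no High/Medium
//    findings are validated (judging fees are paid either way).
function expectedContestCost(
  poolUsd: number,
  judgingUsd: number,
  refundableFraction: number, // e.g. 0.96 per the conditional-pool model
  pFindings: number           // your estimate that validated High/Medium issues exist
): number {
  const refundable = poolUsd * refundableFraction;
  const nonRefundable = poolUsd - refundable;
  return judgingUsd + nonRefundable + pFindings * refundable;
}

// 2) Exploit cover: annualized premium as a percentage of the covered amount.
function annualCoverPremium(coverUsd: number, rate: number): number {
  return coverUsd * rate; // e.g. 0.02–0.025 of the covered amount
}

console.log(expectedContestCost(120_000, 10_000, 0.96, 0.8)); // ≈ $106,960
console.log(annualCoverPremium(5_000_000, 0.02));             // $100,000
```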
When to use:
- You want massive reviewer diversity and adversarial creativity after a structured review, or you want to transfer catastrophic tail risk pre‑mainnet.
Caveats:
- Contests surface lots of issues; triage and fix bandwidth becomes the bottleneck. Budget for internal engineering cycles and at least one re‑audit on critical fixes. With cover, ensure precise definitions of “covered code,” triggers, exclusions, and payout committees.
A side‑by‑side budget example (realistic 2025 scenario)
Your scope:
- 7 contracts, ~2,200 LOC Solidity (custom AMM math, fee routing, upgradeability, oracle reads), external interactions with an L2 bridge, planned TVL $60–$100M in year one.
Option A — Per‑contract time‑boxed
- Target team: 3 seniors + 1 associate, 3 weeks
- Spearbit‑like blended cost: ~3 weeks × $48k/week ≈ $144k
- Re‑audit (1 week): +$48k
- Total: ~$192k; you get deep collaboration and a named team. (forum.arbitrum.foundation)
Option B — Hybrid: time‑boxed + outcome‑based
- Structured review: 2 weeks × $48k = $96k (fix criticals first)
- Contest budget: $100k–$150k (typical recent ranges); judging ~$3k–$15k depending on spec. Let’s model $120k + $10k = $130k
- Re‑audit pass: 1 week × $48k = $48k
- Total: ~$274k; you buy both depth (structured) and breadth (contest). (outposts.io)
Option C — Retainer + cover
- Quantstamp‑style credits: buy 450 hours for $130k (allocate 300 hours pre‑launch, 150 hours for upgrades/fixes) → implied ~$289/hr
- Add Sherlock cover for post‑launch (e.g., $5M cover at ~2%/yr ≈ $100k; adjust for TVL and audit history)
- Total year‑one: ~$230k, with some risk transfer. (community.venus.io)
Option D — Formal verification add‑on (critical invariants only)
- Narrow FV sprint: 2–3 weeks of rule writing and proof work at market benchmarks ($70k–$80k/week) to verify AMM and fee‑router invariants pre‑launch → +$140k–$240k on top of A/B
- Use when financial logic has non‑obvious edge cases. (governance.aave.com)
Takeaway: For protocols moving real TVL, “hybrid” (B) consistently delivers the best coverage: structured depth → contest breadth → targeted re‑audit, optionally with exploit cover to cap tail risk.
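To replay this comparison with your own numbers, here is a minimal tally of the options above. The weekly and credit rates are the public benchmarks cited in this article; the weeks, pool sizes, and cover amounts are this scenario’s assumptions:

```typescript
// Sketch: tally the options above from their line items.
// Rates are the public benchmarks cited in this article; staffing, weeks,
// contest pool, and cover size are scenario assumptions.
type LineItem = { label: string; usd: number };

const total = (items: LineItem[]): number =>
  items.reduce((sum, i) => sum + i.usd, 0);

const weeklyRate = 48_000; // Spearbit-like blended weekly cost

const optionA: LineItem[] = [
  { label: "Structured review (3 wks)", usd: 3 * weeklyRate },
  { label: "Re-audit (1 wk)", usd: 1 * weeklyRate },
];

const optionB: LineItem[] = [
  { label: "Structured review (2 wks)", usd: 2 * weeklyRate },
  { label: "Contest pool + judging", usd: 120_000 + 10_000 },
  { label: "Re-audit (1 wk)", usd: 1 * weeklyRate },
];

const optionC: LineItem[] = [
  { label: "Quantstamp-style credits (450 h)", usd: 130_000 },
  { label: "Exploit cover ($5M at ~2%/yr)", usd: 100_000 },
];

console.log({ A: total(optionA), B: total(optionB), C: total(optionC) });
// => { A: 192000, B: 274000, C: 230000 } — matching the totals above
```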
What actually moves your quote (and how to control it)
- Codebase quality and freeze discipline: Stable, well‑documented repos can cut calendar time by 20–40%. Chaotic mid‑audit refactors add weeks.
- External dependencies: Bridges, oracles, and cross‑chain messaging increase attack surface—budget more senior time and/or an FV sprint.
- Language and ecosystem: Solidity on EVM is competitively priced; Rust (e.g., Solana) or ZK circuits often carry premiums due to scarcer expertise.
- Timeline pressure: Expect 20–50% rush premiums for sub‑two‑week starts; plan 4–8 weeks ahead for top vendors. (blockchainappfactory.com)
- Re‑audit cycles: Serious teams budget 1–2 re‑audits (10–30% uplift on base). If not included, add to your TCO. (hashultra.com)
- Continuous monitoring and incident response: Some firms package “always‑on” security as monthly retainers; pair with exploit cover to bound loss volatility. (sherlock.xyz)
Market benchmarks you can cite to your CFO
- Security loss context: ~$2.3B stolen in 2024 across 760 incidents. This is why exchanges and integrators increasingly expect credible audit footprints. (globenewswire.com)
- Top‑tier retainer math: OpenZeppelin × Venus: $554,400 for 24 weeks (≈$23.1k per auditor‑week). (community.venus.io)
- Researcher‑network weekly cost: Spearbit blended ≈$48k/week for a 4–5 person team. (forum.arbitrum.foundation)
- Credits model: Quantstamp 450 hours for $130k (≈$289/hour) with two‑auditor minimum and fix reviews. (community.venus.io)
- Contest budgets: $200k (Size 2024), $103k (GTE Perps 2025), numerous $75k–$150k contests; Zellic’s 96% conditional pool pays primarily for validated High/Medium issues. (github.com)
- Exploit cover pricing: Sherlock indicates ~2.0%–2.5% of covered amount (up to $10M), with pricing tied to risk/TVL and audit history. (opencover.com)
- Formal verification at scale: Certora deals approved by Aave/Compound show $70k–$80k weekly professional‑services equivalents, $1.5M–$3.4M annual programs, and a 2025 Aave v4 package at $2.39M for ~4.5 FTEs. (governance.aave.com)
Best‑in‑class sequencing (what the leading protocols do)
- Pre‑audit hardening (2–3 weeks internal)
- Fuzz critical paths, add invariant tests, enable eventing for state changes, finalize upgrade strategy, and lock a code freeze. This improves signal‑to‑noise for your external auditors.
- Structured audit with a named team (2–4 weeks)
- Buy 2–3 senior reviewers who co‑design test scenarios with you; expect tight loops on fixes and a scoped re‑audit.
- Competitive contest (10–20 days)
- Fund a prize pool sized to your risk appetite ($100k–$200k is now common for serious protocols) and ensure a strong judging spec. This widens coverage and catches “weird” attack paths that structured reviews sometimes miss. (github.com)
- Re‑audit pass and launch gating
- Require “all Highs addressed and Mediums mitigated/accepted.” Track this in a public changelog for partners/integrators.
- Post‑launch risk cap
- Purchase exploit cover aligned to launch TVL (e.g., $5M–$10M with ~2% annualized premium), then scale as TVL grows and your security posture matures. (opencover.com)
- Continuous assurance
- Keep a credit/retainer buffer for upgrades and emergency response. This can be a credits pool (Quantstamp‑style) or a weekly allocation in a longer retainer (OpenZeppelin‑style). (community.venus.io)
How to choose the right model for your stage
- Pre‑PMF, low TVL (<$1M): Use a small per‑line/per‑module quote to set a floor, but bias toward a short time‑boxed review by a named senior and hold a focused micro‑contest ($25k–$50k) before mainnet.
- Growth‑stage DeFi (target TVL $10–$100M): Hybrid model. Book a 2–3 week structured audit with re‑audit, then run a 10–20 day contest ($75k–$150k). Add $5M–$10M cover for the first 90–180 days post‑launch. (outposts.io)
- Enterprise/DAO with frequent upgrades: Retainer + formal verification. Lock a quarterly cadence with named researchers; use FV to prove invariants around liquidation, accounting, or cross‑chain settlement. Budget ~$48k/week for researcher networks, and $70k–$80k/week for targeted FV sprints. (forum.arbitrum.foundation)
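As a rough rule of thumb, the stage guidance above can be encoded in a few lines. The TVL thresholds and budget bands below restate this article’s numbers where they exist and are otherwise loose assumptions to adjust for your own risk appetite:

```typescript
// Sketch: map expected launch TVL to the model mix suggested above.
// Thresholds restate this section; budget bands are approximate and partly assumed.
interface Plan {
  model: string;
  budgetUsd: [number, number]; // rough year-one security budget range
}

function recommendPlan(expectedTvlUsd: number): Plan {
  if (expectedTvlUsd < 1_000_000) {
    // Micro-contest band ($25k–$50k) is from the text; the short review cost is assumed.
    return { model: "Short time-boxed review + micro-contest", budgetUsd: [50_000, 100_000] };
  }
  if (expectedTvlUsd <= 100_000_000) {
    return { model: "Hybrid: structured audit + contest + re-audit + cover", budgetUsd: [200_000, 300_000] };
  }
  // Upper bound reflects the Certora-style annual FV programs cited above; lower bound is assumed.
  return { model: "Retainer + targeted formal verification", budgetUsd: [500_000, 3_400_000] };
}

console.log(recommendPlan(60_000_000));
// => { model: "Hybrid: structured audit + contest + re-audit + cover", budgetUsd: [200000, 300000] }
```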
RFP checklist that gets you better quotes
Include these in your request to reduce padding and win better teams:
- Scope artifacts:
- Threat model, invariants to preserve, admin model (EOA vs Safe), upgrade pattern (UUPS/Transparent/Beacon), external call graph (bridges/oracles/routers).
- Code hygiene:
- Tests with coverage, fuzz seeds for known edge cases, reproducible Foundry/Hardhat harness, deployment scripts, and a frozen tag/commit.
- Deliverables you expect:
- Named staff with seniority, depth of manual review, tool list (Slither/Echidna/fuzzers/custom harnesses), exact number of re‑audit passes, and artifact set (PoCs, coverage notes, differential reports).
- Contest spec (if applicable):
- Severity rubric, duplicate handling, PoC requirements for High/Medium, reproduction environment, and judge lineup.
- Post‑launch:
- Cover options and exclusions, incident response SLA, and an allowance of credits/hours for hotfix review.
Negotiation tactics that actually work
- Convert quotes to common units:
- Translate everything to auditor‑weeks or $/hour equivalents (e.g., Quantstamp credits ≈$289/hour) to compare apples‑to‑apples without reducing your choice to raw LOC; see the normalization sketch after this list. (community.venus.io)
- Fix the re‑audit ambiguity:
- Write 1–2 re‑audit passes into the SOW with turnaround SLAs tied to your launch gate.
- Calendar, not just cost:
- Top vendors book out weeks in advance. If you can align to their available windows (and promise a hard freeze), you can sometimes trade time flexibility for price or better staff.
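Here is the normalization sketch referenced above. The quotes are the public datapoints cited earlier; 40 working hours per auditor‑week is an assumption you should adjust to the vendor’s actual staffing:

```typescript
// Sketch: normalize differently structured quotes to $/auditor-hour.
// Quotes are the public datapoints cited in this article; 40 h/auditor-week is an assumption.
const HOURS_PER_AUDITOR_WEEK = 40;

function hourlyFromWeekly(usdPerWeek: number, auditors: number): number {
  return usdPerWeek / (auditors * HOURS_PER_AUDITOR_WEEK);
}

function hourlyFromCredits(totalUsd: number, creditHours: number): number {
  return totalUsd / creditHours;
}

// OpenZeppelin × Venus: ≈$23.1k per auditor-week
console.log(hourlyFromWeekly(23_100, 1).toFixed(0));     // ≈ $578/h
// Spearbit-style: ~$48k/week for a 4–5 person team (modeled as 4.5)
console.log(hourlyFromWeekly(48_000, 4.5).toFixed(0));   // ≈ $267/h
// Quantstamp credits: 450 hours for $130,000
console.log(hourlyFromCredits(130_000, 450).toFixed(0)); // ≈ $289/h
```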
Frequently asked buyer questions, answered with data
- “Is a single audit enough?” Not for high‑impact systems. Public postmortems routinely show vulnerabilities that slipped past one review but were caught by a divergent approach (contest adversarial thinking vs. structured audit depth). Contest budgets of $100k–$200k are now a mainstream layer for serious launches. (github.com)
- “How much should we spend before mainnet?” For protocols expecting $10M+ TVL in the first 90 days, budgets of ~$200k–$300k across structured audit(s) + contest + re‑audit are common and defensible to investors/partners. Use retainer credits to handle post‑launch fixes. (forum.arbitrum.foundation)
- “Do we really need formal verification?” Use it surgically. If your value accrual depends on complex accounting/math (AMMs, lending, liquidations, interest rate models), a 2–3 week FV sprint validating invariants can pay for itself. For continuous change at scale, study DAO‑level FV programs like Aave/Compound. (governance.aave.com)
- “What about insurance/cover?” Products like Sherlock’s pair discounts with completed audits and a clean finding record; budget ~2% of covered TVL (capped) and read triggers/exclusions carefully. It’s not a substitute for good engineering—but it caps tail risk. (opencover.com)
Putting it together: a 6‑step, risk‑adjusted buying plan
- Pre‑audit readiness: achieve a clean freeze, add invariant tests, and document external dependencies.
- Book a named team for 2–3 weeks; scope one re‑audit in the SOW.
- Run a 10–20 day contest ($75k–$150k) with a strong spec and required PoCs for High/Medium. (outposts.io)
- Gate launch on “Highs fixed, Mediums mitigated/accepted,” then publish your changelog.
- Purchase $5M–$10M exploit cover for 3–6 months; revisit premiums as TVL evolves. (opencover.com)
- Hold a credits/retainer buffer (e.g., 150–300 hours) for upgrades and incident response. (community.venus.io)
Key sources you can reference in your board memo
- Losses context (why this spend matters): CertiK Hack3d 2024 ($2.3B losses); Chainalysis 2024 (~$2.2B). (globenewswire.com)
- Retainer/time‑boxed pricing: OpenZeppelin × Venus ($554.4k for 24 weeks); Spearbit weekly blended costs (~$48k); Quantstamp credits (450h for $130k). (community.venus.io)
- Outcome‑based pricing: Code4rena public prize pools ($200k; $103k; $75k+); Zellic’s conditional pool. (github.com)
- Formal verification at scale: Certora weekly/annual rates and DAO approvals (Aave, Compound). (governance.aave.com)
Final word
There is no one “right” way to pay for audits—but there is a right sequence for your risk profile. For anything that will hold real capital, treat per‑line quotes as a planning floor, buy a named team for depth, add an adversarial contest for breadth, and cap residual risk with cover and credits. That’s how 2025’s best teams are turning security spend into launch certainty—and avoiding the headlines that no one wants to make. (forum.arbitrum.foundation)
7Block Labs helps founders and enterprises put this exact model into practice—from audit‑ready engineering to vendor selection, contest specs, cover procurement, and on‑call incident response. If you want a second set of eyes on your scope and budget, we’re here to help.
Like what you're reading? Let's build together.
Get a free 30‑minute consultation with our engineering team.

