7Block Labs
Blockchain Infrastructure

By AUJay

Geth Requirements, Geth Full Node Disk Size 2026, and HSM PQC Considerations for Validators

A practical 2026 guide for CTOs and infra leads: what hardware to buy for Geth, how much disk you actually need post–history-expiry, and how to plan HSM and post‑quantum cryptography (PQC) for validator key management and remote signing, with concrete commands and migration steps. Sources include current client docs, Ethereum Foundation updates, and NIST/IETF PQC guidance.

Who this is for

  • Startup teams moving from hosted RPC to self‑run execution clients (Geth) and considering validator operations
  • Enterprise infra/security architects designing durable, compliant validator and RPC stacks

TL;DR (exec summary)

  • A production Geth full node in 2026 fits well on a 2 TB TLC NVMe. Expect 500 GB+ for Geth (snap/full), ~12 TB for legacy archive, and ~1.9 TB for the new path‑based archive state; consensus data adds ~200 GB. Plan bandwidth ≥25 Mbit/s sustained. (ethereum.org)
  • Partial History Expiry (PHE) shipped across all execution clients in July 2025; pre‑Merge block bodies/receipts can be pruned, saving ~300–500 GB and letting more nodes stay on 2 TB disks. Geth v1.16+ adds a one‑shot prune command and “era1” history retrieval for targeted data restore. (blog.ethereum.org)
  • PQC is now standardized (FIPS 203/204/205) and landing in HSM firmware (ML‑KEM, ML‑DSA, SLH‑DSA). Ethereum still signs with BLS12‑381, so apply PQC today to your transport, control plane, and certificate PKI, and plan HSM rollouts that can do PQC for TLS and code‑signing while you continue to use remote signers for BLS. (nist.gov)

Section 1 — Geth requirements in 2026: what actually matters

Hardware budget lines (recommended for mainnet, single‑box EL+CL with light RPC):

  • CPU: 4–8 modern cores (3.0+ GHz). More cores help RPC bursts and compaction; clock speed helps sync. (ethereum.org)
  • RAM: 16–32 GB. Geth itself is modest, but CL + OS + monitoring + DB caches benefit from headroom. (ethereum.org)
  • Disk:
    • Primary: 2 TB TLC NVMe SSD with DRAM cache (IOPS > 100k). NVMe latency dominates sync and RPC. (ethereum.org)
    • Optional secondary: inexpensive HDD or SATA SSD for “ancients”/freezer via --datadir.ancient to offload cold history. (geth.ethereum.org)
  • Network: ≥25 Mbit/s symmetric, unmetered if possible. Validators are sensitive to downtime and bandwidth caps. (ethereum.org)
  • OS/filesystem basics: ext4/xfs mounted with noatime, periodic TRIM, and up‑to‑date NVMe firmware. Keep swap minimal but present. A minimal setup sketch follows.
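A minimal setup sketch for the datadir filesystem, assuming a systemd distro; the device, label, and mount point are illustrative:

mkdir -p /nvme
mkfs.ext4 -L gethdata /dev/nvme0n1p1                                 # format the NVMe partition
echo 'LABEL=gethdata /nvme ext4 defaults,noatime 0 2' >> /etc/fstab  # mount with noatime
mount /nvme
systemctl enable --now fstrim.timer                                  # periodic TRIM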

Why these picks: Ethereum.org’s run‑a‑node page still lists 2 TB SSD, 16 GB RAM as recommended for execution clients, with another ~200 GB for consensus beacon data. In practice this leaves room for history growth between prunes and for log/index overhead. (ethereum.org)

Key Geth storage concepts you should exploit:

  • Freezer/Ancients: Geth stores old block bodies/receipts (“ancients”) in a separate append‑only store that you can relocate with --datadir.ancient, ideal for slower/larger disks. (geth.ethereum.org)
  • Database engine: Pebble has been the default backend for new datadirs since Geth v1.13, with LevelDB retained for pre‑existing databases; --db.engine lets you pin the choice explicitly, and switching engines requires a resync/fresh datadir. (geth.ethereum.org)
  • Snapshot/snap sync: modern default sync; combine with periodic pruning to keep disk stable. (geth.ethereum.org)

Production‑grade start command (single box with CL co‑located, ancients on HDD):

geth \
  --syncmode snap \
  --authrpc.jwtsecret /var/lib/ethereum/jwtsecret \
  --datadir /nvme/geth \
  --datadir.ancient /hdd/geth-ancients \
  --http --http.addr 127.0.0.1 --http.api eth,net,web3 \
  --ws --ws.addr 127.0.0.1 --ws.api eth,net,web3 \
  --metrics --pprof

Place the consensus client on the same host and point it at the same JWT secret; the Engine API is served on Geth's authenticated authrpc endpoint (port 8551 by default), not via --http.api, and Ethereum.org recommends co‑locating EL and CL. (ethereum.org)
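A minimal sketch of that handoff, generating the shared secret and pointing a consensus client at the local Engine API; Lighthouse is shown purely as one example, and any CL with equivalent flags works:

openssl rand -hex 32 | tr -d '\n' > /var/lib/ethereum/jwtsecret   # shared JWT secret
chmod 600 /var/lib/ethereum/jwtsecret
lighthouse bn \
  --network mainnet \
  --execution-endpoint http://127.0.0.1:8551 \
  --execution-jwt /var/lib/ethereum/jwtsecret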


Section 2 — Geth full node disk size (2026): where you’ll land and why

Today’s reference points you can plan against:

  • Execution full node (snap/full) rough size: 500 GB+ (Geth), 500 GB+ (Nethermind). Archive around 12 TB for legacy hash‑based indexing. (ethereum.org)
  • Geth’s new path‑based archive (v1.16+): approximately 1.9 TB for full historical state, with the tradeoff that eth_getProof is not served for deep history yet; you can choose how much historical state to retain with --history.state=N, defaulting to a rolling window. (chainrelease.info)
  • Growth dynamics: hash‑scheme DBs historically grew ~14 GB/week before pruning; periodic prune resets you to the baseline. Post‑PHE, history growth pressure is lower because pre‑Merge bodies/receipts can be removed locally. (geth.ethereum.org)

What changed in 2025 (and why you care):

  • Partial History Expiry: On July 8, 2025 the EF announced that all execution clients support pruning pre‑Merge block bodies/receipts. Operators save roughly 300–500 GB with no impact on validating head blocks. Geth v1.16 added prune‑history and era1 integration so you can remove pre‑Merge history, then selectively re‑hydrate ranges later. This is the first concrete step toward EIP‑4444 rolling expiry. (blog.ethereum.org)

Practical sizing guidance for 2026:

  • If you only need a full node (no archive queries): buy 2 TB NVMe; expect 500–800 GB for Geth plus growth; prune quarterly or when ~80% full. Keep consensus data budgeted (~200 GB). (ethereum.org)
  • If you require historical state reads but can accept the proof limitation: use path‑based archive with --state.scheme path and --gcmode archive; plan ~2 TB for state plus ancients. Put ancients on cheaper storage via --datadir.ancient. (chainrelease.info)
  • If you need deep historical proofs and full indices: legacy archive remains ~12 TB+ and rising; consider Erigon/Reth alternatives or external history providers. (ethereum.org)

Budget example (two‑disk layout):

  • NVMe1 (2 TB): datadir “hot” key‑value DB (Pebble/LevelDB) + OS/CL
  • HDD/SATA SSD (2–4 TB): ancients via --datadir.ancient. Result: fast state, cheap history; crash recovery uses ancients as source of truth. (geth.ethereum.org)
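To verify where the bytes actually land under this layout, a quick check; the chaindata subpath follows Geth's default datadir layout, and paths match the example above:

du -sh /nvme/geth/geth/chaindata   # hot state + recent history on NVMe
du -sh /hdd/geth-ancients          # freezer: cold bodies/receipts
df -h /nvme /hdd                   # fill levels, for prune scheduling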

Section 3 — Commands you’ll actually run (PHE and targeted history restore)

  • One‑time prune of pre‑Merge block bodies/receipts (Geth v1.16+):

    • Stop Geth cleanly, then:
      geth prune-history --datadir /nvme/geth
      
    • Restart normally. Expect hundreds of GB freed if you previously stored pre‑Merge bodies/receipts. (geth.ethereum.org)
  • Fetch specific history later (era1 files) without re‑sync:

    • While running or offline:
      geth download-era --server https://mainnet.era1.nimbus.team --block 100000-200000 --datadir /nvme/geth
      
    • Geth verifies checksums and drops files into ancients. Use community‑maintained mirrors. (geth.ethereum.org)
  • Move ancients to cheaper storage:

    • Stop Geth, copy your ancient folder to the new path, then:
      geth --datadir /nvme/geth --datadir.ancient /hdd/geth-ancients
      
    • Do not start Geth with a missing or incorrect ancients path; once the freezer has moved, always pass the same --datadir.ancient flag (the docs expressly forbid starting with an invalid ancients path). (geth.ethereum.org)
  • Optional: pin the DB engine explicitly when standing up a new datadir:

    geth --db.engine=pebble --datadir /nvme/geth-pebble
    

    Pebble has been the default backend for new datadirs since Geth v1.13; LevelDB remains supported for legacy databases, and switching engines requires a fresh datadir. (geth.ethereum.org)
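To make the quarterly prune repeatable, a minimal runbook sketch; it assumes Geth runs under a systemd unit named geth.service and that the datadir matches the examples above:

#!/usr/bin/env bash
set -euo pipefail
systemctl stop geth                        # prune requires a cleanly stopped node
geth prune-history --datadir /nvme/geth    # drop pre-Merge bodies/receipts
systemctl start geth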


Section 4 — Validator key management today: HSMs, remote signers, and what PQC changes

What Ethereum signs with today

  • Validator keys are BLS12‑381. Most general‑purpose HSMs don't natively expose BLS signing, so mainstream practice is a remote signer (e.g., Web3Signer) that manages BLS keys behind a slashing‑protection database. For Eth1 (secp256k1) keys, HSMs or cloud KMS are supported; for Eth2 (BLS) keys, Web3Signer loads them into memory and enforces slashing protection. (docs.web3signer.consensys.io)

What PQC changes (and what it doesn’t)

  • NIST finalized PQC standards in 2024: ML‑KEM (FIPS 203), ML‑DSA (FIPS 204), and SLH‑DSA (FIPS 205). These are now FIPS‑approved algorithms and the ecosystem (TLS, X.509, HSMs, ACVP/CMVP) is converging on them. Ethereum’s consensus still depends on BLS12‑381; PQC doesn’t replace validator signatures in the near term. Instead, apply PQC to your “management plane” today: TLS key agreement, PKI, code signing, and backups. (nist.gov)

HSM reality in 2026

  • Thales Luna HSM 7.9.x: firmware added ML‑KEM and ML‑DSA mechanisms (PKCS#11 identifiers, keygen/sign/wrap), plus hybrid cloning ciphers and PQC key attestation improvements. Requires Luna Client 10.9+. (thalesdocs.com)
  • Entrust nShield 5: firmware supports ML‑KEM, ML‑DSA, and SLH‑DSA; vendor reports CAVP validation achieved and CMVP updates in flight. For enterprises, this means you can run PQC in FIPS‑track HSMs for TLS, code signing, and PKI while keeping validator BLS in remote signer workflows. (entrust.com)

PQC for transport: secure the pipes now

  • Hybrid PQ TLS is shipping: Cloudflare has enabled X25519+ML‑KEM hybrid for TLS 1.3 client‑to‑edge and edge‑to‑origin, and major browsers now default to X25519MLKEM768. Using Cloudflare (or your own OpenSSL 3 + OQS provider) you can PQC‑harden remote signer, RPC, and admin‑plane connections today; a quick negotiation check follows this list. (developers.cloudflare.com)
  • DIY hybrid TLS: OpenSSL 3 with the OQS provider supports ML‑KEM and hybrid key exchange; see oqs‑provider guidance. For embedded/edge, wolfSSL has PQC TLS 1.3 suites and recently fixed a Kyber security‑level bug—keep libraries current. (openquantumsafe.org)
  • Standards track: IETF TLS WG has drafts for hybrid design and ECDHE+ML‑KEM named groups; track these for policy baselines and interop. (datatracker.ietf.org)
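To confirm an endpoint actually negotiates a hybrid group, a quick check; it assumes an OpenSSL build with ML‑KEM support (OpenSSL 3.5+ natively, or OpenSSL 3 with oqs‑provider, where the group name may differ), and the hostname is illustrative:

openssl s_client -connect signer.internal:443 -groups X25519MLKEM768 </dev/null \
  | grep -i 'negotiated'   # expect: Negotiated TLS1.3 group: X25519MLKEM768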

Remote signer patterns that work

  • Web3Signer layout (a launch sketch follows this list):
    • Store BLS keystores on encrypted disk or in a vault (AWS Secrets Manager/GCP Secret Manager/HashiCorp Vault) and let Web3Signer enforce slashing protection via Postgres.
    • For execution‑layer secp256k1 keys, HSM/KMS is supported; for BLS, keys load to memory but access is gated and audited. (docs.web3signer.consensys.io)
  • Network posture: terminate PQC‑hybrid TLS at the signer; mutual TLS with short‑lived certs; lock down source IPs; segregate signer from beacon/validator clients and from public RPC. Use a separate Postgres for slashing DB with synchronous commit on a fast NVMe.
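A launch sketch under those assumptions; flag names follow current Web3Signer docs but should be verified against the version you deploy, and all hosts, paths, and credentials here are illustrative:

web3signer \
  --key-store-path=/var/lib/web3signer/keys \
  --tls-keystore-file=/etc/web3signer/tls.p12 \
  --tls-keystore-password-file=/etc/web3signer/tls.pass \
  eth2 \
  --network=mainnet \
  --slashing-protection-db-url="jdbc:postgresql://10.0.1.6/web3signer" \
  --slashing-protection-db-username=w3s \
  --slashing-protection-db-password="${W3S_DB_PASSWORD}"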

Where DVT fits

  • Distributed Validator Technology (Obol/SSV) reduces single‑machine key risk and improves uptime. Adoption is accelerating (e.g., Lido cohorts and “super clusters” in 2025), making it a strong complement to HSM/KMS for operational resilience. (blog.lido.fi)

Section 5 — Concrete playbooks

Playbook A — “Fit Geth + Consensus on 2 TB and sleep well”

  1. Hardware: 8 cores, 32 GB RAM, 2 TB TLC NVMe with DRAM.
  2. Place ancients on a secondary disk (same flags as in Section 3):
    geth --datadir /nvme/geth --datadir.ancient /hdd/geth-ancients
    
  3. Turn on metrics and monitor disk fill %; prune when at ~80%:
    geth prune-history --datadir /nvme/geth
    
    Expect hundreds of GB reclaimed if pre‑Merge history was present. (geth.ethereum.org)
  4. If you later need specific history ranges, download era1 files on demand:
    geth download-era --server https://mainnet.era1.nimbus.team --block 12000000-13000000 --datadir /nvme/geth
    
    (geth.ethereum.org)

Playbook B — “I need historical state but not heavy proofs”

  1. Geth >= v1.16 with path‑based archive:
    • Full sync, then enable archive indexing:
      geth --syncmode full --state.scheme path --gcmode archive --history.state=0
      
    • Plan ~1.9 TB for historical state; proofs via eth_getProof aren’t served for deep history yet. (chainrelease.info)
  2. Put --datadir.ancient on slow disk. Monitor indexing completion before relying on history; a quick RPC smoke test follows. (geth.ethereum.org)
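A smoke test that deep historical state is actually served, using the local HTTP RPC from Section 1; the address and block number are illustrative:

curl -s -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045","0x100000"],"id":1}' \
  http://127.0.0.1:8545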

Playbook C — “Harden remote signer with PQC and enforce slashing protection”

  1. Web3Signer for validators; Postgres for slashing DB.
  2. Store BLS keys in AWS Secrets Manager or Vault; configure signer to load from vault and enforce slashing locks. (docs.web3signer.consensys.io)
  3. Terminate hybrid TLS at the signer:
    • Cloud‑edge: enable Cloudflare’s PQC on TLS 1.3 or deploy OpenSSL 3 + oqs‑provider on your ingress proxy. (developers.cloudflare.com)
  4. Audit and alert on attempted slashable events; back up the slashing DB frequently (a backup sketch follows).
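A minimal backup sketch for step 4, assuming Postgres and illustrative host, role, and database names; schedule it via cron or a systemd timer and ship the dumps off‑host:

pg_dump -h 10.0.1.6 -U w3s -Fc web3signer \
  > /backups/slashing-$(date +%F).dump   # custom-format dump, restorable via pg_restore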

Playbook D — “Plan HSM and PQC rollout without breaking BLS”

  1. Near‑term (2026): use HSM for org PKI, code‑signing, and EL secp256k1 keys; adopt PQC algorithms now (ML‑KEM for key establishment, ML‑DSA/SLH‑DSA for signatures) where supported by firmware (Thales/Entrust). (thalesdocs.com)
  2. Transport: require hybrid PQ TLS for RPC/admin/remote‑signer channels. Track IETF drafts and vendor TLS stacks for named‑group support. (datatracker.ietf.org)
  3. Mid‑term: watch client roadmaps for any BLS‑in‑HSM or enclave projects; today, enterprise‑grade slashing protection and process segregation mitigate much of the BLS‑in‑HSM gap. (docs.web3signer.consensys.io)

Section 6 — Emerging practices we recommend (and why)

  • Co‑locate EL and CL on the same box for the Engine API and use a local JWT secret; avoid cross‑host Engine API unless you have a strong reason. (ethereum.org)
  • Keep ancients cheap: always enable --datadir.ancient to offload the freezer to non‑NVMe storage; it’s explicitly designed for O(1) reads from slow disks. (geth.ethereum.org)
  • Embrace PHE now: prune once on upgraded clients to reclaim hundreds of GB, then set a quarterly maintenance window. Restore specific ranges via era1 only when necessary. (blog.ethereum.org)
  • Pick your archive flavor: if you need “all historical state,” prefer the new path‑based archive (~1.9 TB) and accept today’s eth_getProof limitation; otherwise outsource heavy history to community mirrors or a data provider. (chainrelease.info)
  • PQC where it counts today:
    • TLS: hybrid X25519MLKEM768; verify with Cloudflare’s tools or your own OpenSSL build. (developers.cloudflare.com)
    • PKI: start issuing dual‑stack or PQC‑ready certs where supported; track LAMPS WG drafts for ML‑DSA in X.509 (a test‑issuance sketch follows this list). (datatracker.ietf.org)
    • HSM: plan firmware and CMVP timelines; run pilots on non‑critical services first (code signing, internal APIs). (entrust.com)
  • Raise resilience with DVT: use Obol/SSV for production validator sets where possible to reduce single‑box risk while you mature HSM and PQC posture. (blog.lido.fi)
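For the PKI pilot above, a minimal test‑issuance sketch; it assumes OpenSSL 3.5+ with native ML‑DSA support, and the parameter set and subject name are illustrative:

openssl genpkey -algorithm ML-DSA-65 -out mldsa65.key   # FIPS 204 signature key
openssl req -new -x509 -key mldsa65.key \
  -subj "/CN=pqc-test.internal" -days 30 -out mldsa65.crt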

Section 7 — Decision checklists

Buy sheet for a first production node (EL+CL, light RPC):

  • 8 cores, 32 GB RAM; 2 TB TLC NVMe (datadir), 2–4 TB HDD/SATA SSD (ancients)
  • Geth v1.16+; prune pre‑Merge history once; enable metrics
  • Consensus client co‑located; JWT secret shared; Engine API local only
  • Remote signer (if used) on dedicated host/VPC with PQC TLS, slashing DB on NVMe
  • Backups for keystores and slashing DB; restore runbooks tested

HSM/PQC rollout (12‑month plan):

  • Inventory TLS endpoints; enable hybrid PQ TLS via Cloudflare or OpenSSL OQS
  • Upgrade HSM firmware for ML‑KEM/ML‑DSA/SLH‑DSA where available; validate with CAVP test vectors if applicable
  • Issue PQC‑capable test certificates; monitor IETF LAMPS/TLS drafts
  • Keep validator BLS with remote signer + slashing protection; revisit HSM options annually

Sources

  • Ethereum.org run‑a‑node hardware and sizes; snapshot table for clients, EL+CL disk budgets. (ethereum.org)
  • Geth storage and pruning: freezer/ancients docs; offline prune; history pruning; Pebble; path‑based archive how‑to. (geth.ethereum.org)
  • PHE announcement and ecosystem history mirrors (era1). (blog.ethereum.org)
  • EIP‑4444 (history expiry). (eips.ethereum.org)
  • Geth v1.16 release highlights (history mode, era1). (chainrelease.info)
  • PQC standards (FIPS 203/204/205) and NIST announcements. (nist.gov)
  • TLS hybrid drafts and vendor deployments (Cloudflare, AWS policy notes). (datatracker.ietf.org)
  • HSM vendor firmware PQC support (Thales Luna 7.9.x, Entrust nShield 5 CAVP). (thalesdocs.com)
  • Remote signer architecture and key storage (Web3Signer). (docs.web3signer.consensys.io)

7Block Labs note: if you need a tailored BOM and runbook for your workloads (RPC mix, indexing, validators with DVT, compliance constraints), we’ll model disk growth, prune cadence, and PQC/HSM timelines against your internal SLAs and change‑control windows.

Like what you're reading? Let's build together.

Get a free 30‑minute consultation with our engineering team.
