7Block Labs
Ethereum Development

By AUJay

Ethereum Node Hardware Requirements, Node Requirements, and RPC Node Requirements

Summary: The fastest path to stable, production‑grade Ethereum nodes in 2026 is NVMe-first storage, client combinations matched to workload (full vs archive vs tracing), and RPC hardening behind a proxy. This guide distills current, cited baselines and tuning specifics for EL/CL clients, blob-era bandwidth/storage realities, and practical deployment patterns for startups and enterprises.

Who this guide is for

Decision‑makers evaluating whether to run their own Ethereum infrastructure (vs. managed providers) and architects sizing machines for production EL/CL nodes and high‑throughput RPC. We focus on current, verifiable numbers and practices as of January 2026, not generic “it depends.” (ethereum.org)


What changed since 2024: the blob era and why it matters for hardware

  • EIP‑4844 (“proto‑danksharding,” shipped in Dencun on March 13, 2024) introduced data “blobs” with a short required retention window (~18 days ≈ 4096 epochs). EL nodes don’t store blobs long‑term; CL nodes retain sidecars briefly. This reduces long‑term disk growth but increases short‑term bandwidth and some CL storage needs. (eips.ethereum.org)
  • At Dencun, the spec capped per‑block blob overhead at ~0.75 MB (6 blobs × 128 KB); later upgrades have raised blob counts, but block propagation remains manageable on modern links. Several CLs added transient blob storage; Teku, for example, advised operators to budget ~50 GB (up to ~100 GB worst‑case) for blob files, which do not grow unbounded. (eips.ethereum.org)
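You can observe the retention behavior on your own node via the standard Beacon API blob_sidecars endpoint. A minimal sketch, assuming a CL REST API on 127.0.0.1:5052 (Lighthouse's default port) and jq installed; the old slot number is a placeholder:

    # Count blob sidecars the CL is currently retaining for the head block.
    curl -s http://127.0.0.1:5052/eth/v1/beacon/blob_sidecars/head | jq '.data | length'

    # Blobs older than the ~4096-epoch window are pruned, so a sufficiently old
    # slot should return no sidecars (the exact response varies by client).
    curl -s http://127.0.0.1:5052/eth/v1/beacon/blob_sidecars/<old_slot> | jq '.data | length'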

Implication: plan for solid bandwidth and modest extra CL disk headroom, but you don’t need “blob‑sized archives.”


Baseline Ethereum node hardware: credible, current numbers

  • General guidance (combined EL+CL on one host):

    • Minimum: 2 TB SSD, 8 GB RAM, 10+ Mbit/s.
    • Recommended: fast 2+ TB SSD, 16+ GB RAM, 25+ Mbit/s.
    • Add ~200 GB for consensus (beacon) data, depending on client/features. (ethereum.org)
  • Execution clients (disk footprints and modes):

    • Geth: snap‑synced full nodes have historically run >500–650 GB; plan on ~2 TB in practice. The new path‑based archive mode lands at roughly 2 TB for full history but does not yet support historical eth_getProof (Merkle proofs); a hash‑based archive (20+ TB) is still needed for that. (geth.ethereum.org)
    • Erigon v3: pruned “full” ≈ 920 GB; “minimal” ≈ 350 GB; archive ≈ 1.77 TB on mainnet (Sep 2025 measurements). (docs.erigon.tech)
    • Nethermind: mainnet full on fast disk; 2 TB NVMe recommended; 16 GB RAM/4 cores suggested baseline (archive: 128 GB RAM/8 cores). (docs.nethermind.io)
    • Reth: full ≈ 1.2 TB; archive ≈ 2.8 TB; stable 24+ Mbps recommended; emphasizes TLC NVMe. (reth.rs)
  • Consensus clients (CL) resource snapshots:

    • Teku guidance for full node + validator: 4 cores, 16 GB RAM, 2 TB SSD. Real‑world beacon DB footprints vary by client (~80–170 GB range from community measurements). (docs.teku.consensys.net)
  • Bandwidth: for healthy peer counts and validators, aim ≥50 Mbps; non‑staking nodes can get by with ~25 Mbps (Erigon guidance). EIP‑4844’s max blob load still fits typical links. (docs.erigon.tech)
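As a quick preflight, the short shell sketch below compares a host against the recommended baselines above; the /data mount point and the exact thresholds are assumptions you should adjust to your own layout and client mix:

    #!/usr/bin/env bash
    # Hedged preflight sketch: compare the host against the recommended baselines.
    # Assumes the node database will live under /data.
    cores=$(nproc)
    ram_gb=$(free -g | awk '/^Mem:/ {print $2}')
    disk_gb=$(df -BG --output=avail /data | tail -1 | tr -dc '0-9')

    echo "CPU cores: ${cores}   (4+ workable, 8+ comfortable)"
    echo "RAM:       ${ram_gb} GB (16+ GB recommended)"
    echo "Free disk: ${disk_gb} GB on /data (2000+ GB recommended for EL+CL)"

    if [ "${ram_gb}" -lt 16 ] || [ "${disk_gb}" -lt 2000 ]; then
      echo "WARNING: below the recommended baseline for a combined EL+CL host"
    fi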


Storage that actually syncs: NVMe, not wishful thinking

  • Use TLC NVMe with DRAM and good sustained low‑latency IOPS; avoid DRAM‑less or QLC drives for mainnet EL databases. Community‑maintained “hall of fame”/“hall of blame” data consistently shows budget SSDs failing to keep up with state I/O. (gist.github.com)
  • Practical picks: WD Black SN850X, Seagate FireCuda 530, KC3000, enterprise NVMe with PLP. Keep SSD temps <50°C and mount the DB filesystem with noatime to cut write amplification. (gist.github.com)
  • Cloud gotchas: elastic network volumes (e.g., gp3) with “headline IOPS” can still exhibit higher write latency; local NVMe or RAID0 of local NVMe outperforms for EL DBs. Base’s production notes recommend RAID0 of local NVMe (ext4). (gist.github.com)
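A minimal sketch of that cloud layout: two local NVMe devices striped with mdadm, formatted ext4, and mounted with noatime. The device names (/dev/nvme1n1, /dev/nvme2n1) and the /data mount point are assumptions; adapt them to your instance:

    # RAID0 gives bandwidth, not redundancy: treat the node DB as rebuildable by resync.
    sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1
    sudo mkfs.ext4 -F /dev/md0
    sudo mkdir -p /data
    sudo mount -o noatime /dev/md0 /data
    # Persist across reboots; UUID lookup avoids device-name drift.
    echo "UUID=$(sudo blkid -s UUID -o value /dev/md0) /data ext4 noatime 0 2" | sudo tee -a /etc/fstab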

Node types and when you need them

  • Full/pruned node (the default for Geth and Nethermind; Erigon’s “full” prune mode):
    • Keeps recent state, supports current dapp operations, recent historical queries, and logs. Fastest to sync (snap or staged sync). (geth.ethereum.org)
  • Archive node:
    • Required for rapid random access to historical state at arbitrary old blocks and some analysis use cases. With Geth, choose:
      • Hash‑based archive (legacy): complete historical tries, full eth_getProof at any block; 20+ TB.
      • Path‑based archive (recommended): ~2 TB but no historic eth_getProof (yet). (geth.ethereum.org)
    • Erigon archive: ~1.77 TB; excellent for heavy historical scans/traces via rpcdaemon. (docs.erigon.tech)
  • Tracing node:
    • For parity‑style trace_* or deep debug_* workloads. Erigon and Nethermind expose trace_*; Geth exposes debug_* (with different semantics). Run tracing on a dedicated node to avoid impacting the write path. (docs.erigon.tech)
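To make the semantic difference concrete, here is a hedged sketch of the two call shapes against a local RPC on 127.0.0.1:8545; the transaction hash is a placeholder, and each method only works on a client that exposes that namespace:

    # Parity-style trace (Erigon/Nethermind): returns a flat list of call frames.
    curl -s -X POST http://127.0.0.1:8545 -H 'Content-Type: application/json' \
      -d '{"jsonrpc":"2.0","id":1,"method":"trace_transaction","params":["0x<tx_hash>"]}'

    # Geth-style debug trace: re-executes the transaction under a tracer;
    # callTracer returns a nested call tree rather than a flat trace.
    curl -s -X POST http://127.0.0.1:8545 -H 'Content-Type: application/json' \
      -d '{"jsonrpc":"2.0","id":1,"method":"debug_traceTransaction","params":["0x<tx_hash>",{"tracer":"callTracer"}]}'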

RPC node requirements that matter in production

  • Method coverage vs. client:
    • Need trace_replayTransaction or trace_filter? Prefer Erigon or Nethermind (trace_). Need debug_traceTransaction? Geth supports debug_. Many teams run both classes behind a proxy to cover all workloads. (docs.nethermind.io)
  • Historical proofs:
    • If you must serve eth_getProof at arbitrary historical blocks, you need Geth hash‑based archive or another store that maintains historical tries; Geth path‑based archive currently does not satisfy this. (geth.ethereum.org)
  • Concurrency knobs that actually help:
    • Erigon rpcdaemon: tune --rpc.batch.concurrency, --rpc.batch.limit, --db.read.concurrency; disable HTTP/WS compression for raw throughput. Run rpcdaemon out‑of‑process and pin it to dedicated cores. (github.com)
    • Nethermind: performance guide covers pre‑warming, peer connection rates, and high‑RAM RocksDB options for RPC workloads. Use with care; these trades increase DB size/CPU in exchange for speed. (docs.nethermind.io)
    • Geth: since v1.13+, cache flags no longer influence pruning/DB size in the new path schema; don’t rely on cranking --cache to “fix” growth or OOMs. (blog.ethereum.org)
  • Security:
    • Keep HTTP/WS RPC bound to localhost; expose only via a reverse proxy with auth/TLS and method allow‑lists. Never expose Engine API (8551) publicly; it must be JWT‑authenticated and private to CL. (geth.ethereum.org)
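Two quick checks are worth scripting as part of deployment. A sketch, assuming a Linux host with ss available and <public_ip> standing in for your node's public address:

    # RPC (8545/8546) and Engine API (8551) should be bound to loopback only;
    # anything listening on 0.0.0.0 or [::] here is reachable from outside.
    sudo ss -tlnp | grep -E ':(8545|8546|8551)\b'

    # From a machine OUTSIDE your network, the Engine API should not answer at all
    # (000 = no connection; even a 401 means the port is exposed and must be closed).
    curl -m 5 -s -o /dev/null -w '%{http_code}\n' -X POST http://<public_ip>:8551 \
      -H 'Content-Type: application/json' \
      -d '{"jsonrpc":"2.0","id":1,"method":"engine_exchangeCapabilities","params":[[]]}'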

Ports and networking you must get right

  • Execution P2P: TCP/UDP 30303 (Geth/Besu/Nethermind) and often 30304 for Erigon’s sentry; open/forward for healthy peering. Consensus P2P: 9000 TCP/UDP (Lighthouse/Nimbus/Lodestar), Prysm 13000/TCP + 12000/UDP. (docs.ethstaker.org)
  • RPC defaults: 8545 (HTTP), 8546 (WS). Engine API: 8551 (JWT‑auth, private). Lighthouse REST: 5052 (default). Keep CL/EL APIs private unless intentionally publishing. (setup-guide.web3pi.io)
  • Bandwidth targets:
    • Non‑staking nodes: ≥25 Mbps recommended; validators: ≥50 Mbps. EIP‑4844 increases peak payload but remains within these envelopes. (docs.erigon.tech)
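One way to express those rules on a single host is a ufw policy like the sketch below, which assumes a Geth‑style EL (30303) and a Lighthouse/Nimbus/Lodestar‑style CL (9000); swap ports for Prysm (13000/12000) or an Erigon sentry (30304):

    # Default-deny inbound, then open only P2P; RPC/Engine/REST stay loopback-only.
    sudo ufw default deny incoming
    sudo ufw default allow outgoing
    sudo ufw allow 30303/tcp comment 'EL p2p'
    sudo ufw allow 30303/udp comment 'EL discovery'
    sudo ufw allow 9000/tcp comment 'CL p2p'
    sudo ufw allow 9000/udp comment 'CL discovery'
    # 8545/8546/8551/5052 are intentionally NOT opened here.
    sudo ufw enable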

Fast, safe syncing in 2026

  • Execution layer:
    • Use snap sync (Geth) or staged sync (Erigon/Reth). Reth’s staged sync downloads headers/bodies online and completes state processing mostly offline; plan for a few online hours, then CPU/disk‑bound processing. (geth.ethereum.org)
  • Consensus layer:
    • Use checkpoint sync (weak subjectivity) from trusted endpoints: the node starts from a recent finalized checkpoint, cutting initial sync to minutes. Community tools like checkpointz and quorum checkers exist; verify the checkpoint across multiple providers. (docs.ethstaker.org)
  • Note: Geth also ships “blsync,” a beacon light client integrated with Geth for non‑validator use; not for production money‑handling or validators due to weaker guarantees. (geth.ethereum.org)
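A minimal checkpoint‑sync invocation, assuming Lighthouse, a shared JWT secret at /secrets/jwt.hex, and an example checkpoint provider URL; substitute providers you actually trust and cross‑check the checkpoint they serve:

    # First boot: start the beacon node from a recent finalized checkpoint.
    lighthouse bn \
      --network mainnet \
      --execution-endpoint http://127.0.0.1:8551 \
      --execution-jwt /secrets/jwt.hex \
      --checkpoint-sync-url https://checkpoint-provider.example \
      --http --http-address 127.0.0.1 --http-port 5052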

Practical hardware profiles (2026, mainnet)

  • Solo validator + light RPC on one box (cost‑efficient)
    • 8 cores (modern Xeon/EPYC or Ryzen), 32 GB RAM, 2 TB TLC NVMe (with heatsink), 100/50 Mbps+ Internet, UPS.
    • EL: Geth or Nethermind; CL: Lighthouse/Prysm/Teku. Add ~50 GB CL headroom for blobs. (docs.teku.consensys.net)
  • Read‑heavy RPC node (non‑archive)
    • 16 cores, 64 GB RAM, 2–4 TB TLC NVMe (RAID0 if cloud local NVMe), 1 Gbps NIC.
    • EL: Erigon “full” with separate rpcdaemon tuned for batch concurrency; CL: lightweight client (e.g., Lighthouse). Put RPC behind NGINX/HAProxy with rate limiting and IP allow‑lists. (docs.erigon.tech)
  • Archive + tracing node (analytics/explorer)
    • 16–24 cores, 64–128 GB RAM, 4 TB TLC NVMe (or more), 1 Gbps+.
    • EL: Erigon archive for trace_* and historical scans; add a Geth hash‑archive if you must support historical eth_getProof at arbitrary blocks. (docs.erigon.tech)

Example: production‑ready single‑host mainnet node

  • OS/filesystem:
    • Ubuntu 22.04 LTS, ext4 on TLC NVMe, mount with noatime. Monitor SSD temps; keep <50°C. (ethdocker.com)
  • EL (Erigon full + rpcdaemon):
    • erigon --prune.mode=full --http=false
    • rpcdaemon --datadir /path/to/erigon --http --http.api eth,net,debug,trace,web3,txpool --rpc.batch.concurrency=64 --db.read.concurrency=64 (point --datadir at Erigon’s datadir so the daemon can read the database)
    • Place rpcdaemon behind NGINX (TLS, auth, rate limits). (github.com)
  • CL (Lighthouse):
    • lighthouse beacon --http --http-address 127.0.0.1 --http-port 5052 (keep private); enable checkpoint sync at first boot.
    • Open P2P port 9000 TCP/UDP. (lighthouse-book.sigmaprime.io)
  • Networking:
    • Forward 30303 (and 30304 if Erigon sentry) TCP/UDP and 9000 TCP/UDP from your router/firewall. Keep 8545/8546/8551 internal. (docs.ethstaker.org)
  • Monitoring:
    • Enable Geth/Erigon metrics or Prometheus endpoints and import the standard Grafana dashboards; Geth defaults to 127.0.0.1:6060 for metrics. (geth.ethereum.org)
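Before pointing Prometheus at the host, it is worth scraping the metrics endpoint by hand. A sketch assuming Geth with its default metrics address; Erigon exposes a comparable Prometheus endpoint when metrics are enabled:

    # Add these flags to the existing geth invocation (loopback-only metrics):
    #   --metrics --metrics.addr 127.0.0.1 --metrics.port 6060
    # Then scrape once by hand; Prometheus uses the same path.
    curl -s http://127.0.0.1:6060/debug/metrics/prometheus | head -n 20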

RPC hardening checklist (what we implement for clients)

  • Bind EL/CL APIs to localhost; publish only through a reverse proxy with TLS, auth, and IP allow‑lists. Never expose Engine API (8551) publicly. (geth.ethereum.org)
  • Whitelist only the JSON‑RPC namespaces you truly need (e.g., eth, net, web3). Avoid exposing debug over public HTTP. Prefer WS only for subscriptions that require it. (geth.ethereum.org)
  • Separate tracing and archive workloads onto dedicated nodes. Use client features (Erigon rpcdaemon, Nethermind tuning) to throttle CPU/disk impact during spikes. (github.com)
  • Rate limit and cap batch sizes at the proxy; keep batch requests within server limits. (github.com)
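After publishing through the proxy, probe from outside that only the intended namespaces answer; in this sketch rpc.example.com stands in for your public endpoint:

    # Allowed namespace: should return the latest block number.
    curl -s -X POST https://rpc.example.com -H 'Content-Type: application/json' \
      -d '{"jsonrpc":"2.0","id":1,"method":"eth_blockNumber","params":[]}'

    # Blocked namespace: expect an error (method not found / not available),
    # never a trace payload, if the allow-list and proxy rules are working.
    curl -s -X POST https://rpc.example.com -H 'Content-Type: application/json' \
      -d '{"jsonrpc":"2.0","id":1,"method":"debug_traceBlockByNumber","params":["latest",{}]}'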

EL/CL client combinations: choose by workload

  • High‑throughput reads and deep history:
    • Erigon archive + rpcdaemon for trace_* + log scans; optionally add Geth hash‑archive to satisfy historic eth_getProof users. (docs.erigon.tech)
  • “Typical dapp” RPC and staking on one host:
    • Geth or Nethermind full + Lighthouse/Prysm/Teku, with checkpoint sync on CL, snap/staged sync on EL. (geth.ethereum.org)
  • Fast sync and efficient steady‑state:
    • Reth full for fast staged sync and responsive eth_call/logs; pair with a mainstream CL. (reth.rs)

Client diversity is still a network health objective; consider minority clients where they meet your requirements. (ethereum.org)


Testnets in 2026: where to practice at scale

  • Sepolia remains the recommended application testnet. Holesky is being sunset after Pectra testing; Hoodi launched in March 2025 for validator/infrastructure testing. Plan migrations accordingly if you still rely on Holesky. (blog.ethereum.org)

Cost/performance tuning that moves the needle

  • Disk first, then CPU:
    • EL block processing is usually I/O‑bound; prioritize NVMe latency/consistency over “more vCPU.” Pre‑warm and tune DBs only if you understand the trade‑offs (bigger DB, more CPU). (docs.nethermind.io)
  • Filesystem and kernel:
    • ext4 + noatime, adequate open‑files limits, and avoiding CoW filesystems for EL DBs usually yields fewer surprises than exotic stacks. (ethdocker.com)
  • Cloud layout:
    • Prefer instances with local NVMe; stripe them (RAID0) for bandwidth. Base’s production examples: RAID0 local NVMe on i7i.* with ext4 for both Reth archive and Geth full. (docs.base.org)
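For the open‑files limit specifically, a small sketch of the usual adjustment on a systemd‑managed host; the unit name erigon.service and the limit value are assumptions:

    # Check the limit the node process actually runs with.
    ulimit -n

    # Raise it via a systemd drop-in rather than editing limits.conf:
    sudo systemctl edit erigon.service
    #   [Service]
    #   LimitNOFILE=1048576
    sudo systemctl daemon-reload && sudo systemctl restart erigon.service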

Quick decision matrix

  • You primarily need…
    • Current state, logs, submitting transactions → Full/pruned node on fast TLC NVMe; 2 TB suffices today (plan 4 TB if you want multi‑year runway). (ethereum.org)
    • Historical state at arbitrary blocks (and proofs) → Geth hash‑archive (20+ TB) or a specialized historical store; otherwise use Erigon/Reth archives for most analytics without proofs. (geth.ethereum.org)
    • Debug/trace introspection at scale → Erigon or Nethermind with trace_* on a dedicated tracing node; tune rpcdaemon/DB. (github.com)
    • Validator operations → Any mainstream CL with checkpoint sync, ≥50 Mbps, and extra ~50 GB blob headroom; keep Engine API private, use UPS and monitoring. (docs.teku.consensys.net)

Implementation snippets

  • Geth + CL (secure Engine API):
    • geth --authrpc.addr localhost --authrpc.port 8551 --authrpc.vhosts localhost --authrpc.jwtsecret /path/jwt --http --http.api eth,net,web3
    • CL configured to point at http://localhost:8551 with the same JWT secret. Keep 8545/8551 internal. (geth.ethereum.org)
  • Erigon RPC separation:
    • erigon --prune.mode=full … (no HTTP)
    • rpcdaemon --datadir /path/to/erigon --http --http.api eth,net,debug,trace,web3 --rpc.batch.concurrency=64
    • Put rpcdaemon behind NGINX with TLS/auth and reasonable per‑IP rate limits. (github.com)
  • Lighthouse REST (local only):
    • lighthouse beacon --http --http-address 127.0.0.1 --http-port 5052; keep the REST API loopback‑only and open 9000 TCP/UDP for P2P. (lighthouse-book.sigmaprime.io)
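These snippets assume the EL and CL share one JWT secret. A minimal sketch of generating and wiring it; the /secrets/jwt.hex path is an assumption:

    # Generate a 32-byte hex secret shared by EL and CL.
    openssl rand -hex 32 | tr -d '\n' | sudo tee /secrets/jwt.hex >/dev/null
    sudo chmod 600 /secrets/jwt.hex
    # Geth reads it via --authrpc.jwtsecret /secrets/jwt.hex (as above);
    # Lighthouse points at the same file with:
    #   --execution-endpoint http://127.0.0.1:8551 --execution-jwt /secrets/jwt.hex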

Takeaways (TL;DR)

  • Size for disk first. For most full nodes, 2 TB TLC NVMe works today; aim for 4 TB if you want years of headroom. Erigon/Reth give lean archives (~1.8–2.8 TB) vs. legacy Geth hash‑archive (20+ TB) which you still need for historical eth_getProof. (docs.erigon.tech)
  • Keep RPC private and minimal. Publish only behind a proxy with auth/TLS; separate tracing/analytics from serving writes. (geth.ethereum.org)
  • Expect modest CL disk/bandwidth overhead from blobs (≈18‑day retention); budget ~50 GB transient CL storage and stable 25–50+ Mbps. (newreleases.io)
  • For throughput, Erigon’s rpcdaemon tuning and Reth’s staged architecture deliver meaningful wins when paired with fast NVMe and sane proxy limits. (github.com)

If you want a tailored bill of materials and an HA topology for your workload (traffic shape, trace depth, retention, multi‑region), we’re happy to blueprint it.

