By AUJay
Ethereum Node Hardware Requirements and API Performance Tuning
Summary: What you actually need to run high‑reliability Ethereum nodes in late‑2025, and the concrete knobs that move JSON‑RPC performance. This guide distills current client‑specific disk/CPU/RAM needs and shows exactly how to tune queries, caches, and infrastructure for production workloads.
1) What changed in 2025 (and why it matters for sizing)
- May 7, 2025: Pectra mainnet upgrade landed. In practice for operators this meant new Engine API nuances and client version minimums across EL/CL. (blog.ethereum.org)
- December 3, 2025: Fusaka mainnet upgrade activated PeerDAS, with “Blob Parameter Only” (BPO) follow‑ups that raise blob capacity beyond the original EIP‑4844 3‑target/6‑max design. Short‑term: more rollup blob throughput and different consensus‑layer network/storage profiles. BPO1 (Dec 9, 2025) raises target/max blobs to 10/15; BPO2 (Jan 7, 2026) to 14/21. You must stay current on both EL and CL versions. (blog.ethereum.org)
- EIP‑4844 fundamentals still apply: blobs are ~128 KiB each, stored on the consensus layer and pruned after ~18 days. Operators must plan CL disk for short‑lived blob sidecars while EL storage remains dominated by state/history. (eips.ethereum.org)
What that means for sizing:
- Consensus clients now handle higher blob sampling/serving (post‑PeerDAS) but no longer require every node to download all blob data; bandwidth pressure patterns change while long‑term disk impact remains bounded because blobs are time‑limited. For EL nodes, the big drivers remain state size, schema (e.g., path‑based vs legacy), and whether you need archive‑grade history. (blog.ethereum.org)
2) Minimums vs reality: current hardware baselines by client and node type
Below are current, cited figures as of December 2025. Plan headroom beyond these numbers.
- Ethereum.org baseline (general):
- Minimum: 2 TB SSD, 8 GB RAM, 10+ Mbit/s; Recommended: fast 2+ TB SSD, 16+ GB RAM, 25+ Mbit/s. Expect ~200 GB extra for consensus data. (ethereum.org)
- Geth (EL):
- Snap‑synced full node currently >650 GB; growth ≈14 GB/week; practical disk planning = 2 TB to avoid frequent offline pruning. Use offline prune periodically to reset usage. (geth.ethereum.org)
- New “path‑based” archive mode (v1.16+): ≈2 TB for full history with trade‑offs (e.g., historical eth_getProof not yet supported). Hash‑based (legacy) archive can exceed 12–20 TB. Choose mode based on historical proof/query needs. (geth.ethereum.org)
- Nethermind (EL):
- Suggested: Mainnet full 16 GB RAM / 4 cores; archive 128 GB / 8 cores; disk: use at least a 2 TB fast SSD/NVMe; IOPS ≳10,000 helps both sync and RPC. (docs.nethermind.io)
- Erigon (EL):
- Minimal: ~350 GB; Full: ~920 GB; Archive: ~1.77 TB (Ethereum mainnet). Recommended disk 1–4 TB depending on prune mode; RAM 16–64 GB. Separate rpcdaemon allows better scaling. (docs.erigon.tech)
- Bandwidth guide: non‑staking nodes 25 Mbit/s recommended; staking 50 Mbit/s. (docs.erigon.tech)
- Reth (EL):
- Full ≈1.2 TB; Archive ≈2.8 TB; stable 24 Mbit/s+ suggested. Emphasizes fast TLC NVMe. (reth.rs)
- Consensus clients (CL) sizing snapshot:
- Nimbus: ~200 GB for beacon data; can co‑host with EL (2 TB SSD + 16 GB RAM for both on one machine). (nimbus.guide)
- Teku: 4 cores, 16 GB RAM, 2 TB SSD for a combined EL/CL “full node + validator” baseline. (docs.teku.consensys.io)
Rule of thumb (Dec 2025):
- Single‑box validator + light RPC: 8 cores, 32 GB RAM, 2 TB TLC NVMe with DRAM, plus a second SSD for backups/OS.
- Dedicated RPC (read‑heavy): scale out multiple EL nodes (Erigon/Reth/Geth) behind a proxy; prioritize NVMe with strong sustained write and low latency over raw capacity. (docs.erigon.tech)
3) Storage that won’t bite you at 3 a.m.
- Prefer TLC NVMe with DRAM cache; avoid QLC for write-intensive sync phases; keep SSDs cool to prevent throttling. Nethermind's sync notes and performance guide explicitly call out sustained write speed and cooling. (docs.nethermind.io)
- Public cloud EBS/PD/Azure SSDs are fine—know their ceilings (a quick fio check, sketched at the end of this section, shows what you actually get):
- AWS EBS gp3 now goes up to 80k IOPS and 2,000 MiB/s per volume (as of Sep 26, 2025). If you under‑provision IOPS/throughput, snap/state sync slows or stalls. (aws.amazon.com)
- Google Persistent Disk SSD: up to 80k IOPS and 1,200 MiB/s per instance; queue depth matters for networked storage—target 32–128+ outstanding I/Os when chasing 16k–64k+ IOPS. (docs.cloud.google.com)
- Azure Premium SSD v2: up to 80k IOPS and 1,200 MB/s per disk; 3,000 IOPS and 125 MB/s are “free” baselines, scale above as needed. (learn.microsoft.com)
- For Erigon, don't put the database on ZFS; RAID0 over multiple NVMe for capacity/throughput is acceptable if you handle redundancy at a higher layer. (docs.erigon.tech)
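Before the first sync, it helps to sanity-check a volume against the IOPS and queue-depth figures above. Below is a minimal fio sketch; the /data/fio-test path, 10 GiB test size, and 60-second runtime are placeholders, and the 70/30 read/write mix only roughly mimics sync I/O.

```bash
# 4 KiB random read/write at queue depth 64, roughly mimicking state-sync I/O.
# /data/fio-test is a placeholder path on the target volume; the test file size
# and runtime are arbitrary but large enough to get past SSD write caches.
fio --name=node-disk-check \
    --filename=/data/fio-test \
    --size=10G \
    --rw=randrw --rwmixread=70 \
    --bs=4k --iodepth=64 --numjobs=4 \
    --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting
rm -f /data/fio-test
```

Compare the reported IOPS and completion latencies against what you provisioned; if sustained numbers fall well short, raise provisioned IOPS/throughput (cloud) or pick a better drive (bare metal).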
4) Pruning and state scheme: reclaim space safely
- Geth offline prune (state): stop the node and run geth snapshot prune-state; expect hours, not minutes. Don't wait until the disk is 99% full—prune at ~80% capacity and keep at least ~40 GB free or pruning may fail (a scripted sketch follows at the end of this section). (geth.ethereum.org)
- Geth history prune (PoW bodies/receipts): run geth prune-history to shed large amounts of pre-Merge history you likely don't need on RPC nodes. (geth.ethereum.org)
- Archive needs:
- Need historical state queries at arbitrary blocks and long‑range tracing? Choose archive. Geth’s new path‑based archive uses ~2 TB with caveats for historical proofs; legacy hash‑based archive still exists for full feature parity at much larger sizes. (geth.ethereum.org)
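A minimal sketch of the offline prune routine described above, assuming a systemd unit named geth and a data directory at /data/geth (both placeholders for your setup):

```bash
# Offline state prune: the node must be stopped, and ~40 GB of free space is needed.
set -euo pipefail
df -h /data/geth                                  # confirm free space before starting
sudo systemctl stop geth
geth snapshot prune-state --datadir /data/geth    # expect several hours
sudo systemctl start geth
journalctl -u geth -f                             # watch the node resume and catch back up
```

Run it in a maintenance window; the node serves nothing while the prune runs.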
5) JSON‑RPC throughput: 10 knobs that actually move the needle
- Pick the right EL for the workload
- Erigon’s architecture (staged sync, separate rpcdaemon) shines for heavy historical queries and high concurrency. Run rpcdaemon out‑of‑process and pin to dedicated CPU cores. (docs.erigon.tech)
- Reth emphasizes fast call throughput with modern Rust internals; size disk to ≥1.2 TB full / ~2.8 TB archive. (reth.rs)
- Geth remains the most ubiquitous; use new path‑based state where appropriate and plan pruning routines for steady‑state sizing. (geth.ethereum.org)
- Transport choice matters
- Use HTTP for stateless bursts; use WebSocket for subscriptions/streaming (logs, newHeads). Geth supports HTTP/WS/IPC; expose only what you need. (geth.ethereum.org)
- Batch—but within server limits
- Geth defaults: BatchRequestLimit=1000; BatchResponseMaxSize=25,000,000 bytes—requests larger than this will fail or throttle. Tune client‑side split size accordingly. (geth.ethereum.org)
- Nethermind: JsonRpc.MaxBatchSize (default 1024) and MaxBatchResponseBodySize (default 32 MiB). Set explicit limits to protect the node. (docs.nethermind.io)
- eth_getLogs is expensive—paginate correctly
- Best practice is small block windows and topic narrowing. Many providers enforce 1k–10k block ranges; Besu ships a default hard cap of 1000 via --rpc-max-logs-range. Nethermind lets you cap logs per response (JsonRpc.MaxLogsPerResponse, default 20000). Design pagination at the SDK layer (see the pagination sketch after this list). (besu.hyperledger.org)
- Tracing safely
- debug_trace* calls can stall nodes without archive or when targeting far-past blocks. Use dedicated archive nodes and scoped tracers; disable capture of memory/storage/stack unless required. Expect very large responses. (geth.ethereum.org)
- Geth cache flags: know what they do in 2025
- With Pebble + path-based state, Geth moved most caching outside the Go GC. The old habit of cranking --cache for pruning effect isn't relevant; it no longer influences pruning or DB size. Start with defaults; only increase modestly if you've validated a benefit. (blog.ethereum.org)
- EL–CL Engine API isolation
- Never expose the authenticated Engine API (default port 8551). Use proper JWT secret configuration on EL/CL; bind it to localhost only; firewall everything else. Geth/Teku/Nethermind docs show the JWT secret parameters (see the JWT sketch after this list). (geth.ethereum.org)
- Peer count and snap tuning (Nethermind)
- Network.MaxActivePeers and Network.MaxOutgoingConnectPerSec can accelerate snap/state sync (e.g., raise from default 20 to ~50 outgoing connects/sec), but ISPs may throttle if dial rates are too high. Use judiciously. (docs.nethermind.io)
- RPC role separation and proxies
- Separate “ingest/sync” from “serve RPC.” For Erigon, run erigon and rpcdaemon separately; for Geth/Nethermind, consider a read-only RPC node behind Nginx/HAProxy with HTTP keep-alive, connection reuse, and server-side request body limits. (docs.erigon.tech)
- Monitor the right metrics
- Geth: enable --metrics and scrape the Prometheus endpoint (/debug/metrics/prometheus). Watch p2p traffic, chain insert timings, and RPC batch metrics via dashboards. (geth.ethereum.org)
- Nethermind: monitor nethermind_json_rpc_requests, _errors, _bytes_sent/received, and Engine metrics (forkchoice/newPayload execution time) to spot saturation early. (docs.nethermind.io)
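To make the eth_getLogs pagination advice concrete, here is a minimal client-side sketch using curl and jq against a local node. The block numbers, 2,000-block window, and the zero-address/empty-topics filter are placeholders; substitute your contract address and topics, and keep the window at or below your server's cap.

```bash
#!/usr/bin/env bash
# Walk a block range in fixed-size windows instead of one huge eth_getLogs call.
RPC=http://127.0.0.1:8545
FROM=21000000        # placeholder start block
TO=21100000          # placeholder end block
STEP=2000            # keep this at or below the node/proxy range cap

for ((start=FROM; start<=TO; start+=STEP)); do
  end=$((start + STEP - 1))
  ((end > TO)) && end=$TO
  curl -s -X POST "$RPC" -H 'Content-Type: application/json' -d @- <<EOF | jq -c '.result[]?'
{"jsonrpc":"2.0","id":1,"method":"eth_getLogs","params":[{
  "fromBlock":"$(printf '0x%x' "$start")",
  "toBlock":"$(printf '0x%x' "$end")",
  "address":"0x0000000000000000000000000000000000000000",
  "topics":[]
}]}
EOF
done
```

The same windowed pattern belongs in your SDK layer, so a single wide query from application code never reaches the node.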
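And for the Engine API isolation knob, a minimal sketch of the JWT wiring between Geth and Teku. The /secrets/jwt.hex path and data directory are placeholders, and flag names should be double-checked against your client versions.

```bash
# Generate a shared JWT secret once; keep it readable only by the node users.
openssl rand -hex 32 | sudo tee /secrets/jwt.hex >/dev/null
sudo chmod 600 /secrets/jwt.hex

# Geth: keep the authenticated Engine API bound to localhost on 8551.
geth --datadir /data/geth \
     --authrpc.addr 127.0.0.1 --authrpc.port 8551 \
     --authrpc.jwtsecret /secrets/jwt.hex \
     --http --http.addr 127.0.0.1 --http.api eth,net,web3

# Teku: point the execution endpoint at the same localhost port and secret.
teku --ee-endpoint=http://127.0.0.1:8551 \
     --ee-jwt-secret-file=/secrets/jwt.hex
```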
6) Practical examples (apply these today)
A) Validator + light JSON‑RPC on one box (cost‑efficient, resilient)
- Hardware: 8 cores, 32 GB RAM, 2 TB TLC NVMe with DRAM cache; 25–50 Mbit/s uplink. Add UPS. (ethereum.org)
- Software: Geth or Reth (EL) + Teku or Nimbus (CL). Keep Engine API on localhost; expose RPC only on LAN and least‑privilege namespaces. (docs.teku.consensys.io)
- Maintenance: monthly geth snapshot prune-state and occasional geth prune-history if space trends upward; alert at 70% disk, act at 80%. (geth.ethereum.org)
B) Read‑heavy public RPC (app/API backend)
- Topology: 3–5 EL nodes behind HAProxy/Nginx; prefer Erigon (rpcdaemon out-of-process; see the launch sketch at the end of this example) or Reth for call throughput; keep debug/txpool/admin namespaces disabled. (docs.erigon.tech)
- Settings:
- Geth: respect BatchRequestLimit=1000 and BatchResponseMaxSize=25 MB; at the proxy, enforce client-side pagination for eth_getLogs. (geth.ethereum.org)
- Nethermind: set JsonRpc.MaxBatchSize (≤1024) and JsonRpc.MaxLogsPerResponse (e.g., 5000–10000) to cap blast radius. (docs.nethermind.io)
- Storage: 2 TB NVMe per node; watch write amplification during catch‑up; keep SSDs cooled—Nethermind sync is I/O‑intense. Cloud disks: provision enough IOPS on EBS gp3 (≥8–16k to start; scale to 30–60k on spikes). (docs.nethermind.io)
- Queries: always segment eth_getLogs by block range and topics; do log indexing off-chain if you need wide windows. Besu's recommended hard limit is 1000 blocks per query; mirror that in your API. (besu.hyperledger.org)
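A minimal sketch of the Erigon ingest/serve split referenced in the topology bullet, assuming Erigon's conventional private API port (9090) and illustrative data paths and core pinning; verify flag names against your Erigon version.

```bash
# Ingest/sync process (run as its own service): no public HTTP, only the private
# API interface that rpcdaemon connects to.
erigon --datadir /data/erigon \
       --prune.mode=full \
       --private.api.addr=127.0.0.1:9090 \
       --http=false

# Serving process: out-of-process rpcdaemon pinned to dedicated cores, read-only APIs.
taskset -c 8-15 rpcdaemon \
       --datadir /data/erigon \
       --private.api.addr=127.0.0.1:9090 \
       --http.addr=0.0.0.0 --http.port=8545 \
       --http.api=eth,net,web3
```

Keeping rpcdaemon separate lets you restart or scale the serving side without touching sync, and the core pinning keeps heavy RPC bursts from starving block import.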
C) Deep analytics and full historical tracing
- Choose archive:
- Erigon archive ≈1.77 TB (fast for historical reads).
- Geth path-based archive ≈2 TB with limitations on historical Merkle proofs; use legacy hash-based archive (12–20+ TB) if you require historical eth_getProof. (docs.erigon.tech)
- Run tracing on worker nodes; keep public RPC isolated. Use tuned timeouts and tracers that disable memory/stack capture unless required to keep payloads manageable. (therpc.io)
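A minimal sketch of a scoped trace against such a worker node, using Geth's built-in callTracer with an explicit timeout so responses stay bounded (the transaction hash is a placeholder):

```bash
# callTracer returns the call tree without per-step memory/stack/storage dumps,
# which keeps responses far smaller than a raw struct-log trace.
curl -s -X POST http://127.0.0.1:8545 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"debug_traceTransaction",
       "params":["0x<txhash>",{"tracer":"callTracer","timeout":"30s"}]}' | jq .
```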
7) Consensus‑layer specifics post‑EIP‑4844 and Fusaka
- With EIP‑4844, blobs live on the CL for ~4096 epochs (~18 days). Even before PeerDAS, this keeps CL disk manageable; with PeerDAS (Fusaka) nodes sample rather than fully download all blob data, shifting bandwidth/storage characteristics while enabling higher blob capacity via BPO forks. (eips.ethereum.org)
- Capacity constants you should know:
- Blob size: 4096 field elements × 32 bytes = 128 KiB.
- Pre-Fusaka target/max per block: 3/6; post-Fusaka BPOs raise these progressively (10/15 then 14/21). Expect higher L2 data rates without EL disk growth (a back-of-envelope bandwidth estimate follows below). (eips.ethereum.org)
- Operationally: ensure your CL (Lighthouse/Teku/Nimbus/Prysm) is on the client version recommended in EF posts for the active fork; pay attention to REST/metrics surge during BPO ramp‑ups. (blog.ethereum.org)
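As a quick check on those constants, the raw blob data rate at the BPO2 target is small relative to the 25–50 Mbit/s uplinks recommended elsewhere in this guide, and PeerDAS sampling means most nodes download only a fraction of it:

```bash
# 14 blobs/block x 128 KiB/blob, one block per 12-second slot (BPO2 target).
echo "scale=1; 14 * 128 / 12" | bc                    # ~149.3 KiB/s of new blob data
echo "scale=2; 14 * 131072 * 8 / 12 / 1000000" | bc   # ~1.22 Mbit/s equivalent
```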
8) OS and network hygiene for RPC nodes
- Keep file‑descriptor limits high (e.g., 100k) and use HTTP keep‑alive at the proxy.
- Linux network stack: increase the listen backlog (net.core.somaxconn ≥1024–4096) and SYN backlog to handle bursts; set sane TCP buffer ceilings. Validate in staging—these are generic high-throughput webserver tunings, but they apply equally to JSON-RPC (see the sysctl sketch at the end of this section). (docs.pingidentity.com)
- Co-locate Prometheus exporters and alert on: RPC errors/s, P95/P99 method latency (especially eth_call, eth_getLogs, debug_trace*), disk busy time, and CL head lag >1 slot. Geth and Nethermind expose rich metrics to wire into Grafana. (geth.ethereum.org)
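A minimal sysctl sketch for the backlog and buffer ceilings above; the values are illustrative starting points, not tuned recommendations, so validate in staging first.

```bash
# /etc/sysctl.d/90-rpc-node.conf — illustrative values, not universal defaults.
cat <<'EOF' | sudo tee /etc/sysctl.d/90-rpc-node.conf
net.core.somaxconn = 4096             # listen backlog for RPC/proxy sockets
net.ipv4.tcp_max_syn_backlog = 8192   # absorb SYN bursts from many short-lived clients
net.core.rmem_max = 16777216          # TCP receive buffer ceiling (16 MiB)
net.core.wmem_max = 16777216          # TCP send buffer ceiling (16 MiB)
EOF
sudo sysctl --system                  # reload all sysctl.d fragments
ulimit -n                             # verify the file-descriptor limit too
```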
9) Quick client‑by‑client tuning checklist
- Geth
- Prefer path-based state for new deployments needing compact history; schedule periodic snapshot prune-state and prune-history.
- Respect batch limits and keep --http.api minimal; don't expose the Engine API (see the launch sketch after this checklist). (geth.ethereum.org)
- Nethermind
- Tune Network.MaxActivePeers/MaxOutgoingConnectPerSec cautiously for faster snap and block import; set JsonRpc.MaxBatchSize/MaxLogsPerResponse to protect RPC.
- If RPC throughput matters more than validator latencies, consider disabling block pre-warm (Blocks.PreWarmStateOnBlockProcessing=false). (docs.nethermind.io)
- Erigon
- Choose --prune.mode (minimal/full/archive) intentionally; run rpcdaemon as a separate process for scale; avoid ZFS. (docs.erigon.tech)
- Reth
- Size disk to ≥1.2 TB full (TLC NVMe); for heavy APIs, deploy multiple stateless RPC nodes behind a proxy and warm OS caches with realistic load profiles. (reth.rs)
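Pulling the Geth checklist items together, here is a minimal launch sketch for a read-only RPC node. Paths are placeholders and the batch-limit flag names are assumed from recent releases; confirm against geth --help for your version.

```bash
# Read-only RPC node: path-based state, minimal API surface, metrics on localhost.
geth --datadir /data/geth \
     --state.scheme path \
     --http --http.addr 0.0.0.0 --http.port 8545 \
     --http.api eth,net,web3 \
     --http.vhosts '*' \
     --rpc.batch-request-limit 1000 \
     --rpc.batch-response-max-size 25000000 \
     --metrics --metrics.addr 127.0.0.1
# The authenticated Engine API stays on its default localhost binding; do not expose it.
# Set --http.vhosts to your proxy's Host header instead of '*' where practical.
```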
10) Sane starting BOMs (bare metal)
- “All-in-one validator + light RPC”
- CPU: 8C/16T modern x86
- RAM: 32 GB ECC
- Disk: 2 TB TLC NVMe with DRAM (≥10k IOPS sustained), plus 500 GB SSD for OS/logs
- Network: 25–50 Mbit/s, wired; UPS
- Software: Geth/Reth + Teku/Nimbus; Prometheus/Grafana
- Maintenance: prune monthly; patch on EF release cadence. (ethereum.org)
- “Public RPC cluster (read-heavy)”
- 3× EL nodes (Erigon or Reth) each: 8–16 cores, 32–64 GB RAM, 2 TB NVMe
- HAProxy/Nginx with connection pooling, request size caps, and per-IP rate limits; forced pagination for eth_getLogs.
- Cloud variant: single 4–8 TB consolidated EBS gp3 (IOPS ≥20–40k) per node, or multiple smaller NVMe volumes; ensure queue depth at the OS/driver is high enough during sync (see the provisioning sketch below). (aws.amazon.com)
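For the cloud variant, a minimal AWS CLI sketch for provisioning a gp3 volume with explicit IOPS and throughput; the size, availability zone, and figures are placeholders, and gp3 lets you scale IOPS/throughput independently of capacity.

```bash
# 2 TB gp3 volume with provisioned IOPS/throughput well above the free baseline.
aws ec2 create-volume \
  --volume-type gp3 \
  --size 2000 \
  --iops 16000 \
  --throughput 1000 \
  --availability-zone us-east-1a \
  --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=el-rpc-node-1}]'
```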
11) Common failure patterns we see (and how to avoid them)
- “My node is synced but RPC is timing out on wide log scans.”
- Cause: huge eth_getLogs windows. Fix: cap range (1k–5k blocks typical), narrow topics, and parallelize; Besu defaults to 1000. (besu.hyperledger.org)
- “Disk looks fine… then sync falls off a cliff.”
- Cause: thermal throttling / low sustained writes of consumer SSDs. Fix: enterprise TLC NVMe with heat sinks; monitor SSD temps and throttle points; for clouds, increase provisioned IOPS. (docs.nethermind.io)
- “Archive queries are fast but eth_getProof for old blocks fails.”
- Cause: Geth path‑based archive doesn’t yet support historical proofs. Fix: legacy hash‑based archive or Erigon for historical state reads (proof needs may still require hash‑based). (geth.ethereum.org)
12) Final guidance for decision‑makers
- Treat EL disk as your primary constraint; use TLC NVMe with headroom, and plan an operational routine around pruning (Geth) or prune modes (Erigon). (geth.ethereum.org)
- Separate concerns: run dedicated RPC nodes; keep Engine API private; cap batch sizes and log ranges; and alert on JSON‑RPC error rates and tail latencies. (geth.ethereum.org)
- Stay on EF‑recommended client versions at each fork (Pectra → Fusaka → BPO phases) and budget brief maintenance windows; CL/EL mismatches are the fastest way to downtime. (blog.ethereum.org)
If you want 7Block Labs to validate your exact workload, we’ll replay your production RPC mix against a short‑listed client matrix (Geth/Erigon/Reth; Nethermind where relevant), then produce a right‑sized BOM and proxy policy with measured P95/P99 latencies and safe concurrency caps for each method.
Like what you're reading? Let's build together.
Get a free 30‑minute consultation with our engineering team.

