By AUJay
Ethereum Validator Hardware Requirements and Ethereum RPC Dedicated Nodes for High-Throughput Workloads
Short description: What it really takes in 2026 to run robust Ethereum validators and to serve high‑throughput RPC traffic—hardware you actually need, client-specific disk footprints, EIP‑4844 blob implications, and concrete blueprints for production-grade, dedicated RPC clusters.
Why this guide
Decision-makers ask two separate questions that often get conflated:
- What hardware is required to reliably operate an Ethereum validator?
- What’s the right way to provision dedicated Ethereum RPC nodes for heavy read/tracing workloads?
Those are different roles with different bottlenecks and failure modes. Below we separate them, add post‑Dencun realities, and give concrete, client-specific numbers and build blueprints that work in 2026.
What changed recently: blobs, bandwidth, and storage
- Dencun (Deneb/Cancun) activated on mainnet on March 13, 2024, introducing EIP‑4844 “blob” transactions. Blobs are ephemeral data sidecars used mainly by L2s; they reduce L2 fees and modestly raise CL bandwidth/storage needs. (blog.ethereum.org)
- Each blob is 128 KB; the protocol targets 3 blobs per block (max 6). Blobs persist ~4096 epochs (~18 days), adding roughly 48 GiB of rolling CL storage on average, ~96 GiB at the max. This is a CL change—execution storage footprints didn’t jump. (docs.teku.consensys.net)
- The EIP‑4844 spec notes worst‑case additional bandwidth per block is under ~0.75 MB; sustained load is far lower because blobs expire quickly compared to execution history. (eip.directory)
Implication: validators today need a bit more bandwidth headroom and 50–100 GiB of extra CL disk for blob sidecars, but your execution client footprints and RPC patterns remain the primary sizing drivers.
Role 1: Ethereum validator node (consensus + execution)
A validator machine runs both:
- One execution client (EL) such as Geth, Nethermind, Besu, Erigon, or Reth.
- One consensus client (CL) such as Teku, Lighthouse, Prysm, Nimbus, or Lodestar.
You’ll also open the P2P ports for EL and CL to peer properly. Defaults:
- EL: 30303 TCP/UDP (Geth, Nethermind, Besu); Erigon also uses 30304 in some setups.
- CL: 9000 TCP/UDP (Lighthouse, Teku, Nimbus, Lodestar); Prysm uses 13000/TCP and 12000/UDP. (docs.ethstaker.org)
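As a hedged illustration, here is what opening those defaults can look like with ufw on a Linux host (Geth plus a 9000-based CL assumed; swap in Prysm's ports if relevant, and handle your router's port forwarding separately):

```bash
# Minimal sketch: allow default EL/CL P2P ports with ufw.
# Assumes Geth + Lighthouse/Teku; use 13000/tcp and 12000/udp instead of 9000 for Prysm.
sudo ufw allow 30303/tcp comment 'EL p2p'
sudo ufw allow 30303/udp comment 'EL discovery'
sudo ufw allow 9000/tcp  comment 'CL p2p'
sudo ufw allow 9000/udp  comment 'CL discovery'
# Never expose JSON-RPC (8545) or the Engine API (8551) to the internet.
sudo ufw deny 8545/tcp
sudo ufw deny 8551/tcp
sudo ufw enable && sudo ufw status verbose
```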
Current, cited hardware baselines (2025–2026)
- Ethereum.org “run a node” baseline: 2+ TB SSD, 8 GB RAM minimum; recommended 2+ TB fast SSD, 16+ GB RAM, ≥25 Mbit/s. EL disk by client (snap/“full” vs archive): Besu ~800 GB snap; Geth ~500 GB snap; Nethermind ~500 GB snap; Erigon/Reth are archive‑oriented with ~2.5 TB (Erigon) and ~2.2 TB (Reth) archive footprints. Add ~200 GB for consensus data. (ethereum.org)
- Geth specifics: snap‑synced full node >650 GB; growth ~14 GB/week; plan 2 TB to avoid frequent offline pruning. Archive >12 TB in legacy “hash‑based” scheme; pruning periodically brings full node back ~650 GB. (geth.ethereum.org)
- Nethermind: mainnet full nodes run well on 16 GB RAM/4 cores; archive suggests 128 GB/8 cores; disk: use fast SSD/NVMe; 2 TB is “comfortable” for mainnet+CL. (docs.nethermind.io)
- Reth: full node ~1.2 TB; archive ~2.8 TB; 8–16 GB RAM typical; stable ≥24 Mbps bandwidth. (reth.rs)
- Besu: snap+pruned default; documented sync time and disk usage around 800 GB with Bonsai; JVM min 8 GB; NVMe recommended for validators. (besu.hyperledger.org)
Consensus side:
- Teku lists a practical validator baseline: 4 cores @2.8 GHz, 16 GB RAM, SSD with 2 TB free. (docs.teku.consensys.net)
- Running Lighthouse’s slasher is optional but adds ~256 GB SSD plus more CPU/RAM; recommended for experts. (lighthouse-book.sigmaprime.io)
Blob sidecars (post‑Dencun):
- Expect an extra ~48 GiB average (max ~96 GiB) rolling CL storage for blobs; bandwidth uptick is modest (roughly tens of KB/s sustained), but give yourself headroom. (docs.teku.consensys.net)
Practical interpretation:
- For a single mainnet validator today, a quiet but robust build is 4–8 CPU cores, 32 GB RAM, 2–4 TB TLC NVMe, and ≥50/25 Mbps down/up. If you plan to run a slasher, max peers, or heavy local RPC, raise RAM and disk. Proposed EIP‑7870 suggests 4 TB NVMe and 32–64 GB RAM for future headroom, but treat it as draft guidance. (eips.ethereum.org)
Execution-client specifics you should care about
- Geth pruning and databases:
- Snap‑sync full nodes grow ~14 GB/week; run an offline prune (geth snapshot prune-state) periodically; a maintenance sketch follows this list. “History pruning” is also documented for older PoW bodies; check your version. (geth.ethereum.org)
- Pebble can be used via --db.engine=pebble; Geth has been evolving toward a path‑based state scheme where pruning is built‑in and the cache flag matters less than it used to. (geth.ethereum.org)
- Nethermind performance knobs:
- With ≥32 GB RAM, enlarge pruning and RocksDB buffers; with ≥128 GB or even ≥350 GB, you can shift to MMAP/no‑compression profiles for lower latency (RPC/attestations). These are documented tunables. (docs.nethermind.io)
- Erigon:
- Very efficient footprints; current repo notes around 1.1 TB (full) and ~1.6 TB (archive) for mainnet on Erigon 3 in 2025. Use --http.api to enable trace if you need trace RPC. (github.com)
- Reth:
- Clear full/archive disk guidance and optional history indexes; you can enable account/storage history indexing to accelerate specific RPC access patterns. (reth.rs)
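To make the Geth maintenance point above concrete, here is a minimal sketch of an offline prune cycle, assuming Geth runs under a systemd unit named geth with the default datadir (both are assumptions, not prescriptions):

```bash
# Minimal sketch: offline state prune for a snap-synced Geth full node.
# Assumes a systemd service named "geth" and enough free disk for pruning to complete.
sudo systemctl stop geth            # the node must be offline while pruning
geth snapshot prune-state           # reclaims stale state; can take hours on slower disks
sudo systemctl start geth

# Optional: evaluating Pebble / path-based state usually means a fresh resync, e.g.:
# geth --datadir /data/geth-pebble --db.engine=pebble --state.scheme=path
```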
Network and ports checklist
- Forward EL 30303 TCP/UDP and your CL’s default ports (9000 TCP/UDP for Lighthouse/Teku; 13000/TCP+12000/UDP for Prysm) to achieve healthy peer counts. (docs.ethstaker.org)
MEV‑Boost (PBS) considerations for validators
- MEV‑Boost lets validators source blocks from competitive builders via relays; it is widely used to improve returns. Configure your CL (e.g., Teku) with --builder-endpoint or use mev-boost to multiplex relays. Understand liveness risks and local fallback. (docs.teku.consensys.net)
- The proposer flow and relay interactions are documented (register validator, obtain header, submit blinded block). Keep multiple relays and use the client’s “circuit breaker” fallback so the node proposes locally if relays misbehave. (docs.flashbots.net)
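A sketch of what a multi-relay setup can look like (the relay URLs are placeholders, and the CL flag name should be verified against your client version):

```bash
# Minimal sketch: run mev-boost against several relays, then point the CL at it.
# Relay URLs are placeholders; use the relays you actually trust.
mev-boost \
  -mainnet \
  -relay-check \
  -relays "https://relay-a.example.org,https://relay-b.example.org,https://relay-c.example.org" \
  -addr 127.0.0.1:18550

# Consensus-client side (Teku shown as one example; other clients use --builder or similar):
# teku --builder-endpoint=http://127.0.0.1:18550 ...
```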
Two validated hardware profiles (2026)
- Minimal-but-safe validator (solo, 1–2 keys):
- 4 cores, 32 GB RAM, 2 TB TLC NVMe (not QLC), ≥50/25 Mbps, UPS, forwarded ports; Geth+Teku or Nethermind+Lighthouse. Plan monthly prune (Geth) and 50–100 GiB extra for blob sidecars. (geth.ethereum.org)
- Enterprise validator (multiple keys, MEV‑Boost, dashboards):
- 8–16 cores, 64 GB RAM, mirrored 4 TB TLC NVMe, dual NICs, redundant power; run a remote signer (Web3Signer) with slashing protection DB and enable builder endpoints across multiple relays. (docs.web3signer.consensys.net)
Role 2: Dedicated Ethereum RPC nodes for high‑throughput workloads
A validator machine should not be your high‑QPS API box. Heavy RPC (eth_call, eth_getLogs, debug_*, trace_*) demands its own, separately scaled fleet.
Workload taxonomy
- Read‑only transactional APIs: eth_getBlockBy… / eth_getTransaction… / eth_getBalance / eth_call
- Event scanning: eth_getLogs across large ranges; subscription streaming via WebSocket eth_subscribe
- Tracing and forensics: debug_traceTransaction, trace_* filtering and replay
- Mempool/watchers: txpool_* for pending transactions, plus WS subscriptions; tune txpool only on boxes that need it. (geth.ethereum.org)
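To see why these families load a node so differently, here is a plain curl sketch comparing a single eth_call against a bounded eth_getLogs range scan (the RPC URL, addresses, and block range are placeholders):

```bash
RPC=http://localhost:8545   # assumption: local HTTP JSON-RPC enabled

# Cheap read: a single eth_call against the latest state.
curl -s -X POST "$RPC" -H 'Content-Type: application/json' -d '{
  "jsonrpc":"2.0","id":1,"method":"eth_call",
  "params":[{"to":"0x0000000000000000000000000000000000000000","data":"0x"},"latest"]
}'

# Heavy read: a log scan over a block range; keep ranges bounded and paginate client-side.
curl -s -X POST "$RPC" -H 'Content-Type: application/json' -d '{
  "jsonrpc":"2.0","id":2,"method":"eth_getLogs",
  "params":[{"fromBlock":"0x1500000","toBlock":"0x1500FFF","topics":[]}]
}'
```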
Why client choice matters for RPC
- Geth
- Stable, broad compatibility; GraphQL endpoint can combine multiple fields into a single query to cut round trips and lower overhead for complex dashboards. (geth.ethereum.org)
- Nethermind
- Strong tracing support (debug_ and trace_), plus documented performance tuning for large RAM (file warmer, larger pruning caches, RocksDB options). (docs.nethermind.io)
- Erigon
- Lean footprints and first‑class trace namespace (trace_callMany, trace_block, trace_filter, etc.). Also provides ots_ APIs for Otterscan acceleration. Use --http.api eth,erigon,trace on dedicated tracing nodes. (docs.erigon.tech)
- Reth
- Modern Rust client with fast path; official guidance shows ~1.2 TB full / ~2.8 TB archive. Offers configurable indexing stages (account/storage history) and clear docs on how pruning affects RPC availability. Great candidate for low‑latency read APIs. (reth.rs)
- Besu
- Solid JVM client; snap+pruned default with ~800 GB footprint; NVMe recommended for high‑throughput RPC. (besu.hyperledger.org)
Reference RPC spec and test suite: the canonical Execution JSON‑RPC is standardized and conformance‑tested; stick to spec’d methods for portability. (ethereum.github.io)
Architecture patterns that scale
- Separate pools per capability:
- Read pool: 2–3 Reth or Geth nodes behind an L4/L7 load balancer (HTTP+WS), with WS sticky sessions for subscriptions.
- Logs/scan pool: Erigon archive or Reth with history indexes, tuned for eth_getLogs and range scans.
- Trace pool: Erigon with the trace namespace enabled; keep it isolated so tracing spikes don’t degrade standard read latency. (docs.erigon.tech)
- Prefer WebSockets for subscription/event workloads and GraphQL where you benefit from “one query/one round trip” semantics. Geth’s WS and GraphQL docs cover the necessary flags. (geth.ethereum.org)
- Keep the Engine API private: consensus↔execution uses the authenticated Engine API on localhost:8551 with a JWT secret. Never expose it publicly. (geth.ethereum.org)
- Peer counts: cap --maxpeers reasonably on RPC boxes to reduce P2P noise; validators can hold higher peer counts for resilience. (geth.ethereum.org)
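The Engine API and peer-cap points above reduce to a short launch sketch (flags are Geth's and Lighthouse's; paths and the peer count are illustrative assumptions):

```bash
# Minimal sketch: keep the Engine API local and JWT-protected, cap peers on an RPC-only box.
openssl rand -hex 32 | sudo tee /secrets/jwt.hex >/dev/null

geth \
  --authrpc.addr 127.0.0.1 --authrpc.port 8551 \
  --authrpc.jwtsecret /secrets/jwt.hex \
  --http --http.addr 0.0.0.0 --http.api eth,net,web3 \
  --maxpeers 25    # lower than a validator would run

# The CL on the same host uses the same secret, e.g.:
# lighthouse bn --execution-endpoint http://127.0.0.1:8551 --execution-jwt /secrets/jwt.hex
```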
Concrete hardware for a high‑throughput RPC box (per node)
- CPU: 8–16 cores, high base clocks; memory: 32–64 GB.
- Storage: TLC NVMe (not QLC), 4 TB for archive or 2 TB for full, with ample free space to avoid SSD performance cliffs; NVMe latency matters more than headline IOPS. Nethermind highlights response time/IOPS sensitivity. (docs.nethermind.io)
- Network: 1 Gbps for busy public endpoints; private enterprise clusters do fine at 100–500 Mbps depending on workload mix.
Example: a three‑tier RPC cluster
- Tier A (read hot path): 3× Reth full nodes (1.2 TB each), HTTP+WS, auto‑healed; enable account/storage history indexes only if your product needs “what block did this key change?” queries. (reth.rs)
- Tier B (logs/indexing): 2× Erigon archive nodes with --http.api eth,erigon,trace and Otterscan support if you operate an internal block explorer. (docs.erigon.tech)
- Tier C (deep traces): 2× Erigon tracing boxes on isolated subnets for ad‑hoc debug_traceTransaction/trace_replayBlockTransactions bursts. Use request budgets and batch windows. (docs.erigon.tech)
Add a small Geth GraphQL node if your dashboards benefit from single‑query aggregation. (geth.ethereum.org)
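If you add that GraphQL node, the payoff is one round trip for multi-field reads. A rough sketch follows (field names are taken from Geth's GraphQL schema, but verify against your version):

```bash
# Geth must be started with both --http and --graphql for the /graphql endpoint to exist:
# geth --http --graphql ...

# One query, several fields: latest block header data in a single round trip.
curl -s -X POST http://localhost:8545/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query":"{ block { number gasUsed gasLimit baseFeePerGas } }"}'
```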
Tuning that actually moves the needle
- Geth
- Use snap sync; prefer OS cache and default cache splits; prune history/state offline on a cadence. If evaluating Pebble or path‑scheme, resync on a fresh datadir and drop legacy cache habits. (geth.ethereum.org)
- Nethermind
- If memory-rich, adopt the ≥32 GB or ≥128 GB profiles; enable file warmer; on ultra‑RAM systems (≥350 GB), MMAP/no‑compression profiles cut CPU per request. (docs.nethermind.io)
- Erigon
- Keep RPC daemon namespaces minimal per host; enable trace only on boxes that need it; consider archive mode only when required by product features; see the sketch after this list. (docs.erigon.tech)
- Reth
- Understand pruning trade‑offs; pruning sender/tx lookup/receipts/history disables corresponding historical RPC calls—plan indexes and retention around your endpoints’ SLAs. (reth.rs)
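The per-role idea called out in the Erigon bullet above comes down to different flag sets per box. A hedged sketch, with data directories and namespace lists as assumptions:

```bash
# Read-path box (Reth full node): standard namespaces only, HTTP + WS.
reth node \
  --datadir /data/reth \
  --http --http.api eth,net,web3 \
  --ws --ws.api eth

# Tracing box (Erigon): trace namespace enabled, kept off the read pool entirely.
erigon \
  --datadir /data/erigon \
  --http.api eth,erigon,trace
```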
Method placement strategy
- Place eth_call/eth_getBalance/eth_getTransaction* on “read” nodes.
- Place eth_getLogs wide‑range scans and all trace/debug on isolated “heavy” nodes.
- Keep txpool on a dedicated mempool watcher node; the txpool namespace is non‑standard and heavy. (geth.ethereum.org)
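In practice this placement shows up in which base URL each method family is sent to. A toy sketch with hypothetical internal hostnames (read-pool.internal, trace-pool.internal) and a placeholder transaction hash:

```bash
READ_POOL=http://read-pool.internal:8545     # assumption: hostname of the light read pool
HEAVY_POOL=http://trace-pool.internal:8545   # assumption: hostname of the isolated trace pool

# Light methods go to the read pool...
curl -s -X POST "$READ_POOL" -H 'Content-Type: application/json' -d '{
  "jsonrpc":"2.0","id":1,"method":"eth_getBalance",
  "params":["0x0000000000000000000000000000000000000000","latest"]
}'

# ...while traces only ever hit the heavy pool.
curl -s -X POST "$HEAVY_POOL" -H 'Content-Type: application/json' -d '{
  "jsonrpc":"2.0","id":2,"method":"debug_traceTransaction",
  "params":["0x<tx-hash>"]
}'
```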
Security and reliability patterns
- Remote signing: run a dedicated Web3Signer with slashing protection DB for consensus keys; it lets you fail over execution/consensus clients without risking double‑sign. (docs.web3signer.consensys.net)
- MEV‑Boost liveness: always configure multiple relays and enable your client’s circuit breaker fallback to local block building. Test relay outages in staging. (docs.flashbots.net)
- Keep Engine API private; never expose 8551 beyond your host/cluster boundary. (geth.ethereum.org)
- Monitoring: use client dashboards; Geth’s Grafana panels show P2P ingress/egress, txpool saturation, and peer health; build SLOs around p50/p95 RPC latency per method family. (geth.ethereum.org)
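A rough sketch of the remote-signing wiring described above (flag names follow Web3Signer and Teku documentation; the database URL, hostnames, and key paths are assumptions, not a hardened config):

```bash
# Web3Signer with a Postgres-backed slashing-protection database (sketch only).
web3signer \
  --key-store-path=/keys \
  eth2 \
  --network=mainnet \
  --slashing-protection-db-url="jdbc:postgresql://db.internal/web3signer" \
  --slashing-protection-db-username=web3signer \
  --slashing-protection-db-password="${W3S_DB_PASSWORD}"

# The validator client points at the signer instead of holding keys locally (Teku shown):
# teku validator-client \
#   --validators-external-signer-url=http://signer.internal:9000 \
#   --validators-external-signer-public-keys=external-signer
```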
Port, bandwidth, and blob planning quick math
- Ports to open/forward:
- EL: 30303 TCP/UDP
- CL: 9000 TCP/UDP (most) or 13000/TCP+12000/UDP (Prysm) (docs.ethstaker.org)
- Blob storage (rolling):
- Average: 3 blobs × 128 KB × 32 blocks/epoch × 4096 epochs ≈ 48 GiB
- Max: 6 blobs × 128 KB × 32 × 4096 ≈ 96 GiB
- Treat this as additional CL space headroom. (docs.teku.consensys.net)
- Bandwidth:
- Target ≥50/15 Mbps for validators and ≥100 Mbps for RPC boxes serving public traffic; the blob addition is modest compared to P2P and RPC egress, but reserve margin. (geth.ethereum.org)
“Do this, not that” validator checklist (2026)
- Do:
- Use TLC NVMe ≥2 TB; plan monthly pruning if Geth full; allocate +50–100 GiB for blobs; keep OS and clients updated. (geth.ethereum.org)
- Run a consensus client and execution client on the same host, Engine API private with JWT secret. (geth.ethereum.org)
- Forward P2P ports and verify peers; poor peering hurts attestation inclusion and sync. (docs.ethstaker.org)
- If using MEV‑Boost, configure multiple relays and a circuit breaker fallback. (docs.flashbots.net)
- Don’t:
- Don’t expose JSON‑RPC on your validator host to the public internet; don’t run heavy trace/debug there.
- Don’t depend on QLC SSDs or networked/capped disks for state DBs; latency spikes cause missed duties. (docs.nethermind.io)
Example build recipes
- Solo validator (quiet home/office, 1–4 validators)
- CPU: 6C/12T
- RAM: 32 GB
- Disk: 2 TB TLC NVMe
- Clients: Geth + Teku, MEV‑Boost with 2–3 relays, UPS and 4G/5G failover
- Maintenance: prune Geth monthly; verify CL blob store room (~50–100 GiB) post‑Dencun. (geth.ethereum.org)
- Enterprise validator (dozens of keys)
- CPU: 8–16C
- RAM: 64 GB
- Disk: mirrored 4 TB TLC NVMe
- Clients: Nethermind + Lighthouse; Web3Signer with slashing protection DB; MEV‑Boost across multiple relays; on‑box Prometheus/Grafana. (docs.nethermind.io)
- High‑throughput RPC stack (internal product APIs)
- Read: 3× Reth full nodes (1.2 TB each), HTTP+WS, sticky WS; p95 <100 ms targets.
- Logs/traces: 2× Erigon archive with trace enabled, isolated autoscaling.
- Dashboards: 1× Geth with GraphQL for multi‑field queries.
- Strictly private Engine API; LB health checks per method class. (reth.rs)
Final notes on client selection and diversity
The network is healthiest when operators diversify EL and CL clients. For enterprise fleets, deliberately split across at least two ELs and two CLs. Keep your validator machines minimal and boring; push risky performance tuning and heavy APIs to separate RPC nodes.
If you want help converting the above into a migration plan, 7Block Labs can benchmark your exact method mix (eth_call vs logs vs trace) and right‑size a client blend and storage profile for your SLAs.
References
- Ethereum.org “Run a node” current guidance and client disk sizes; add ~200 GB for CL. (ethereum.org)
- Geth hardware, pruning, DB options, GraphQL, RPC transports, peer limits, txpool metrics. (geth.ethereum.org)
- Dencun/EIP‑4844 activation, blob size/retention/bandwidth. (blog.ethereum.org)
- Nethermind system requirements and performance tuning (pruning cache, RocksDB, MMAP). (docs.nethermind.io)
- Reth system requirements, pruning/indexing effects on RPC. (reth.rs)
- Erigon system requirements and trace namespace usage. (github.com)
- Besu system requirements and snap default. (besu.hyperledger.org)
- Ports and forwarding guidance (EthStaker, Prysm docs). (docs.ethstaker.org)
- MEV‑Boost overview, relay APIs, risks and circuit breaker. (boost.flashbots.net)