By AUJay
Nethermind Hardware Requirements 2026, Aztec Node Requirements, and Base Node Requirements Explained
A clear, decision‑grade guide to sizing, tuning, and operating Nethermind (Ethereum EL), Aztec full/sequencer/prover nodes, and Base (OP Stack) nodes as of January 7, 2026—with concrete hardware SKUs, storage math, and operational gotchas. Expect exact specs, client choices, and the practices teams are standardizing on right now.
TL;DR
For 2026, plan on 16–32 GB RAM, modern multi‑core CPUs, and fast TLC NVMe with 2–6+ TB depending on your role: Nethermind full ≥2 TB, Aztec full ≥1 TB plus solid L1 endpoints, and Base full 2 TB (Reth preferred; archive ~4+ TB). Use local NVMe or io2 Block Express, prune/history settings where applicable, and snapshots to cut sync time from days to hours—or minutes. (docs.nethermind.io)
Why this matters for 2026 decision‑makers
- Nethermind remains a top Ethereum execution client and is also usable for OP Stack chains; correct sizing and pruning choices directly affect stability and cost. (docs.nethermind.io)
- Aztec is rolling out a decentralized, privacy‑preserving L2 with distinct roles (full, sequencer/validator, prover) that have very different hardware profiles. (testnet.aztec.network)
- Base is consolidating around Reth for performance and archive functionality; their docs now publish concrete instance and storage guidance. (docs.base.org)
Part I — Nethermind hardware requirements in 2026
Baseline specs and OS support
- Memory and CPU (Ethereum Mainnet):
- Full node: 16 GB RAM, 4 cores
- Archive node: 128 GB RAM, 8 cores
- Supported OS: modern 64‑bit Linux, Windows, macOS (current LTS releases). (docs.nethermind.io)
What this means in practice:
- Full nodes are comfortable on 8–16 vCPU cloud instances with 16–32 GB RAM, as long as storage is right (see below). (docs.nethermind.io)
Disk and IOPS you actually need
- Full node disk: budget ≥2 TB fast SSD/NVMe. Nethermind’s DB is ~1 TB right after a fresh sync (2024 reference), plus growth and consensus client headroom. Aim for ≥10,000 read/write IOPS; slower disks jeopardize sync and validator rewards. (docs.base.org)
- Archive node disk: ≥14 TB as of mid‑2023 and growing ~60 GB/week; only choose this if you truly need on‑prem historical state. (docs.base.org)
Practical storage picks for 2026:
- Local TLC NVMe with high sustained write is markedly better than network disks for snap sync; avoid QLC drives that throttle to ~0.5 GB/s under sustained writes. Ensure NVMe cooling to prevent thermal throttling during state import. (docs.nethermind.io)
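Before committing to a drive, it is worth benchmarking it against the ≥10,000 IOPS guidance above. A minimal fio sketch (parameters are illustrative, and the test directory must sit on the target volume):

```bash
# Quick 4k random read/write benchmark against the node's data volume
# (illustrative parameters). Look for sustained IOPS comfortably above 10k,
# and watch for throttling over the full runtime, not just the first seconds.
fio --name=node-disk-check --directory=/data \
    --rw=randrw --rwmixread=70 --bs=4k \
    --ioengine=libaio --direct=1 \
    --iodepth=32 --numjobs=4 \
    --size=4G --runtime=120 --time_based --group_reporting
```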
Sync and pruning modes that save money
- Snap sync on strong hardware can finish state in as little as ~25 minutes; it’s intensely I/O‑bound, so SSD choice determines your wall‑clock time. (docs.nethermind.io)
- Ancient barriers: keep recent receipts/bodies and discard very old ones; default barrier targets the ETH deposit contract era (Block 11,052,984), still sufficient for validators scanning deposits. (docs.base.org)
- Rolling pruning (about 1 year of history by default): --History.Pruning=Rolling (default retention ~82,125 epochs, which is also the minimum), or --History.Pruning=UseAncientBarriers for ancient‑barrier mode. (nethermind.io)
Recommended operational flags (examples; adjust to your hardware and workload):
- Increase memory hint on >16 GB hosts, e.g., --Init.MemoryHint 2000000000 (2 GB) or higher to strengthen caches; consider reducing peer count post‑sync to improve block processing time, e.g., --Network.MaxActivePeers 20. (docs.nethermind.io)
- If you need faster block processing and can trade disk, consider --Db.StateDbDisableCompression true (expect 3–5% faster execution with more disk). (docs.nethermind.io)
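Pulling these together, a minimal launch sketch for a mainnet full node (paths are illustrative, snap sync is the default mode, and flag spellings should be checked against your Nethermind version):

```bash
# Illustrative Nethermind mainnet full-node launch. Snap sync is the default,
# so no extra sync flag is needed; the remaining flags mirror the pruning,
# cache, and peer suggestions above. Data dir and JWT paths are placeholders.
# The peer cap is suggested post-sync; you can omit it during initial sync.
nethermind \
  --config mainnet \
  --data-dir /data/nethermind \
  --JsonRpc.JwtSecretFile /secrets/jwt.hex \
  --History.Pruning=Rolling \
  --Init.MemoryHint 2000000000 \
  --Network.MaxActivePeers 20
```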
Example 2026 build sheets
- Cost‑efficient mainnet full + validator:
- CPU: 8 vCPU
- RAM: 16–32 GB
- Disk: 2 TB TLC NVMe (≥10k IOPS)
- Notes: Snap sync, Rolling pruning, consensus client co‑located (give it ~200 GB). (docs.nethermind.io)
- Research/archive:
- CPU: 16 vCPU+
- RAM: 128 GB
- Disk: 16–20 TB TLC NVMe or tiered SSD+HDD with careful tuning; expect steady growth. (docs.base.org)
Part II — Aztec node requirements (full, sequencer/validator, prover)
Aztec is a privacy‑first zkRollup that’s pushing toward a fully decentralized network. Operators can run several roles; each has different hardware and networking needs. (testnet.aztec.network)
Full node (most teams start here)
Minimum hardware (mainnet/testnet are similar today):
- 8 cores / 16 vCPU (CPU from 2015 or newer)
- 16 GB RAM
- 1 TB NVMe SSD
- 25 Mbps network
Run via Docker Compose; keep images up to date and follow the network flag defaults. (docs.aztec.network)
Operational prerequisites that are easy to miss:
- You must have high‑quality Ethereum L1 endpoints (execution and consensus). Running your own L1 node is recommended to avoid throttling and latency; ensure the provider supports Beacon APIs if you use third‑party. (docs.aztec.network)
- Ports:
- P2P: 40400/tcp and 40400/udp (discovery)
- Public Aztec RPC: typically 8080
- Admin API: 8880 (intentionally not exposed to host; use docker exec for local admin calls) (web3creed.gitbook.io)
L1 endpoint examples (when self‑hosting for Aztec):
- Execution RPC on 8545 (e.g., Geth/Nethermind), Beacon API on 3500 (e.g., Prysm/Lighthouse/Teku/Nimbus). (github.com)
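As a sketch of that port surface when running under Docker (the image reference, start command, and data path are placeholders; take the real compose file and the L1 endpoint variables from the versioned Aztec docs):

```bash
# Port-surface sketch for an Aztec full node container. The image reference,
# start command, and data path are placeholders -- take the actual compose file
# (and the env vars that point at your L1 EL/CL endpoints) from the versioned docs.
# Published: 40400/tcp+udp (P2P discovery) and 8080 (public Aztec RPC).
# Deliberately NOT published: 8880 (admin API) -- use `docker exec` for admin calls.
docker run -d --name aztec-full \
  -p 40400:40400/tcp \
  -p 40400:40400/udp \
  -p 8080:8080 \
  -v /data/aztec:/data \
  aztecprotocol/aztec:v2.1.x
```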
Sequencer/validator nodes
- Hardware looks similar to full nodes for testnet and early mainnet conditions: 8–16 cores, 16 GB RAM, fast NVMe.
- You will need BLS keys and validator configuration; recent testnet upgrades added BLS aggregation support and a redesigned slashing system. (aztec.network)
Operational realities from the 2025–2026 testnets:
- Slashing is active and designed not to punish short home‑staker outages; sustained downtime or malicious behavior will be penalized. Keep your machine stable and monitored. (aztec.network)
Prover nodes (data‑center scale)
- Expect data‑center‑class capacity: Aztec has publicly guided that provers require roughly 40 machines at 16 cores and 128 GB RAM each for target workloads; this is why public testnet TPS is deliberately throttled (e.g., ~0.2 TPS) without economic incentives. This is not a home‑lab role. (aztec.network)
Practical Aztec deployment checklist (full/sequencer)
- Disk: start with 1 TB TLC NVMe; watch growth and logs; ensure thermal headroom on NVMe during heavy proving traffic even if you don’t run a prover. (docs.aztec.network)
- L1: Prefer self‑hosted L1 EL+CL; otherwise pick a provider that exposes Beacon endpoints and is not rate‑limited for your expected QPS. (docs.aztec.network)
- Networking: open 40400/tcp+udp and your RPC port; leave Admin (8880) unexposed. (web3creed.gitbook.io)
- Upgrades: use image tags that match current network versions (e.g., v2.1.x); follow the docs’ “Ignition/Testnet” versioned pages. (docs.aztec.network)
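A host-firewall sketch matching that checklist, using ufw as one example (mirror the same rules in your provider's security groups):

```bash
# Host-firewall sketch (ufw) for an Aztec full/sequencer node.
# Provider-level security groups must mirror these rules as well.
sudo ufw allow 40400/tcp        # P2P
sudo ufw allow 40400/udp        # discovery
sudo ufw allow 8080/tcp         # public Aztec RPC (only if you serve external clients)
# Do NOT add a rule for 8880 -- the admin API should stay unreachable from outside;
# use `docker exec` into the container for local admin calls instead.
sudo ufw enable
sudo ufw status verbose
```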
Part III — Base node requirements explained (OP Stack, 2026 edition)
Base nodes have two components: op‑node (consensus/derivation) and an execution client. The Base team is converging on Reth for performance and archive features, with Geth de‑emphasized for archive workloads. Nethermind is also supported. (docs.base.org)
Minimums vs. production‑grade hardware
- Minimums to get started:
- CPU: 8 cores
- RAM: ≥16 GB (32 GB recommended)
- Storage: local NVMe SSD; calculate capacity as (2 × current chain size) + snapshot size + 20% buffer. (docs.base.org)
- Production examples from Base (the RAID0/ext4 layout is sketched after this list):
- Reth archive node: AWS i7i.12xlarge or larger, RAID0 across local NVMe, ext4
- Geth full node: AWS i7i.12xlarge or larger, RAID0 local NVMe, ext4
- If you must use EBS, choose io2 Block Express and ensure buffered reads can keep up during initial sync; local NVMe is still preferred. (docs.base.org)
- Client guidance:
- Reth is the recommended execution client now; Base is migrating and optimizing primarily for Reth. Geth is no longer supported for archive snapshots. (docs.base.org)
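A minimal sketch of the RAID0 + ext4 layout on local NVMe that the production examples above describe (device names, mount point, and config paths are illustrative; RAID0 has no redundancy, so treat the volume as rebuildable from snapshots):

```bash
# Stripe two local NVMe devices into RAID0 and format ext4 (illustrative device
# names). RAID0 trades redundancy for throughput -- plan to rebuild from a
# snapshot if a device fails.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1
sudo mkfs.ext4 -L chaindata /dev/md0
sudo mkdir -p /data
sudo mount /dev/md0 /data
# Persist the array and the mount across reboots (mdadm.conf path varies by distro).
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
echo 'LABEL=chaindata /data ext4 defaults,noatime 0 2' | sudo tee -a /etc/fstab
```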
How much disk do you actually need?
- Reth system requirements (as of 2025‑06‑23):
- Base full node: at least ~2 TB
- Base archive: at least ~4.1 TB
These are live, chain‑specific figures that grow over time; always apply the Base docs’ storage formula to include snapshot decompression headroom (worked through in the sketch after this list). (reth.rs)
- Snapshots to accelerate initial sync:
- Official snapshot endpoints are published (Reth archive mainnet, Geth full, and testnet variants). Using a recent snapshot cuts sync time dramatically—make sure you have space for both the compressed archive and extracted data. (docs.base.org)
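A worked sketch of the storage formula with round placeholder numbers (substitute today's chain and snapshot sizes before ordering hardware):

```bash
# Worked sketch of Base's storage formula:
#   disk >= (2 x current chain size) + snapshot size + 20% buffer
# Inputs are in GB and are placeholders; the real figures change as the chain grows.
base_disk_gb() {
  local chain_gb=$1 snapshot_gb=$2
  local subtotal=$(( 2 * chain_gb + snapshot_gb ))
  echo $(( subtotal + subtotal / 5 ))   # +20% buffer
}
# Example with illustrative inputs: a 1.5 TB chain and a 1 TB compressed snapshot.
base_disk_gb 1500 1000   # prints 4800, i.e. provision roughly 5 TB of NVMe
```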
OP Stack interoperability and client diversity
- OP Stack’s operator docs confirm you can run either op-geth or nethermind as the execution client in a rollup node. Base supports Reth as well via its node repository. (docs.optimism.io)
- Base’s engineering blog explains why Reth reduces outages and improves performance for their throughput profile—hence the push toward Reth for archive. (blog.base.dev)
Example 2026 Base builds
- Full node (Reth/Geth):
- CPU: 8–16 vCPU
- RAM: 32–64 GB
- Disk: 2–4 TB local TLC NVMe (RAID0 if multiple devices), ext4; use snapshots for initial sync; keep an L1 RPC and Beacon endpoint handy and synced. (docs.base.org)
- Archive (Reth):
- CPU: 16–32 vCPU
- RAM: 64 GB+
- Disk: 4–8 TB local TLC NVMe; follow Reth/Base guidance. (reth.rs)
Emerging best practices that cut time and incidents
- Prefer local TLC NVMe over network storage for initial sync and sustained performance; if on AWS and you must use EBS, use io2 Block Express and monitor read latency during catch‑up. (docs.base.org)
- Stripe multiple NVMe devices with RAID0 for higher throughput (Base uses this in production) and format ext4; keep good monitoring on device temps and SMART stats. (docs.base.org)
- Plan storage using hard numbers, not wishful thinking:
- Base: Disk ≥ (2 × current chain size) + snapshot size + 20% buffer. (docs.base.org)
- Nethermind: Full ≥2 TB, archive ≥14 TB and growing; keep ≥10k IOPS. (docs.nethermind.io)
- Aztec: Full node 1 TB baseline, plus room for logs and updates; the heavy lift is your L1 endpoints. (docs.aztec.network)
- Use snapshots for Base and snap‑sync for Nethermind to compress time‑to‑ready from days to hours/minutes; ensure decompression headroom. (docs.base.org)
- Tune conservatively first, then iterate:
- Nethermind: raise --Init.MemoryHint, reduce peers post‑sync, and consider state‑DB no‑compression if you’re CPU‑bound and have disk to spare. (docs.nethermind.io)
- Base: move to Reth for archive; if still on Geth, follow caching recommendations but note it’s deprecated for archive in the docs. (docs.base.org)
- Don’t under‑spec L1 endpoints for Aztec; Beacon API and EL RPC throughput must keep pace or your node will fall behind. (docs.aztec.network)
- Secure your Aztec node surfaces correctly:
- Open 40400/tcp+udp for P2P, expose user RPC as needed, keep admin port unexposed (use docker exec for admin). (web3creed.gitbook.io)
Concrete sizing scenarios (worked examples)
1) Ethereum staking plus dApp analytics (Nethermind full)
- Goal: run Nethermind EL + Lighthouse/Teku CL for staking and serve moderate RPC to internal apps.
- Hardware: 8 vCPU, 32 GB RAM, 2 TB TLC NVMe (≥10k IOPS).
- Config:
- Nethermind with snap sync; --History.Pruning=Rolling; --Init.MemoryHint tuned above 2 GB; cap peers to ~20 after sync.
- Consensus client on same box; reserve ~200–300 GB for Beacon data.
- Why it works: stays within Nethermind’s full‑node envelope while reducing read/write pressure and DB bloat via rolling pruning. (docs.nethermind.io)
2) Aztec full node + sequencer candidate (testnet/mainnet)
- Goal: privacy‑preserving UX for internal apps and join sequencer set.
- Hardware: 16 vCPU, 32 GB RAM, 1–2 TB TLC NVMe.
- Network: open 40400/tcp+udp; 8080 for RPC; keep 8880 internal.
- Dependencies: Run your own Geth/Nethermind + Prysm/Lighthouse and feed EL/CL URLs into Aztec; ensure provider supports Beacon API if you don’t self‑host.
- Why it works: follows Aztec minimums and the networking model; L1 endpoints are the usual bottleneck, not local CPU. (docs.aztec.network)
3) Base full node for production workloads
- Goal: reliable RPC for internal microservices, high read QPS, fast re‑syncs.
- Hardware: 16 vCPU, 64 GB RAM, 4 TB TLC NVMe on RAID0, ext4.
- Software: Reth execution, op‑node; restore from the latest official snapshot; keep an Ethereum L1 RPC and Beacon endpoint synced and highly available.
- Why it works: aligns with Base’s production examples and storage math; Reth is the durable choice for archive and high‑throughput environments. (docs.base.org)
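A restore-from-snapshot sketch for this build, assuming a published snapshot URL and archive layout (both placeholders here; take the real endpoint and checksum from the Base docs):

```bash
# Snapshot restore sketch for a Base node. SNAPSHOT_URL is a placeholder --
# use the official endpoint from the Base docs and its published checksum.
SNAPSHOT_URL="https://example.com/base-mainnet-reth-snapshot.tar.zst"
DATA_DIR=/data/base
sudo mkdir -p "$DATA_DIR"

# Confirm you have room for the compressed archive AND the extracted data.
df -h "$DATA_DIR"

# Download, verify, and extract (archive format may differ; adjust flags accordingly).
curl -fL -o /data/snapshot.tar.zst "$SNAPSHOT_URL"
sha256sum /data/snapshot.tar.zst          # compare against the published checksum
tar --zstd -xf /data/snapshot.tar.zst -C "$DATA_DIR"

# Record source, block height, and checksum in your runbook before starting op-node + Reth.
```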
Cost, risk, and vendor choices
- Cloud vs. bare‑metal: If you’re latency‑sensitive or snapshot‑heavy, local NVMe on bare‑metal often beats networked storage. If you must be in AWS, i7i.12xlarge with local NVMe (RAID0) matches Base’s own production pattern; io2 Block Express is the only EBS tier we recommend for initial syncs. (docs.base.org)
- Client diversity: For OP Stack (incl. Base), keep diversity in mind (Reth + another client) to avoid single‑client failures. For Ethereum mainnet, mixing Nethermind with other ELs improves resilience. (docs.optimism.io)
- Archive vs. on‑demand data: Many teams over‑buy disk for “archive” when an indexer or data provider would be cheaper; a Reth archive on Base is still multi‑terabyte and growing. Validate your retrieval SLAs first. (reth.rs)
Operational playbook you can standardize on
- Monitoring:
- Export client metrics; for Nethermind, use the Grafana/Seq dashboards from the docs; track NVMe temperature/SMART and per‑device latency (a quick check is sketched after this list). (docs.nethermind.io)
- Backup/restore:
- Trust snapshots for fast rebuilds; document exact snapshot source, block height, and checksum in your runbooks. (docs.base.org)
- Change management:
- Pin docker image tags to specific versions (e.g., Aztec Ignition/Testnet versions) and roll forward during maintenance windows; record config diffs. (docs.aztec.network)
- Network hygiene:
- For Aztec, verify 40400/tcp+udp open on both OS firewall and provider console; for Base/OP nodes, ensure L1 RPC/Beacon URLs are reachable and not rate‑limited. (web3creed.gitbook.io)
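The disk-health portion of that monitoring can be as simple as the sketch below (smartctl and nvme-cli shown as examples; device names are illustrative), wired into cron or your metrics agent:

```bash
# Quick NVMe health/thermal check (illustrative device name). Feed the output
# into your alerting: temperature, percentage used, and media errors are the
# usual early warnings before sync performance degrades.
sudo smartctl -a /dev/nvme0n1 | grep -Ei 'temperature|percentage used|media and data'
sudo nvme smart-log /dev/nvme0 | grep -Ei 'temperature|percentage_used|media_errors'
```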
Key takeaways
- Nethermind in 2026: budget 2 TB TLC NVMe and ≥10k IOPS for full nodes; use snap sync and rolling/ancient‑barrier pruning to stay lean and fast. (docs.nethermind.io)
- Aztec: full nodes are lightweight relative to provers; your L1 endpoints determine reliability; secure ports properly and don’t expose the admin API. (docs.aztec.network)
- Base: move to Reth, size full nodes at ~2+ TB and archives ~4+ TB, and prefer local NVMe RAID0; snapshots make rebuilds practical. (reth.rs)
Need a reference architecture or a turnkey build?
7Block Labs designs, provisions, and operates these stacks for startups and enterprises—across bare‑metal and major clouds—with SLO‑driven monitoring, snapshot pipelines, and client‑diverse failover. If you want our ready‑to‑deploy Terraform + Docker bundles (Nethermind+CL, Aztec full/sequencer, Base Reth full/archive) sized to your traffic and compliance needs, reach out.
Like what you're reading? Let's build together.
Get a free 30‑minute consultation with our engineering team.

