
By AUJay

API in Blockchain Architectures: When to Use Gateways vs Direct Node Access

Decision-makers keep asking us the same question: should we call a hosted RPC gateway (Infura, Alchemy, QuickNode, Chainstack, Cloudflare, etc.) or run and expose our own nodes? This guide gives you a concrete, up-to-date decision framework, with cost, latency, reliability, security, and data-consistency trade-offs—plus ready-to-implement patterns you can ship this quarter.

Summary: Use gateways when you need elastic, multi-chain scale, advanced data APIs, and SLAs. Use direct node access when you need low-level control (trace/txpool), strict determinism, or specialized performance and cost envelopes. Hybrid patterns often win in production.


Who this is for

  • Startup CTOs planning mainnet MVPs and growth.
  • Enterprise platform leaders building wallets, exchanges, analytics, or tokenized assets across multiple chains.
  • Product managers deciding where latency, cost, and reliability lines should be drawn.

Gateways vs direct nodes: what each actually gives you

  • Gateways (managed RPC platforms)

    • Pros:
      • Elastic capacity, global routing, caching, retries, and rate-limit smoothing.
      • Extra APIs (NFT, token, account-abstraction, subgraphs, webhooks, gRPC, etc.).
      • SLAs and security attestations (SOC/ISO) for procurement. (quicknode.com)
    • Cons:
      • Method caps and per-method pricing units; certain namespaces or heavy calls limited (e.g., eth_getLogs ranges; some txpool/trace restrictions by provider or plan). (alchemy.com)
      • Opaque infrastructure; edge cases differ by provider/client mix.
  • Direct node access (your cluster or dedicated nodes from a provider)

    • Pros:
      • Full control of client/version/flags; stable behavior for heavy methods (trace/debug/txpool/GraphQL). (geth.ethereum.org)
      • Predictable data semantics; easier to implement strict block-pinning (EIP‑1898) and consistent reorg handling. (eips.ethereum.org)
    • Cons:
      • Operational burden (sync, upgrades, monitoring, storage growth, incident response).
      • High-performance hardware (NVMe, IOPS) and capacity planning for spikes. (docs.nethermind.io)

The 8-signal decision framework

Use gateways when most of these signals are “true”; choose direct node access when most are “false.”

  1. Traffic volatility: unpredictable spikes and region shifts benefit from gateway autoscaling and global routing. Many providers publicly claim 99.99% uptime SLAs and multi-region routing; always verify current SLA terms. (blog.quicknode.com)
  2. Chain diversity: multi-chain builds (EVM + Solana + new L2s) are faster via platforms offering 60–70+ chains and add‑on data services. (alchemy.com)
  3. Method mix: if you rely on eth_getLogs across large ranges, trace/debug, or txpool, direct nodes or dedicated plans are safer; many gateway tiers cap ranges or lock heavy methods. (alchemy.com)
  4. Deterministic reads: if you must reproduce exact historical state around reorgs or forks, pin requests using EIP‑1898 and prefer single-client, single-region direct nodes for critical paths. (eips.ethereum.org)
  5. Write-path needs: mempool strategy (public vs private), MEV protection, and builder routing often push you to specialized endpoints (Flashbots Protect/MEV‑Blocker) with or without a standard RPC gateway. (docs.flashbots.net)
  6. Compliance & procurement: enterprises often require SOC 2/ISO attestations and VIP support; gateways (and some managed dedicated-node vendors) cover this out of the box. (quicknode.com)
  7. Cost profile: bursty reads and complex methods may be cheaper via dedicated nodes (flat) than CU/credit models; conversely, low/medium steady traffic is cheaper via PAYG CUs. (alchemy.com)
  8. Talent & ops: if you don’t have SREs comfortable with client diversity, snapshot/repair, and observability, use gateways or dedicated managed nodes.

Current realities (end of 2025): facts worth knowing

  • Latency and routing claims vary; e.g., QuickNode publishes live benchmarks (QuickLee) with p95s across providers (self-reported), and many providers advertise sub-100 ms global reads depending on region and method. Validate for your regions and methods before committing. (blog.quicknode.com)
  • Alchemy’s 2025 PAYG plan prices by Compute Units, with per-method CU tables (e.g., eth_call: 26 CUs; eth_blockNumber: 10 CUs) and throughput measured in CUPS (compute units per second). This can be cost‑efficient for light methods, more variable for heavy logs/trace. (alchemy.com)
  • Infura’s public status page can show sub-100% rolling uptime on specific transports (e.g., mainnet HTTPS over the last 90 days). Treat “four nines” as an aspirational target; design for failover. (status.infura.io)
  • Ankr’s PAYG advertises per‑request pricing (e.g., $0.00002 per EVM HTTPS request), a useful control point for cost modeling. (ankr.com)
  • Cloudflare Ethereum Gateway caches reads at the edge and forwards writes to its nodes; a practical “surge-protector” in hybrid setups. (developers.cloudflare.com)

Data correctness and method-level nuance (don’t skip this)

  • Pin your reads: use EIP‑1898 “blockHash” in methods like eth_call/getProof to guarantee the state you’re querying—especially under reorgs—regardless of provider routing. (eips.ethereum.org)
  • Respect eth_getLogs limits: providers impose block-range or payload caps (e.g., Alchemy: unlimited ranges on some chains for PAYG/Enterprise with a 150 MB response cap; practical range recommendations vary). Build windowed queries, index by address/topic, and paginate; see the windowed-backfill sketch after this list. (alchemy.com)
  • Heavy namespaces: trace/debug and txpool are frequently paywalled, disabled, or throttled on shared endpoints. If you must rely on them, control the node (dedicated/self-hosted) or procure an explicit plan supporting these namespaces. (besu.hyperledger.org)
  • Standardization: EIP‑1474 and the Execution APIs repo define canonical JSON‑RPC behavior, but client differences remain; test with the actual clients (Geth, Nethermind, Erigon, Besu) you’ll run in prod. (eips.ethereum.org)
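
To make the eth_getLogs guidance concrete, here is a minimal windowed-backfill sketch in Node.js/ethers v6. It assumes an RPC_PRIMARY environment variable and a WINDOW chunk size you would tune to your provider's published limits; the value shown is illustrative, not provider guidance.

import { JsonRpcProvider } from "ethers";

const provider = new JsonRpcProvider(process.env.RPC_PRIMARY);
const WINDOW = 2_000; // illustrative chunk size; tune per chain and provider caps

// Walk [fromBlock, toBlock] in fixed-size windows so no single eth_getLogs
// call exceeds the provider's block-range or payload limits.
export async function backfillLogs({ address, topics, fromBlock, toBlock }) {
  const logs = [];
  for (let start = fromBlock; start <= toBlock; start += WINDOW) {
    const end = Math.min(start + WINDOW - 1, toBlock);
    // ethers v6 getLogs maps directly onto eth_getLogs
    const chunk = await provider.getLogs({ address, topics, fromBlock: start, toBlock: end });
    logs.push(...chunk);
  }
  return logs;
}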

Performance and hardware: what “direct node access” really entails

Typical guidance for 2025 (Ethereum mainnet execution clients; actual footprints change monthly):

  • Erigon 3 (full): ~920 GB; archive: ~1.77 TB; recommend NVMe, 16–64 GB RAM depending on mode. (docs.erigon.tech)
  • Nethermind (full): plan for ~2 TB total if co-locating with a consensus client; archive: ~14 TB (client-dependent). High-IOPS NVMe (>10k IOPS) recommended. (docs.nethermind.io)
  • Geth still supports GraphQL (EIP‑1767) for efficient multi-field queries; enable explicitly. (geth.ethereum.org)

Operational must‑haves:

  • Prometheus/Grafana metrics enabled on clients (both Geth and Nethermind provide first‑class support). (geth.ethereum.org)
  • Snapshot/Checkpoint sync strategy; warm spares in another AZ/region for failover.
  • Log ingestion and alerting around peer counts, head lag, pruning, DB growth, and RPC latency.

Security, compliance, and private connectivity

  • SOC 2/ISO attestations matter for enterprise procurement: e.g., QuickNode states SOC 1/2 Type 2 and ISO/IEC 27001; Blockdaemon advertises SOC 2 Type II and ISO 27001. Ask for current reports via their trust portals. (quicknode.com)
  • Private networking: on AWS, PrivateLink support from some providers (e.g., Chainstack) can reduce latency and eliminate public egress. (docs.chainstack.com)
  • Regional routing and data residency are increasingly common discussions—coordinate with InfoSec early.

MEV-aware write paths

  • For Ethereum user transactions requiring frontrunning/sandwich protection, route writes via private orderflow (Flashbots Protect RPC) while reads go through your usual RPC; a minimal routing sketch follows this list. Flashbots recently announced API deprecations and batching changes—track timelines to avoid breakage. (docs.flashbots.net)
  • Alternative OFAs (e.g., MEV‑Blocker) offer inclusion speed and rebates. Evaluate impact on UX (landing time) vs public mempool exposure. (docs.cow.fi)
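
A minimal read/write split in Node.js/ethers v6, assuming the publicly documented Flashbots Protect RPC endpoint (verify the current URL and any header requirements before shipping); RPC_PRIMARY and PRIVATE_KEY are placeholders:

import { JsonRpcProvider, Wallet } from "ethers";

// Reads stay on the usual gateway; sensitive writes go out via private orderflow.
const readProvider = new JsonRpcProvider(process.env.RPC_PRIMARY);
const protectProvider = new JsonRpcProvider("https://rpc.flashbots.net"); // Flashbots Protect RPC

// Binding the signer to the Protect endpoint means sendTransaction submits
// privately instead of broadcasting to the public mempool.
const signer = new Wallet(process.env.PRIVATE_KEY, protectProvider);

export async function protectedSend(tx) {
  return signer.sendTransaction(tx); // nonce/fee fields are populated against the Protect endpoint
}

export async function readBalance(address) {
  return readProvider.getBalance(address);
}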

Cost modeling: two quick examples

These examples are not endorsements; they illustrate how to estimate spend.

  1. Read-heavy EVM analytics (50M calls/month):
  • Ankr PAYG: 50,000,000 × $0.00002 ≈ $1,000/month (HTTPS) before egress/networking. (ankr.com)
  • Alchemy PAYG: convert to CUs by method mix (e.g., eth_blockNumber ≈ 10 CUs; eth_call ≈ 26 CUs; getLogs varies). With rates starting at $0.45/M CU (dropping to $0.40/M after 300M CUs), your actual bill depends on the CU-weighted blend; use their CU tables to model accurately (see the sketch after this list). (alchemy.com)
  2. Trace/txpool-intensive debugging:
  • Direct nodes (dedicated or self-hosted) often win because shared endpoints gate trace/txpool or price them steeply; a flat monthly dedicated node can be more predictable. Verify per‑method multipliers and SLAs if you must stay on managed gateways. (docs.chainstack.com)
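
A back-of-envelope model (Node.js) using the figures quoted above; the method mix is hypothetical and the eth_getLogs CU value is a placeholder you should replace with the provider's current table:

// Hypothetical monthly method mix for the 50M-call example above.
const monthlyCalls = { eth_call: 30_000_000, eth_blockNumber: 15_000_000, eth_getLogs: 5_000_000 };

const CU_PER_METHOD = {
  eth_call: 26,        // quoted above; verify against the live CU table
  eth_blockNumber: 10, // quoted above
  eth_getLogs: 75,     // placeholder: varies with range and response size
};

const USD_PER_MILLION_CU = 0.45;      // PAYG rate quoted above; drops at volume
const USD_PER_REQUEST_FLAT = 0.00002; // per-request rate quoted above

function cuBill(calls, cuTable, ratePerMillion) {
  const totalCu = Object.entries(calls).reduce((sum, [m, n]) => sum + n * (cuTable[m] ?? 0), 0);
  return (totalCu / 1_000_000) * ratePerMillion;
}

function flatBill(calls, perRequest) {
  return Object.values(calls).reduce((a, b) => a + b, 0) * perRequest;
}

console.log(`CU-weighted estimate: ~$${cuBill(monthlyCalls, CU_PER_METHOD, USD_PER_MILLION_CU).toFixed(0)}/month`);
console.log(`Per-request estimate: ~$${flatBill(monthlyCalls, USD_PER_REQUEST_FLAT).toFixed(0)}/month`);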

Three production-ready architecture patterns

Pattern A — “Gateway-first with deterministic reads”

Best for: wallets, NFT marketplaces, consumer apps at scale.

  • Reads: Primary RPC gateway with retries and block-tag pinning; log ingestion via windowed eth_getLogs and webhooks/subscriptions when available. (alchemy.com)
  • Writes: Standard public mempool; for sensitive flows (DEX swaps), switch to Flashbots Protect RPC. (docs.flashbots.net)
  • Fallback: Second provider endpoint and a lightweight Cloudflare Ethereum Gateway for surge read‑through caching. (developers.cloudflare.com)

Code sketch (Node.js/ethers v6):

import { FallbackProvider, JsonRpcProvider } from "ethers";

const primary = new JsonRpcProvider(process.env.RPC_PRIMARY);
const secondary = new JsonRpcProvider(process.env.RPC_SECONDARY);
// optional surge-protector read path
const cloudflare = new JsonRpcProvider("https://cloudflare-eth.com");

// In ethers v6, quorum is passed via options (the second constructor argument is the network).
export const rpc = new FallbackProvider([primary, secondary, cloudflare], undefined, { quorum: 1 });

// Deterministic read pinned to blockHash (EIP-1898).
// FallbackProvider does not expose raw send(), so pinned calls go through a specific provider.
export async function safeCall(callData, blockHash) {
  return primary.send("eth_call", [callData, { blockHash }]); // reorg-stable
}
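
Usage sketch: when reconciling balances or positions, fetch a finalized block once and reuse its hash for the whole batch (the call object fields are placeholders; the provider and client must support the "finalized" tag):

const block = await rpc.getBlock("finalized"); // stable reference point for the batch
const result = await safeCall(
  { to: "0xYourContract", data: "0xYourCalldata" }, // placeholder eth_call object
  block.hash
);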

Pattern B — “Direct-node core with gateway burst buffer”

Best for: exchanges, risk engines, indexers that need trace/txpool determinism.

  • Operate two execution clients (e.g., Erigon + Nethermind) in separate AZs; expose only to your VPC. Size NVMe per current footprints and growth. (docs.erigon.tech)
  • Enable Prometheus metrics and alert on head lag, RPC latency, and DB size. (geth.ethereum.org)
  • Use a managed gateway strictly for overflow reads and global distribution; route write-paths via Protect RPC when user safety matters. (docs.flashbots.net)

Example Erigon flags (storage-optimized):

erigon \
  --chain mainnet \
  --http --ws --private.api.addr=127.0.0.1:9090 \
  --db.pagesize=16k \
  --prune=hrtc \
  --authrpc.addr=0.0.0.0 --authrpc.jwtsecret=/secrets/jwt.hex

Reference the client’s current hardware guidance when selecting prune mode and disk. (docs.erigon.tech)

Pattern C — “Multi-chain hybrid (EVM + Solana)”

Best for: cross-chain trading, wallets, and real-time analytics.

  • Solana: use gRPC streams (Yellowstone/Geyser) for unmetered, low-latency data; consider dedicated clusters for consistent throughput. (quicknode.com)
  • EVM: gateway for standard reads; dedicated node (or plan) for trace/logs; private networking to reduce latency if on AWS. (docs.chainstack.com)
  • Optional: Separate write-paths with private orderflow where applicable.

Emerging best practices (late 2025)

  • Block-pinning by default: add blockHash (EIP‑1898) to state reads under load or when reconciling balances. This eliminates “moving target” bugs from reorgs. (eips.ethereum.org)
  • Windowed logs + backfills: cap ranges per provider guidance and backlog via jobs; avoid million‑log single calls. (alchemy.com)
  • Mempool strategy as a feature: expose “Protected Send” in your UI; many users value zero failed-tx fees and frontrunning protection. Track provider API updates and required headers. (collective.flashbots.net)
  • Observability SLOs, not just SLAs: for gateways, track your p95/p99 latencies by method and region, and benchmark periodically (vendor claims are marketing until measured); a minimal canary sketch follows this list. (blog.quicknode.com)
  • Private networking: if you’re on AWS, ask for PrivateLink; it simplifies firewalling and reduces tail latency. (docs.chainstack.com)
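
A minimal latency-and-head-lag canary in Node.js/ethers v6, assuming two provider URLs in environment variables; wire the output into whatever metrics pipeline you already run:

import { JsonRpcProvider } from "ethers";

const endpoints = {
  primary: process.env.RPC_PRIMARY,     // assumed env vars
  secondary: process.env.RPC_SECONDARY,
};

// Measure per-provider latency for one method and cross-check chain heads.
export async function canary() {
  const results = {};
  for (const [name, url] of Object.entries(endpoints)) {
    const provider = new JsonRpcProvider(url);
    const started = performance.now();
    const head = await provider.getBlockNumber();
    results[name] = { head, latencyMs: Math.round(performance.now() - started) };
  }
  const heads = Object.values(results).map((r) => r.head);
  return { results, headLagBlocks: Math.max(...heads) - Math.min(...heads) };
}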

Practical details you can use tomorrow

  • Alchemy CU math: use their live table (e.g., eth_call 26 CUs) and rate card ($0.45/M CU down to $0.40/M at scale) to simulate bills in CI for each PR, blocking merges that push projected monthly spend above thresholds. (alchemy.com)
  • eth_getLogs guardrails: codify per‑chain ranges in your data layer (e.g., Ethereum: ≤2k–5k blocks per call; Polygon lower). Use subscriptions/webhooks for “new” and batch backfills offline. (alchemy.com)
  • Cloudflare Gateway as read cache: for hot blocks and common calls, edge caching cuts latency and provider costs; set strict cache keys that include block numbers (see the sketch after this list). (developers.cloudflare.com)
  • txpool and trace access: if you need them, pick dedicated nodes or enterprise plans that explicitly list support (don’t assume). Besu, Geth, Reth expose txpool variants; confirm enablement flags and auth. (besu.hyperledger.org)
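
A sketch of block-scoped cache keys, with an in-memory Map standing in for the edge or Redis layer; because the block number is embedded in the key, a cached response can never be served for a different block:

import { JsonRpcProvider } from "ethers";

const provider = new JsonRpcProvider(process.env.RPC_PRIMARY);
const cache = new Map(); // stand-in for an edge/Redis cache

// Read-through cache for block-pinned eth_call. Entries are immutable once
// the pinned block is final, so they are safe to cache aggressively.
export async function cachedCall(callObj, blockNumber) {
  const key = `eth_call:${blockNumber}:${callObj.to}:${callObj.data}`;
  if (cache.has(key)) return cache.get(key);
  const result = await provider.send("eth_call", [callObj, "0x" + blockNumber.toString(16)]);
  cache.set(key, result);
  return result;
}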

Solana-specific notes (because it’s different)

  • Choose gRPC for real-time streams (accounts, programs). Consider dedicated Yellowstone clusters for consistent performance and “unmetered” ingestion; shared plans can start around $499/mo. (quicknode.com)
  • For write-path latency (trading/bots), specialized endpoints (e.g., “Fastlane” type services) enforce tips/priority fees and publish P99 slot-latency metrics; verify region placement vs your infra. (marketplace.quicknode.com)

When a gateway is the wrong choice

  • You need guaranteed availability of debug/trace/txpool 24/7 (no plan gating) and must tune client flags yourself.
  • You require deterministic replay against a fixed client and dataset (e.g., audits, regulated reporting).
  • Your cost pattern punishes per‑method multipliers (e.g., massive logs/trace) vs a flat dedicated-node bill. (docs.chainstack.com)

When direct nodes are the wrong choice

  • You lack team capacity for upgrades, snapshots, and incident response (e.g., reorg storms, disk failures).
  • You must support 60+ chains fast or pivot networks frequently.
  • You need enterprise compliance and SLAs immediately without an in-house platform. (quicknode.com)

Implementation checklist (copy/paste into your RFC)

  • Decide per-path routing:
    • Reads: Primary gateway; deterministic reads pinned by blockHash for sensitive flows. (eips.ethereum.org)
    • Writes: Default public mempool; “Protected Send” for swaps/mints via Flashbots Protect. (docs.flashbots.net)
  • Provision fallbacks: a second provider, plus Cloudflare Ethereum Gateway for surge reads. (developers.cloudflare.com)
  • Heavy methods: secure a dedicated node (or enterprise plan) for eth_getLogs backfills, trace/debug, and txpool access. (besu.hyperledger.org)
  • Observability: enable Prometheus; track RPC latency, head lag, peer count, DB growth. (geth.ethereum.org)
  • Cost controls: enforce CU/credit budgets in CI (fail builds that exceed monthly threshold). (alchemy.com)
  • Compliance/security: request current SOC/ISO reports; prefer PrivateLink in regulated environments. (quicknode.com)

What “good” looks like in 2025

  • Multi-provider FallbackProvider with block-pinned reads and windowed logs.
  • Private orderflow RPC for sensitive writes; public mempool for routine transactions. (docs.flashbots.net)
  • A small, well‑tuned direct-node cluster for heavy analysis and determinism; gateway for elastic scale.
  • Prometheus/Grafana dashboards tied to SLOs; periodic latency and correctness canaries across providers. (geth.ethereum.org)

How 7Block Labs helps

We design and operate hybrid RPC architectures that blend gateway elasticity with node‑level determinism. Typical engagements include:

  • Method-by-method cost/latency baselining (CUs/credits vs dedicated). (alchemy.com)
  • Direct-node blueprints (Erigon/Nethermind + consensus) with NVMe sizing, pruning, and CI‑based disaster drills. (docs.erigon.tech)
  • MEV-aware write paths and endpoint governance. (docs.flashbots.net)
  • Private networking on AWS (PrivateLink) and enterprise compliance packages. (docs.chainstack.com)

If you want a concrete 2–4 week plan tailored to your method mix, traffic, and security constraints, we’ll map the architecture, cost model, and rollout steps.


By combining deterministic direct-node reads for the few methods that need them with a robust, SLA-backed gateway for everything else, most teams get the best of both worlds: lower tail latency, lower surprise bills, and higher correctness. That’s the pattern we recommend—and implement—at 7Block Labs.

Like what you're reading? Let's build together.

Get a free 30‑minute consultation with our engineering team.
