7Block Labs

By AUJay

Ethereum API and Ethereum.org “Run a Node” Hardware Requirements in 2026

What’s New for Running Ethereum Nodes Post-Dencun and the 2025-2026 Fusaka/BPO Upgrades

So, you’re probably wondering how running Ethereum nodes has shifted after the Dencun upgrade and what’s coming with the 2025-2026 Fusaka/BPO upgrades. Well, this guide breaks down the latest requirements straight from ethereum.org and the client teams, giving you the lowdown on what it means for your hardware, bandwidth, and API setup come 2026.

Hardware Requirements

With the upgrades in play, you might need to rethink your hardware. Here’s what to keep in mind:

  • CPU: A modern multi-core CPU with solid single-thread performance is essential now more than ever.
  • Memory: Aim for at least 16 GB of RAM to keep everything running smoothly; 32 GB gives comfortable headroom.
  • Storage: SSDs are a must--budget at least 2 TB (NVMe preferred) to handle the ever-growing chain data.

Bandwidth Considerations

As the network grows, your bandwidth needs are going to change. Here’s what to consider:

  • Minimum Requirements: Expect to need at least 50 Mbps down / 15 Mbps up for a full node, and more if you validate or build blocks; an unmetered connection is strongly preferred.
  • Data Usage: With the increased transaction volume, plan for higher data usage--up to 5 TB per month isn’t unheard of.

API Architecture

Your API setup might need some tweaking due to structural changes in the network. Keep these points in mind:

  • Rate Limits: Be prepared for stricter rate limits on hosted providers, which could affect how often you can query their endpoints.
  • New Endpoints: Make sure to integrate new API endpoints that come with the Fusaka/BPO upgrades to access the latest features and data.

Conclusion

Making these updates to your hardware, bandwidth, and API architecture will help you stay competitive and ensure your setup is optimized for the future of Ethereum. By following these guidelines based on the latest specs from the community, you’ll be well-equipped to handle the changes ahead.


TL;DR for decision‑makers

  • Ethereum.org still lists a 2 TB SSD, 8-16 GB of RAM, and a 10-25+ Mbit/s connection as the baseline. Looking ahead to 2026, though, things are changing: with higher blob targets after Fusaka/BPO and heavier API workloads, production-grade full nodes should be eyeing 4 TB NVMe, 32+ GB of RAM, and at least 50/15+ Mbit/s--especially if you’re a validator or running your own RPC. Check it out here: (ethereum.org)
  • With the increase in blob throughput due to the PeerDAS and BPO forks, you’ll need to up your game on consensus-layer bandwidth and short-term storage. Plan for an additional 160-224 GiB just for blob retention at those new targets. Here’s the breakdown: (blog.ethereum.org)

1) What’s new in 2026: PeerDAS, Fusaka, and BPO forks

  • The 2025-2026 “Fusaka” upgrade, which combines the Fulu consensus-layer fork with the Osaka execution-layer fork, went live on mainnet on December 3, 2025. It was followed by two Blob Parameter Only (BPO) forks that raised the per-block blob target and maximum:

    • BPO1 (Dec 9, 2025): target/max → 10/15
    • BPO2 (Jan 7, 2026): target/max → 14/21
      These bumps are already set up in client releases, so there's no need for a separate client upgrade for each one, but just make sure you’re using Fusaka-ready versions. You might want to brace yourself for some increased bandwidth and transient storage pressure on your consensus clients. (blog.ethereum.org)
  • PeerDAS (data-availability sampling) means that the usual full beacon nodes will only sample and store parts of the blob data. If you need to retrieve full blobs after the Fusaka upgrade, Lighthouse has introduced some new modes (--supernode/--semi-supernode) or you can opt for very high custody thresholds. Just a heads-up: don’t expect the Beacon APIs to automatically return full blobs anymore. (github.com)

Implication: If your business relies on historical blob reads (like L2 analytics) through Beacon APIs, make sure to set aside extra disk space and bandwidth. You might also want to consider running a “supernode/semi-supernode” mode where it makes sense. Alternatively, think about moving blob archival to dedicated infrastructure. (github.com)


2) Ethereum.org’s 2026 baseline and how to interpret it

Ethereum.org's “Run a node” guide (updated late 2025) gives a solid idea of what you'll need:

  • Minimum Requirements:

    • 2+ core CPU
    • 8 GB RAM
    • 2 TB SSD
    • 10+ Mbit/s
  • Recommended Requirements:

    • 4+ cores
    • 16+ GB RAM
    • 2+ TB fast SSD
    • 25+ Mbit/s

When it comes to the execution client disk sizes (like snap or other pruned options), here's a rough breakdown:

  • Geth: 500 GB+
  • Nethermind: 500 GB+
  • Besu: 800 GB+
  • Archive: roughly 12 TB+ (By the way, Erigon/Reth have some different modes, so check out the client notes below for more details.)

And don’t forget to add about 200 GB for beacon data in most setups. For more info, check out the full details on ethereum.org.

Reality Check for 2026:

  • As we approach 2026, keep in mind that with those increased blob targets, consensus workloads are going to chew up way more bandwidth and short-term disk space than what we saw back in early 2024. Make sure to treat the figures from ethereum.org as the bare minimum; you'll definitely want to plan for some extra headroom. According to EIP-7870, a solid setup for a production "Full Node" should include a 4 TB NVMe drive, 32 GB of RAM, and a connection speed of 50/15 Mbps. (eips.ethereum.org)

3) Precise client‑by‑client storage and RAM today

Execution Clients (Mainnet, Current Documentation Snapshots):

  • Geth

    • If you’re going for a snap-synced full node, expect to need over 650 GB (it grows about 14 GB a week, but pruning will reset it back to around 650 GB).
    • The archive version? You’re looking at over 12 TB (that’s the legacy hash-based one); newer modes can differ based on the features they support.
    • A good plan is to aim for 2 TB to keep things running smoothly without too much hassle. (geth.ethereum.org)
  • Nethermind

    • For the mainnet, they suggest you have at least 16 GB of RAM and 4 cores; if you’re going for the archive version, ramp that up to 128 GB of RAM and 8 cores.
    • Disk space? As of October 2024, a combined execution-layer (EL) plus typical consensus-layer (CL) setup needs around 2 TB, with Nethermind itself taking up about 1 TB. (docs.nethermind.io)
  • Erigon (v3 docs)

    • For minimal use, you’ll need about 350 GB, but if you’re going full throttle, that jumps to around 920 GB, and for the archive version, it’s about 1.77 TB (just for the execution database; depending on the pruning modes you pick).
    • They recommend having between 1 and 4 TB of NVMe storage, along with 16 to 64 GB of RAM. (docs.erigon.tech)
  • Reth (as of 2025‑06‑23)

    • For a full setup, you’ll need around 1.2 TB, while the archive version will take about 2.8 TB. They recommend a stable internet connection of 24+ Mbps and really emphasize using TLC NVMe. (reth.rs)

Consensus Clients (Beacon Data Only, Excluding Blobs)

  • Nimbus Guidance: If you're going for a full beacon node, you’ll want around 200 GB of disk space. But if you're planning to co-host both the execution client (EL) and consensus client (CL), you’re looking at a solid setup with at least a 2 TB SSD and 16 GB of RAM. Check out more details here.
  • Teku Practical Full-Node + Validator Baseline: For Teku, the sweet spot is about 4 cores, 16 GB of RAM, and a 2 TB SSD. You can find more info in the documentation here.

Lightweight Option

The Nimbus Consensus Light Client is super compact, taking up less than 1 MB of disk space. It does come with about a 15-second head lag and has slightly weaker security assumptions. This setup is great for non-validating EL nodes that simply need a Beacon counterpart to get things running. However, it's not recommended for validators. You can check out more details here.


4) How blobs change storage math in 2026 (with real numbers)

Constants from EIP‑4844:

  • Blob size: 4,096 field elements × 32 bytes = 131,072 bytes = 128 KiB.
  • Retention: roughly 4,096 epochs, which is about 131,072 slots or around ~18 days on the beacon node (temporary). (eips.ethereum.org)

Here's the storage breakdown for kept blobs:

  • For the Pre‑Fusaka target of 3:
    3 blobs × 128 KiB per blob × 131,072 slots gives us roughly 48 GiB, which lines up with Teku’s estimate. You can check it out here.
  • For the BPO1 target of 10:
    We’re looking at around ~160 GiB.
  • And for the BPO2 target of 14:
    That brings us to about ~224 GiB.

Practical Planning:

  • Make sure to set aside an extra 160-224 GiB on top of your usual beacon storage budget for 2026. The exact amount depends on when your region reached BPO2 and how much storage overhead your client has (like indexes and metadata). If you're thinking about going with the “supernode”/full-blob availability option, you’ll want to budget even more. Check it out on GitHub.
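The retention math above is easy to reproduce. A minimal sketch, using the EIP‑4844 constants and the pre-Fusaka/BPO1/BPO2 targets quoted in this section:

```shell
#!/bin/sh
# Blob retention storage = target blobs/slot × 128 KiB/blob × 131,072 retained slots.
BLOB_BYTES=131072        # 128 KiB per blob (EIP-4844)
RETAINED_SLOTS=131072    # 4,096 epochs × 32 slots
for target in 3 10 14; do
  bytes=$(( target * BLOB_BYTES * RETAINED_SLOTS ))
  gib=$(( bytes / 1073741824 ))   # 1 GiB = 1024^3 bytes
  echo "target=${target}: ${gib} GiB"
done
# Prints: target=3: 48 GiB, target=10: 160 GiB, target=14: 224 GiB
```

Note these figures exclude client-side indexes and metadata overhead, so round up when provisioning.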

5) Bandwidth you actually need (validators, builders, full nodes)

EIP‑7870 distills field measurements into clear bandwidth targets:

  • Full node: 50 Mbps down / 15 Mbps up.
  • Attester (validator): 50 / 25 Mbps.
  • Local block builder (if you're building your own payloads): 100 / 50 Mbps. (eips.ethereum.org)

Why the step-up in 2026:

  • After the Fusaka BPOs come into play, blob counts go up, which means more gossip traffic on the consensus layer. Client releases like Lighthouse v8 explicitly flag these bandwidth increases as the blob target and max rise. You can check it out here: github.com.

6) The Ethereum API surface in 2026: what to expose and how

Think of it like this: there are four API layers, and each one comes with its own trust and performance profile:

1) Execution JSON-RPC (public-facing via your proxy)

  • You can find the official spec in the Execution APIs repo. It's worth checking out the reference on ethereum.org, which points out key conventions like the “safe” and “finalized” block tags. Make sure to use these tags thoughtfully when you’re dealing with read paths that require some level of probabilistic or final safety. (ethereum.github.io)
  • For Geth transports, you’ve got options like HTTP, WS, and IPC. It’s best to only enable the ones you really need, and remember to put your public HTTP behind a reverse proxy that can do rate limiting and method filtering. (geth.ethereum.org)
  • If you’re using Erigon, its rpcdaemon allows you to set up method allowlisting (for example, --rpc.accessList=rules.json), which is a smart move for securing your public endpoints. (docs.erigon.tech)
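As a sketch, an allowlist file for Erigon’s rpcdaemon looks like the JSON below; the method set is illustrative, so trim it to whatever your tenants actually need and confirm the exact schema against the Erigon docs:

```json
{
  "allow": [
    "eth_chainId",
    "eth_blockNumber",
    "eth_getBlockByNumber",
    "eth_call",
    "eth_getLogs"
  ]
}
```

Any method not listed is rejected, which is exactly the failure mode you want on a public endpoint.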

2) Engine API (CL↔EL only; never public)

  • The default port is 8551, and it’s secured with a JWT secret that you need to share between the EL and CL. For example, in Besu, you’d use: --engine-rpc-port=8551 --engine-host-allowlist=localhost,127.0.0.1 --engine-jwt-secret=jwt.hex. If you're using Geth, the equivalent flags are --authrpc.port 8551 --authrpc.jwtsecret jwt.hex. Just make sure to keep this bound to localhost or a private interface. Check out more about it here.
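Generating the shared secret itself is a one-liner; a minimal sketch (the filename and location are illustrative--point both clients at wherever you store it):

```shell
# Generate a 32-byte (64 hex chars) JWT secret for the Engine API.
# Both the EL and CL must read the exact same file; keep permissions tight.
openssl rand -hex 32 | tr -d '\n' > jwt.hex
chmod 600 jwt.hex
```

Rotate it whenever you upgrade clients, and restart both sides so they pick up the new value.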

3) Beacon API (CL REST; not for the public internet)

  • You can find everything standardized in the Beacon‑APIs repo. Just a heads up--after using PeerDAS, don’t take for granted that blob retrieval endpoints will deliver complete payloads unless you specifically opt into supernode modes (which are client-specific). Keep this secured within your private network. (github.com)

4) Builder/Relay APIs (MEV‑Boost ecosystem)

  • Keep an eye on Flashbots’ Relay/Builder API specs; make sure to validate the relay set and check its health. You’ll want to be ready for more data exchange when blob counts go up. (github.com)

Emerging Utility RPCs to Adopt in Ops Tooling

  • eth_chainId (EIP‑695): the standard way to confirm which chain an endpoint is actually serving--use it in client health checks. Check it out here.
  • eth_config (EIP‑7910): Slated to hit the last call in 2025, this RPC lets you grab the current and upcoming fork parameters, including the blob schedule. It's super handy for pre-fork checks to avoid any pesky misconfigurations. More details can be found here.
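Both calls are plain JSON-RPC POSTs. A minimal sketch, assuming a node listening on the conventional local port (the endpoint URL is illustrative):

```shell
#!/bin/sh
# JSON-RPC request bodies for the two utility calls above.
CHAINID_REQ='{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}'
CONFIG_REQ='{"jsonrpc":"2.0","method":"eth_config","params":[],"id":2}'
# Against a live node you would POST them, e.g.:
#   curl -s -H 'Content-Type: application/json' --data "$CHAINID_REQ" http://127.0.0.1:8545
# eth_chainId returns a hex quantity; mainnet is "0x1", which decodes to:
printf 'chain id: %d\n' 0x1    # prints "chain id: 1"
```

Wiring the eth_config check into CI before each fork window catches blob-schedule misconfigurations while they are still cheap to fix.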

7) Concrete 2026 hardware profiles (pick by role)

Baseline Assumptions

  • We're going with a production Linux setup.
  • Using TLC NVMe drives that have a DRAM cache.
  • We'll have ECC RAM whenever we can.
  • Don't forget about a UPS for power backup, plus we'll keep an eye on power and cooling.
Profiles by role:

  • Full node + light internal RPC (post‑BPO2 goals)

    • CPU: 4-8 cores, with a solid single-thread performance.
    • RAM: 32 GB (ensuring there’s space for client caches and the OS I/O).
    • Storage: 4 TB TLC NVMe (covers EL DB + CL base + 224 GiB blobs + some extra for growth).
    • Network: Aim for 50/15+ Mbps; unmetered is preferred. (eips.ethereum.org)
  • Validator box (1-4 validators) using MEV‑Boost

    • CPU: 8 cores to give you some breathing room on latency.
    • RAM: 32-64 GB, depending on your needs.
    • Storage: 4 TB NVMe; it’s a smart move to go for a separate OS disk for better resilience.
    • Network: Aim for 50/25+ Mbps; if you can swing it, a dual‑WAN setup is awesome. (eips.ethereum.org)
  • Local block builder (like research or private order flow)

    • CPU: 8c/16t upper-mid server tier; high ST/MT PassMark according to EIP-7870.
    • RAM: 64 GB.
    • Storage: 4 TB NVMe (for high IOPS).
    • Network: 100/50+ Mbps; low jitter. (eips.ethereum.org)
  • For read-heavy private RPC setups (like block explorers and indexers):

    • It’s a good idea to put multiple Execution Layers (ELs) behind a proxy. Mixing up your clients can be beneficial too--consider using Erigon for those historical or range queries, while Reth or Geth can handle the head traffic.
    • As for RAM, aim for around 64-128 GB across your entire pool.
    • When it comes to storage, you’ll need about 2-8 TB of NVMe per node. This will depend on your pruning mode and retention needs, so keep that in mind. If you have archive tiers, it’s smart to offload those to specialized nodes. You can check out more details in the documentation here.

Note: If you’re co-hosting EL+CL on one machine, keep in mind the blob retention budget (which is around 160-224 GiB) and make sure to leave about 20-25% of your NVMe free for better performance. Check out the details here.
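That free-space headroom is easy to keep an eye on. A minimal sketch (the mount point and the 80%-used threshold are illustrative--adjust to your data volume and alerting policy):

```shell
#!/bin/sh
# Warn when the data volume exceeds 80% used, i.e. less than ~20% headroom.
MOUNT=/
used_pct=$(df -P "$MOUNT" | awk 'NR==2 { sub(/%/,"",$5); print $5 }')
if [ "$used_pct" -gt 80 ]; then
  echo "WARN: ${MOUNT} is ${used_pct}% used; NVMe write performance degrades without headroom"
else
  echo "OK: ${MOUNT} is ${used_pct}% used"
fi
```

Run it from cron or your monitoring agent so you hear about shrinking headroom before the next blob-target bump does.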


8) Production API patterns we deploy (and why)

  • Separate concerns:

    • Public JSON‑RPC → Nginx/Envoy with:

      • HTTP/2 keepalives, request size limits, and rate limiting per IP/key.
      • Method allowlisting/denylisting (for example, consider blocking debug_* and trace_* if you don’t need them).
      • Sticky routing for subscription websockets, plus rolling restarts. (docs.erigon.tech)
    • Private Engine API/Beacon API → Keep it on localhost or a VLAN; manage your JWT secret with your secret store and rotate it every time clients upgrade. (geth.ethereum.org)
  • It’s best to use finalized/safe tags for financial reporting and any risk-sensitive reads (look for "finalized" and "safe" block params), and make sure to communicate the latency implications to your internal consumers. (ethereum.org)
  • Choose the right EL for the job:

    • Erigon’s rpcdaemon is awesome for handling range/history with those pruning controls;
    • Reth is all about high throughput, especially when paired with TLC NVMe and compact full nodes;
    • Geth has proven itself over time, with robust tooling and snap sync. Mix and match these tools to meet your API SLAs. (docs.erigon.tech)
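The proxy layer described above can be sketched as an Nginx fragment; the zone name, rate, upstream name, and domain are all illustrative, and Envoy equivalents exist:

```nginx
# Per-IP rate limit for the public JSON-RPC endpoint.
limit_req_zone $binary_remote_addr zone=rpc:10m rate=10r/s;

server {
    listen 443 ssl http2;
    server_name rpc.example.com;

    location / {
        limit_req zone=rpc burst=20 nodelay;
        client_max_body_size 1m;      # cap JSON-RPC request size
        proxy_pass http://el_pool;    # upstream pool of EL nodes (defined elsewhere)
    }
}
```

Method allowlisting is better handled at the client (e.g. Erigon’s rpcdaemon) or in a small middleware, since Nginx alone cannot see inside batched JSON-RPC bodies.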

9) Hands‑on: minimal, correct CL↔EL wiring in 2026

  • Geth EL (Engine API on port 8551, JWT stored at /secrets/jwt.hex):

    • Run this command: geth --authrpc.addr localhost --authrpc.port 8551 --authrpc.vhosts localhost --authrpc.jwtsecret /secrets/jwt.hex (geth.ethereum.org)
  • Teku CL (pairs with local EL; REST is off by default for external access):

    • Use the following command: teku --network=mainnet --ee-endpoint=http://localhost:8551 --ee-jwt-secret-file=/secrets/jwt.hex (docs.teku.consensys.net)
  • Besu EL equivalent (for those dual-homing or switching ELs):

    • Just run this: besu --engine-rpc-port=8551 --engine-host-allowlist=localhost,127.0.0.1 --engine-jwt-secret=/secrets/jwt.hex (besu.hyperledger.org)

To speed up your initial sync time, make sure to checkpoint sync your CL. Also, double-check that the finalized head is correct before you kick off your validators. Check out the details here.
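For Teku, checkpoint sync is one extra flag on the command shown above; the URL below is a placeholder for whichever checkpoint provider you trust:

```shell
teku --network=mainnet \
  --ee-endpoint=http://localhost:8551 \
  --ee-jwt-secret-file=/secrets/jwt.hex \
  --checkpoint-sync-url=https://checkpoint-provider.example/
```

After startup, compare the reported finalized checkpoint against a second independent source before enabling validator duties.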


10) Practical examples: sizing + ops playbooks

  • Example A: “Solo + light RPC” (on a budget, on-prem)

    • Hardware: Ryzen/E-core 8c, 32 GB ECC, 4 TB TLC NVMe + 512 GB OS SSD.
    • Clients: Geth + Teku, using MEV-Boost to connect with reputable relays; JSON-RPC is set up behind Caddy with a 10 RPS limit per IP.
    • Headroom: After syncing, you've got about 1.5 TB free; remember to set NVMe SMART alerts at 70% wear and prune quarterly if the Geth DB approaches 1.1-1.2 TB. (geth.ethereum.org)
  • Example B: “API‑first startup” (cloud, multi‑EL)

    • You’ll set up 3 EL nodes (that’s Erigon full, Reth full, and Geth snap) behind Envoy, plus a 1× CL “supernode” if you need to handle blob retrieval.
    • For storage, aim for 2-4 TB TLC NVMe for each EL and at least 1 TB for the beacon plus blob retention target.
    • When it comes to practices, keep in mind method allowlists for each tenant, implement per-method budgets, use WS for newHeads only, and run nightly eth_call replay tests at “finalized” to make sure your indexer is doing its job right. (docs.erigon.tech)
  • Example C: “Enterprise validator fleet”

    • We've got dedicated validator boxes that don’t mess around (no public RPC), with some serious specs: 64 GB RAM, 4 TB NVMe, and dual-WAN 100/50 Mbps for that extra resilience. Plus, we’re rocking remote signer HSMs and adding DVT where it makes sense.
    • Monitoring: keep an eye on gossip peers, check out attestation inclusion distance, and set up alerts if the blob gossip throughput takes a hit post-BPO2. (eips.ethereum.org)

11) Client‑specific footnotes worth knowing in 2026

  • Lighthouse v8 and PeerDAS: After Fulu, just a heads-up--Beacon APIs won’t share blobs by default anymore. If you really need to access full blob data, don’t forget to enable --supernode or --semi-supernode. Just be ready for way more disk space and bandwidth than what you might consider the usual. Check it out on GitHub.
  • Erigon: Make sure to follow the storage guidelines. Try to steer clear of using the remote-DB rpcdaemon, unless you're in a unique situation. For better performance, go with the embedded rpcdaemon on native Linux file systems. More details can be found here.
  • Reth: This one's all about speed! If you're planning to run on TLC NVMe, you’ll want at least 1.2 TB for the full mainnet and make sure you have a steady bandwidth of 24+ Mbps. Dive into the specifics on their site: Reth.
  • Nethermind: If you’re hosting both the EL and CL on a single machine, stick with a fast SSD of 2 TB or more. And if you’re thinking about running in archive mode, be prepared for some serious RAM requirements--like 128 GB. For more info, visit Nethermind’s docs.

12) Checklist: what to buy and how to configure (2026 edition)

Hardware

  • CPU: Aim for a modern 4-8 core processor with solid single-thread performance; if you're validating or building, go for an 8-core with 16 threads.
  • RAM: You'll want a minimum of 32 GB for a production full node, but bump it up to 64 GB if you're validating or dealing with heavy RPC requests.
  • Storage: Get yourself a 4 TB TLC NVMe drive with DRAM; make sure to keep at least 20% of your storage free, and steer clear of QLC.
  • Network: You should have at least 50 Mbps down and 15 Mbps up; if you’re building blocks, shoot for 100 Mbps down and 50 Mbps up. Don’t forget a UPS and a router with dual-WAN failover. (eips.ethereum.org)

Software

  • EL: Make sure to choose at least two different clients if you’re exposing public RPC. Also, don’t forget to pin versions that are compatible with Fusaka.
  • CL: Only enable REST on private networks. Think about using "supernode" mode only if it’s absolutely necessary.
  • Engine API: Always use JWT and bind to localhost or a private IP--never expose this to the public. Check out more details here.

API Hygiene

  • Stick to using "finalized" or "safe" for anything related to accounting and reporting. For debugging, make sure to throttle trace_* and debug_*. And remember, WebSockets should be used just for subscriptions.
  • It's a good idea to enforce method allowlists and set rate limits per tenant. Also, don’t forget to log and sample those long-tail latencies. Check out more on this at (ethereum.org).

Capacity Planning

  • Increase beacon storage by 160-224 GiB for BPO1→BPO2 blob targets; check back quarterly as the parameters change. (blog.ethereum.org)

Governance/Ops

  • Make sure to add eth_config to your pre-fork checks. This will help you verify blob schedules and ensure that everything's set for the fork across all fleets. You can find more details here.

13) FAQ for CTOs and platform leads

  • “Is a 2 TB SSD still enough?”
    If you're just playing around, sure! But for production environments where downtime is a big deal, it’s better to upgrade to a 4 TB NVMe. This gives you extra space for blob retention, pruning cycles, and room to grow through 2027. (eips.ethereum.org)
  • “What’s the real cost of bandwidth for BPO2?”
    We should be ready for a bump in gossip utilization on CL and expect longer peaks during sync. Targeting EIP‑7870’s recommendations of 50/25 Mbps for validators and 100/50 Mbps for local builders seems like a smart move for 2026. (eips.ethereum.org)
  • “So, can we still read blobs using Beacon APIs like we used to?”
    Not automatically with PeerDAS; you need to switch to supernode modes (which are specific to clients). A lot of teams choose to delegate full blob access to specialized nodes or providers. (github.com)

14) Bottom line

  • By 2026, you’ll still be able to run Ethereum nodes on regular servers, but now when we say “regular,” we’re talking about TLC NVMe storage, 32 to 64 GB of RAM, and some serious internet speed if you plan to validate transactions or run your own RPC.
  • One of the biggest changes you’ll see for operators is the increase in blob targets through BPOs. So, get ready for a bandwidth spike and the added need to keep an extra 160 to 224 GiB of short-term blob data on your beacon nodes.
  • Think of API design as if it’s top-notch production software: keep your public and private endpoints separate, ensure your Engine and Beacon APIs are secure, and use finalized or safe reads for anything that’s business-critical.

If you're looking for a customized bill of materials and API topology tailored to your specific needs--whether that's for an internal data lake, compliance archives, or SLA’d RPC--7Block Labs has got you covered. We can whip up a perfectly sized design and migration plan in less than a week, all based on the realities of 2026.


References

  • Check out the Run a node and JSON-RPC documentation on ethereum.org where you’ll find the minimum and recommended specs, plus some details on method conventions and what “safe/finalized” means.
  • If you’re looking for hardware and bandwidth recommendations, take a peek at EIP-7870.
  • Don’t miss the latest on Fusaka/BPO schedules on the EF blog, along with the Lighthouse v8 release notes to see how PeerDAS is affecting the Beacon APIs.
  • For a deep dive into core parameters like blob size and retention, along with Teku’s blob storage estimates, check out EIP-4844.
  • And if you need client documentation for sizing and operations, you’ll find valuable info in the Reth (reth.rs), Erigon, Nethermind, and Geth docs, as well as the Engine API JWT/port configuration for Besu and Geth.

Like what you're reading? Let's build together.

Get a free 30-minute consultation with our engineering team.

7BlockLabs

Full-stack blockchain product studio: DeFi, dApps, audits, integrations.

7Block Labs is a trading name of JAYANTH TECHNOLOGIES LIMITED.

Registered in England and Wales (Company No. 16589283).

Registered Office address: Office 13536, 182-184 High Street North, East Ham, London, E6 2JA.

© 2026 7BlockLabs. All rights reserved.