By AUJay
Ethereum.org Run a Node Hardware Requirements 2026 and Base Node Requirements for New Validators
Summary
This guide covers sizing, purchasing, and running Ethereum nodes and validator infrastructure in 2026, with a focus on the post-Fusaka/PeerDAS landscape and the Blob Parameter Only (BPO) increases. It distills current guidance from ethereum.org, client documentation, and the relevant EIPs into practical specs, build recipes, and operational checklists for teams standing up production-grade nodes.
What changed in late‑2025/early‑2026 (and why your hardware plan must, too)
- Fusaka went live on mainnet on December 3, 2025, bringing PeerDAS, which means nodes no longer have to download every blob. Shortly after, Ethereum rolled out "Blob Parameter Only" (BPO) steps to raise the number of blobs per block: BPO1 raised the target/max to 10/15 on December 9, 2025, and BPO2 raised it to 14/21 on January 7, 2026. This materially changes consensus-layer bandwidth and storage planning, even though blob data itself remains temporary. (blog.ethereum.org)
- EIP‑4844 still governs blob data: each blob is ~128 KiB and must be retained for 4096 epochs (roughly 18 days) before clients can prune it. Operators should provision short-lived storage for blob sidecars on the consensus side. (eips.ethereum.org)
What This Means in Practice
At a 14-blob target, the rough upper bound for the extra transient blob data a consensus client may hold is about 12.3 GiB per day: 128 KiB × 14 blobs × 7200 slots per day. Over the 18-day retention window, that adds up to roughly 220 GiB.
Actual usage varies with blob load and the sampling configuration, but it is worth budgeting this space on top of your existing consensus database. These figures follow directly from the EIP-4844 parameters and the BPO schedule.
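This arithmetic is easy to wrap in a small helper for your own planning (a sketch; the 7200 slots/day comes from 12-second slots, and the 18-day window approximates the EIP‑4844 retention period):

```python
KIB = 1024
GIB = 1024 ** 3
SLOTS_PER_DAY = 7200      # 12-second slots: 86400 / 12
RETENTION_DAYS = 18       # ~4096 epochs under EIP-4844

def blob_storage_gib(target_blobs: int) -> tuple[float, float]:
    """Return (GiB per day, GiB over the retention window) at a given blob target."""
    per_day = target_blobs * 128 * KIB * SLOTS_PER_DAY / GIB
    return per_day, per_day * RETENTION_DAYS

print(blob_storage_gib(14))   # BPO2 target: ~12.3 GiB/day, ~221 GiB retained
```

Swap in the max blob count (21 after BPO2) instead of the target if you want a worst-case budget.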
The authoritative baseline: ethereum.org’s 2026 node specs
According to the ethereum.org "Run a node" page (as of this writing):
- Minimum (just one machine running EL+CL):
- CPU: At least 2 cores
- RAM: 8 GB
- SSD: 2 TB
- Bandwidth: 10+ Mbit/s
- Recommended:
- CPU: A speedy 4+ core processor
- RAM: At least 16 GB
- SSD: A fast 2+ TB drive (NVMe is the way to go)
- Bandwidth: 25+ Mbit/s
- Execution-layer client disk usage (approximate, snap/full vs. archive):
- Besu: ~800 GB snap; ~12 TB archive
- Geth: ~500 GB snap; ~12 TB archive
- Nethermind: ~500 GB snap; ~12 TB archive
- Erigon: no snap sync; fully pruned, roughly 2 TB
- Reth: no snap sync; ~1.2 TB full, ~2.2-2.8 TB archive
- Consensus layer: budget roughly ~200 GB for beacon data (optional slasher history adds more). (ethereum.org)
Tip: These figures are estimates for the entire system. Leave 30-50% headroom for growth, client upgrades, and any additional indexes you run for APIs.
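The headroom rule folds into a small sizing helper (a sketch; the component sizes in the example are illustrative and should be replaced with your client's current figures):

```python
def disk_budget_gb(el_gb: float, cl_gb: float, blob_gb: float,
                   headroom: float = 0.4) -> float:
    """Total SSD budget in GB: sum of component sizes plus fractional headroom."""
    base = el_gb + cl_gb + blob_gb
    return base * (1 + headroom)

# Example: Geth snap (~650 GB) + beacon DB (~200 GB) + blob sidecars (~220 GB)
print(round(disk_budget_gb(650, 200, 220)))   # ~1498 GB: a 2 TB drive fits, barely
```

With 50% headroom the same workload already pushes past 1.6 TB, which is why this guide keeps recommending 2-4 TB drives.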
Execution clients in 2026: precise storage and pruning realities
- Geth
- A snap-synced full node starts above 650 GB and grows by roughly 14 GB per week. Periodic offline pruning brings usage back down, but plan for more than 2 TB so pruning is scheduled maintenance rather than an emergency. (geth.ethereum.org)
- The path-based archive mode in v1.16+ compresses the archive to roughly 1.9-2.0 TB for full history, but it cannot serve historical eth_getProof requests beyond a recent window; the legacy hash-based archive can exceed 12-20 TB. Pick the mode that matches the proofs and queries you need. (geth.ethereum.org)
- Nethermind
- For a mainnet full node, plan on at least 16 GB of RAM and 4 cores; an archive node wants 128 GB of RAM and 8 cores. Use a fast SSD or NVMe of at least 2 TB, and aim for over 10k IOPS for stable sync and RPC. (docs.nethermind.io)
- Reth
- Mainnet needs about 1.2 TB for full storage and around 2.8 TB for archive, plus a stable connection of at least 24 Mbps. The docs strongly recommend high-quality TLC NVMe drives. (reth.rs)
- Erigon
- Typical mainnet storage runs from just under 1 TB for a full node to roughly 1.7-3.5 TB for an archive, depending on version and pruning configuration. Prefer NVMe, and check the docs for the version you run. (erigon.gitbook.io)
Practical takeaway: For a long-lived production Ethereum node, aim for a 2-4 TB TLC NVMe with DRAM cache, and avoid QLC and low-IOPS cloud disks for state-heavy workloads. The community-maintained list "Great and less great SSDs for Ethereum nodes" is a useful sanity check when choosing models. (gist.github.com)
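Geth's growth rate above (roughly 14 GB/week from a ~650 GB snap-sync baseline) translates into a simple "time until pruning" estimate (a sketch; the fill limit and growth rate are assumptions to tune to your own node):

```python
def weeks_until_prune(disk_gb: float, current_gb: float = 650,
                      growth_gb_per_week: float = 14,
                      fill_limit: float = 0.85) -> float:
    """Weeks until usage hits fill_limit of capacity at a steady growth rate."""
    usable = disk_gb * fill_limit
    if current_gb >= usable:
        return 0.0
    return (usable - current_gb) / growth_gb_per_week

print(round(weeks_until_prune(1000)))   # ~14 weeks of runway on a 1 TB disk
print(round(weeks_until_prune(2000)))   # ~75 weeks on a 2 TB disk
```

The difference between those two numbers is the practical argument for a 2 TB drive: monthly pruning versus roughly yearly.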
Consensus clients in 2026: bandwidth, blobs, and DB sizing
- Teku's minimum for a combined full node (EL and CL on one machine): 4 cores at 2.8 GHz, 16 GB of RAM, a 2 TB SSD, broadband of roughly 10 Mbps or faster, and a UPS. (Teku docs)
- Nimbus is efficient but still suggests a 2 TB SSD and 16 GB of RAM when co-hosted with an execution client. (Nimbus docs)
- eth-docker's resource snapshots put typical beacon DB sizes at roughly 80-170 GiB across consensus clients, excluding transient blob sidecars. This is a useful baseline for monitoring and alerting thresholds. (ethdocker.com)
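Those baselines can feed a minimal disk-usage alert (a sketch using only the standard library; the path and threshold are assumptions to adapt to your data directories and alerting stack):

```python
import shutil

def check_disk(path: str, warn_fraction: float = 0.8) -> bool:
    """Print a warning and return True if usage at `path` exceeds warn_fraction."""
    usage = shutil.disk_usage(path)
    fraction = (usage.total - usage.free) / usage.total
    if fraction > warn_fraction:
        print(f"WARNING: {path} is {fraction:.0%} full")
        return True
    return False

# Example: watch the volume holding the beacon database.
check_disk("/")
```

In production you would point this at the EL and CL data volumes separately and wire the result into whatever pages you (cron plus a webhook is plenty).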
Blob‑era planning
- After BPO2 (14/21), budget roughly 220 GiB of extra short-lived CL storage for blob sidecars, which covers the 18-day window at the target rate. Supernode modes that retain more data for network resilience need a larger budget. Verify your client's pruning defaults and alert on blob storage growth. (Based on EIP‑4844 parameters and the Lighthouse supernode docs.) (eips.ethereum.org)
The 2026 “base node” for new validators: what’s the safe floor?
Pure minimums will work, but they leave little room for reorgs, blobs, or growth. The ecosystem is converging on stronger baselines, and a draft EIP (EIP‑7870) turns that folk wisdom into explicit recommendations, complete with PassMark guidance:
- Full node (EL+CL, no validator duties): 4 TB NVMe, 32 GB RAM, 4-core/8-thread CPU (~1000 single-threaded / ~3000 multi-threaded PassMark), and a 50/15 Mbps connection.
- Attester/validator (MEV‑Boost typical): 4 TB NVMe, 64 GB RAM, 8-core/16-thread CPU (~3500 single-threaded / ~25,000 multi-threaded PassMark), and 50/25 Mbps.
- Local block builder (if you build locally instead of using relays): 4 TB NVMe, 64 GB RAM, an 8c/16t CPU, and at least 100/50 Mbps.
Why This Matters Now
Fusaka's PeerDAS and rising blob throughput increase bandwidth sensitivity for validators, especially in proposer/builder roles. If you run MEV-Boost and ever need to fall back to local block building, the higher bandwidth tier can be the difference between timely propagation and missed value. (eips-wg.github.io)
Practical build recipes (Bills of Materials you can actually order)
Note: Exact models change quickly, so focus on component class: TLC NVMe with DRAM cache and high endurance (TBW). When in doubt, cross-check against the "good SSDs" list.
1) Home/SMB Validator + Light RPC
- CPU: 8c/16t desktop-class with strong single-thread performance (PassMark ST around 3500+).
- RAM: 32-64 GB DDR4/DDR5; more is better if the budget allows.
- Storage: 2-4 TB TLC NVMe (with DRAM), plus a secondary 1-2 TB SSD for OS and backups.
- Network: wired Ethernet at 50/25 Mbps or better.
- Power: a UPS rated for at least 15-30 minutes of runtime.
- Rationale: robust enough for EL+CL, validator duties, MEV-Boost, and light RPC without bogging down during blob spikes. When picking the SSD, consult the community drive list and avoid QLC and DRAM-less models. (eips.ethereum.org)
2) Split EL/CL with Remote Signer (Higher Resilience)
- EL box: 8c/16t, 32 GB RAM, 2-4 TB NVMe
- CL box: 4-8c, 16-32 GB RAM, 1 TB NVMe (+ blob overhead)
- Remote signer: Web3Signer or similar, backed by a Postgres slashing database. It is lightweight; Web3Signer typically runs in under 2 GB of heap even under load.
- Why: isolating keys reduces the blast radius if a node is compromised, and it allows clean failover between beacon nodes. (docs.web3signer.consensys.io)
3) Data-Center “Local Builder” Node
- CPU: 8c/16t server-class with a high single-thread score
- RAM: 64-128 GB
- Storage: at least 4 TB TLC or enterprise NVMe (consider RAID1)
- Network: 100/50 Mbps or better on a low-latency uplink
Why: builder workloads benefit from CPU headroom and bandwidth, and EIP-7870 explicitly recommends this higher tier.
SSD Selection Pro Tip
When choosing SSDs, operators consistently find that latency and IOPS (input/output operations per second) matter far more for sync stability than advertised sequential throughput. Community testing shows that certain DRAM/TLC models hold up well under client write patterns; frequently cited picks include the WD Red SN700 and the Seagate FireCuda 530. (gist.github.com)
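As a rough sanity check before committing a drive to production, random 4 KiB reads can be timed from Python (a sketch, not a substitute for fio; the scratch-file path is an assumption, and the OS page cache will inflate the result unless the file is much larger than RAM):

```python
import os
import random
import time

def rough_read_iops(path: str, file_size: int = 256 * 1024 * 1024,
                    block: int = 4096, reads: int = 5000) -> float:
    """Estimate random 4 KiB read IOPS against a scratch file at `path`."""
    if not os.path.exists(path) or os.path.getsize(path) < file_size:
        with open(path, "wb") as f:
            f.write(os.urandom(file_size))   # create the scratch file once
    fd = os.open(path, os.O_RDONLY)
    try:
        offsets = [random.randrange(0, file_size - block) for _ in range(reads)]
        start = time.perf_counter()
        for off in offsets:
            os.pread(fd, block, off)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return reads / elapsed
```

For trustworthy numbers, use fio with direct I/O instead; treat a low figure from this script as a red flag only after ruling out other bottlenecks, since caching can only make it read high, not low.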
Client choices and state management: 2026 nuance that saves outages
- Geth users: schedule offline pruning to keep snap-synced nodes around 650 GB. On a 1 TB disk you will need to prune roughly monthly, or upgrade to 2 TB or more. For full historical queries, consider path-based archive mode (~2 TB), keeping in mind its limits on eth_getProof for older blocks. (geth.ethereum.org)
- Nethermind: target SSDs that deliver over 10k IOPS and at least 2 TB of space when combining the execution layer (EL) and consensus layer (CL) on one machine; archive mode needs serious RAM, around 128 GB. (docs.nethermind.io)
- Reth: a competitive "full" footprint of about 1.2 TB and roughly 2.8 TB for archive, with a strong recommendation for TLC NVMe. Works well in both home setups and data centers. (reth.rs)
- Consensus DBs: track growth and set alert thresholds from eth-docker's snapshots for your client, and revisit blob sidecar retention assumptions after BPO2. (ethdocker.com)
Client Diversity
Pair minority clients wherever practical, and check current distributions before choosing. For details, see clientdiversity.org and the client diversity explainer on ethereum.org.
Networking and ports: get peering right on day one
Forward and enable the P2P ports for both the execution and consensus clients so your node can find peers and contribute to the network:
- Execution: 30303 TCP/UDP for Geth, Besu, and Nethermind; Erigon also uses 30304 in some configurations.
- Consensus: 9000 TCP/UDP for Lighthouse, Teku, Nimbus, and Lodestar; Prysm uses 13000 TCP and 12000 UDP.
Restrict JSON‑RPC, REST, and metrics endpoints to localhost or an SSH tunnel/VPN; never expose them to the public internet. (docs.ethstaker.org)
With containers and automation, verify which ports are bound to the host versus the internal network to avoid accidental exposure.
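A quick way to verify P2P reachability from another machine is a plain TCP connect (a sketch; the hostname is a placeholder, and UDP discovery cannot be checked this way, so also watch your client's peer count):

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in (30303, 9000):   # EL and CL P2P ports
    print(port, "open" if tcp_port_open("node.example.internal", port) else "closed")
```

Run it from outside your LAN to confirm port forwarding actually works; a check from the node's own network can succeed even when the router drops inbound traffic.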
Validator operations: 2026‑grade reliability patterns
- MEV‑Boost itself is lightweight, but give the validator bandwidth headroom: at least around 50/25 Mbps, and more if you may fall back to local building. (docs.flashbots.net)
- Remote signing with slashing protection:
- Web3Signer requires Java 21+ and a Postgres slashing database; for large key counts, scale horizontally behind a load balancer. (docs.web3signer.consensys.io)
- Configure slashing protection carefully. If some keys use Lighthouse's protection and others use Web3Signer's, follow the docs to avoid conflicting double protection. (docs.web3signer.consensys.io)
- Proposer-only beacon nodes: Lighthouse can split the proposer and attester roles, reducing DoS exposure on proposals; worth adopting at scale. (lighthouse-book.sigmaprime.io)
- DVT (Distributed Validator Technology): for clustered validators (e.g., Obol or SSV), provision at least 16-32 GB of RAM and 2-4 TB NVMe per machine, and account for the extra network and IOPS overhead as well as the port matrix. (docs.obol.org)
Uptime Target
Aim for near-continuous connectivity: failures during high-duty periods are costly. A secondary ISP or 5G failover plus a UPS is cheap insurance.
“Base” clarification: L2 Base nodes ≠ Ethereum validators
- Base (Coinbase's OP Stack L2) has no L1-style validators; it runs execution nodes that sync the L2 chain. If you are standing one up for data or indexing, Reth's Base profile calls for roughly 2 TB for a full node or 4.1 TB for an archive node, plus 128 GB or more of RAM. This is a very different profile from Ethereum mainnet. (reth.rs)
To validate Ethereum and earn consensus rewards, follow ethereum.org's solo staking/launchpad route and run EL+CL+VC on Ethereum mainnet. (ethereum.org)
Blob‑era capacity math you can reuse in planning
Use this planner to estimate the consensus-layer transient storage needed for blobs:
- Per block: blobs_per_block × 128 KiB.
- Per day: multiply by 7200 blocks/day.
- Retention window: multiply by 18 days (per EIP‑4844).
Examples:
- BPO1 (10 target): about 8.8 GiB per day; roughly 160 GiB over 18 days.
- BPO2 (14 target): about 12.3 GiB per day; roughly 221 GiB over 18 days.
This is not a permanent footprint, and with PeerDAS you will not always pull all blob data, but operators should still reserve local disk and bandwidth for peak periods. (eips.ethereum.org)
Go‑live checklist for decision‑makers
- Hardware
- TLC NVMe with DRAM, at least 2 TB each for EL and CL; 4 TB if you want fewer upgrade cycles. Target endurance of ≥1,000 TBW for heavy RPC workloads. (gist.github.com)
- Baseline 32 GB RAM; 64 GB if you run validators plus a local builder or multiple clients.
- Prioritize single-thread CPU performance and target the EIP‑7870 PassMark tier for your role. (eips.ethereum.org)
- Network
- Forward EL on 30303 TCP/UDP and CL on 9000 TCP/UDP (13000/12000 for Prysm); keep RPC, REST, and metrics private. (docs.ethstaker.org)
- Validators: at least 50/25 Mbps; local builders: 100/50 Mbps. (eips.ethereum.org)
- Software
- Prefer a minority client combination where possible; check current diversity dashboards before finalizing. (clientdiversity.org)
- On Geth, schedule pruning windows or use path-based archive to keep history without 12-20 TB disks. (geth.ethereum.org)
- Security/Resilience
- Implement a remote signer with a Postgres slashing database; for larger fleets, consider proposer-only beacon nodes and DVT.
- Make sure every box has a UPS; a secondary ISP or LTE/5G failover is a smart optional backup.
A note on growth and timing
- Post-Pectra and post-Fusaka cleanups are still in motion and continue to reshape node footprints; watch the EF blog for "interfork" adjustments. (blog.ethereum.org)
- Blob capacity is now configuration-driven (BPOs). Make sure both your EL and CL run the releases EF lists for the live network to avoid consensus mismatches. (blog.ethereum.org)
Bottom line recommendations for 2026 deployments
- New validators planning for 2-3 years: start with 8 cores/16 threads, 64 GB of RAM, a 4 TB TLC NVMe, and a 50/25 Mbps+ connection; split EL and CL and add a remote signer as you grow. This matches current EIP‑7870 recommendations and blob-era realities. (eips.ethereum.org)
- If you expect heavy RPC usage or occasional local block building, move to 100/50 Mbps and treat SSD latency and IOPS as first-class selection criteria, choosing drives from trusted operator lists. (gist.github.com)
- Reevaluate storage every few months. Geth's path-based archive, Reth's compact archives, and Erigon's pruning all keep requirements in check, but only if you actively choose the right mode and keep up with maintenance. (geth.ethereum.org)
With these specs, current client modes, and blob-era adjustments in hand, your team can make procurement and architecture decisions for 2026 with confidence, and avoid the late-night surprises of a full disk or a sync gone sideways.