
By AUJay

Ethereum Full Node Disk Size and Storage Requirements in 2026

Short version: by January 2026, a well-run Ethereum full node needs roughly 0.9-1.3 TB for the execution client, another 80-200 GB for the consensus client, and an extra 100-150 GB for blobs. Buy at least a fast 2 TB NVMe drive now; if you plan to serve heavy RPC, run an archive, or simply want headroom for a few years, go straight to 4 TB. (ethereum.org)


Why this matters to decision‑makers in 2026

Storage is becoming a real constraint for Ethereum nodes. With blob data introduced in Dencun/Deneb (EIP‑4844) and partial history expiry rolling out in 2025 (the first step toward EIP‑4444), the meaning of "full history" is shifting on the execution layer. The right disk size now depends on your client, your pruning configuration, and whether you serve historical queries. Below is a summary of the latest client-specific figures to help you size hardware, whether you deploy on-premise or in the cloud. (prysm.offchainlabs.com)


What changed since last year

  • Blobs (EIP‑4844) introduce a rolling window of roughly 18 days of extra data on consensus clients. That typically adds about 100-150 GB of storage, pruned automatically after the retention period unless you choose to keep it longer. You can check out more details here.
  • Partial History Expiry (PHE) rolled out across execution clients on July 8, 2025. It lets you drop pre‑Merge block bodies and receipts, reclaiming roughly 300-500 GB on existing nodes and making 2 TB NVMe drives comfortably viable again for most full nodes (a quick way to estimate what your own node could reclaim is sketched after this list). For more info, head over to the Ethereum blog.
  • Geth’s path‑based archive mode (v1.16+) has trimmed down the “archive” size to roughly 2 TB for full state history. Just keep in mind, with this setup, the eth_getProof function will only work for the latest ~128 blocks. If you need historical proofs, you’ll want to stick with the legacy hash‑based archive. You can find more details here.
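
If you want to see how much of that pre-Merge history is sitting on your own disk before you prune, a quick disk-usage check is usually enough. The paths below assume Geth's default datadir layout (~/.ethereum, with the freezer under geth/chaindata/ancient); they are assumptions to adjust for your client and setup.

  # Rough, read-only look at where the space is going before pruning.
  du -sh ~/.ethereum/geth/chaindata            # total execution database
  du -sh ~/.ethereum/geth/chaindata/ancient    # freezer holding old bodies/receipts
  df -h ~/.ethereum                            # free space on the volume itself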

Current execution‑layer (EL) disk sizes in 2026

Expect these numbers to keep creeping up through 2026. The figures below are the latest snapshots from each client team's documentation.

  • Geth

    • A freshly synced full node lands around 1.2 TiB; letting pre-Merge history expire brings that down to roughly 830 GiB. Typical growth is about 7-8 GiB per week on recent releases, and offline pruning plus history pruning keep you from being stuck with that data forever. Check it out here.
    • You may still see older guidance describing a snap-synced node at 650+ GB, growing about 14 GB per week and prunable back down to roughly 650 GB. That's a fair rough guide, but it's out of date compared with the newer snapshots above. For more info, look here.
    • A path-based archive needs around 1.9-2.0 TB to keep full state history. Just a heads up: eth_getProof only works for roughly the latest 128 blocks in this mode, so it's a poor fit if you need historical proofs. A hash-based (legacy) archive easily runs 12-20 TB or more. You can find more details here.
  • Nethermind

    • A full node on mainnet typically runs a 0.9-1.1 TB database; with pre-Merge expiry enabled, users report roughly 740 GiB. For good performance, Nethermind recommends a fast SSD or NVMe drive capable of at least 10,000 IOPS; 2 TB is a comfortable pick. (docs.nethermind.io)
    • They also recommend setting up automatic full-pruning triggers. Just make sure to keep at least 250 GB free so the pruning process can wrap up smoothly. (docs.nethermind.io)
  • Erigon (v3)

    • Full: ~920 GB; Minimal: ~350 GB; Archive: ~1.77 TB (Ethereum mainnet). It’s a good idea to have 2-4 TB based on how you plan to use it. Aim for 16-64 GB of RAM, and definitely go for a solid NVMe drive. Check out the details here.
  • Besu

    • Snap (pruned) with Bonsai DB: about 805 GB; unpruned comes in at roughly 1.16 TB; and if you're looking at the archive, that's around 12 TB. (besu.hyperledger.org)
  • Reth

    • Full: about 1.2 TB; Archive: around 2.8 TB. It even comes with a neat db stats CLI that helps you check out table-level sizes. (reth.rs)

Just a heads up: ethereum.org's baseline guidance calls for a 2 TB SSD or larger, plus roughly 200 GB for consensus data. That guidance still holds in 2026, especially once you factor in blob storage (more on that in the next section). (ethereum.org)


Consensus‑layer (CL) disk sizes, plus blobs

  • Beacon databases (no slasher):

    • Recent snapshots put Teku at around 84 GiB, Lighthouse and Prysm at about 130 GiB each, and Nimbus at around 170 GiB. Treat these as point-in-time figures; client teams keep optimizing and pruning over time. You can check out more details on this here.
    • Nimbus suggests that you should plan for around 200 GB for your beacon DB. They also recommend going with a 2 TB SSD as a good baseline if you're running both EL and CL on the same machine. More info can be found here.
  • Blobs (EIP‑4844):

    • By default, blobs are retained for 4,096 epochs, roughly 18 days. Actual usage depends on how much blob data the chain carries, so budget about 100-150 GB (a worst-case calculation is sketched after this list). If you're using Prysm, you can adjust the path and retention with --blob-path and --blob-retention-epochs. Check out more details here.
  • Slasher (optional):

    • This one’s a bit of a resource hog. If you’re going with Prysm’s slasher, you might find it pushing close to 1 TB on the mainnet, so it’s probably not the best option for most home or single-box setups. You can read more about it here.
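
For the blob budget above, the worst case is easy to sanity-check yourself. The sketch below assumes 128 KiB per blob, the default 4,096-epoch retention, and a per-block maximum of 6 blobs under Deneb (9 after the Pectra blob increase); those maxima are assumptions here, and real usage is usually lower.

  # Worst-case blob storage under default retention (illustrative, not a guarantee).
  EPOCHS=4096; SLOTS=32; BLOB_BYTES=$((128 * 1024))
  for MAX_BLOBS in 6 9; do   # 6 = assumed Deneb max per block, 9 = assumed post-Pectra max
    BYTES=$((EPOCHS * SLOTS * MAX_BLOBS * BLOB_BYTES))
    echo "max ${MAX_BLOBS} blobs/block -> $((BYTES / 1024 / 1024 / 1024)) GiB retained"
  done
  echo "retention window ~ $((EPOCHS * SLOTS * 12 / 86400)) days"   # 12 s per slot

That lands at roughly 96-144 GiB and an ~18-day window, which is where the 100-150 GB budget comes from.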

What a “typical” 2026 full node looks like (single box)

  • Execution client (Geth/Nethermind/Erigon/Besu): You’re looking at around 0.8-1.3 TB here, but it can vary a bit based on the client you choose and whether the pre-Merge history is still hanging around.
  • Consensus client: This usually takes up about 80-200 GB.
  • Blobs: Expect around 100-150 GB for blob storage with the default retention.
  • OS, logs, and some extra space for compactions/pruning: You’ll want to keep 200-300 GB free for this stuff.
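
Adding up the top of each range shows why 2 TB works but leaves little slack, and why 4 TB buys real breathing room. A minimal sketch of that arithmetic, using the worst-case figures from the list above:

  # Worst-case single-box total, using the upper end of each range above (GB).
  awk 'BEGIN {
    el = 1300; cl = 200; blobs = 150; overhead = 300
    total = el + cl + blobs + overhead
    printf "worst case: %d GB (~%.0f%% of 2 TB, ~%.0f%% of 4 TB)\n", total, total/2000*100, total/4000*100
  }'

Usable capacity on a nominal 2 TB drive is a bit lower still once formatted, which is exactly why typical (not worst-case) figures plus regular pruning are what make 2 TB work.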

Actionable Sizing

For validators and light RPC setups, a 2 TB TLC NVMe drive with DRAM cache is still a solid choice. However, if you’re dealing with heavy RPC tasks, looking for some extra breathing room for the next few years without needing to prune regularly, or planning to dive into those archive features, then you’ll want to bump it up to a 4 TB drive. Check out more details on this over at ethereum.org.


Enabling and operating history expiry (PHE) today

  • Geth

    • If you're already running a node, stop geth, run geth prune-history --datadir <your-datadir>, and restart it (see the sketch after this list). New nodes can start directly with the appropriate history option per the docs. Expect to reclaim hundreds of GB. (geth.ethereum.org)
  • Nethermind

    • Good news: history expiry is turned on by default for supported networks. If you really need to keep all your history, you can turn it off by setting Sync.AncientBodiesBarrier=0 and Sync.AncientReceiptsBarrier=0. If you’re looking for old data beyond your live DB, check out the Era1 archives. (docs.nethermind.io)
  • Besu

    • Besu offers both online and offline prune options for pre‑Merge data. For the offline method, run besu ... storage prune-pre-merge-blocks, followed by a one-time --history-expiry-prune. (blog.ethereum.org)
  • Ecosystem context

    • Partial history expiry is just the beginning, paving the way for EIP‑4444. There’s a plan for a future “rolling window,” but it’s still in the works. Make sure to budget your storage with pre‑Merge expiry in mind for now, and be ready for that rolling window when it arrives! (eips.ethereum.org)
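
For the Geth path, the whole operation is a stop, prune, restart. Here's a minimal sketch assuming a systemd-managed service named geth.service and a datadir of /var/lib/geth; both names are placeholders to adjust for your environment.

  sudo systemctl stop geth.service
  df -h /var/lib/geth                          # note usage before pruning
  geth prune-history --datadir /var/lib/geth   # drops pre-Merge bodies and receipts
  df -h /var/lib/geth                          # confirm the reclaimed space
  sudo systemctl start geth.service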

Pruning and housekeeping that actually moves the needle

  • Geth State Pruning

    • Run the offline prune periodically using geth snapshot prune-state, ideally before the disk passes 80% full, and keep at least 40-50 GB free while it runs (a guarded version of this is sketched after this list). You can find more details here.
  • Nethermind Full Pruning

    • For Nethermind, set the pruning mode to Hybrid and kick it off based on the free space or the size of the state DB. Aim to keep about 250 GB free so that the full pruning can wrap up without a hitch. If you don’t, you might have to resync, which isn’t ideal. More info is available here.
  • Erigon Modes

    • If you’re just handling Execution Layer tasks and want to save some disk space, consider using the Minimal mode (about 350 GB). For general RPC usage, the Full mode (around 920 GB) is a solid choice. And only go for Archive mode (which is roughly 1.77 TB) if you really need to dig into historical state queries. Check out the specifics here.
  • Reth Visibility

    • To get a clearer picture of what tables are taking up the most space in your database, use the reth db stats command. This will help you confirm how effective your pruning efforts have been. More details are available here.
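
Because an interrupted state prune can leave you resyncing, it's worth wrapping the Geth offline prune in a free-space check. A minimal sketch, reusing the same placeholder datadir and unit name as the history-expiry example above:

  #!/usr/bin/env bash
  set -euo pipefail
  DATADIR=/var/lib/geth                                    # placeholder; set your own
  FREE_GB=$(df --output=avail -BG "$DATADIR" | tail -1 | tr -dc '0-9')
  if [ "$FREE_GB" -lt 50 ]; then
    echo "only ${FREE_GB} GB free; make room before pruning" >&2
    exit 1
  fi
  sudo systemctl stop geth.service
  geth snapshot prune-state --datadir "$DATADIR"           # do not interrupt once started
  sudo systemctl start geth.service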

Disk type: what works vs. what pages you at 3 a.m.

  • Do

    • Go for TLC NVMe SSDs with DRAM cache, keep the drives cool, and aim for a baseline of at least 10k IOPS for the EL (a quick fio check is sketched after this list). This isn't just theory; it's backed by client documentation and field experience, and slow disks genuinely stall syncs and cause pain during reorgs. Check out the details here.
  • Avoid

    • Steer clear of DRAM-less and QLC consumer SSDs, along with SATA SSDs for your production EL after Dencun. Many operators have run into issues with sluggish syncs and premature wear on these types. The community keeps track of which models perform better, so take a look at this list for some guidance.
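
Before trusting a drive with an EL database, it's worth a quick benchmark against the ~10k IOPS bar mentioned above. The sketch below uses fio's standard random read/write workload; the file path is a placeholder, and the test writes an 8 GiB scratch file, so point it at empty space on the target volume.

  fio --name=el-disk-check --filename=/var/lib/ethereum/fio-test \
      --rw=randrw --rwmixread=70 --bs=4k --iodepth=64 --numjobs=4 \
      --size=8G --direct=1 --ioengine=libaio --runtime=120 --time_based \
      --group_reporting
  rm -f /var/lib/ethereum/fio-test   # clean up the scratch file afterwards

If the reported 4k read/write IOPS don't comfortably clear 10k, expect slow syncs on that drive.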

Cloud specifics: provision IOPS and throughput, not just GB

  • As of September 26, 2025, AWS EBS gp3 volumes can be provisioned up to 80k IOPS and 2,000 MiB/s per volume. If you're chasing “why is my node slow?” tickets, under-provisioned IOPS is a common culprit. (aws.amazon.com)
  • On Google Cloud, Persistent Disk SSD can reach up to 100k read IOPS and 50k-100k mixed IOPS per VM, depending on vCPU count, so make sure your machine type can actually drive the disk. (docs.cloud.google.com)
  • Last but not least, Azure’s Premium SSD v2 can deliver up to 80k IOPS and 1,200 MB/s per disk. Plus, you get a baseline of 3,000 IOPS and 125 MB/s for free! Just remember to align your VM limits with what your disk can handle. (azure.microsoft.com)

Practical tip: If you’re running validators and RPC on a single VM, kick things off with around 16k to 32k provisioned IOPS for the EL volume. You can tweak it as needed during the first week of syncing.
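
On AWS, that "provision high, scale back later" approach is a single API call each way. The sketch below uses the standard modify-volume command; the volume ID is a placeholder, and the IOPS/throughput numbers are just the illustrative starting point from above.

  # Bump the EL volume for initial sync (volume ID is a placeholder).
  aws ec2 modify-volume --volume-id vol-0123456789abcdef0 \
      --volume-type gp3 --iops 16000 --throughput 1000
  # Once synced, scale back down to cut cost.
  aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --iops 6000 --throughput 500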


Concrete sizing recipes (January 2026)

  • Validator + light JSON‑RPC on a single machine

    • Spec: 8 cores, 32 GB of RAM, a 2 TB TLC NVMe drive with DRAM, and a wired network connection. Expect roughly 0.8-1.2 TB for the EL, 80-150 GB for the CL, and ~100-150 GB of blobs, plus working headroom. Prune the EL monthly or enable automatic pruning where it's supported. (ethereum.org)
  • Read‑heavy RPC (no tracing)

    • Separate your EL and CL volumes, or even hosts. For the EL, consider Erigon Full (around 920 GB) or Nethermind/Geth with PHE. A 4 TB NVMe drive is recommended here for headroom and to absorb compactions under load. It's also smart to put a proxy in front and cache hot methods. (docs.erigon.tech)
  • Historical/analytics workloads

    • If you don't need historical eth_getProof, the Geth path-archive (about 2 TB) is the way to go. If you do, plan for legacy archives in the 12-20 TB range, or use a managed archive provider. Sharded "time-sliced" archives are also worth a look for efficiency. (geth.world)

Configuration snippets operators actually use

  • Geth: Prune Pre‑Merge History on Your Node

    • First, stop your geth instance, then run: geth prune-history --datadir=/path/to/datadir. You can find more info here.
  • Geth: Offline State Pruning

    • Use the command: geth snapshot prune-state. Just make sure you have at least 40-50 GB of free space and don't interrupt the process. Check out the details here.
  • Nethermind: Enable Hybrid Pruning Based on Free Space

    • To set this up, use: --Pruning.Mode Hybrid --Pruning.FullPruningTrigger VolumeFreeSpace --Pruning.FullPruningThresholdMb 256000. More on this can be found here.
  • Prysm: Direct Blob Storage to a Separate Volume and Adjust Retention

    • You can do this with: --blob-path=/mnt/blobs --blob-retention-epochs=6000 (the default is 4096; higher retention means more disk). A mount-and-flags sketch follows this list. Find the details here.
  • Reth: Audit DB Sizes

    • For a detailed look at your database sizes, run: reth db stats --detailed-sizes. More info can be seen here.
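
To make the Prysm blob flags above concrete, the sketch below formats and mounts a dedicated volume at /mnt/blobs and then passes the flags from the snippet. The device name and mount point are placeholders, and in production you'd normally mount by UUID.

  sudo mkfs.ext4 /dev/nvme1n1                  # placeholder device; double-check before formatting
  sudo mkdir -p /mnt/blobs
  sudo mount /dev/nvme1n1 /mnt/blobs
  echo '/dev/nvme1n1 /mnt/blobs ext4 defaults,noatime 0 2' | sudo tee -a /etc/fstab

  # Beacon node flags (add these to your existing Prysm invocation):
  beacon-chain --blob-path=/mnt/blobs --blob-retention-epochs=6000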

Growth and headroom planning

  • Expect the EL DB to grow at a fairly steady high-single-digit GiB per week on current clients, and leave headroom for compactions, pruning, and variation in blob storage. Keeping a 2 TB volume around 60-70% full leaves room for both pruning and unexpected surges (a quick fill-rate projection is sketched after this list). Check out ethdocker.com for more details.
  • If you're using Nethermind, make sure to set your pruning thresholds so that full pruning kicks in before you drop below about 250 GB free. Otherwise, you might find that a resync is your only way out. For more info, head over to docs.nethermind.io.
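
To turn that "keep it 60-70% full" rule into a number you can alert on, here's a minimal projection assuming roughly 8 GiB of growth per week; the mount point and the growth rate are both assumptions to substitute with your own measurements.

  MOUNT=/var/lib/ethereum          # placeholder mount point for the EL volume
  GROWTH_GIB_PER_WEEK=8            # assumed from the current high-single-digit trend
  df -BG --output=size,used "$MOUNT" | tail -1 | awk -v g="$GROWTH_GIB_PER_WEEK" '{
    size = $1 + 0; used = $2 + 0; limit = size * 0.70
    if (used >= limit) { print "already past 70% -- time to prune"; exit }
    printf "roughly %.0f weeks of headroom before 70%% full\n", (limit - used) / g
  }'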

Common pitfalls we still see in 2026

  • Mixing archive expectations with full-node hardware: If your project needs historical proofs or in-depth tracing, a regular full node just won’t cut it. You should look into path-archive or legacy archives, or consider getting archive access. Check out more info here: (geth.world).
  • Assuming consensus clients are “disk-free”: Be careful with this one! Beacons can take up 80-200 GB, and when you add blob retention, that’s another 100-150 GB. Just cleaning up the Execution Layer (EL) won’t be enough to fix a disk packed with Consensus Layer (CL) data. Get the details here: (ethdocker.com).
  • Choosing SATA SSDs or DRAM-less/QLC NVMe for the Execution Layer: After the Dencun update, the write patterns can really highlight the vulnerabilities of these drives. You might end up facing sync stalls and faster wear and tear. More info can be found here: (gist.github.com).
  • Under-provisioning cloud IOPS: If you’re running on gp3 at baseline levels (that’s 3k IOPS/125 MB/s), you might find your initial sync crawling along or even failing. It’s a good idea to provision higher and then scale back after you’ve synced. Get the scoop here: (aws.amazon.com).

Outlook for 2026-2027

  • Execution-layer history: with the PHE milestone hit in July 2025, client teams are working toward the rolling history windows envisioned by EIP-4444. Expect defaults to converge across clients, so plan to automate pruning and snapshots now. (eips.ethereum.org)
  • Archives: Geth’s path-based archives have made it possible to manage “archive” on a 2 TB setup, but there’s a little catch with eth_getProof. If you need proofs from arbitrary heights, you’ve got two choices: either hold onto a legacy hash-based archive (which can take up 12-20 TB) or share the load across different nodes. (geth.world)

Bottom line: what to buy and deploy in January 2026

  • For most validators and those doing light to moderate RPC work, a single 2 TB TLC NVMe drive with DRAM cache, along with a modern 8-core CPU and 32 GB of RAM, really hits that sweet spot. Just make sure to enable PHE, schedule some EL pruning, and keep at least 20% of your space free. Check out more details on ethereum.org.
  • If you’re looking for heavier RPC demands or need some extra headroom, you might want to upgrade to a 4 TB NVMe setup. Also, consider going with Erigon Full or Geth/Nethermind with some aggressive pruning. If you’re in the cloud, kick things off at around 16k-32k IOPS for the EL volume and then scale down once you’re in sync. For further insights, head over to docs.erigon.tech.
  • Got a need for historical state at scale? Then you’ll want to use the Geth path-archive on around 2 TB if historical proofs aren’t a must. If they are, plan on needing double-digit TB or think about using a managed archive provider. More info can be found at geth.world.

Sources and further reading

  • Check out the official client docs and hardware pages for all the nitty-gritty details on disk sizes, pruning, history expiry, and archive modes: Geth, Nethermind, Erigon, Besu, Reth, and ethereum.org. You can find everything you need right here: (geth.ethereum.org).
  • For insights on consensus clients and blob storage, take a look at the eth‑docker resource usage matrix, the Prysm blobs guide, and Nimbus requirements. It’s all laid out for you here: (ethdocker.com).
  • Want the latest on partial history expiry? Check out the EF blog, EIP‑4444, and their implementation plans to stay in the loop. Here's the link: (blog.ethereum.org).
  • Curious about SSD models and real-world operator experiences? There’s a community-maintained list of “Great and less great SSDs for Ethereum nodes” that’s definitely worth a look. You can find it here: (gist.github.com).
  • Lastly, to help prevent slow syncs, make sure to check the cloud IOPS/throughput baselines for AWS, GCP, and Azure. You can dive into the docs here: (aws.amazon.com).

7Block Labs: Your Go-To for Client Mix and Storage Solutions

At 7Block Labs, we’ve got your back when it comes to finding the perfect mix of clients for your team. We also make sure pruning and snapshots are a breeze, and we’ll help you design storage topologies that won’t leave you tossing and turning at 3 a.m.

Thinking about your 2026 node plan? We’d love to take a look at your configs and whip up a sizing BOM tailored to your workload. Just let us know!
