7Block Labs
Blockchain Technology

By AUJay

Nethermind Hardware Requirements 2026, Aztec Node Requirements, and Base Node Requirements Explained

Your Go-To Guide for Sizing, Tuning, and Running Nethermind (Ethereum EL), Aztec Nodes, and Base (OP Stack) Nodes as of January 7, 2026

Here’s a straightforward guide for setting up and running Nethermind (Ethereum EL), Aztec full/sequencer/prover nodes, and Base (OP Stack) nodes. We’ll dive into the nuts and bolts of hardware options, storage requirements, and some operational hiccups you might run into. Let's unpack the exact specs, client choices, and the best practices that teams are currently leaning on.

1. Nethermind (Ethereum EL) Nodes

Hardware Recommendations

  • CPU: Aim for at least a 6-core processor. Something like the AMD Ryzen 5 or Intel i5 should do the trick.
  • RAM: You’ll need a minimum of 16 GB. More is better if you're looking to future-proof your setup.
  • Storage: A fast SSD or NVMe is a must. Plan for at least 2 TB for the database and logs. If you’re running multiple nodes, consider scaling up.
  • Network: A stable and fast internet connection is non-negotiable.

Storage Math

  • State Size: A freshly synced Nethermind database sits around 1 TB and grows continuously, so keep an eye on it.
  • Historical Data: Plan for additional storage if you want to keep transaction history.

Operational Gotchas

  • Ensure you’ve got monitoring tools in place. It can be tricky to troubleshoot without them.
  • Regularly update your client to the latest version for performance boosts and security fixes.

2. Aztec Nodes

Node Types

  • Full Node: Good for those wanting to validate their own transactions.
  • Sequencer Node: If you're looking to optimize transaction processing, this is your best bet.
  • Prover Node: Ideal for those handling a lot of zk-SNARK computations.

Hardware Choices

  • CPU: A solid 8-core processor should handle the workload well.
  • RAM: 32 GB is recommended for smooth operation, especially for the Prover Node.
  • Storage: Go for an SSD again, with at least 1 TB. Historical data can pile up quickly!

Network Needs

  • Low latency is key here. Aim for a dedicated connection to minimize downtime.

Operational Tips

  • Consistently check on the state of your node. Automated alerts can save you from unexpected outages.
  • Consider using Docker for easier deployment and management of your nodes.

3. Base (OP Stack) Nodes

Node Specifications

  • CPU: An 8-core processor is the practical minimum; more cores are better for heavy loads.
  • RAM: Stick with 16 GB as a baseline; higher specs will help with scaling.
  • Storage: At least 2 TB on a fast NVMe SSD to accommodate growth.

Important Considerations

  • Network: A reliable connection is crucial. Packet loss can lead to issues in transaction processing.
  • Monitoring: Use tools like Grafana or Prometheus to keep tabs on performance.

Conclusion

Setting up and tuning Nethermind, Aztec, and Base nodes takes some effort, but with this guide in hand, you’ll be on the right track. Just keep an eye on your hardware choices, stay updated with the latest practices, and you’ll find that running these nodes can be a rewarding experience. Happy node running!


TL;DR (1-2 sentence summary)

For 2026, you’ll want to aim for about 16-32 GB of RAM, some slick modern multi-core CPUs, and speedy TLC NVMe storage with anywhere from 2 to over 6 TB, depending on what you’re doing. Here’s a quick rundown:

  • Nethermind full: at least 2 TB
  • Aztec full: at least 1 TB plus some reliable L1 endpoints
  • Base full: 2 TB (if you can, go for Reth; you might need around 4+ TB for archiving)

Make sure to use local NVMe or io2 Block Express. Don’t forget about settings for pruning/history where it makes sense, and snapshots can really help cut down sync time from days to just hours--or even minutes! Check it out in more detail at (docs.nethermind.io).


Why this matters for 2026 decision‑makers

  • Nethermind is still a leading player in the Ethereum execution client scene, and it's also a solid choice for OP Stack chains. Just remember, getting the sizing and pruning right is key to keeping things stable and cost-effective. Check it out here.
  • Aztec is in the process of launching a decentralized, privacy-focused L2 that features unique roles like full, sequencer/validator, and prover. Each role comes with its own hardware requirements, which is pretty interesting! You can find more details here.
  • Base is all about Reth these days, especially when it comes to boosting performance and managing archive functions. Their latest documentation now includes specific guidance on instances and storage, which is super helpful! Check out the details here.

Part I -- Nethermind hardware requirements in 2026

Baseline specs and OS support

  • Memory and CPU (Ethereum Mainnet):
    • Full node: 16 GB RAM, 4 cores
    • Archive node: 128 GB RAM, 8 cores
  • Supported OS: modern 64‑bit Linux, Windows, macOS (current LTS releases). (docs.nethermind.io)

What this means in practice:

  • Full nodes run smoothly on cloud instances with 8-16 vCPUs and 16-32 GB of RAM, as long as the storage setup is appropriate (details below). (docs.nethermind.io)

Disk and IOPS you actually need

  • Full node disk: You’ll want a budget of at least 2 TB for a fast SSD or NVMe. As of 2024, Nethermind’s database sits at about 1 TB right after a fresh sync, so keep in mind it’ll grow and you’ll need room for the consensus client. Aim for at least 10,000 read/write IOPS--using slower disks can really mess with your sync and impact those validator rewards. (docs.nethermind.io)
  • Archive node disk: You’re looking at a minimum of 14 TB as of mid-2023, and it’s piling on about 60 GB each week. Go this route only if you absolutely need on-prem historical state. (docs.nethermind.io)
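That growth rate makes capacity planning a quick calculation. A sketch using the figures above (the 14 TB current size and 60 GB/week rate come from the docs cited; the 20 TB provisioned volume is a hypothetical example):

```shell
# Estimate how many weeks until an archive node outgrows its disk.
current_gb=14000         # current archive size (per the docs above)
growth_gb_per_week=60    # weekly growth (per the docs above)
provisioned_gb=20000     # hypothetical: a 20 TB volume

weeks_left=$(( (provisioned_gb - current_gb) / growth_gb_per_week ))
echo "Runway: ${weeks_left} weeks"   # → Runway: 100 weeks
```

At roughly two years of runway per 6 TB of headroom, re-checking this number quarterly is cheap insurance against an emergency migration.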

Practical Storage Picks for 2026:

  • When it comes to local storage, go for TLC NVMe drives. They totally outshine network disks for snapshot syncing. Just steer clear of QLC drives--they tend to throttle down to around ~0.5 GB/s during sustained writes, which is a bummer. Also, don't forget to keep that NVMe cool to avoid any thermal throttling while you're importing states. You can find more info here.

Sync and pruning modes that save money

  • With some solid hardware, snap sync can wrap things up in about 25 minutes! Just a heads up, it's super I/O-bound, meaning your choice of SSD will really affect how long it takes overall. Check out more details here.
  • When it comes to ancient barriers, it's best to hang onto recent receipts and bodies while letting go of the really old ones. The default barrier focuses on the ETH deposit contract era (Block 11,052,984), so it's still good for validators digging through deposits. Want to learn more? Click here.
  • Rolling pruning (which keeps about a year of history by default) can be set up like this:

    • Use --History.Pruning=Rolling to stick with the default retention of around 82,125 epochs (that’s the minimum).
    • Or go for --History.Pruning=UseAncientBarriers if you want to switch to ancient-barrier mode. For more info, take a look here.
  • If you've got servers with more than 16 GB of RAM, try bumping up your memory hint, for example: --Init.MemoryHint 2000000000 (that's 2 GB) or even higher to give those caches a boost. Also, think about scaling back your peer count after syncing to speed up block processing time, like this: --Network.MaxActivePeers 20. You can find more details here.
  • Looking for quicker block processing and don’t mind sacrificing some disk space? Go ahead and use --Db.StateDbDisableCompression true--you can expect about 3-5% boost in execution speed, but it’ll take up more disk. Check out the full scoop here.
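Put together, the tuning flags above might look like this in a single launch command. This is a sketch with illustrative values; verify the exact flag spellings against the current Nethermind docs before relying on them:

```shell
# Example Nethermind launch for a pruned mainnet full node (illustrative values).
#   Rolling pruning keeps ~1 year of history (the default retention).
#   A ~2 GB memory hint boosts caches on hosts with >16 GB RAM.
#   MaxActivePeers 20 trims gossip load after sync for faster block processing.
#   Disabling state-DB compression trades disk for ~3-5% execution speed.
nethermind \
  --config mainnet \
  --History.Pruning=Rolling \
  --Init.MemoryHint 2000000000 \
  --Network.MaxActivePeers 20 \
  --Db.StateDbDisableCompression true
```

Start conservative (defaults plus rolling pruning), then layer on the memory hint and peer cap once you have baseline metrics to compare against.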

Example 2026 build sheets

  • Affordable mainnet full + validator setup:

    • CPU: 8 vCPUs
    • RAM: Between 16 and 32 GB
    • Disk: 2 TB TLC NVMe (at least 10k IOPS)
    • Additional info: Make sure to enable snap sync, rolling pruning, and have the consensus client co-located (allocate around 200 GB for this). (docs.nethermind.io)
  • Research/archive:

    • CPU: At least 16 vCPUs
    • RAM: 128 GB
    • Disk: Aim for 16-20 TB using TLC NVMe or a mix of SSD and HDD, but make sure to fine-tune it for optimal performance; anticipate consistent growth. (docs.nethermind.io)

Part II -- Aztec node requirements (full, sequencer/validator, prover)

Aztec is a privacy-first zkRollup aiming for a fully decentralized network. Operators can take on various roles, each with its own hardware and networking requirements. Check it out at (testnet.aztec.network).

Full node (most teams start here)

Minimum Hardware Requirements

Whether you're setting up for mainnet or testnet, the hardware specs are pretty much the same right now:

  • 8 cores / 16 vCPUs (CPUs from 2015 or newer)
  • 16 GB RAM
  • 1 TB NVMe SSD
  • 25 Mbps network speed

Make sure to run your setup via Docker Compose, and don't forget to keep your images updated and stick to the default network flags. For more details, check out the official docs.
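A minimal Docker Compose sketch for a full node might look like the following. Treat the image tag, start command, and environment variable names as assumptions to verify against the current Aztec docs; the L1 endpoint URLs are placeholders for your own services:

```shell
# Write a minimal docker-compose.yml for an Aztec full node (sketch only;
# image name, command, and env var names are assumptions -- check the docs).
cat > docker-compose.yml <<'EOF'
services:
  aztec-node:
    image: aztecprotocol/aztec:latest   # pin to the current network version in practice
    command: start --node --archiver
    environment:
      ETHEREUM_HOSTS: "http://your-l1-el:8545"           # L1 execution RPC (placeholder)
      L1_CONSENSUS_HOST_URLS: "http://your-l1-cl:3500"   # L1 Beacon API (placeholder)
    ports:
      - "40400:40400/tcp"   # P2P
      - "40400:40400/udp"   # discovery
      - "8080:8080"         # public Aztec RPC
    volumes:
      - ./aztec-data:/data
EOF
```

Note that the admin API (8880) is deliberately absent from the port mappings; it stays inside the container, per the guidance below.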

Operational Prerequisites You Might Overlook

Here are a couple of must-haves that can slip under the radar:

  • First up, you really need reliable Ethereum L1 endpoints--both for execution and consensus. It's a good idea to run your own L1 node to steer clear of any throttling or latency issues. If you’re leaning towards a third-party provider, just make sure they support Beacon APIs. You can find more details in the Aztec documentation.
  • Ports to Keep in Mind:

    • P2P: 40400/tcp and 40400/udp (for discovery)
    • Public Aztec RPC: usually 8080
    • Admin API: 8880 (just a heads up, this one’s not exposed to the host; you'll want to use docker exec for any local admin calls) Check this out for more info: Web3 Creed Gitbook.
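On a Linux host running ufw, that port posture can be expressed as follows (a sketch; adapt to whatever firewall and provider console you actually use):

```shell
# Open Aztec P2P and public RPC; leave the admin API (8880) unexposed.
sudo ufw allow 40400/tcp    # P2P
sudo ufw allow 40400/udp    # peer discovery
sudo ufw allow 8080/tcp     # public Aztec RPC
# No rule for 8880: the admin API stays internal; use `docker exec` for admin calls.
```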

L1 Endpoint Examples (For Self-Hosting with Aztec)

  • Execution RPC on 8545 (like Geth or Nethermind)
  • Beacon API on 3500 (using Prysm, Lighthouse, Teku, or Nimbus)

Check out more details here.

Sequencer/validator nodes

  • The hardware setup is pretty much the same as what you'd use for full nodes during the testnet and early mainnet phases: think 8 to 16 cores, at least 16 GB of RAM, and some speedy NVMe storage.
  • Don't forget, you'll need BLS keys and the validator configuration too. Recent upgrades to the testnet rolled out BLS aggregation support along with a revamped slashing system. Check it out here: (aztec.network)

Operational realities from the 2025-2026 testnets:

  • Slashing is in play, but it’s set up so that it won’t hit you for brief home-staker hiccups. However, if there are long stretches of downtime or any bad behavior, you’ll face penalties. So, make sure your machine stays stable and under watch! (aztec.network)

Prover nodes (data‑center scale)

  • Get ready for some serious data-center-level capacity: Aztec has made it clear that to handle target workloads, provers need about 40 machines, each packing 16 cores and 128 GB of RAM. That's why they're intentionally keeping the public testnet's TPS low (around ~0.2 TPS) without any economic incentives to push it higher. Just a heads up, this isn’t something you’d set up in a home lab. (aztec.network)

Practical Aztec deployment checklist (full/sequencer)

  • Disk: Start off with a 1 TB TLC NVMe. Keep an eye on growth and logs, and make sure there’s enough thermal headroom for the NVMe, especially during those busy proving traffic times--even if you’re not actually running a prover. Check out the details here.
  • L1: It’s best to go with a self-hosted L1 EL+CL setup; if that's not your thing, choose a provider that offers Beacon endpoints and doesn’t throttle your requests per second for what you need. More info can be found here.
  • Networking: Make sure to open up 40400/tcp+udp and your RPC port. Just a heads-up, keep the Admin port (8880) closed off from exposure. For additional guidance, visit this page.
  • Upgrades: Stick to image tags that align with the current network versions (like v2.1.x); it’s a good idea to follow the versioned pages on the docs for “Ignition/Testnet.” You can dive into that here.

Part III -- Base node requirements explained (OP Stack, 2026 edition)

Base nodes consist of two main parts: the op-node (which handles consensus and derivation) and an execution client. The Base team is moving towards Reth because of its strong performance and archive capabilities, while Geth is de-emphasized for archive workloads. Nethermind is still in the mix too. You can find more details here.

Minimums vs. production‑grade hardware

  • Minimum specs to kick things off:

    • CPU: 8 cores
    • RAM: at least 16 GB (32 GB is better)
    • Storage: local NVMe SSD; figure out the capacity using this formula: (2 × current chain size) + snapshot size + 20% buffer. (docs.base.org)
  • Here are some production examples from Base:

    • Reth archive node: Go with an AWS i7i.12xlarge or bigger, and set up RAID0 across local NVMe using ext4.
    • Geth full node: Same recommendation here--AWS i7i.12xlarge or larger, RAID0 on local NVMe, ext4.
    • If you really need to use EBS, opt for io2 Block Express. Just make sure your buffered reads can keep up during the initial sync; local NVMe is still the way to go. (docs.base.org)
  • Client guidance:

    • Reth is now the go-to execution client! Base is making the switch and fine-tuning everything to work mainly with Reth. Just a heads up, Geth is not supported for archive snapshots anymore. Check out the details here: (docs.base.org)
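The storage formula above is easy to script. Here's a sketch with hypothetical figures (a 2 TB chain and a 1.5 TB snapshot; substitute the real current numbers, and note we apply the 20% buffer to the whole sum, which is one reasonable reading of the formula):

```shell
# Base storage sizing: (2 x current chain size) + snapshot size + 20% buffer.
chain_gb=2000      # hypothetical current chain size
snapshot_gb=1500   # hypothetical compressed snapshot size
required_gb=$(( (2 * chain_gb + snapshot_gb) * 120 / 100 ))
echo "Provision at least ${required_gb} GB"   # → Provision at least 6600 GB
```

Re-run the calculation whenever you refresh snapshots; the chain-size term dominates and grows every week.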

How much disk do you actually need?

  • Reth system requirements (as of June 23, 2025):

    • Base full node: at least ~2 TB
    • Base archive: at least ~4.1 TB. Keep in mind that these are real-time, chain-specific numbers that will keep increasing. Always use the Base docs' storage formula to make sure you factor in some room for snapshot decompression. Check it out here: (reth.rs)
  • Snapshots to speed up your initial sync:

    • We’ve got some official snapshot endpoints out there (Reth archive mainnet, Geth full, and testnet versions). If you grab a recent snapshot, you can seriously cut down on sync time--just remember to have enough space for both the compressed archive and the extracted files. Check it out here: (docs.base.org)
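A snapshot restore typically looks like the sketch below. The URL is a placeholder (pull the real endpoint from the Base docs), and the free-space check matters: you need room for the compressed archive and the extracted data at the same time:

```shell
# Restore a node data dir from a snapshot (sketch; URL is a placeholder).
SNAPSHOT_URL="https://example.com/base-mainnet-snapshot.tar.zst"   # placeholder
DATA_DIR=/data/base

df -h /data                          # eyeball free space: archive + extracted copy
curl -L "$SNAPSHOT_URL" -o /data/snapshot.tar.zst
zstd -d --stdout /data/snapshot.tar.zst | tar -x -C "$DATA_DIR"
rm /data/snapshot.tar.zst            # reclaim the decompression headroom
```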

OP Stack interoperability and client diversity

  • According to OP Stack’s operator documentation, you can choose to run either op-geth or nethermind as the execution client in your rollup node. Plus, Base is also all in on Reth through its node repository. Check it out here: (docs.optimism.io).
  • Over on Base’s engineering blog, they dive into how Reth helps cut down on outages and boosts performance for their throughput needs. It’s clear why they’re making a push for Reth in their archive efforts. Take a look: (blog.base.dev).

Example 2026 Base builds

  • Full node (Reth/Geth):

    • CPU: 8-16 vCPU
    • RAM: 32-64 GB
    • Disk: 2-4 TB local TLC NVMe (go for RAID0 if you're using multiple devices), formatted as ext4. Make sure to use snapshots for your initial sync, and don't forget to have an L1 RPC and Beacon endpoint ready and synced up. Check out the details in the docs.base.org!
  • Archive (Reth):

    • CPU: 16-32 vCPU
    • RAM: 64 GB+
    • Disk: 4-8 TB local TLC NVMe; check out Reth/Base guidance. (reth.rs)

Emerging best practices that cut time and incidents

  1. If you're kicking off your sync, go for local TLC NVMe instead of network storage--trust me, it’ll give you better performance in the long run. If you're on AWS and have to use EBS, definitely opt for io2 Block Express and keep an eye on your read latency while you're catching up. (docs.base.org)
  2. Want higher throughput? Consider striping multiple NVMe devices with RAID0--that’s what Base uses in production--and format them to ext4. Just make sure you’re monitoring device temperatures and keeping tabs on those SMART stats. (docs.base.org)
  3. When you’re planning storage, stick to hard numbers--not just wishful thinking:

    • For Base, you’ll need a disk size of at least (2 × current chain size) + snapshot size + a 20% buffer. (docs.base.org)
    • For Nethermind, aim for a full size of at least 2 TB and an archive of 14 TB and growing; don’t forget to keep it at 10k IOPS. (docs.nethermind.io)
    • Aztec suggests a baseline of 1 TB for a full node, plus extra space for logs and updates; just remember that your L1 endpoints need to do the heavy lifting. (docs.aztec.network)
  4. Snapshots are your friend for Base, and snap-sync for Nethermind will cut your time-to-ready from days down to hours or even minutes. Just make sure you’ve got enough decompression headroom. (docs.base.org)
  5. Start with conservative tuning, then iterate:

    • For Nethermind, consider raising --Init.MemoryHint, reducing peers after sync, and if you're CPU-bound, think about state-DB no-compression if you have extra disk space. (docs.nethermind.io)
    • If you're on Base, switch to Reth for archives; if you’re still using Geth, follow their caching recommendations--just a heads-up, it’s marked as deprecated for archives in the docs. (docs.base.org)
  6. Make sure you're not skimping on the L1 endpoints for Aztec. The Beacon API and EL RPC throughput need to keep up, or your node might lag behind. (docs.aztec.network)
  7. Secure your Aztec node surfaces properly:

    • Open up port 40400/tcp+udp for P2P, expose user RPC as needed, but keep that admin port hidden away (you can use docker exec for admin tasks). (web3creed.gitbook.io)
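The RAID0 striping practice in point 2 above can be sketched with mdadm. Device names here are examples only (confirm yours with `lsblk` first), and remember RAID0 has no redundancy, so treat the volume as disposable and rebuild from snapshots:

```shell
# Stripe two local NVMe devices into RAID0 and format ext4 (sketch).
# Device names are examples -- confirm with `lsblk` before running.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 \
  /dev/nvme1n1 /dev/nvme2n1
sudo mkfs.ext4 -F /dev/md0
sudo mkdir -p /data && sudo mount /dev/md0 /data
```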

Concrete sizing scenarios (worked examples)

1) Ethereum staking plus dApp analytics (Nethermind full)

  • Goal: We’re looking to run Nethermind EL alongside Lighthouse or Teku CL for staking, while also providing moderate RPC to our internal applications.
  • Hardware: We’re all set with an 8 vCPU setup, 32 GB of RAM, and a solid 2 TB TLC NVMe drive (making sure we hit that ≥10k IOPS mark).
  • Config:
    • We’ll be running Nethermind with snap sync enabled; don't forget to set --History.Pruning=Rolling. Plus, let’s tune --Init.MemoryHint to just over 2 GB, and cap our peers at around 20 after sync.
    • The consensus client will be on the same machine, so we should reserve about 200-300 GB for the Beacon data.
  • Why it works: This setup keeps us within the full-node limits for Nethermind, all while cutting down on read/write pressure and preventing database bloat thanks to rolling pruning. Check out more details in the Nethermind docs.

2) Aztec full node + sequencer candidate (testnet/mainnet)

  • Goal: We’re aiming to deliver a privacy-first user experience for our internal apps and to join the sequencer set.
  • Hardware: You’ll need 16 vCPUs, 32 GB of RAM, and either 1 or 2 TB TLC NVMe storage.
  • Network: Make sure port 40400/tcp+udp is open; use 8080 for RPC, and let's keep 8880 as internal.
  • Dependencies: You’ll need to run your own Geth/Nethermind along with Prysm or Lighthouse, and feed the EL/CL URLs into Aztec. If you’re not self-hosting, double-check that your provider supports the Beacon API.
  • Why it works: This setup meets Aztec's minimum requirements and aligns with the networking model. Typically, the bottleneck is at the L1 endpoints, not your local CPU. (docs.aztec.network)

3) Base full node for production workloads

  • Goal: We want a solid RPC setup for our internal microservices that can handle a lot of read requests per second and allows for quick re-synchronizations.
  • Hardware: We're working with 16 vCPUs, 64 GB of RAM, and a speedy 4 TB TLC NVMe set up on RAID0 with ext4.
  • Software: We’re using Reth execution on our op-node; restoring from the newest official snapshot. Plus, we’ll keep both an Ethereum L1 RPC and a Beacon endpoint in sync and always available.
  • Why it works: This setup lines up perfectly with what we’ve seen in Base’s production examples and their storage calculations. Reth is definitely the go-to for durable archiving and high-throughput situations. (docs.base.org)

Cost, risk, and vendor choices

  • Cloud vs. bare‑metal: If you’re latency‑sensitive or snapshot‑heavy, local NVMe on bare‑metal often beats networked storage. If you must be in AWS, i7i.12xlarge with local NVMe (RAID0) matches Base’s own production pattern; io2 Block Express is the only EBS tier we recommend for initial syncs. (docs.base.org)
  • Client diversity: For OP Stack (incl. Base), keep diversity in mind (Reth + another client) to avoid single‑client failures. For Ethereum mainnet, mixing Nethermind with other ELs improves resilience. (docs.optimism.io)
  • Archive vs. on‑demand data: Many teams over‑buy disk for “archive” when an indexer or data provider would be cheaper; a Reth archive on Base is still multi‑terabyte and growing. Validate your retrieval SLAs first. (reth.rs)

Operational playbook you can standardize on

  • Monitoring:

    • Keep an eye on client metrics; if you’re using Nethermind, definitely check out the Grafana/Seq dashboards in the docs. Also, make sure to track NVMe temperature/SMART and latency for each device. (docs.nethermind.io)
  • Backup/restore:

    • Rely on snapshots for quick rebuilds; be sure to note the exact snapshot source, block height, and checksum in your runbooks. This will save you a ton of hassle down the line. (docs.base.org)
  • Change management:

    • Pin those docker image tags to specific versions (think Aztec Ignition/Testnet versions), and make sure to roll forward during your maintenance windows. And don't forget to keep a record of any config differences. (docs.aztec.network)
  • Network hygiene:

    • For Aztec, double-check that port 40400/tcp+udp is open on both your OS firewall and provider console. If you're on Base/OP nodes, make sure the L1 RPC/Beacon URLs are accessible and not running into any rate-limiting issues. (web3creed.gitbook.io)
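For the monitoring point above, a minimal Prometheus scrape config might look like this. The metrics port is an assumption (Nethermind must be started with metrics enabled, and the exposed port should be verified against its docs); node_exporter on 9100 covers the NVMe temperature/SMART/latency tracking:

```shell
# Minimal Prometheus scrape config for a Nethermind host (sketch; port 9091
# for Nethermind metrics is an assumption -- verify against the docs).
cat > prometheus.yml <<'EOF'
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: nethermind
    static_configs:
      - targets: ["localhost:9091"]
  - job_name: node_exporter        # host metrics: NVMe temps, SMART, latency
    static_configs:
      - targets: ["localhost:9100"]
EOF
```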

Key takeaways

  • Nethermind in 2026: You'll want a budget of around 2 TB TLC NVMe and at least 10k IOPS for those full nodes. Plus, using snap sync and rolling/ancient-barrier pruning will help keep things lean and speedy. Check out the details here.
  • Aztec: Good news! Full nodes are pretty lightweight compared to provers. Just make sure your L1 endpoints are reliable. Oh, and don’t forget to secure those ports properly--keep that admin API under wraps. More info can be found here.
  • Base: It’s time to switch to Reth. Aim for full nodes around 2+ TB and archives at about 4+ TB. If you can, go for local NVMe RAID0--it’s the way to go! And thanks to snapshots, rebuilding your setup will be much easier. Get all the specifics here.

Need a reference architecture or a turnkey build?

7Block Labs is all about creating, setting up, and managing these stacks for both startups and big companies--whether you're working with bare metal or the big cloud players. We offer SLO-driven monitoring, snapshot pipelines, and failover solutions that cater to diverse clients. If you're interested in our ready-to-go Terraform + Docker bundles (like Nethermind+CL, Aztec full/sequencer, and Base Reth full/archive) tailored to fit your traffic and compliance requirements, just give us a shout!

Like what you're reading? Let's build together.

Get a free 30-minute consultation with our engineering team.

7BlockLabs

Full-stack blockchain product studio: DeFi, dApps, audits, integrations.

7Block Labs is a trading name of JAYANTH TECHNOLOGIES LIMITED.

Registered in England and Wales (Company No. 16589283).

Registered Office address: Office 13536, 182-184 High Street North, East Ham, London, E6 2JA.

© 2026 7BlockLabs. All rights reserved.