By AUJay
Ethereum Node Hardware Requirements, Ethereum Node Requirements, and Ethereum RPC Node Requirements
Summary: If you're looking to run stable, production-ready Ethereum nodes in 2026, the quickest route is NVMe-first storage, client combos suited to your workload (full, archive, or tracing), and a hardened RPC behind a proxy. This guide breaks down the latest baselines and tuning details for EL/CL clients, dives into the storage and bandwidth realities of the blob era, and shares practical deployment strategies for both startups and enterprises.
Who this guide is for
Decision-makers looking into whether to set up their own Ethereum infrastructure or go with managed providers, along with architects figuring out how to size machines for production EL/CL nodes and high-throughput RPC, should pay attention. We're diving into real, up-to-date numbers and best practices as of January 2026--no vague “it depends” here. Check out more on this at (ethereum.org).
What changed since 2024: the blob era and why it matters for hardware
- EIP‑4844, also known as “proto‑danksharding,” made its debut in the Dencun upgrade on March 13, 2024. This update rolled out data "blobs" that stick around for a short time, about 18 days or so (roughly 4096 epochs). Now, EL nodes aren’t hanging onto these blobs for the long haul, while CL nodes keep sidecars around just a little bit longer. This change helps keep long-term disk space in check, though it does ramp up short-term bandwidth and a few storage needs for CL nodes. You can read more about it here.
- The spec has set a cap on the maximum blob overhead per block at around 0.75 MB, which means block propagation stays manageable on modern connections. To prepare for this, several CL clients have started to offer temporary blob storage. For instance, Teku is advising operators to plan for about 50 GB, with a maximum of about 100 GB in worst-case scenarios for blob files, and thankfully, these aren’t going to balloon endlessly. Check out more details here.
Implication: Make sure to prepare for solid bandwidth and keep a little extra room for CL disk space, but there’s no need to go overboard with “blob-sized archives.”
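Those numbers are easy to sanity-check yourself. Here's a quick back-of-the-envelope calculation using the public protocol constants (4096-epoch retention, 32 slots per epoch, 12-second slots, and the ~0.75 MB = 768 KB worst-case blob payload per block):

```shell
# Blob retention window: 4096 epochs * 32 slots * 12 s per slot
RETENTION_SECONDS=$((4096 * 32 * 12))
RETENTION_DAYS=$((RETENTION_SECONDS / 86400))
echo "retention: ~${RETENTION_DAYS} days"              # ~18 days

# Worst case: every slot in the window carries the max ~768 KB of blobs
SLOTS=$((4096 * 32))
WORST_CASE_GB=$((SLOTS * 768 / 1024 / 1024))
echo "worst-case blob storage: ~${WORST_CASE_GB} GB"   # ~96 GB
```

That lines up neatly with Teku's "~50 GB typical, ~100 GB worst case" guidance above: the theoretical ceiling only hits if every single slot is packed with maximum blobs for the full 18 days.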
Baseline Ethereum node hardware: credible, current numbers
- General guidance (running both EL and CL on a single host):
- Minimum requirements: At least 2 TB SSD, 8 GB RAM, and a connection speed of 10+ Mbit/s.
- Recommended specs: A speedy 2+ TB SSD, 16+ GB RAM, and a connection speed of 25+ Mbit/s for better performance.
- Don't forget to add around 200 GB for consensus (beacon) data, which can vary based on the client and features you’re using. Check out more details on ethereum.org.
- Execution clients (disk footprints and modes):
- Geth: If you're running a snap-synced full node, the disk footprint has historically landed around 500-650 GB, but plan for about 2 TB in practice. There's a new path-based archive mode that needs around 2 TB to keep full history, but just a heads-up--it doesn't currently support historical eth_getProof (those Merkle proofs). For that, you'll still need the hash-based archive option, which is a whole different beast at 20+ TB. Check out more on their official site.
- Erigon v3: For pruned “full” nodes, expect roughly 920 GB, and if you go for the “minimal” setup, that's more like 350 GB. As for the archive, you're looking at about 1.77 TB on the mainnet based on measurements from September 2025. More details can be found on their documentation.
- Nethermind: If you’re diving into the mainnet full setup, a fast disk with 2 TB NVMe is your best bet. They recommend a baseline of 16 GB RAM and 4 cores, while the archive mode bumps that up to about 128 GB RAM and 8 cores. You can get all the specifics in their system requirements.
- Reth: If you’re considering this one, note that a full node will take about 1.2 TB, while the archive version is around 2.8 TB. They recommend a stable internet connection of 24+ Mbps and really emphasize the importance of TLC NVMe drives. Want to dig deeper? Check out their site.
- Consensus clients (CL) resource snapshots:
- If you're running Teku as a full node and validator, you'll want at least 4 cores, 16 GB of RAM, and a 2 TB SSD. Just a heads up, the real-world size of the beacon DB can change depending on the client, and it typically falls in the 80-170 GB range based on what the community has measured. (docs.teku.consensys.net)
- Bandwidth: If you're looking to keep those peer counts and validators in good shape, try to hit at least 50 Mbps. For non-staking nodes, around 25 Mbps should do the trick (just following Erigon's advice here). Also, the max blob load for EIP-4844 still works well with most standard links. (docs.erigon.tech)
Storage that actually syncs: NVMe, not wishful thinking
- When choosing an SSD for mainnet EL databases, go for a TLC NVMe that has DRAM and delivers good sustained low-latency IOPS. Steer clear of DRAM-less or QLC drives. The community has put together a "hall-of-fame" and "hall-of-blame" list that clearly shows budget SSDs just can’t keep up with state I/O. Check it out here.
- If you're looking for recommendations, consider the WD Black SN850X, Seagate FireCuda 530, or KC3000. And don't forget about enterprise NVMe options with Power Loss Protection (PLP). Keep those SSD temps below 50°C, and mount your DB filesystem with `noatime` to help reduce write amplification. More details can be found here.
- Watch out for some cloud surprises: elastic network volumes (like gp3) might boast "headline IOPS," but they can still have higher write latency. For EL DBs, local NVMe or a RAID0 setup of local NVMe is the way to go. The production notes from Base suggest RAID0 of local NVMe with ext4. More info is available here.
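To make the `noatime` advice concrete, here's what a dedicated DB volume might look like in `/etc/fstab` (a sketch only: the device path and mount point are placeholders for your own layout):

```
# /etc/fstab -- dedicated NVMe volume for the EL database
# (device path and mount point are illustrative)
/dev/nvme0n1p1  /var/lib/ethereum  ext4  defaults,noatime  0  2
```

Skipping access-time updates avoids a metadata write on every read, which matters on a database that does millions of small reads per hour.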
Node types and when you need them
- Full/Pruned Node (this is the default for Geth/Nethermind and pruned/full for Erigon):
- This kind of node keeps the recent state handy, which makes it great for current dapp operations, recent historical queries, and logging. It’s also the quickest to sync, thanks to either snap or staged sync. You can dive into more details here.
- Archive Node:
- This one's essential if you need speedy random access to historical state at arbitrary old blocks, or for deeper analysis. With Geth, you can go for:
- Hash-based Archive (legacy): This option stores complete historical tries and allows for full eth_getProof at any block, but it takes up a whopping 20+ TB.
- Path-based Archive (recommended): A more manageable choice at around 2 TB, though it currently doesn’t support historic eth_getProof (but stay tuned!). Check out the details here.
- For Erigon, the archive node is about ~1.77 TB, and it shines in heavy historical scans or traces through rpcdaemon. You can learn more about it here.
- Tracing Node:
- This node is ideal for those parity-style trace_* or deep debug_* tasks. Both Erigon and Nethermind offer trace_*, while Geth has debug_* (the semantics aren’t quite the same). It's best to run tracing on a dedicated node to keep things from messing up the write path. More info can be found here.
RPC node requirements that matter in production
- Method coverage vs. client:
- Are you looking for `trace_replayTransaction` or `trace_filter`? If so, check out Erigon or Nethermind (they've got the trace_* features). Need something like `debug_traceTransaction`? Geth has got your back with its debug_* support. Many teams actually run both types behind a proxy to handle all their workloads like pros. (docs.nethermind.io)
- Historical proofs:
- If you've got to serve `eth_getProof` at random historical blocks, you'll need Geth's hash-based archive or another system that keeps historical tries. Just a heads-up: Geth's path-based archive isn't going to cut it for this. (geth.ethereum.org)
- Concurrency knobs that actually help:
- Erigon rpcdaemon: you can tweak `--rpc.batch.concurrency`, `--rpc.batch.limit`, and `--db.read.concurrency`. And hey, disabling HTTP/WS compression can really boost your raw throughput. It's a good idea to run rpcdaemon out-of-process and pin it to dedicated cores. (github.com)
- Nethermind: their performance guide dives into pre-warming, peer connection rates, and high-RAM RocksDB options for RPC workloads. Just be careful--these adjustments can balloon your DB size and CPU usage in exchange for faster speeds. (docs.nethermind.io)
- Geth: since version 1.13+, the cache flags don't really affect pruning or DB size under the new path schema. So don't think cranking `--cache` will magically solve growth issues or out-of-memory errors. (blog.ethereum.org)
- Security:
- Keep your HTTP/WS RPC bound to localhost, okay? Only expose it through a reverse proxy with auth/TLS and method allow-lists. And for the love of all things secure, never make the Engine API (8551) publicly accessible; it should be JWT-authenticated and kept private to the client layer. (geth.ethereum.org)
Ports and networking you must get right
- Execution P2P: You’ll want to use TCP/UDP ports 30303 for Geth, Besu, and Nethermind, and usually 30304 for Erigon’s sentry. Make sure to open or forward these ports for some good peering action. For Consensus P2P, you’ll need 9000 TCP/UDP for Lighthouse, Nimbus, and Lodestar, and 13000/TCP + 12000/UDP for Prysm. For more details, check out the docs.ethstaker.org.
- As for the RPC defaults, you'll find 8545 for HTTP and 8546 for WebSocket. The Engine API runs on 8551 with JWT authentication (private). If you're using Lighthouse REST, it’s set to 5052 by default. Just a heads up: Keep your CL/EL APIs private unless you really mean to share them. More info can be found at setup-guide.web3pi.io.
- Bandwidth targets are pretty important too:
- If you’re running non-staking nodes, aim for at least 25 Mbps; for validators, it’s best to have around 50 Mbps. With EIP-4844, you can handle more peak payload, but you’ll still want to stick within these ranges. For hardware requirements, check out docs.erigon.tech.
Fast, safe syncing in 2026
- Execution layer:
- Go for snap sync if you're using Geth, or check out staged sync if you’re on Erigon or Reth. Reth’s staged sync is pretty cool because it grabs headers and bodies online, but you’ll do most of the state processing offline. Just keep in mind you’ll need a few hours online, followed by some CPU/disk-heavy work. (geth.ethereum.org)
- Consensus layer:
- Try using checkpoint sync (think weak-subjectivity) from a bunch of trusted endpoints. This lets you validate from a recent finalized checkpoint, which can cut your sync time down to just minutes. There are some handy community tools out there, like checkpointz and quorum checkers, so make sure to verify across several providers. (docs.ethstaker.org)
- Note: By the way, Geth also has “blsync,” which is a beacon light client built right into Geth for non-validator tasks. Just a heads up though: it’s not suitable for any production money-handling or validators because it offers weaker guarantees. (geth.ethereum.org)
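The "verify across several providers" step can be as simple as comparing finalized roots before trusting a checkpoint. A minimal sketch of that quorum check (the `ROOT_A`/`ROOT_B` values here are hypothetical stand-ins for what you'd fetch from two independent endpoints, e.g. via `curl` against `/eth/v1/beacon/states/finalized/finality_checkpoints`):

```shell
# Hypothetical finalized roots fetched from two independent checkpoint providers
ROOT_A="0x6b4e...placeholder"
ROOT_B="0x6b4e...placeholder"

if [ "$ROOT_A" = "$ROOT_B" ]; then
  RESULT="MATCH"      # safe to use either as a checkpoint-sync source
else
  RESULT="MISMATCH"   # stop: do not trust either provider blindly
fi
echo "$RESULT"
```

If the roots disagree, don't just pick one; a wrong checkpoint means syncing onto the wrong chain, and you want at least two independent providers agreeing before you proceed.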
Practical hardware profiles (2026, mainnet)
- Solo validator + light RPC on one box (cost-efficient)
- You'll need an 8-core setup (think modern Xeon/EPYC or Ryzen), 32 GB of RAM, and a 2 TB TLC NVMe drive (make sure it has a heatsink!). A solid Internet speed of 100/50 Mbps+ and a UPS are a must.
- For your execution layer (EL), go with Geth or Nethermind; for the consensus layer (CL), you can choose from Lighthouse, Prysm, or Teku. Don't forget to add about 50 GB of headroom for those blobs. Check out more details here.
- Read-heavy RPC node (non-archive)
- This one requires a beefier setup with 16 cores and 64 GB of RAM, plus a 2-4 TB TLC NVMe setup (consider RAID0 if you’re using cloud local NVMe), and a 1 Gbps NIC.
- For the execution layer, you’ll want Erigon “full” with a separate rpcdaemon that’s optimized for batch concurrency; and for the consensus layer, a lightweight client like Lighthouse works well. Make sure to run your RPC behind NGINX or HAProxy with some rate limiting and IP allow-lists to keep it secure. More info can be found here.
- Archive + tracing node (analytics/explorer)
- For this setup, aim for 16-24 cores, 64-128 GB of RAM, and at least 4 TB of TLC NVMe (or more if you can swing it) with a 1 Gbps or faster connection.
- Use Erigon archive for tracing and historical scans; if you need to handle historical
eth_getProofat arbitrary blocks, add a Geth hash-archive as well. You can dive deeper into the requirements here.
Example: production‑ready single‑host mainnet node
- OS/filesystem:
- Go with Ubuntu 22.04 LTS and use ext4 on your TLC NVMe drive. Make sure to mount it with noatime. Keep an eye on those SSD temperatures; try to keep them below 50°C. (ethdocker.com)
- EL (Erigon full + rpcdaemon):
- Start Erigon with this command: `erigon --prune.mode=full --http=false`
- For rpcdaemon, use: `rpcdaemon --http --http.api eth,net,debug,trace,web3,txpool --rpc.batch.concurrency=64 --db.read.concurrency=64`
- It's a good idea to place rpcdaemon behind NGINX for added security (TLS, authentication, and rate limits). (github.com)
- CL (Lighthouse):
- Launch Lighthouse with: `lighthouse beacon --http --http-address 127.0.0.1 --http-port 5052` (keep this private); don't forget to enable checkpoint sync on the first boot.
- You'll also want to open the P2P port 9000 for both TCP and UDP. (lighthouse-book.sigmaprime.io)
- Networking:
- Forward ports 30303 (and 30304 if you’re using Erigon as a sentry) for TCP/UDP, and also 9000 for TCP/UDP from your router or firewall. Keep ports 8545, 8546, and 8551 internal for security. (docs.ethstaker.org)
- Monitoring:
- Enable metrics for Geth/Erigon or set up Prometheus endpoints, and import the standard Grafana dashboards. By default, Geth serves metrics at 127.0.0.1:6060. (geth.ethereum.org)
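A minimal Prometheus scrape config for that Geth endpoint might look like this (a sketch: the job name is arbitrary, and the metrics path assumes Geth's default setup when started with `--metrics`):

```
scrape_configs:
  - job_name: geth
    metrics_path: /debug/metrics/prometheus   # Geth's Prometheus-format endpoint
    static_configs:
      - targets: ["127.0.0.1:6060"]
```

From there, the community Grafana dashboards for Geth/Erigon import against these metric names out of the box.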
RPC hardening checklist (what we implement for clients)
- Bind your EL/CL APIs to localhost, and make sure to publish only through a reverse proxy with TLS, authentication, and IP allow-lists. It’s super important never to expose the Engine API (8551) publicly. You can check out more about this here.
- Only whitelist the JSON-RPC namespaces that you really need, like eth, net, and web3. It’s best to steer clear of exposing debug over public HTTP. If you need to use WebSockets, limit that to subscriptions that absolutely require it. For more details, visit this link.
- It’s a good idea to separate tracing and archive workloads onto their own dedicated nodes. You can utilize client features like Erigon’s rpcdaemon or Nethermind’s tuning to keep CPU and disk usage steady during those spike times. Take a peek at the specifics here.
- Don’t forget to rate limit and cap batch sizes at the proxy level. Keeping batch requests within server limits will save you a lot of hassle down the line. For more info, check this link.
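Putting the proxy items on this checklist together, a stripped-down NGINX sketch might look like the following (illustrative only: the hostname, cert paths, zone size, allow-list, and rate limits are all assumptions you'd tune per workload):

```
limit_req_zone $binary_remote_addr zone=rpc:10m rate=20r/s;

server {
  listen 443 ssl;
  server_name rpc.example.com;                 # placeholder hostname
  ssl_certificate     /etc/ssl/rpc.crt;        # placeholder paths
  ssl_certificate_key /etc/ssl/rpc.key;

  location / {
    limit_req zone=rpc burst=40 nodelay;       # per-IP rate limit
    client_max_body_size 1m;                   # caps oversized batch payloads
    allow 203.0.113.0/24;                      # example IP allow-list
    deny all;
    proxy_pass http://127.0.0.1:8545;          # RPC stays bound to localhost
  }
}
```

Note that `client_max_body_size` is a blunt instrument: it caps request bytes, not batch item counts, so pair it with the client-side batch limits (e.g. Erigon's `--rpc.batch.limit`) mentioned above.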
EL/CL client combinations: choose by workload
- High‑throughput reads and deep history:
- Set up Erigon archive along with rpcdaemon for trace_* and log scans. If you want to keep the historic eth_getProof users happy, consider adding the Geth hash‑archive. Check out the details here.
- “Typical dapp” RPC and staking on one host:
- You can run a Geth or Nethermind full node along with Lighthouse, Prysm, or Teku. Just make sure to use checkpoint sync on the Consensus Layer (CL) and snap/staged sync on the Execution Layer (EL). More info can be found here.
- Fast sync and efficient steady‑state:
- Consider using Reth full for a speedy staged sync and a quick response time for eth_call/logs. Just pair it up with a widely-used Consensus Layer. Get all the specifics here.
Client diversity remains an important goal for network health. Make sure to think about minority clients when they fit your criteria. (ethereum.org)
Testnets in 2026: where to practice at scale
- Sepolia is still the go-to testnet for application testing. Holesky is being phased out after the Pectra testing wraps up, and Hoodi kicked off in March 2025 for validator and infrastructure testing. So, if you’re still depending on Holesky, make sure to plan your migrations! (blog.ethereum.org)
Cost/performance tuning that moves the needle
- Disk first, then CPU:
- When it comes to EL block processing, you’ll usually find that it’s more I/O-bound than CPU-bound. So, it’s smart to focus on NVMe latency and consistency rather than just cranking up the vCPU count. Make sure to pre-warm and tune your databases only if you really get the trade-offs involved (remember, a bigger database could mean needing more CPU). (docs.nethermind.io)
- Filesystem and kernel:
- Using ext4 with the noatime option, setting decent open-files limits, and steering clear of CoW filesystems for EL databases often leads to fewer headaches compared to trying out more complicated setups. (ethdocker.com)
- Cloud layout:
- Go for instances that have local NVMe drives; you can stripe them using RAID0 for better bandwidth. For a solid production setup, look at Base’s examples: they use RAID0 local NVMe on i7i.* with ext4 for both the Reth archive and Geth full. (docs.base.org)
Quick decision matrix
- You'll mostly need…
- For the current state, logs, and submitting transactions → A full or pruned node on a speedy TLC NVMe will do the trick; 2 TB should cover you for now, but if you want to be set for the long haul, aim for 4 TB. Check out the details on ethereum.org.
- If you want the historical state at random blocks (and need proofs) → You’ll want a Geth hash-archive, which is around 20+ TB, or go for a specialized historical store. Otherwise, Erigon or Reth archives can do most of the analytics for you without the proofs. More info can be found on geth.ethereum.org.
- For debug/trace introspection at scale → You're looking at Erigon or Nethermind with trace_* on a dedicated tracing node, plus you might need to fine-tune rpcdaemon/DB. Take a look at the specifics on github.com.
- For validator operations → You can use any popular CL that supports checkpoint sync, just make sure you have at least 50 Mbps and an extra ~50 GB for blob headroom. Also, keep your Engine API private and remember to use a UPS and monitoring. Check out more details on docs.teku.consensys.net.
Implementation snippets
- Geth + CL (Secure Engine API):
- You can start Geth with this command: `geth --authrpc.addr localhost --authrpc.port 8551 --authrpc.vhosts localhost --authrpc.jwtsecret /path/jwt --http --http.api eth,net,web3`
- Make sure your CL is set up to connect to `http://localhost:8551` using the same JWT secret. Keep the ports 8545 and 8551 internal. For more info, check out geth.ethereum.org.
- Erigon RPC Separation:
- Run Erigon with `erigon --prune.mode=full …` (don't enable HTTP here).
- For the RPC part, start it with `rpcdaemon --http --http.api eth,net,debug,trace,web3 --rpc.batch.concurrency=64`.
- It's a good idea to put rpcdaemon behind NGINX, add TLS/auth, and set reasonable rate limits per IP. More details can be found on github.com.
- Lighthouse REST (Local Only):
- Start Lighthouse with: `lighthouse beacon --http --http-address 127.0.0.1 --http-port 5052`. If needed, you can expose this through a reverse proxy. You can read more about it here: lighthouse-book.sigmaprime.io.
Takeaways (TL;DR)
- First up, disk size. For most full nodes, a 2 TB TLC NVMe will do the job just fine today; however, if you're thinking long-term, shoot for 4 TB to give yourself some extra breathing room. When it comes to storage, Erigon and Reth are more efficient with their archives (about 1.8-2.8 TB) compared to the older Geth hash-archive, which is a hefty 20+ TB. Just keep in mind that you'll still need the legacy archive for historical `eth_getProof`. (docs.erigon.tech)
- Next, keep that RPC private and minimal. It's best to publish it only behind a proxy that has authentication and TLS. Also, make sure to separate tracing and analytics from the actual write operations. (geth.ethereum.org)
- When it comes to costs, don't forget to factor in some disk and bandwidth overhead from blobs, which have about an 18-day retention. Budget around 50 GB for transient CL storage and make sure you've got stable bandwidth around 25-50+ Mbps. (newreleases.io)
- For those looking to boost throughput, Erigon's `rpcdaemon` tuning, paired with Reth's staged architecture, can really make a difference, especially when you connect them with fast NVMe drives and reasonable proxy limits. (github.com)
If you need a customized bill of materials and a high-availability (HA) topology that fits your workload--like traffic shape, trace depth, retention, and multi-region setups--we're more than ready to help you create a blueprint for it.