ByAUJay
Geth Requirements, Geth Full Node Disk Size 2026, and HSM PQC Considerations for Validators
A Practical 2026 Guide for CTOs and Infra Leads
Hey there, CTOs and infra leads! This guide is here to help you figure out what hardware you should be snagging for Geth, how much disk space you'll really need after history expiry, and how to get your head around HSM and post-quantum cryptography (PQC) for managing validator keys and remote signing. Plus, we've got some handy commands and migration steps for you.
We've pulled info from the latest client documentation, Ethereum Foundation updates, and the guidance from NIST/IETF on PQC to make sure you’re all set.
What Hardware to Buy for Geth
When you're looking to run Geth efficiently, consider the following specs:
- CPU: Go for a multi-core processor (at least 4 cores). More cores can handle multiple operations smoothly.
- RAM: Aim for 16GB or more. This ensures you have enough memory for smooth operations.
- Disk Type: SSDs are a must. They drastically improve read/write speeds compared to HDDs.
- Disk Space: Plan for at least 1TB so you don't run out of space too soon; 2TB gives you comfortable production headroom (see the sizing guidance later in this guide).
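If you want to sanity-check a candidate box against these minimums, a tiny script does the trick. This is just a sketch: `meets_minimum` is a hypothetical helper, and the Linux probes in the comment are suggestions, not requirements.

```shell
# Sketch: compare a host's specs against the minimums listed above
# (4 cores, 16 GB RAM, 1 TB = 1000 GB of disk).
meets_minimum() {
  cores=$1; ram_gb=$2; disk_gb=$3
  if [ "$cores" -ge 4 ] && [ "$ram_gb" -ge 16 ] && [ "$disk_gb" -ge 1000 ]; then
    echo "ok"
  else
    echo "undersized"
  fi
}

# On Linux you could feed in live values, e.g.:
#   meets_minimum "$(nproc)" "$(free -g | awk '/Mem:/ {print $2}')" 2000
meets_minimum 8 32 2000    # prints: ok
meets_minimum 2 8 500      # prints: undersized
```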
How Much Disk Space Do You Need Post-History-Expiry?
Once you've hit the history expiry and pruned your node, you can significantly reduce the storage requirements:
- After pruning, you might only need around 200GB-500GB depending on your applications and how much transaction data you're planning to keep.
- Regular monitoring will help you adjust your storage needs as blockchain data grows.
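One easy way to automate that monitoring: a cron-able check that flags when disk usage crosses a prune threshold. The 80% cutoff and the helper name here are my own assumptions, not Geth conventions.

```shell
# Sketch: flag when disk usage crosses a prune threshold (default 80%).
needs_prune() {
  used_pct=$1; threshold=${2:-80}
  if [ "$used_pct" -ge "$threshold" ]; then echo "yes"; else echo "no"; fi
}

# Live usage for the filesystem holding your datadir (Linux):
#   used=$(df --output=pcent /var/lib/ethereum | tail -1 | tr -dc '0-9')
#   [ "$(needs_prune "$used")" = "yes" ] && echo "time to prune"
needs_prune 85    # prints: yes
needs_prune 40    # prints: no
```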
Planning HSM and Post-Quantum Cryptography (PQC)
When it comes to validator key management and remote signing, it's crucial to incorporate HSM and PQC:
- Choose Your HSM: Go for options that support high availability and are compliant with standards (FIPS 140-2, or ideally the newer FIPS 140-3, which is replacing it).
- Implement Key Management: Use secure key generation and storage methods.
- Integrate PQC Algorithms: As quantum computers come into play, ensure your cryptographic framework is ready. This includes algorithms like:
- Lattice-based: Good for resisting quantum attacks.
- Hash-based: Simple yet effective for signature schemes.
Concrete Commands
Here are some commands to help you get started:
# To install Geth
sudo add-apt-repository ppa:ethereum/ethereum
sudo apt-get update
sudo apt-get install geth
# To start Geth with specified disk options
geth --syncmode "fast" --rpc --rpcaddr "localhost" --rpcport "8545" --datadir "/your/data/directory"
Migration Steps
If you're working on migrating your existing setup to an updated architecture, follow these steps:
- Backup Your Current Data: Always start with a data backup.
- Install New Hardware: Physically set up your new machines.
- Install Required Software: Use the commands provided earlier to get Geth up and running.
- Restore Data: Migrate your data over to the new setup.
- Test: Once everything is set up, run some tests to ensure everything's functioning as expected.
Keep these pointers in mind and adapt them to your specific needs, and you’ll be navigating the 2026 landscape like a pro!
Who this is for
- Startup teams transitioning from hosted RPC to managing their own execution clients (like Geth) and exploring the ins and outs of running validator operations.
- Enterprise infrastructure and security architects crafting robust, compliant validator and RPC setups.
TL;DR (exec summary)
- By 2026, a production Geth full node will fit nicely on a 2 TB TLC NVMe drive. You should budget for over 500 GB for Geth (snap/full), around 12 TB for legacy archives, and about 1.9 TB for the new path-based archive state. Don’t forget to factor in an extra ~200 GB for consensus data. To keep things running smoothly, aim for a sustained bandwidth of at least 25 Mbit/s. (ethereum.org)
- In July 2025, Partial History Expiry (PHE) was rolled out across all execution clients. This means you can prune pre-Merge block bodies and receipts, saving around 300-500 GB of space, which helps more nodes stick to those 2 TB drives. Plus, Geth v1.16+ comes with a handy one-shot prune command and “era1” history retrieval for targeted data restoration. (blog.ethereum.org)
- PQC has now been standardized (FIPS 203/204/205) and is making its way into HSM firmware (ML‑KEM, ML‑DSA, SLH‑DSA). While Ethereum continues using BLS12‑381 for signing, it’s a smart move to apply PQC to your transport, control plane, and certificate PKI. As you strategize your HSM rollouts, make sure they can handle PQC for TLS and code signing while still using remote signers for BLS. (nist.gov)
Section 1 -- Geth requirements in 2026: what actually matters
Recommended Hardware Budget Lines for Mainnet
If you're looking to set up a single-box EL+CL with light RPC for the mainnet, here are some hardware budget lines you might want to consider:
- CPU: Aim for 4 to 8 modern cores running at 3.0+ GHz. More cores help with those RPC bursts and compaction, while a higher clock speed keeps sync nice and smooth. (ethereum.org)
- RAM: You’ll want around 16 to 32 GB. Geth itself isn’t too demanding, but when you factor in the consensus layer, operating system, monitoring tools, and database caches, having that extra headroom really helps. (ethereum.org)
- Disk:
- Primary: Go for a 2 TB TLC NVMe SSD with DRAM cache (IOPS over 100k). The latency of NVMe is key for sync and RPC performance. (ethereum.org)
- Optional secondary: You can use a budget-friendly HDD or SATA SSD for “ancients”/cold storage, using --datadir.ancient to offload older data. (geth.ethereum.org)
- Network: A solid ≥25 Mbit/s symmetric connection is ideal, and it’s best if it’s unmetered. Validators really can’t afford downtime or bandwidth limits. (ethereum.org)
- OS/filesystem basics: Stick with ext4 or xfs, make sure to use noatime, perform periodic TRIM, and keep your NVMe firmware updated. And while you should keep swap minimal, don’t forget to have it set up just in case.
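For reference, the noatime mount and periodic TRIM mentioned above might look like this. Device names and mount points are placeholders; adapt to your own layout.

```shell
# /etc/fstab entry for the NVMe datadir volume (placeholder device/mount):
#   /dev/nvme0n1p1  /nvme  ext4  defaults,noatime  0 2

# Periodic TRIM via the standard systemd timer (runs fstrim weekly):
sudo systemctl enable --now fstrim.timer

# One-off manual TRIM if you prefer:
sudo fstrim -v /nvme
```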
Why these picks
Ethereum.org’s run-a-node page currently recommends a 2 TB SSD and 16 GB of RAM for execution clients, plus an extra ~200 GB for the consensus beacon data. That gives you breathing space for historical growth between prunes and accounts for log and index overhead. (ethereum.org)
Key Geth Storage Concepts You Should Understand:
When diving into Geth, it helps to know how the client actually lays data out on disk:
- State database: Geth keeps current balances and contract storage in a trie-backed key-value store, updated on every transaction and contract execution. The newer path-based scheme (--state.scheme path) keeps live state roughly flat between prunes.
- Block storage: Blocks are indexed by number and hash for fast retrieval during sync and RPC serving.
- Transaction pool: Pending transactions sit in the txpool until they're included in a block; watching it is a cheap health signal for your node and the network.
- Key management: Geth can create and manage accounts, but for production validators keep signing keys out of the node entirely (see Section 4), and always back up your keystores.
- Light clients: Geth's old light protocol (LES) has been deprecated, so don't plan new deployments around it--use full/snap nodes or external providers instead.
- Pruning: Hash-scheme databases grow until you run an offline prune; the path scheme prunes continuously, so space stays under control.
- Database backends: Geth supports LevelDB (the default) and Pebble via --db.engine=pebble. Switching engines means resyncing into a fresh datadir, so pick before you sync.
- Event logs: Contract events are indexed so you can query historical activity--really helpful for tracking contracts and debugging.
Getting familiar with these will pay off whether you're running a full node, serving RPC, or just monitoring network activity.
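Concretely, the offline pruning mentioned above is a single command for a legacy hash-scheme database. Stop the node first; the datadir path is a placeholder, and path-scheme databases prune continuously so they don't need this step.

```shell
# Offline state prune for a hash-scheme Geth database.
# Run only while Geth is stopped; /nvme/geth is a placeholder datadir.
geth snapshot prune-state --datadir /nvme/geth
```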
- Freezer/Ancients: The Geth client stores old block bodies and receipts--what we call "ancients"--in a separate append-only area. You can relocate this storage with the --datadir.ancient option, which is perfect if you're working with larger or slower disks. (geth.ethereum.org)
- Database engine: By default, Geth uses LevelDB, but you can switch to Pebble with --db.engine=pebble. It’s actively maintained--just remember, you’ll need to resync into a new data directory. This can be really handy for long-term maintainability tests. (geth.ethereum.org)
- Snapshot/snap sync: This is the modern way to sync. Pairing it with regular pruning keeps your disk usage nice and stable. (geth.ethereum.org)
Production-Grade Start Command
Here’s a single-box setup with the consensus layer (CL) co-located and old data offloaded to HDD:
geth \
--syncmode snap \
--authrpc.jwtsecret /var/lib/ethereum/jwtsecret \
--datadir /nvme/geth \
--datadir.ancient /hdd/geth-ancients \
--http --http.addr 127.0.0.1 --http.api eth,net,web3 \
--ws --ws.addr 127.0.0.1 --ws.api eth,net,web3 \
--metrics --pprof
Run the consensus client on the same host and point it at the same JWT secret; ethereum.org recommends co-locating the EL and CL, connected over the local Engine API (which is served only on the authenticated authrpc port). (ethereum.org)
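One tidy way to run that command in production is under systemd. This unit is a sketch: the unit name, user, binary path, and directories are assumptions that mirror the start command above.

```shell
# Write a minimal systemd unit for the Geth start command above.
cat > geth.service <<'EOF'
[Unit]
Description=Geth execution client
Wants=network-online.target
After=network-online.target

[Service]
User=geth
ExecStart=/usr/local/bin/geth \
  --syncmode snap \
  --authrpc.jwtsecret /var/lib/ethereum/jwtsecret \
  --datadir /nvme/geth \
  --datadir.ancient /hdd/geth-ancients \
  --metrics
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

# Then: sudo mv geth.service /etc/systemd/system/ && sudo systemctl enable --now geth
```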
Section 2 -- Geth full node disk size (2026): where you’ll land and why
Here are some reference points you can use for your planning today:
- If you're running a full execution node (whether snap or full), you’re looking at a size of over 500 GB for both Geth and Nethermind. If you decide to go for an archive node, be prepared for around 12 TB, thanks to the legacy hash-based indexing. (ethereum.org)
- Geth introduced a new path-based archive in version 1.16 and beyond, which takes up about 1.9 TB for the full historical state. The catch? eth_getProof doesn’t work for deep history yet. You can tweak how much historical state you keep with --history.state=N, which defaults to a rolling window. (chainrelease.info)
- Growth dynamics: Historically, hash-scheme databases grew about 14 GB a week before any pruning was done, and a periodic prune resets everything back to a baseline. Post-PHE, the pressure from historical growth has eased since you can now delete pre-Merge bodies and receipts locally. (geth.ethereum.org)
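Those growth numbers make capacity planning simple arithmetic. A hedged sketch, using the ~14 GB/week historical hash-scheme rate quoted above (your actual rate will vary):

```shell
# Weeks until a given amount of free headroom is consumed, at an
# integer GB/week growth rate (defaults to the ~14 GB/week figure).
weeks_of_headroom() {
  headroom_gb=$1; growth=${2:-14}
  echo $(( headroom_gb / growth ))
}

# e.g. 280 GB free on the Geth volume at ~14 GB/week:
weeks_of_headroom 280    # prints: 20
```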
What Changed in 2025 (and Why You Should Care)
- Partial History Expiry: On July 8, 2025, the Ethereum Foundation shared that all execution clients now have the ability to prune pre‑Merge block bodies and receipts. This means operators can save about 300-500 GB without affecting how they validate head blocks. Geth v1.16 rolled out some cool features like prune‑history and era1 integration, allowing you to ditch that pre‑Merge history and later rehydrate specific ranges if you need to. This is a solid first step towards the EIP‑4444 rolling expiry. (blog.ethereum.org)
Practical Sizing Guidance for 2026:
- If you just need a full node (no archive queries), go for a 2 TB NVMe drive. You’ll want to set aside about 500-800 GB for Geth, with some room for growth. Make sure to prune your data every few months or when you hit around 80% full. Also, keep an eye on your consensus data--it should be around 200 GB. Check out more on this over at ethereum.org.
- Looking for historical state reads but okay with some proof limitations? Try a path-based archive setup with --state.scheme path and --gcmode archive. Aim for about 2 TB across state and ancients, and save some cash by putting the ancients on cheaper storage via --datadir.ancient. (chainrelease.info)
- If you need deep historical proofs and full indices, the legacy archive is still a hefty ~12 TB+, and it’s only going to get bigger. You might want to think about alternatives like Erigon or Reth, or even consider external history providers. (ethereum.org)
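To sanity-check whether a tier actually fits a given drive, just total the components. A sketch using the full-node figures from this section (the numbers are the section's estimates plus an assumed OS/slack allowance, not guarantees):

```shell
# Does a full-node budget fit on a 2 TB (2000 GB) drive?
GETH_GB=800        # high end of the 500-800 GB full-node estimate
CONSENSUS_GB=200   # beacon data
OS_SLACK_GB=100    # OS, logs, headroom between prunes (assumption)
TOTAL=$(( GETH_GB + CONSENSUS_GB + OS_SLACK_GB ))
echo "$TOTAL GB of 2000 GB"    # prints: 1100 GB of 2000 GB
```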
Budget Example (Two-Disk Layout):
- NVMe1 (2 TB): datadir is set to “hot” LevelDB/Pebble + OS/CL
- HDD/SATA SSD (2-4 TB): ancients accessed through --datadir.ancient
Result: You get a speedy state and economical history; for crash recovery, ancients serve as your source of truth. (geth.ethereum.org)
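Since starting Geth against a missing or empty ancients mount is a classic two-disk-layout foot-gun, a small guard in your start script helps. Everything here (function name, paths) is a sketch:

```shell
# Refuse to proceed unless the ancients directory is mounted and
# non-empty, so Geth never starts against an invalid ancients path.
ancients_ready() {
  dir=$1
  if [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ]; then
    echo "ready"
  else
    echo "not-ready"
  fi
}

# In a start script:
#   [ "$(ancients_ready /hdd/geth-ancients)" = "ready" ] || exit 1
#   exec geth --datadir /nvme/geth --datadir.ancient /hdd/geth-ancients
```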
Section 3 -- Commands you’ll actually run (PHE and targeted history restore)
- You can do a one-time prune of pre-Merge block bodies and receipts in Geth v1.16 or later:
  - First, make sure to stop Geth properly, then:
    geth prune-history --datadir /nvme/geth
  - Restart like you usually do. You should see a huge amount of storage freed up, especially if you've been hanging onto those pre-Merge bodies and receipts. (geth.ethereum.org)
- Grab specific history later (era1 files) without having to re-sync:
  - Whether you’re running or offline:
    geth download-era --server https://mainnet.era1.nimbus.team --block 100000-200000 --datadir /nvme/geth
  - Geth checks the checksums and moves the files into the ancients. It's a good idea to use mirrors maintained by the community. (geth.ethereum.org)
- Move ancients to a more affordable storage option:
  - First, stop Geth. Then copy your ancient folder to the new location, and after that:
    geth --datadir /nvme/geth --datadir.ancient /hdd/geth-ancients
  - Avoid starting Geth with an invalid ancients path; seriously, it's a big no-no. (geth.ethereum.org)
- If you want, you can run with Pebble to check out how the future DB backend might look:
    geth --db.engine=pebble --datadir /nvme/geth-pebble
  You'll need a new datadir for this; Pebble is an actively maintained alternative to LevelDB. (geth.ethereum.org)
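Wrapping the stop → prune → restart sequence above in one script keeps operations consistent across your fleet. This sketch defaults to a dry run that only prints the commands; the systemd unit name "geth" and the datadir are assumptions, and prune-history needs Geth v1.16+.

```shell
# Stop -> prune-history -> restart, with a dry-run default.
# Set DRY_RUN=0 to actually execute.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

run systemctl stop geth
run geth prune-history --datadir /nvme/geth
run systemctl start geth
```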
Section 4 -- Validator key management today: HSMs, remote signers, and what PQC changes
What Ethereum Signing Looks Like Today
- Validator keys are built on BLS12‑381, but here’s the thing: most general-purpose HSMs don’t actually handle BLS signing out of the box. The common approach is to use a remote signer, like Web3Signer, which takes care of managing those BLS keys along with a slashing-protection database. When it comes to Eth1 (secp256k1) keys, you can use either HSMs or cloud KMS, but for Eth2 (BLS), Web3Signer steps in to load the keys into memory and keep everything safe from slashing. Check out the details here: (docs.web3signer.consensys.io)
What PQC Changes (and What It Doesn’t)
Post-quantum cryptography (PQC) is about keeping digital security intact against future quantum computers. Here's what it actually changes--and what it doesn't.
What PQC changes:
- Key establishment: A large quantum computer running Shor's algorithm would break RSA and (EC)Diffie-Hellman, so key exchange moves to quantum-resistant schemes--chiefly ML-KEM, the lattice-based KEM standardized from Kyber. Be wary of older candidate lists: SIDH/SIKE, an isogeny-based scheme, was broken by a classical attack in 2022 and should not be used.
- Digital signatures: The standardized schemes are ML-DSA (lattice-based, from Dilithium) and SLH-DSA (stateless hash-based, from SPHINCS+), with Falcon (lattice-based) still in the standardization pipeline as FN-DSA.
What PQC doesn't change:
- Current systems: Most deployments still run classical cryptography for now. PQC arrives via hybrid modes, not an overnight replacement.
- Security practices: Patching, key hygiene, and access control still matter; PQC is an addition, not a substitute.
- Compliance requirements: Regulatory standards remain in force and will be updated over time to reference the new algorithms.
- User experience: For everyday users, the change happens behind the scenes, inside TLS and PKI.
In a nutshell, PQC brings new primitives to fend off quantum threats, but it's not a complete overhaul--most security practices and existing systems carry on as before.
- In 2024, NIST finalized its post-quantum cryptography (PQC) standards, giving us ML‑KEM (FIPS 203), ML‑DSA (FIPS 204), and SLH‑DSA (FIPS 205). These algorithms are now officially FIPS-approved, and the tech world is aligning around them, especially in TLS, X.509, HSMs, and ACVP/CMVP. Ethereum’s consensus still relies on BLS12‑381, so don't expect PQC to take over validator signatures just yet. For now, the smart move is to apply PQC to your “management plane”--TLS key agreement, PKI, code signing, and backups. (nist.gov)
HSM Reality in 2026
- Thales Luna HSM 7.9.x: This firmware adds ML‑KEM and ML‑DSA mechanisms (PKCS#11 identifiers, key generation, signing, and wrapping), plus hybrid cloning ciphers and improvements to PQC key attestation. Just a heads up, you'll need Luna Client 10.9 or higher to use it.
- Entrust nShield 5: Current firmware supports ML‑KEM, ML‑DSA, and SLH‑DSA. The vendor reports CAVP validation, with CMVP updates on the way. For enterprises, this means you can run PQC in FIPS-tracked HSMs for TLS, code signing, and PKI while keeping validator BLS in remote-signer workflows.
PQC for Transport: Secure the Pipes Now
- Hybrid PQ TLS is here! Cloudflare has rolled out the X25519+ML-KEM hybrid for TLS 1.3, covering client-to-edge and edge-to-origin connections, and big-name browsers now default to X25519MLKEM768. If you're using Cloudflare, or running your own OpenSSL 3 with the OQS provider, you can start hardening those remote signer, RPC, and admin-plane connections right now. (blog.cloudflare.com)
- Give DIY hybrid TLS a shot: OpenSSL 3 with the OQS provider gives you ML-KEM and hybrid key exchange--check the oqs-provider guidance. If you're working on embedded or edge solutions, wolfSSL ships PQC TLS 1.3 suites, and it recently patched a Kyber security-level bug--so keep those libraries up to date! (openquantumsafe.org)
- Keeping up with standards: The IETF TLS Working Group has active drafts on hybrid design and ECDHE+ML-KEM named groups. Following them keeps your policy baselines and interoperability assumptions current. (datatracker.ietf.org)
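To check whether one of your endpoints actually negotiates the hybrid group, a quick probe like this works--assuming an OpenSSL build that knows X25519MLKEM768 (OpenSSL 3.5+, or 3.x with the OQS provider loaded). The hostname is a placeholder.

```shell
# Probe a TLS endpoint for hybrid PQ key agreement. If the library or
# server lacks ML-KEM support, the handshake falls back or fails.
echo | openssl s_client -connect example.com:443 \
  -tls1_3 -groups X25519MLKEM768 -brief 2>&1 | head -5
```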
Remote Signer Patterns That Work
- Web3Signer Layout:
- Keep your BLS keystores either on an encrypted disk or tucked away in a vault--think AWS Secrets Manager, GCP Secret Manager, or HashiCorp Vault. This way, Web3Signer can step in and enforce slashing protection through Postgres.
- For those execution-layer secp256k1 keys, we’ve got HSM/KMS on your side. As for BLS keys, they get loaded into memory, but don’t worry--access is tightly controlled and audited. Check out the details here.
- Network Posture:
Let’s terminate PQC-hybrid TLS right at the signer. Make sure you’re using mutual TLS with short-lived certificates. Locking down source IPs is key here. Keep the signer separate from the beacon/validator clients and steer clear of public RPC. Also, set up a dedicated Postgres database for handling slashing info, ensuring you have synchronous commits on a speedy NVMe drive.
Where DVT Fits
- Distributed Validator Technology (like Obol/SSV) helps lower the risk of relying on one machine for key storage and boosts uptime. We're seeing a quick uptick in adoption, especially with Lido cohorts and those “super clusters” expected to pop up in 2025. This tech is becoming a solid partner to HSM/KMS for better operational resilience. (blog.lido.fi)
Section 5 -- Concrete playbooks
Playbook A -- “Fit Geth + Consensus on 2 TB and Sleep Well”
1) Hardware: 8 cores, 32 GB RAM, 2 TB TLC NVMe with DRAM.
2) Place ancients on a secondary disk: set --datadir.ancient to point to your HDD or SATA SSD. (geth.ethereum.org)
3) Turn on metrics and keep an eye on the disk fill percentage; prune when you're around 80%:
geth prune-history --datadir /nvme/geth
Expect to reclaim hundreds of GB if you had the pre-Merge history hanging around. (geth.ethereum.org)
4) If you find yourself needing specific history ranges later on, just grab the era1 files as needed:
geth download-era --server https://mainnet.era1.nimbus.team --block 12000000-13000000 --datadir /nvme/geth
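Step 3's "prune at ~80% fill" is easy to automate with a small watcher. A sketch under stated assumptions (the datadir path and threshold are placeholders; geth must be stopped before `prune-history` runs, so this only reports and you schedule the actual prune in a maintenance window):

```python
import shutil

PRUNE_THRESHOLD_PCT = 80.0  # prune when the datadir disk passes this fill level

def fill_pct(used: int, total: int) -> float:
    """Disk fill level as a percentage."""
    return 100.0 * used / total

def needs_prune(datadir: str = "/nvme/geth") -> bool:
    """Return True if the disk holding `datadir` is past the prune threshold.

    In the maintenance window, with geth stopped, you would then run:
        geth prune-history --datadir /nvme/geth
    """
    usage = shutil.disk_usage(datadir)
    return fill_pct(usage.used, usage.total) >= PRUNE_THRESHOLD_PCT
```

Wire `needs_prune` into whatever alerting you already have on the box rather than triggering the prune unattended.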
Playbook B -- "I Need Historical State but Not Heavy Proofs"
This is the path-based archive route: you keep all historical state locally, but accept that Merkle proofs for the deeper history aren't served yet.
- Geth v1.16 or higher with the path-based archive:
- Start with a full sync, and then turn on archive indexing:
geth --syncmode full --gcmode archive --history.state=0
- You'll need around ~1.9 TB for historical data; just a heads up, proofs through eth_getProof aren't available for the deeper history yet. (chainrelease.info)
- Make sure to set --datadir.ancient on a slower disk. Keep an eye on how indexing is going before you start counting on that history. (geth.ethereum.org)
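Before counting on the archive, spot-check that deep history actually answers. A stdlib-only sketch (the node URL and address below are placeholder assumptions; a successful `eth_getBalance` at an old block means indexing covers that range):

```python
import json
import urllib.request

def balance_request(address: str, block: int, rpc_id: int = 1) -> bytes:
    """Build an eth_getBalance JSON-RPC payload for a historical block."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "eth_getBalance",
        "params": [address, hex(block)],
        "id": rpc_id,
    }).encode()

def historical_balance(node_url: str, address: str, block: int) -> int:
    """Query an archive node for an account balance at a historical block."""
    req = urllib.request.Request(
        node_url,
        data=balance_request(address, block),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        result = json.loads(resp.read())["result"]
    return int(result, 16)

# Example against a local archive node (URL and address are assumptions):
# historical_balance("http://127.0.0.1:8545",
#                    "0xde0b295669a9fd93d5f28d9ec85e40f4cb697bae", 12_000_000)
```

If the call errors on old blocks but works on recent ones, indexing simply hasn't reached that range yet.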
Playbook C -- “Harden Remote Signer with PQC and Enforce Slashing Protection”
The shape of this playbook: keys in a vault, database-enforced slashing protection, and PQC-hybrid TLS in front of the signer.
- Use Web3Signer for your validators and set up Postgres as your slashing database.
- Keep your BLS keys safe in AWS Secrets Manager or Vault. Make sure to configure the signer to fetch keys from the vault, and don’t forget to set up those slashing locks. Check out the details here.
- For the signer, go ahead and terminate hybrid TLS:
- On the cloud edge, you can either enable Cloudflare’s PQC on TLS 1.3 or deploy OpenSSL 3 with the oqs-provider on your ingress proxy. More info can be found here.
- Keep an eye out for “slashable” event attempts by setting up audits and alerts. And don’t forget to back up your slashing database regularly!
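For the "back up your slashing database regularly" point, a small sketch around `pg_dump` (the database name, backup directory, and schedule are assumptions; the timestamped-name helper is the part worth standardizing so dumps sort chronologically):

```python
import datetime

SLASHING_DB = "web3signer"        # assumed database name
BACKUP_DIR = "/backups/slashing"  # assumed backup location

def backup_name(db: str, when: datetime.datetime) -> str:
    """Timestamped, lexically sortable dump filename,
    e.g. web3signer-20260131T0400.dump"""
    return f"{db}-{when.strftime('%Y%m%dT%H%M')}.dump"

def backup_command(db: str, path: str) -> list[str]:
    """Argv for a compressed custom-format logical dump
    (run as a role with read access to the slashing schema)."""
    return ["pg_dump", "--format=custom", f"--file={path}", db]

# From cron or a systemd timer:
# import subprocess
# now = datetime.datetime.now(datetime.timezone.utc)
# path = f"{BACKUP_DIR}/{backup_name(SLASHING_DB, now)}"
# subprocess.run(backup_command(SLASHING_DB, path), check=True)
```

Custom-format dumps restore with `pg_restore` and support selective table restore, which is handy when you only need the signed-attestation history back.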
Playbook D -- “Plan HSM and PQC Rollout Without Breaking BLS”
Mainstream HSM firmware still doesn't hold BLS12-381 validator keys natively, so the rollout has to be phased around what the hardware can actually do today:
- Near-term (2026): Start using HSM for your organization's PKI, code-signing, and EL secp256k1 keys. It's also a good idea to jump on the PQC algorithms now - think ML-KEM for key establishment and ML-DSA/SLH-DSA for signatures - wherever your firmware (like Thales/Entrust) supports it. Check it out here.
- Transport: Make sure to require hybrid PQ TLS for your RPC/admin/remote-signer channels. Keep an eye on IETF drafts and the vendor TLS stacks to ensure they’re supporting the named-groups. More info can be found here.
- Mid-term: Stay tuned to client roadmaps for any upcoming BLS-in-HSM or enclave projects. As of now, enterprise-grade slashing protection and process segregation help close a lot of the BLS-in-HSM gap. You can learn more about that here.
Section 6 -- Emerging practices we recommend (and why)
- It’s a good idea to co-locate your EL and CL on the same box for the Engine API and stick with a local JWT secret. Try to steer clear of cross-host Engine API unless you have a really solid reason for it. (ethereum.org)
- Keep those ancients low-cost: always turn on --datadir.ancient to move the freezer to non-NVMe storage. This setup is specifically made for O(1) reads from slower disks. (geth.ethereum.org)
- Get onboard with PHE now: prune once when you upgrade your clients to free up hundreds of GB, and then set aside a quarterly maintenance window. When you need to, restore specific ranges using era1. (blog.ethereum.org)
- Choose your archive style wisely: if you’re after “all historical state,” go for the new path-based archive (it’s around 1.9 TB) but be ready to accept the current limitations with eth_getProof. If that’s not the route for you, consider teaming up with community mirrors or a data provider for the heavy lifting. (chainrelease.info)
- Keep PQC in mind where it matters:
- For TLS: use hybrid X25519MLKEM768; you can verify using Cloudflare’s tools or your own OpenSSL setup. (developers.cloudflare.com)
- In PKI: start rolling out dual-stack or PQC-ready certs wherever you can; keep an eye on LAMPS WG drafts for ML-DSA in X.509. (datatracker.ietf.org)
- For HSM: make sure you’re planning your firmware and CMVP timelines; it’s smart to test pilots on non-critical services first (think code signing, internal APIs). (entrust.com)
- Boost your resilience with DVT: whenever possible, use Obol/SSV for production validator sets to minimize single-box risks as you get your HSM and PQC strategies up to speed. (blog.lido.fi)
Section 7 -- Decision checklists
Buy Sheet for a First Production Node (EL+CL, Light RPC)
- 8 cores and 32 GB of RAM; 2 TB TLC NVMe for the datadir, plus 2-4 TB HDD/SATA SSD for the ancients
- Geth v1.16 or newer; prune the pre‑Merge history just once; and make sure to enable metrics
- Have the consensus client running on the same machine; share the JWT secret; keep the Engine API local only
- If you're using a remote signer, set it up on a dedicated host or VPC with PQC TLS, and make sure the slashing DB is on NVMe
- Don’t forget backups for your keystores and slashing DB; plus, make sure you’ve tested your restore runbooks
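"Tested restore runbooks" means actually comparing restored bytes to the originals. A minimal integrity check, assuming you restore into a scratch directory (never the live datadir) and diff against the source:

```python
import hashlib
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Streaming SHA-256 of a file."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(original_dir: Path, restored_dir: Path) -> list[str]:
    """Return relative paths whose restored copy is missing or differs."""
    bad = []
    for src in original_dir.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(original_dir)
        dst = restored_dir / rel
        if not dst.is_file() or sha256_file(src) != sha256_file(dst):
            bad.append(str(rel))
    return sorted(bad)
```

An empty list means every keystore and slashing-DB dump came back byte-identical; anything else names exactly what to investigate.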
HSM/PQC Rollout (12-Month Plan)
- Check out the TLS endpoints and get hybrid PQ TLS set up using either Cloudflare or OpenSSL OQS.
- Update the HSM firmware for ML‑KEM/ML‑DSA/SLH‑DSA wherever possible; make sure to validate with CAVP test vectors if it applies.
- Generate some PQC‑capable test certificates and keep an eye on the latest IETF LAMPS/TLS drafts.
- Maintain the validator BLS with a remote signer and slashing protection; don’t forget to review HSM options every year.
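"Validate with CAVP test vectors" boils down to a known-answer-test loop: feed each published input, compare the output hex byte-for-byte. A generic harness sketch (the PQC primitive itself would come from your HSM or a library such as liboqs's Python bindings, which is an assumption; SHA-256 stands in here so the harness itself is runnable):

```python
import hashlib
from typing import Callable

def run_kat(fn: Callable[[bytes], bytes],
            vectors: list[tuple[str, str]]) -> list[int]:
    """Run known-answer tests; return indices of failing vectors.

    Each vector is (input_hex, expected_output_hex), as published
    in CAVP response files.
    """
    failures = []
    for i, (msg_hex, want_hex) in enumerate(vectors):
        got = fn(bytes.fromhex(msg_hex))
        if got != bytes.fromhex(want_hex):
            failures.append(i)
    return failures

# Stand-in primitive; swap in the HSM or library call under test,
# e.g. deterministic ML-DSA signing with the fixed seed from the vector file.
def sha256(msg: bytes) -> bytes:
    return hashlib.sha256(msg).digest()
```

Run the harness once per firmware update per algorithm; a non-empty failure list is a hard stop before that firmware touches production keys.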
Appendix -- Reference links and context
- Take a look at the hardware requirements for running an Ethereum node, including a handy snapshot table that breaks down client sizes and EL+CL disk budgets. You can find it all at ethereum.org.
- If you're diving into Geth, check out their storage and pruning options. They've got guides on freezer/ancients, offline pruning, history pruning, Pebble databases, and even a path-based archive how-to. More info can be found at geth.ethereum.org.
- Don’t miss the announcement about PHE and its ecosystem history mirrors from era1 over on blog.ethereum.org.
- Curious about EIP-4444 and its history expiry details? Check it out at eips.ethereum.org.
- You’ll want to catch the highlights from the Geth v1.16 release, especially regarding history mode and era1. Read more on chainrelease.info.
- For those looking into post-quantum cryptography, NIST recently announced the finalized PQC standards (FIPS 203/204/205). Dive into the details at nist.gov.
- There are also some interesting drafts on TLS hybrid approaches and vendor deployments, including notes from Cloudflare and AWS. Check out the drafts at datatracker.ietf.org.
- If you’re looking at HSM vendor firmware and PQC support, be sure to look at Thales Luna 7.9.x and Entrust nShield 5 CAVP for specifics. More details are available at thalesdocs.com.
- Finally, for insights on remote signer architecture and key storage, check out Web3Signer. Their documentation has got you covered at docs.web3signer.consensys.io.
7Block Labs Note
Hey there! If you're on the lookout for a customized BOM and runbook for your workloads--whether that's a mix of RPC, indexing, validators with DVT, or even keeping compliance in check--we’ve got you covered. We’ll analyze disk growth, pruning schedules, and PQC/HSM timelines to make sure everything aligns with your internal SLAs and any change-control windows you have in place.
Like what you're reading? Let's build together.
Get a free 30-minute consultation with our engineering team.