7Block Labs
Blockchain Technology

By AUJay

Designing Rollup SDKs Around Blobs: The New Default for Posting Data

Blobs are now the default way for rollups to post data to Ethereum. This guide is aimed at engineering leads and decision-makers building rollup SDKs that treat blobspace as a first-class target. It covers the core parameters, APIs, pricing logic, failover patterns, and migration steps you can put into practice today.


Why “blob‑first” is the default in 2025

  • Dencun activated EIP‑4844, adding a new blob-carrying transaction type (0x03). Blob data is stored on the consensus layer for roughly 18 days and priced by its own blob gas market. This decouples rollup data availability (DA) costs from EVM gas and has produced a durable drop in Layer 2 posting costs.
  • Pectra doubled blob throughput: the per-block target rose from 3 to 6 blobs and the maximum from 6 to 9. That lowers posting costs and gives more headroom during traffic spikes. Etherscan now surfaces live blob counts per block.
  • The next upgrade, Fusaka, is scheduled for December 3, 2025 and introduces PeerDAS: instead of downloading every blob, nodes sample the data. This clears the path to much higher blob targets in subsequent forks.

Bottom line: your SDK should post to blobs by default, with well-defined fallbacks for the occasional periods of blob congestion.


The exact blob primitives your SDK must understand

  • Size and units

    • A blob is 4096 field elements of 32 bytes each, i.e. 131,072 bytes. Blob gas is metered with GAS_PER_BLOB = 2^17 = 131,072, and block limits are denominated in blob gas. (eips.ethereum.org)
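
To keep these numbers out of scattered magic constants, here is a minimal sizing sketch. The names are illustrative, not from any particular SDK; note that encoding payloads into canonical BLS field elements typically leaves ~31 usable bytes per element, so the practical payload per blob is smaller than 131,072 bytes.

```typescript
// EIP-4844 sizing constants
const FIELD_ELEMENTS_PER_BLOB = 4096;
const BYTES_PER_FIELD_ELEMENT = 32;
const BYTES_PER_BLOB = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT; // 131,072
const GAS_PER_BLOB = 131072; // 2^17, blob gas consumed per blob

// Usable payload when packing into canonical field elements
// (commonly 31 bytes per element: 4096 * 31 = 126,976 bytes).
const USABLE_BYTES_PER_BLOB = FIELD_ELEMENTS_PER_BLOB * 31;

// How many blobs a payload needs at a given usable capacity.
function blobsNeeded(payloadBytes: number, capacity = USABLE_BYTES_PER_BLOB): number {
  return Math.max(1, Math.ceil(payloadBytes / capacity));
}
```

Keeping these as named constants (rather than literals) also makes the later "blob-param-only" fork changes a one-line edit.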
  • Lifetime

    • Blobs persist for roughly 18 days (4096 epochs), after which consensus clients prune them; the KZG commitments remain on-chain permanently. Design your SDK with archival in mind. (ethereum.org)
  • Type‑3 transactions

    • Type-3 transactions add new fields, notably max_fee_per_blob_gas and blob_versioned_hashes (the versioned hashes of the KZG commitments). (eips.ethereum.org)
  • New block header fields

    • Two new header fields, blob_gas_used and excess_blob_gas, let you compute the blob base fee and track congestion. Client libraries have caught up; ethers v6, for example, exposes blobGasUsed on Block objects. (eips.ethereum.org)
  • EVM opcodes and precompiles you can make use of:

    • BLOBHASH (0x49) gives contracts access to the versioned hashes of a transaction's blobs.
    • BLOBBASEFEE (0x4a) returns the current blob base fee (EIP‑7516).
    • The point-evaluation precompile at address 0x0A (50,000 gas) verifies a KZG proof against a versioned hash.
  • Replacement rules

    • Mempool replacement rules require bumping the max fee per blob gas (along with the execution fees) by at least 1.1× to replace a pending blob transaction, and some client mempools enforce stricter bumps. Build this into your fee escalators. (eips.ethereum.org)
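
A fee escalator respecting that rule can be sketched as below. The bump ratio is a parameter precisely because clients may demand more than 1.1×; ceiling division guarantees the bump never rounds below the threshold.

```typescript
interface BlobTxFees {
  maxFeePerGas: bigint;
  maxPriorityFeePerGas: bigint;
  maxFeePerBlobGas: bigint;
}

// Bump every fee cap by at least num/den (default 1.1x), rounding up so a
// replacement is never rejected for an off-by-one-wei shortfall.
function nextRbfBid(prev: BlobTxFees, num = 110n, den = 100n): BlobTxFees {
  const bump = (v: bigint) => (v * num + den - 1n) / den;
  return {
    maxFeePerGas: bump(prev.maxFeePerGas),
    maxPriorityFeePerGas: bump(prev.maxPriorityFeePerGas),
    maxFeePerBlobGas: bump(prev.maxFeePerBlobGas),
  };
}
```

Call this on a backoff timer until the transaction lands or your cap is reached.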
  • RPCs You’ll Actually Call

    • Use eth_blobBaseFee (supported by most providers) to read the current blob base fee over JSON-RPC, and the Beacon REST endpoint eth/v1/beacon/blob_sidecars/{block_id} to fetch blobs for verification or archival.

The blob fee market and how to price posts

  • Independent fee market

    • The blob base fee adjusts block by block in EIP‑1559 style against a target of 6 blobs per block (post-Pectra). When usage sits below target, the price decays toward the minimum.
  • Concrete arithmetic you need in code

    • Total blob fee per tx = get_base_fee_per_blob_gas(header) × GAS_PER_BLOB × number_of_blobs. You can derive the base fee from the header's excess_blob_gas or read it via eth_blobBaseFee.
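
In code, the derivation follows the spec's fake_exponential over excess_blob_gas. A sketch is below; the fee-update fraction changed with Pectra (EIP‑7691), so treat it as a parameter rather than hard-coding it. The 3338477 value used in the test is the original EIP‑4844 constant, shown for illustration.

```typescript
const GAS_PER_BLOB = 131072n; // 2^17
const MIN_BASE_FEE_PER_BLOB_GAS = 1n;

// Integer approximation of factor * e^(numerator/denominator), per EIP-4844.
function fakeExponential(factor: bigint, numerator: bigint, denominator: bigint): bigint {
  let i = 1n;
  let output = 0n;
  let accum = factor * denominator;
  while (accum > 0n) {
    output += accum;
    accum = (accum * numerator) / (i * denominator);
    i += 1n;
  }
  return output / denominator;
}

// updateFraction: the EIP-4844 constant pre-Pectra; EIP-7691 changes it.
function baseFeePerBlobGas(excessBlobGas: bigint, updateFraction: bigint): bigint {
  return fakeExponential(MIN_BASE_FEE_PER_BLOB_GAS, excessBlobGas, updateFraction);
}

function totalBlobFee(excessBlobGas: bigint, numBlobs: bigint, updateFraction: bigint): bigint {
  return baseFeePerBlobGas(excessBlobGas, updateFraction) * GAS_PER_BLOB * numBlobs;
}
```

With zero excess blob gas the base fee sits at the 1-wei minimum, so a two-blob post costs just 262,144 wei of blob fee.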
  • Calldata vs blobs

    • Calldata still costs 4 gas per zero byte and 16 per non-zero byte, but EIP‑7623 (live with Pectra) raises the floor cost of data-heavy transactions, and EIP‑7976 proposes pushing it further. The repricing tilts the economics decisively toward blobs, reinforcing the blob-first default.
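
To compare fairly against blobs, your fallback calculator has to model that floor. A sketch, assuming the spec's token accounting (a zero byte counts as 1 token, a non-zero byte as 4; standard cost is 4 gas per token, the EIP‑7623 floor is 10 per token):

```typescript
const TX_BASE_GAS = 21000n;
const STANDARD_TOKEN_COST = 4n; // => 4 gas per zero byte, 16 per non-zero byte
const FLOOR_TOKEN_COST = 10n;   // EIP-7623 floor for data-heavy transactions

// Gas charged for a calldata-carrying tx: the greater of the standard cost
// (including execution gas) and the EIP-7623 data floor.
function calldataTxGas(data: Uint8Array, executionGas: bigint): bigint {
  let tokens = 0n;
  for (const b of data) tokens += b === 0 ? 1n : 4n;
  const standard = TX_BASE_GAS + STANDARD_TOKEN_COST * tokens + executionGas;
  const floor = TX_BASE_GAS + FLOOR_TOKEN_COST * tokens;
  return standard > floor ? standard : floor;
}
```

For a pure data-posting transaction (little execution gas), the floor dominates, which is exactly the case a DA fallback hits.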
  • Observed post‑Pectra market

    • Since the spike on June 9th, blob usage has mostly sat below target, keeping the blob base fee near its minimum. L2 DA spending has fallen accordingly, with daily blob capacity peaking at roughly ~8.1 GB.

Proposals such as EIP‑7762 and EIP‑7918 would adjust the blob pricing bounds. Treat these values as configuration parameters in your SDK, not compile-time constants. (eips.ethereum.org)


Posting policy: make “blobs” the default, with guardrails

Your batcher/sequencer should implement five behaviors:

1) Compression and Packing

  • Use Brotli at high quality settings (the OP Stack exposes zlib/brotli flags; Arbitrum Nitro already does this) and target more than 90% blob utilization. Average utilization across rollups currently sits around ~118.3 KB of 131.1 KB, so there is room to improve.
  • Pre-pack batch chunks to the exact 131,072-byte blob boundary, and simulate "should I add one more tx?" before committing, to avoid spilling into a second blob. Track utilization per chain and time of day.
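
The "one more tx?" check reduces to blob-boundary arithmetic. A sketch, assuming sizes are known up front and ~31 usable bytes per field element (126,976 payload bytes per blob); in practice you re-run compression after appending, since compressed sizes are not additive, and compare actual output sizes instead:

```typescript
const BLOB_PAYLOAD_BYTES = 126976; // 4096 field elements x 31 usable bytes

// Add the tx only if it does not push the batch into another blob.
function shouldAddTx(batchBytes: number, txBytes: number): boolean {
  const blobsBefore = Math.max(1, Math.ceil(batchBytes / BLOB_PAYLOAD_BYTES));
  const blobsAfter = Math.max(1, Math.ceil((batchBytes + txBytes) / BLOB_PAYLOAD_BYTES));
  return blobsAfter <= blobsBefore;
}

// Utilization of the blobs the batch currently occupies (0..1).
function utilization(batchBytes: number): number {
  const blobs = Math.max(1, Math.ceil(batchBytes / BLOB_PAYLOAD_BYTES));
  return batchBytes / (blobs * BLOB_PAYLOAD_BYTES);
}
```

Emit `utilization` as a metric per posted batch; it is the number your sub-80% alert should watch.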
2) Multi-blob Batching

  • Pectra allows up to nine blobs per block, but packing many blobs into a single transaction increases replacement risk and makes builders pickier. Prefer 1-3 blobs per transaction, and split large payloads across multiple transactions rather than sending "mega-blob transactions," unless you are on private builder routes. (Confirm the sweet spot with your builders.)

3) Fee Caps and Replacement Logic

  • Set both max_fee_per_gas and max_fee_per_blob_gas. Implement a replacement ladder that bumps both by at least 1.1×, with backoff timers between attempts. Arbitrum's batch-poster ships sensible RBF schedules you can mirror (docs.arbitrum.io).

4) Smart Fallback When Blob Prices Spike

  • The OP Stack supports data-availability-type=auto, which switches between blobs and calldata automatically, and Arbitrum lets you ignore blob prices or define protection windows. A minimal policy engine looks like this:

    • Default: blobs.
    • If the predicted blob fee exceeds the calldata cost (accounting for EIP‑7623) for N consecutive polls → switch temporarily.
    • Hysteresis: require M consecutive blocks of better blob pricing before reverting. (docs.optimism.io)
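
The three rules above can be sketched as a small state machine. N and M are the consecutive-poll thresholds; the cost inputs are whatever your pricing module produces for each option:

```typescript
type DaMode = 'blobs' | 'calldata';

// Blob-first DA selector with hysteresis: switch away after N consecutive
// polls where blobs are pricier, switch back after M consecutive polls where
// blobs are cheaper again.
class DaPolicy {
  private mode: DaMode = 'blobs';
  private streak = 0;

  constructor(private n: number, private m: number) {}

  update(blobCostWei: bigint, calldataCostWei: bigint): DaMode {
    if (this.mode === 'blobs') {
      this.streak = blobCostWei > calldataCostWei ? this.streak + 1 : 0;
      if (this.streak >= this.n) { this.mode = 'calldata'; this.streak = 0; }
    } else {
      this.streak = blobCostWei < calldataCostWei ? this.streak + 1 : 0;
      if (this.streak >= this.m) { this.mode = 'blobs'; this.streak = 0; }
    }
    return this.mode;
  }
}
```

The streak counter resets on any contrary sample, which is what prevents flapping on a single noisy block.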

5) Beacon‑aware Posting Windows

  • Blocks near the 6-blob target keep base-fee movement contained; when you see spikes, spread your bundles across blocks rather than concentrating them. (Read the current excess_blob_gas to estimate the next block's blob base fee.) (eips.ethereum.org)
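
That estimate is mechanical: excess_blob_gas accumulates usage above target and drains below it. A sketch, with the target parameterized since forks change it (6 blobs post-Pectra):

```typescript
const GAS_PER_BLOB = 131072n;

// EIP-4844 update rule: excess blob gas grows when a block uses more blob gas
// than the target and shrinks (floored at zero) when it uses less.
function nextExcessBlobGas(
  parentExcess: bigint,
  parentBlobGasUsed: bigint,
  targetBlobsPerBlock = 6n,
): bigint {
  const target = targetBlobsPerBlock * GAS_PER_BLOB;
  const sum = parentExcess + parentBlobGasUsed;
  return sum < target ? 0n : sum - target;
}
```

Feed the result into the spec's fake_exponential to price the next block before you post.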

Read path, fraud/validity proofs, and blob archival

  • Retrieval: fetch blob sidecars from a Beacon node via eth/v1/beacon/blob_sidecars/{block_id} (block_id can be head, a slot, or a root) and cache them locally. Several RPC providers, including Ankr and Blockdaemon, expose Beacon APIs for this.
  • On-chain verification: rollup inbox contracts should use the point-evaluation precompile (0x0A) to validate KZG proofs against the versioned hashes committed by your L2. ZK rollups such as zkSync already split batches across multiple blobs and verify each one's proof in their L1 contracts.
  • Archival strategy: blobs are pruned after roughly 18 days, so your SDK should run an archiver service that persists raw sidecars to S3, IPFS, or Arweave. Once the protocol's retention window ends, long-term data availability is your responsibility, not Ethereum's.
  • Observability: surface blob counts and fees on your operations dashboards. Etherscan's /txsBlobs and block pages simplify reconciliation. Alert when utilization dips below 80% or the blob base fee jumps more than X% block over block.
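
A retrieval helper stays trivially testable if URL construction is kept pure and the HTTP client is injected. A sketch (the provider list and `get` function are placeholders for your own transport and caching layer):

```typescript
// Build the Beacon REST URL for blob sidecars; blockId may be "head", a slot
// number, or a block root.
function blobSidecarUrl(beaconBase: string, blockId: string | number): string {
  return `${beaconBase.replace(/\/+$/, '')}/eth/v1/beacon/blob_sidecars/${blockId}`;
}

// Try each provider in order until one responds.
async function fetchSidecars(
  providers: string[],
  blockId: string | number,
  get: (url: string) => Promise<unknown>,
): Promise<unknown> {
  let lastErr: unknown;
  for (const base of providers) {
    try { return await get(blobSidecarUrl(base, blockId)); }
    catch (err) { lastErr = err; }
  }
  throw lastErr;
}
```

Cache successful responses keyed by block root so reorgs never serve you a stale sidecar set.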

API surface: what your SDK should expose

  • Pricing

    • getBlobBaseFee(): returns eth_blobBaseFee, falling back to computing the fee from excess_blob_gas if the provider doesn't support the method.
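
A sketch of that fallback, with the transport injected so it is easy to test. The RPC method names are the standard JSON-RPC ones; `computeFromExcess` stands in for the EIP-4844 fake_exponential and is a hypothetical parameter name:

```typescript
type RpcSend = (method: string, params: unknown[]) => Promise<any>;

// Prefer eth_blobBaseFee; if the provider rejects it, derive the fee from the
// latest header's excessBlobGas instead.
async function getBlobBaseFee(
  send: RpcSend,
  computeFromExcess: (excessBlobGas: bigint) => bigint,
): Promise<bigint> {
  try {
    return BigInt(await send('eth_blobBaseFee', []));
  } catch {
    const block = await send('eth_getBlockByNumber', ['latest', false]);
    return computeFromExcess(BigInt(block.excessBlobGas ?? 0));
  }
}
```

Because `send` is a parameter, the same function works over HTTP, WebSocket, or a mock in unit tests.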
  • Packing

    • packToBlobs(batch, compressor='brotli‑11'): packs data into N × 131,072-byte chunks and generates the versioned hashes.
  • Posting

    • submitBlobTx(blobs[], maxFeePerBlobGas, gasCaps): sends a type-3 (0x03) transaction with Replace-By-Fee support, bumping fees by at least 1.1× on replacement.
  • Fallback

    • chooseDA({calldataFloorRules, blobBaseFee}): returns 'blobs' or 'calldata' based on your thresholds, accounting for Pectra's calldata repricing.
  • Retrieval & Proof

    • getBlobByBlockId(blockId): fetches blob data from the Beacon API; verify it with KZG proofs via the precompile at 0x0A.
  • Archival

    • archiveBlobs(blobs, policy): persists blobs to S3, IPFS, or Arweave with TTLs matched to your dispute windows. For durable public storage, the Arweave SDKs and gateways are a solid option.

Calldata is the escape valve--not your default

  • Cost reality: calldata is priced per byte (4 gas for zero bytes, 16 for non-zero), and post-Pectra rules make data-heavy transactions more expensive in order to protect the p2p layer. Your SDK should fall back to calldata only under an explicit policy or in emergencies.
  • Operational reality: both OP Stack and Arbitrum recommend blob posting wherever possible and expose switches to enforce or auto-select blobs. Wrap these in your SDK so users get consistent behavior regardless of stack.

Multi‑DA abstractions: Celestia, EigenDA, Avail--without giving up blob defaults

Customers may want optional DA modes to manage costs. Expose DA as a pluggable interface:

  • Ethereum Blobs (default)
    The default: the strongest L1 coupling and the simplest trust model. Ideal for settlement-critical data with minimal bridge complexity.
  • Celestia via Blobstream (SP1)
    Celestia's on-chain light client on Ethereum verifies Celestia headers with ZK proofs, so your L1 contracts can check Celestia data root commitments for inclusion proofs. Build a "CelestiaDA" driver that posts data to Celestia and relays the commitments back to L1 through the Blobstream contracts.
  • EigenDA (EigenLayer AVS)
    The EigenDA Proxy sidecar handles dispersal and retrieval for your rollup node cluster, with DA certificates verified on L1 through the EigenDAServiceManager. It exposes failover signals (e.g. HTTP 503) so your batcher can fall back to L1 blobs if EigenDA degrades.
  • Avail DA
    Mainnet DA launched in July 2024. Integrate via Avail's light clients and attestation bridge, but keep Ethereum blobs as the safety floor for settlement-critical frames.
  • NEAR DA
    Available in the Polygon CDK as an optional driver for CDK-based ZK rollups. Add a clear security and composability note in your UI.

Implementation Example: Orbit + EigenDA

Arbitrum Nitro exposes flags to enable the EigenDA Proxy. Your SDK can derive the correct node configuration automatically and forward the certified header to the rollup inbox. See the EigenDA documentation for the full flag reference.


Contract‑level changes you should ship

  • Keep versioned hashes, skip the blobs

    • Contracts should store blob versioned hashes and verify KZG proofs via the point-evaluation precompile when needed; the EVM cannot read blob data directly. (eips.ethereum.org)
  • Fee-aware accounting

    • When metering prover/sequencer fees on-chain, use the BLOBBASEFEE opcode for trustless pricing of blob consumption. (eips.ethereum.org)
  • Eventing for off-chain retrieval

    • Emit a mapping from batch IDs to blob versioned hashes so your off-chain archiver can subscribe and fetch the corresponding sidecars from Beacon nodes.

Practical configuration examples (copy/paste into runbooks)

  • OP Stack Batcher

    • Set compression-algo=brotli‑10/11 and data-availability-type=auto, track utilization, and alert when it dips below 80%.
  • Arbitrum Batch Poster

    • Enable EIP‑4844 posting and watch numBlobs in the logs. Keep ignore-blob-price=false (the default), keep "force blobs" switches ready for contingencies, and configure RBF timings starting from the provided defaults (docs.arbitrum.io).
  • zkSync-Style Chunking

    • Split pubdata into up to six blobs per batch, verify each blob with point evaluation from Executor.sol, and emit six system logs (one per commitment). This pattern is useful for ZK chains adding blob support.

Monitoring and SRE guardrails

  • Price and Capacity

    • Watch eth_blobBaseFee and estimate the next block's fee from excess_blob_gas. Alert on block-over-block jumps above 25%, and graph blobs targeted versus blobs included per block in Grafana. (quicknode.com)
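
The 25% jump check is a one-liner worth getting right with integer math, since wei-scale values overflow floats. A sketch (threshold in percent, inputs in wei):

```typescript
// True when the blob base fee rose more than thresholdPct from one block to
// the next. Pure bigint arithmetic avoids float precision loss on wei values.
function feeSpiked(prevWei: bigint, currWei: bigint, thresholdPct = 25n): boolean {
  if (prevWei === 0n || currWei <= prevWei) return false;
  return (currWei - prevWei) * 100n > prevWei * thresholdPct;
}
```

Wire this to your alerting pipeline on every new head.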
  • Health

    • Monitor Beacon endpoints for blob_sidecars availability and stay connected to multiple providers. If Beacon reads fail for longer than a configured window N, fail over to calldata. (ankr.com)
  • Forensics

    • Reconcile your posted blobs against Etherscan's blob views, and track per-blob utilization (bytes used out of 131,072). (info.etherscan.com)
  • Archival

    • Extend retention well past the 18-day pruning window via a policy engine: 30-180 days on S3, or long-term storage on Arweave. Checksum archived data against the KZG commitments to detect corruption. (docs.chainstack.com)

Roadmap: design for 2026‑grade scale now

  • Pectra already doubled blob throughput (target 3 → 6, max 6 → 9). Make these numbers configurable in your SDK.
  • Fusaka is scheduled to ship PeerDAS on December 3, 2025, enabling much higher blob targets without overloading nodes. Expect further "blob‑param‑only" forks that raise these limits; keep your limiters and packers parameterized so they adapt.

A crisp checklist for leaders

  • Product
    • Make "blobs" the default DA and ship a clearly documented fallback path.
  • Engineering
    • Pack data to 131,072-byte boundaries, compress with brotli-11, and build a dual-cap RBF escalator covering both gas and blob gas.
  • Security
    • Verify KZG proofs on-chain, retain raw sidecars beyond 18 days, and test reorg and retrieval-failure scenarios.
  • Ops
    • Track eth_blobBaseFee, blob_gas_used, and excess_blob_gas; plan around blob targets and reconcile against Etherscan's blob views. (quicknode.com)

How 7Block Labs can help

We have migrated OP Stack and Nitro deployments to blob-first posting and shipped DA-pluggable SDKs covering Ethereum blobs, Celestia, EigenDA, and Avail. Our tuned compressors and posting policies consistently achieve over 95% blob utilization with clean failover.

If you need a blob-first, PeerDAS-ready SDK with observability and SLOs built in, we can deliver it in weeks rather than months.


Sources and specs worth bookmarking

  • The EIP‑4844 spec: constants, header fields, fee calculation, BLOBHASH, point evaluation, and the 1.1× replacement rule.
  • The Cancun‑Deneb (Dencun) and Pectra upgrade pages: activation dates, blob lifetime, and the 6/9 throughput parameters.
  • The eth_blobBaseFee RPC and the ethers v6 blobGasUsed fields.
  • The Beacon API blob sidecars endpoints, for retrieval and archival.
  • OP Stack batching and Arbitrum's post-4844 configuration guides.
  • zkSync's multi-blob pubdata design post-4844.
  • Celestia Blobstream (SP1) and EigenDA integration documentation.

Design for blobs first. Your users will notice the difference in fees and reliability, your operations will gain predictability, and you'll be ready for the blobspace scale on the 2026 roadmap.

Like what you're reading? Let's build together.

Get a free 30-minute consultation with our engineering team.

7BlockLabs

Full-stack blockchain product studio: DeFi, dApps, audits, integrations.

7Block Labs is a trading name of JAYANTH TECHNOLOGIES LIMITED.

Registered in England and Wales (Company No. 16589283).

Registered Office address: Office 13536, 182-184 High Street North, East Ham, London, E6 2JA.

© 2026 7BlockLabs. All rights reserved.