7Block Labs
Ethereum Improvement Proposals

By AUJay

EIP-7623 in Practice: Repricing Calldata and Refactoring L2 Data Pipelines

Description: EIP-7623 went live with Ethereum's Prague/Electra ("Pectra") mainnet upgrade on May 7, 2025. It reprices calldata and, in doing so, forces significant changes to L2 batch posting and data engineering. This guide distills the new rules into rollout steps, cost models, and pipeline changes for L2s and companies building on Ethereum.

TL;DR for decision‑makers

  • EIP-7623 introduces a floor price for calldata, sharply raising costs for data-heavy transactions and making fallback-to-calldata strategies far less attractive. It shipped in Pectra on May 7, 2025. (blog.ethereum.org)
  • In the same fork, Pectra doubled blob capacity via EIP-7691 and formalized a per-fork blob schedule via EIP-7840. L2 DA pipelines should now treat blobs as the default and reach for calldata only in emergencies. (eips.ethereum.org)

1) What changed on May 7, 2025

Pectra activated on mainnet at epoch 364032 (10:05:11 UTC) as a bundle of 11 EIPs. The two that matter most here are EIP-7623 (calldata repricing) and EIP-7691 (blob throughput). See the Ethereum Foundation's mainnet announcement and the meta-EIP list for confirmation and the full spec set.

  • EIP-7623 establishes a calldata floor cost, which reduces worst-case block size and variance and keeps network propagation safe as blob capacity grows.
  • EIP-7691 raises the blob target and maximum per block from 3/6 to 6/9, and adjusts the blob base fee update fraction so pricing dynamics stay aligned with the new 2:3 target:max ratio.
  • EIP-7840 adds a "blobSchedule" to execution layer (EL) config files, so clients carry the per-fork blob target, max, and fee responsiveness directly in config.

For roadmap planning, note that Ethereum is targeting PeerDAS (EIP-7594) in the Fusaka upgrade, expected December 2025, which expands blob DA capacity via data availability sampling. The shift to blob-centric pipelines is a long-term direction, not a short-term fix. (eips-wg.github.io)


2) EIP‑7623: the new gas math you must internalize

The proposal introduces a "floor" for calldata gas that activates for data-heavy transactions, i.e. those that spend little EVM execution gas relative to their calldata. The rules:

  • tokens_in_calldata = zero_bytes + 4 × nonzero_bytes.
  • A transaction's gas used starts at 21,000, plus the greater of:
    a) the "standard" cost: 4/16 gas per zero/nonzero byte, plus execution gas (and initcode overhead for contract creation), or
    b) the floor: 10 gas per token in tokens_in_calldata, i.e. 10 per zero byte and 40 per nonzero byte.
  • A transaction whose gas limit is below 21,000 + 10 × tokens_in_calldata fails the intrinsic check and is invalid. (eips.ethereum.org)
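As a sanity check, the rule above can be sketched in Python. This is a minimal model of the spec text, ignoring contract-creation/initcode overhead and access-list costs:

```python
# Minimal model of the EIP-7623 gas rule (initcode overhead omitted).
STANDARD_TOKEN_COST = 4          # 4/16 gas per zero/nonzero byte
TOTAL_COST_FLOOR_PER_TOKEN = 10  # 10/40 gas per zero/nonzero byte
TX_BASE_COST = 21_000

def tokens_in_calldata(data: bytes) -> int:
    zeros = data.count(0)
    return zeros + 4 * (len(data) - zeros)

def tx_gas_used(data: bytes, execution_gas: int) -> int:
    # Gas used is the base cost plus the greater of the standard
    # path (which includes execution gas) and the data floor.
    tokens = tokens_in_calldata(data)
    standard = STANDARD_TOKEN_COST * tokens + execution_gas
    floor = TOTAL_COST_FLOOR_PER_TOKEN * tokens
    return TX_BASE_COST + max(standard, floor)

def min_valid_gas_limit(data: bytes) -> int:
    # Below this limit the transaction is invalid outright.
    return TX_BASE_COST + TOTAL_COST_FLOOR_PER_TOKEN * tokens_in_calldata(data)
```

A data-heavy payload with little execution gas lands on the floor branch; an execution-heavy transaction stays on the standard 4/16 path.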

Operationally:

  • Data-heavy transactions, such as L2 batches posted as calldata, now effectively pay 10/40 gas per zero/nonzero byte.
  • Typical user transactions that do real execution work are largely unaffected: they follow the "standard" path at 4/16 per byte, where execution costs dominate. (eips.ethereum.org)

Why This Matters

The floor upends the earlier economics that let some L2s lean on calldata. At 10/40, posting large payloads as calldata is rarely competitive with blobs, except during extreme, temporary blob fee spikes. (eips.ethereum.org)


3) The other half: blobs just scaled up (and are easier to engineer against)

  • EIP‑7691 raises blob capacity per block to a target of 6 and a max of 9, and retunes the blob base fee update fraction: full blocks raise the fee by about 8.2% per block, empty blocks lower it by about 14.5%. This improves availability while keeping fee dynamics controlled. (eips.ethereum.org)
  • EIP‑7840 records per-fork blob targets and maxima in the EL config under "blobSchedule," making future changes easier without heavy Engine API work. (eips-wg.github.io)

Through much of 2024 and into 2025, the blob base fee sat at or near the minimum, punctuated by short-lived spikes (notably during blobscription activity). Your batcher should be prepared for sudden price jumps, but the base case is cheap blobs, and the new 6/9 capacity adds substantial headroom. (ethresear.ch)


4) Cost model: when does calldata ever win now?

Let S be the payload size in bytes and r the fraction of nonzero bytes (for compressed rollup payloads, r is typically 0.85-0.95).

  • tokens_in_calldata = S × (1 + 3r)
  • Floor gas = 10 × tokens_in_calldata = 10 × S × (1 + 3r)

Worked example: with S = 250,000 bytes and r = 0.90, tokens = 250k × 3.7 = 925k, so floor gas = 9.25M.

At a gas price of 20 gwei, that is roughly 0.185 ETH for the data alone, before any EVM execution. (eips.ethereum.org)
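The arithmetic checks out; here is a quick verification using the same illustrative S, r, and gas price:

```python
# Verifying the worked example: S = 250,000 bytes, r = 0.90, 20 gwei.
S, r = 250_000, 0.90
tokens = S * (1 + 3 * r)           # ~925,000 tokens
floor_gas = 10 * tokens            # ~9,250,000 gas
cost_eth = floor_gas * 20 * 1e-9   # gas x 20 gwei, in ETH (~0.185)
```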

Blob equivalent:

  • One blob holds 131,072 bytes, so a 250 KB payload needs 2 blobs.
  • Blob fees are blob_base_fee × 131,072 per blob, plus standard EL execution gas for the type‑3 transaction. In practice, the blob base fee tends to sit near the minimum, and deviations are brief. (eip.directory)
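Putting both lanes side by side, a sketch of the comparison (the blob base fee and execution gas here are illustrative inputs, not live values):

```python
import math

BLOB_SIZE = 131_072  # bytes per blob (EIP-4844)

def calldata_floor_gas(size_bytes: int, r: float) -> float:
    # EIP-7623 floor for a payload with nonzero-byte fraction r.
    return 21_000 + 10 * size_bytes * (1 + 3 * r)

def blob_lane_cost_gwei(size_bytes: int, blob_base_fee_gwei: float,
                        exec_gas: int, gas_price_gwei: float) -> float:
    # Blob fee per blob, plus standard EL execution gas for the tx.
    n_blobs = math.ceil(size_bytes / BLOB_SIZE)
    return n_blobs * BLOB_SIZE * blob_base_fee_gwei + exec_gas * gas_price_gwei
```

At S = 250 KB the blob lane needs 2 blobs; at a 1 gwei blob base fee that is about 262k gwei of blob fees, versus roughly 185M gwei for the calldata floor at a 20 gwei gas price.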

Conclusion

With the 10/40 floor in place, calldata is strictly a fallback, used only when truly necessary. If you keep a fallback-to-calldata policy, define a clear cutoff and a fast path back to blobs, and note that the break-even threshold is materially higher than it was pre-Pectra.


5) Pipeline refactors every L2 and infra provider should prioritize

A) Treat blobs as the default DA lane

  • Verify your batcher defaults to blobs. On OP Stack, set --data-availability-type=blobs, or auto to let the batcher pick between calldata and blobs based on L1 prices. With 7623 in effect, retune auto's thresholds to lean heavily toward blobs. (docs.optimism.io)
  • Use multi-blob transactions to improve frame packing. On OP Stack, --target-num-frames controls frames per blob transaction (6 is a reasonable starting point). Also tune tip caps and resubmission timers: replacing blob transactions can require doubling fees in current mempools. (docs.optimism.io)
  • Tune OP_BATCHER_MAX_CHANNEL_DURATION (around 1500 L1 blocks, roughly 5 hours) to balance cost amortization against safe-head liveness for downstream integrations such as CEXs and bridges. (docs.optimism.io)

B) Add a beacon‑aware ingestion layer

Blobs live on consensus (beacon) nodes, not EL nodes. Your sequencer/proposer, verifiers, explorers, and data warehouse therefore need to:

  • Connect to the beacon API and fetch blob sidecars from /eth/v1/beacon/blob_sidecars/{block_id}. (github.com)
  • Plan for pruning at roughly 18 days: run your own blob archiver or consume an existing one for historical blobs. Base maintains an open-source archiver with disk and S3 backends, and the OP Stack docs cover archiver configuration. (github.com)
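A minimal ingestion sketch, assuming a consensus client exposing the standard Beacon API (the endpoint path is the standard route; the base URL is an assumption for illustration):

```python
import json
import urllib.request

def sidecar_url(beacon_base: str, block_id: str) -> str:
    # Standard Beacon API route for blob sidecars.
    return f"{beacon_base}/eth/v1/beacon/blob_sidecars/{block_id}"

def fetch_blob_sidecars(beacon_base: str, block_id: str) -> list:
    # Returns one entry per blob: the blob payload, kzg_commitment,
    # kzg_proof, and the inclusion proof for the block's commitments.
    with urllib.request.urlopen(sidecar_url(beacon_base, block_id)) as resp:
        return json.load(resp)["data"]
```

For example, `fetch_blob_sidecars("http://localhost:5052", "head")` would pull the sidecars for the current head block from a local consensus node.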

C) Implement end‑to‑end blob validation plumbing

  • Use the c-kzg library bindings (Go, Rust, Python, etc.) to verify commitments and proofs in archival, compliance, and analytics workflows. Track releases to pick up fixes for known issues. (github.com)

D) Update gas estimation, RPC, and bundling

  • Wallets, RPC providers, bundlers, and paymasters must account for the 7623 floor in eth_estimateGas: a transaction whose gas limit falls below 21,000 + 10 × tokens_in_calldata is invalid. Enforce this before signing to avoid user-facing out-of-gas failures. (eips.ethereum.org)
  • For ERC‑4337 infrastructure, ensure simulation and bundling respect the 7623 floor so invalid user operations are rejected early. (eips.ethereum.org)
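A pre-signing guard for the invalidity rule might look like this (a sketch of the check only, not a full gas estimator):

```python
def meets_7623_floor(data: bytes, gas_limit: int) -> bool:
    # Reject before signing if the limit is below the EIP-7623
    # intrinsic minimum, so the tx can never fail this validity rule.
    zeros = data.count(0)
    tokens = zeros + 4 * (len(data) - zeros)
    return gas_limit >= 21_000 + 10 * tokens
```

Ten nonzero bytes, for instance, contribute 40 tokens, raising the minimum valid gas limit from 21,000 to 21,400.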

E) Instrumentation and SLOs

  • Monitor blob fill rate (blobs per block), blob base fee, inclusion latency, and replacement rate, especially when fees climb. Note that EIP-7691's fee response is asymmetric: expect the base fee to fall faster than it rises. (eips.ethereum.org)
  • Alert when the batcher posts partially filled blobs for extended periods; that usually signals the channel duration or compression settings need adjustment. (docs.optimism.io)

6) Practical examples you can adopt this sprint

Example 1 -- Calldata floor estimator for batcher fail‑safes

Purpose

If you still allow calldata fallback during blob fee spikes, gate it explicitly and monitor it closely.

  • Compute tokens = S × (1 + 3r).
  • Set min_gas_limit = 21000 + 10 × tokens; reject the transaction if the wallet's gas limit is lower.
  • Estimate cost at gas_price_gwei: gas = 21000 + max(4 × tokens + execution_gas, 10 × tokens); then price = gas × gas_price.
  • Allow the fallback only if price_blob > price_calldata × (1 + policy_margin).

This mirrors the 7623 formula and its invalid-transaction rule, so gated fallbacks will not sit unmined in the mempool. (eips.ethereum.org)
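The steps above can be combined into a single gate function (policy_margin and all price inputs are illustrative knobs, not spec values; the gas math follows the EIP-7623 formula):

```python
import math

BLOB_SIZE = 131_072  # bytes per blob (EIP-4844)

def gate_calldata_fallback(S, r, exec_gas, gas_price_gwei,
                           blob_base_fee_gwei, policy_margin=0.25):
    tokens = S * (1 + 3 * r)
    min_gas_limit = 21_000 + 10 * tokens           # invalid-tx rule
    # EIP-7623: greater of the standard path (incl. execution) and the floor.
    gas = 21_000 + max(4 * tokens + exec_gas, 10 * tokens)
    price_calldata = gas * gas_price_gwei          # in gwei
    n_blobs = math.ceil(S / BLOB_SIZE)
    price_blob = (n_blobs * BLOB_SIZE * blob_base_fee_gwei
                  + exec_gas * gas_price_gwei)
    allow = price_blob > price_calldata * (1 + policy_margin)
    return min_gas_limit, allow
```

With the Section 4 payload (S = 250k, r = 0.90) and a 1 gwei blob base fee, the gate correctly refuses the calldata fallback: blobs remain far cheaper.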

Example 2 -- OP Stack batcher knobs for medium‑throughput chains

  • Enable blobs and multi-frame channels: --data-availability-type=blobs --target-num-frames=6.
  • Start with OP_BATCHER_MAX_CHANNEL_DURATION=1500 (roughly 5 hours) to optimize fill ratio; reduce it if "safe" liveness SLOs suffer.
  • Raise minimum tip caps slightly (around 2 gwei) and extend the resubmission timeout to avoid repeated fee doublings on replacement. (docs.optimism.io)

Example 3 -- Beacon API fetching for explorers and warehousing

  • Fetch blobs from the beacon API: GET /eth/v1/beacon/blob_sidecars/{block_id}. Persist the blob payload, KZG commitment, proof, and inclusion proof. (quicknode.com)
  • For history older than about 18 days, either:
    a) run a non-pruning beacon node, or
    b) run a blob archiver (see Base's reference implementation with an S3 backend). (github.com)

7) Compression and framing tips that actually move your bill

  • Fill your blobs: each holds 131,072 bytes, and posting 1-1.5 blobs' worth of data wastes paid-for space. The OP Stack channel/frame mechanism exists to pack batches efficiently; use multi-frame channels to hit target blob counts per transaction. (specs.optimism.io)
  • For sparse chains, use span batches (OP Stack --batch-type=1). They cut metadata overhead and improve compression while keeping channel and frame formats intact.
  • Avoid zero-byte tricks that claim to "optimize" calldata under 7623: the floor scales with tokens (zeros included), and the design intent is to push pure DA use toward blobs. Expect remaining loopholes to get priced in over time; see EIP-7981 on access lists. (eips.ethereum.org)
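A simple fill-ratio metric supports the partial-blob alerting discussed above (a sketch; the alert threshold is up to your SLOs):

```python
import math

BLOB_SIZE = 131_072  # bytes per blob (EIP-4844)

def blob_fill_ratio(channel_bytes: int) -> float:
    # Fraction of the paid-for blob space a channel actually uses.
    n_blobs = math.ceil(channel_bytes / BLOB_SIZE)
    return channel_bytes / (n_blobs * BLOB_SIZE)
```

A channel of 140,000 bytes needs two blobs but fills only about 53% of them; a channel sized near a multiple of 131,072 bytes approaches 100%.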

8) What to update in your developer toolchain

  • Solidity 0.8.30 switches the default EVM target from Cancun to Prague, aligning the compiler with Pectra. Pin or upgrade compilers in CI/CD accordingly. (soliditylang.org)
  • Execution clients and validator fleets should already be on the Pectra-compatible versions listed in the EF announcement; double-check EL/CL pairs if you run validators. (blog.ethereum.org)

9) Observed market dynamics you should plan against

  • Blob fees typically sit at the minimum but can spike sharply during speculative episodes such as "blobscription" waves or airdrops. These episodes tend to be short-lived, since capacity has grown and fees adjust quickly after a spike, but your batcher still needs a solid back-off and retry strategy. (cryptorank.io)
  • After EIP‑4844 launched, builders and validators saw volatile blob inclusion rates. Monitor inclusion latency and consider fee caps for multi-blob transactions to handle sudden spikes. (gate.com)

10) Security and reliability footnotes

  • Floor pricing reduces worst-case EL payload size, which frees headroom for blob capacity while preserving propagation safety, and supports gradual gas limit increases later. (eips.ethereum.org)
  • Don't run L2 nodes against EL endpoints alone: provision beacon endpoints, and use archivers for cold starts or extended downtime. The OP Stack and Arbitrum docs cover beacon requirements and historical blob archiving. (docs.optimism.io)

11) What’s next: hedge for 2026‑grade DA

  • EIP‑7976 proposes raising the calldata floor from 10/40 to 15/60 to keep block sizes in check as usage grows. Make sure your gas estimation and CI tests parameterize the floor rather than hard-coding it. (eips.ethereum.org)
  • EIP‑7981 proposes pricing access lists by their data footprint, closing incentive gaps left by 7623. Schemes that "add EVM work" to dodge the floor are likely to become uneconomical. (eip.directory)
  • PeerDAS (EIP‑7594), slated for Fusaka, expands DA via data availability sampling. Build modular DA layers that can plug in new retrieval and verification methods without disrupting application code. (eips-wg.github.io)
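To stay future-proof, parameterize the per-token floor rather than hard-coding 10. A sketch comparing today's floor with the value EIP-7976 proposes, applied to the Section 4 payload:

```python
def floor_gas(size_bytes: int, r: float, per_token: int = 10) -> int:
    # EIP-7623 floor today is 10/token; EIP-7976 proposes 15/token.
    return round(21_000 + per_token * size_bytes * (1 + 3 * r))

today = floor_gas(250_000, 0.90)         # ~9.27M gas
proposed = floor_gas(250_000, 0.90, 15)  # ~13.9M gas if 7976 ships
```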

12) A concrete readiness checklist (use this with your team)

  • Batch posting

    • Default to blobs; if using auto, set conservative thresholds so calldata stays an emergency path only.
    • Enable multi-blob support (e.g. 6 frames) and tune tip caps and resubmission settings for replacement scenarios.
    • If throughput lags, enable span batches and revisit the compression algorithm (e.g. zlib vs. brotli-10/11).
  • Ingestion and Archival

    • Add the beacon API to your node setup and connect to /eth/v1/beacon/blob_sidecars/{block_id}. (quicknode.com)
    • Deploy or subscribe to a blob archiver (disk or S3 backend); plan to cover more than 18 days of history. (github.com)
    • Integrate the c-kzg bindings into warehouse and verifier services. (github.com)
  • Estimation and ops

    • Update eth_estimateGas handling in wallets and bundlers for the 7623 floor, enforcing minimum gas limit reservations. (eips.ethereum.org)
    • Build dashboards for blob base fee, blobs per block, inclusion latency, and partial-fill rate; alert on partial blobs that persist too long.
  • Tooling and infra

    • Update Solidity to 0.8.30+ (EVM target prague) in CI and retest deployments. (soliditylang.org)
    • Verify client versions against the EF's Pectra client matrix for EL/CL. (blog.ethereum.org)

13) Executive takeaway

Pectra's combination of EIP‑7623 and EIP‑7691 effectively ends calldata's role as a primary data availability option for rollups; the blob-centric era is here. L2s and enterprise apps that lean heavily on calldata should expect higher monthly costs and more failed transactions. The playbook:

  • Make blobs the default with multi-blob packing,
  • Connect your systems to beacon APIs and archivers, and
  • Upgrade gas estimation to handle the calldata floor, while tracking PeerDAS and possible higher calldata floors over the next 12-18 months. (eips.ethereum.org)

Quick readiness check

7Block Labs can run a Pectra audit of your batcher configuration, blob archival flow, and gas estimation logic, and deliver a prioritized refactor plan in under two weeks.


References and primary specs

  • EIP‑7623 (increase calldata cost) and the EF Pectra announcement (activated May 7, 2025, 10:05:11 UTC). (eips.ethereum.org)
  • EIP‑7691 (blob throughput increase) and EIP‑7840 (blob schedule in EL config). (eips.ethereum.org)
  • EIP‑4844 parameters and mechanics. (eip.directory)
  • OP Stack batcher configuration for blobs, frames, and span batches. (docs.optimism.io)
  • Beacon API blob sidecars and the Base blob-archiver. (quicknode.com)
  • Forward-looking: EIP‑7976 (raises the floor to 15/60), EIP‑7981 (access list pricing), EIP‑7594 (PeerDAS). (eips.ethereum.org)

Like what you're reading? Let's build together.

Get a free 30-minute consultation with our engineering team.

7BlockLabs

Full-stack blockchain product studio: DeFi, dApps, audits, integrations.

7Block Labs is a trading name of JAYANTH TECHNOLOGIES LIMITED.

Registered in England and Wales (Company No. 16589283).

Registered Office address: Office 13536, 182-184 High Street North, East Ham, London, E6 2JA.

© 2026 7BlockLabs. All rights reserved.