7Block Labs
Blockchain Technology

By AUJay

Sharding isn’t just a concept anymore; it’s a real game-changer that shapes your L2 fees, uptime SLAs, and compliance strategies. In this post, I’m diving into “data sharding” on Ethereum (think proto‑danksharding to PeerDAS) and comparing it to some alternatives like NEAR Nightshade, Polkadot parachains, and Celestia/EigenDA. Plus, we’ll break down how these mechanics can impact your budget, procurement, and ROI.

What is “Sharding”?

Pain

Your current plan states that "we'll scale on rollups," but honestly, the data availability math just isn't adding up.

  • You initially budgeted for post-EIP-4844 blob pricing, but then demand shot up and blobs became scarce. Posting windows slipped, ETAs stretched, and teams started doing that “hot-potato” thing of switching to calldata just to get batches shipped--leading to fee regressions and frustration from Finance. (eips.ethereum.org)
  • On top of that, architecture reviews keep getting held up over the “which sharding model?” debate, while vendors offer up all kinds of conflicting roadmaps: there's Ethereum’s data-sharding with blobs and sampling, NEAR’s Nightshade shards, Polkadot’s parachain cores, and Celestia/EigenDA’s modular DA. Each option brings its own mix of impacts on security assumptions, SOC2 audit scope, and cost curves. (ethereum.org)
  • In the meantime, the business side is looking for stable economics for RWA/tokenization, rather than a rollercoaster of variance. You really need a model that can translate “blobs per block,” “columns sampled,” or “cores per parablock” into solid KPIs like cost per transaction, time to finality, and uptime.

Delay risk, budget leakage, and compliance blind spots

  • Missed deadlines: Without capacity headroom, rollup sequencers end up scrambling for blobspace. Before Pectra, teams planned around a target of 3 blobs per slot; Pectra bumped that to 6 (with a max of 9). If your capacity planning isn't based on this new baseline and PeerDAS behavior, your throughput projections are probably stale. Check out more details over at blog.ethereum.org.
  • Opex drift: After Dencun landed on March 13, 2024, L2 DA costs dropped dramatically, and many teams baked those savings into their unit economics. But the blob base fee operates like a market: usage and protocol parameters move it (EIP-7691 made it more responsive). If blob demand spikes faster than your batching efficiency improves, your margins will feel the pinch. Dive deeper on this at investopedia.com.
  • Compliance ambiguity: Data sharding is shaking up where data is stored and for how long. On Ethereum, blobs get pruned after about 18 days, which shifts the focus toward bandwidth and availability sampling. Over on Celestia, light nodes are sampling erasure-coded shares while working under some assumptions about honest connectivity. These little details can mess with SOC2 controls, especially around data retention, monitoring, and incident response. For more info, check out the resources from blog.ethereum.org and celestiaorg.github.io.

7Block’s Sharding Methodology Focused on Business Outcomes

We make sure that our protocol mechanics match up with the kind of metrics you'd expect in a procurement setting. Our deliverables aren't just "whitepapers"; they're really hands-on deployment guides that are connected to SLAs and budget lines.

1) Architecture Sprints: Pick the Right Sharding Model for Your ROI

  • Ethereum Data-Sharding (Blobs plus PeerDAS):
    Proto-danksharding (EIP-4844) introduced 128 KiB blobs (4096 field elements at 32 bytes each), a separate blob-gas market, and KZG commitments. Pectra (live since May 7, 2025) raised the target to 6 blobs (max 9). PeerDAS (EIP-7594), shipped in the Fusaka upgrade, lets nodes verify data availability by downloading only a fraction (roughly 1/8) of the blob data, so blob capacity can grow without putting too much strain on nodes. The bottom line? More reliable L2 DA capacity without hefty validator requirements. (eips.ethereum.org)
  • NEAR Nightshade:
    With a single chain that can handle multiple “chunks” per shard per block, NEAR’s got hidden validator assignments and, since 2024, has been rolling out stateless validation advancements. This approach promotes strong parallelism and operates under different validator-assignment and fraud-proof assumptions compared to Ethereum. It’s a solid option for high-throughput consumer apps with flexible sharding targets. (pages.near.org)
  • Polkadot Parachains:
    Thanks to Asynchronous Backing, execution time has jumped from 0.5 seconds to 2 seconds, and parablock time has been cut in half (from 12 seconds to 6 seconds). This could potentially ramp up parachain throughput by around 8 times! Agile Coretime comes into play here, providing bulk or on-demand cores and allowing for elastic scaling. From a procurement perspective, you can expect predictable blockspace through leases or on-demand options, much like how reserved cloud capacity works. (docs.polkadot.com)
  • Modular DA (Celestia/EigenDA):
    Celestia employs 2D Reed-Solomon erasure encoding along with DAS, which enables light clients to probabilistically confirm availability. Keep in mind that pruning windows and the need for honest bridge/full nodes play a part in how you manage operations. On the flip side, EigenDA is stepping up raw DA bandwidth (claims have gone from 15 to 50 to 100 MB/s in v2), and it’s all about trading different trust and operator-capacity assumptions for that high throughput. We view this as a bandwidth purchase decision, complete with clear SLOs. (celestiaorg.github.io) (blog.eigencloud.xyz)

2) Blob Budgeting & Posting Policy: Turning Protocol Constants into Dollars

Inputs We Model:

  • Blob Capacity: Post-Pectra, the protocol targets 6 blobs per 12-second slot (max 9). Each blob is 128 KiB, and we track base-fee sensitivity per EIP-7691. When demand sits below target, the base fee decays toward its floor--the “near-zero” regime. We also project DA operating expenses for both quiet and peak periods, along with how fees decay over time.
  • PeerDAS Parameters: Here we focus on the sampling fraction (currently 1/8, and it could go lower), validator custody duties, and network-readiness signals from devnets and client documentation. We translate these into network risk and rollout timing. You can check out more in EIP-7594.
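On the fee side, EIP-4844 defines the blob base fee as an exponential function of "excess blob gas," computed with the integer helper below; EIP-7691 retunes the update fraction, so we leave it as a parameter to plug in from the live spec. A minimal sketch:

```python
def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator),
    as specified in EIP-4844."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

MIN_BLOB_BASE_FEE = 1  # wei per blob gas (EIP-4844 floor)

def blob_base_fee(excess_blob_gas: int, update_fraction: int) -> int:
    # The fee grows exponentially in excess blob gas. When demand stays at or
    # below target, excess trends toward zero and the fee sits at its 1-wei
    # floor: the "near-zero" regime mentioned above.
    return fake_exponential(MIN_BLOB_BASE_FEE, excess_blob_gas, update_fraction)
```

This is why quiet-hours batching pays off: below target, the fee doesn't just drift down, it collapses to the floor.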

Policy Levers We Implement:

  • We’re setting targets for batch size and frequency (“fill blobs before posting”), establishing blob/calldata failover thresholds, and creating multi-DA fallbacks (think spilling to Celestia or EigenDA after a specific price spike). Plus, we adjust the compression level dynamically based on what's happening in the mempool.
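In practice these levers boil down to a handful of tunable parameters. A minimal configuration sketch--every value below is a hypothetical placeholder, not a recommendation:

```python
from dataclasses import dataclass

@dataclass
class PostingPolicy:
    """Posting-policy levers. All defaults are illustrative placeholders."""
    min_blob_fill: float = 0.95            # fill blobs before posting...
    max_batch_wait_s: int = 120            # ...unless a batch has waited this long
    calldata_failover_fee: int = 10**10    # blob base fee (wei) above which calldata wins
    fallback_da: str = "celestia"          # secondary DA target (or "eigenda")
    fallback_fee_multiplier: float = 5.0   # spill when fee > 5x trailing average
    compression_level: int = 6             # raised dynamically under mempool pressure
```

Keeping the policy declarative like this makes it auditable: a parameter change is a config diff, which matters for the SOC2 change-management controls below.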

3) Compliance-First Delivery: SOC2, Data Retention, and Incident Response

  • SOC2 Scope Mapping:

    • For Ethereum blobs, we’ve set up controls for ephemeral data (blobs are pruned after roughly 18 days), plus logging for failed sampling or fetch events and runbooks for data availability incidents.
    • When it comes to Celestia/EigenDA, we’ve implemented controls that monitor light-node sampling confidence, check honest-peer connectivity, and set operator performance SLOs for things like bandwidth and storage. This all ties back to your SOC2 criteria for “Change Management,” “Availability,” and “Monitoring.”
  • Security and Audit:

    • We take DA design seriously and pair it up with pre-deployment threat modeling and audits through our security audit services. We also integrate compliance metrics into dashboards, making it easy for your auditors to review and test.

4) Procurement & Integration: Vendor-Neutral, Priced to Outcomes

  • We take the time to formalize our DA choices and rollup configurations into specifications that are on par with RFP standards. This includes things like blobspace headroom, service level objectives (SLOs) for latency to post, failover recovery time objective (RTO), recovery point objective (RPO), commitments from validators or operators, and any relevant data-residency details.
  • Our team makes sure to integrate DA and rollup pipelines seamlessly into your existing tech stacks--whether it’s ERP, data warehouse, or SIEM--through our blockchain integration expertise. Plus, we’ve got you covered on the application side with our web3 development services and smart contract development.

Deep-Dive: Translating Sharding Mechanics into Execution Plans


A) Ethereum: Blobs Today, Sampling Now Live

  • What’s Real Today

    • With EIP‑4844, each blob is 4096 32-byte field elements--exactly 128 KiB. There’s a dedicated blob base-fee market, a point-evaluation precompile, and a practical fee collapse for Layer 2s after Dencun.
    • Pectra (live since May 7, 2025) raised the target and max blob counts to 6 and 9, respectively, and adjusted calldata costs (EIP‑7623) to push data availability into blobs. The latest execution and consensus spec releases are out, so make sure your models account for higher average capacity and quicker fee decay.
  • What Just Shipped (Live as of Nov 6, 2025): PeerDAS

    • PeerDAS samples columns across erasure-extended blobs, cutting per-node bandwidth and storage by verifying availability probabilistically. That sets the stage for further increases in blob counts. We track client implementation status and network telemetry as part of any go-live checklist.
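The bandwidth win is easy to quantify: with a 1/8 sampling fraction, a node verifying availability fetches only a fraction of the extended blob data rather than all of it. An illustrative sketch (the flat 2x erasure-coding expansion is an assumption for illustration):

```python
BLOB_BYTES = 4096 * 32  # 128 KiB per blob (EIP-4844)

def per_slot_download_bytes(blobs: int, sampling_fraction: float,
                            erasure_expansion: float = 2.0) -> float:
    """Approximate bytes a sampling node fetches per slot under PeerDAS.

    The erasure-coded extension is modeled as a flat 2x expansion (an
    illustrative assumption), and a node samples/custodies only a fraction
    of the extended columns.
    """
    return blobs * BLOB_BYTES * erasure_expansion * sampling_fraction

full = 9 * BLOB_BYTES                        # "download everything" baseline
sampled = per_slot_download_bytes(9, 1 / 8)
print(f"full: {full / 1024:.0f} KiB/slot, sampled: {sampled / 1024:.0f} KiB/slot")
```

That per-node reduction is exactly what creates room to raise blob counts later without raising validator hardware requirements.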

B) NEAR Nightshade: shards → chunks with hidden validators

  • Even though execution looks like one chain, blocks actually hold per-shard chunks, and hidden validator assignments toughen defenses against adaptive corruption. Nightshade 2.0 introduced stateless validation and raised shard-count targets. For businesses, this means rethinking validator assignments, cross-shard messaging, and monitoring in ways that differ from Ethereum. (pages.near.org)

C) Polkadot: Boosting Performance with Asynchronous Backing and Coretime

  • With Async Backing, the execution time per parachain block goes from 0.5 seconds to 2 seconds, while the parablock interval is cut in half from 12 seconds to 6 seconds. That combo can yield a throughput increase of up to about 8 times! Plus, Agile Coretime lets you pre-purchase cores or request them on demand, with elastic scaling on the way--kind of like choosing between “reserved instances” and “spot” capacity when buying cloud resources. (docs.polkadot.com)
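The ~8x figure follows directly from those two parameter changes; illustrative arithmetic:

```python
# Parachain execution budget and block interval, before and after
# Asynchronous Backing (figures from the Polkadot docs cited above).
before_exec_s, before_interval_s = 0.5, 12.0
after_exec_s, after_interval_s = 2.0, 6.0

# "Throughput" here = execution seconds available per wall-clock second.
before = before_exec_s / before_interval_s  # ~0.04
after = after_exec_s / after_interval_s     # ~0.33

print(f"throughput gain: ~{after / before:.0f}x")
```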

D) Modular DA: Celestia and EigenDA as Knobs, Not Dogma

  • Celestia’s Data Availability Sampling (DAS) uses 2D erasure coding, which lets light clients probabilistically confirm that data is available. It’s crucial to bake clearly defined pruning windows and honest-peer assumptions into your Service Level Objectives (SLOs). This setup shines when you want sovereign rollups and speedy data availability finality. (celestiaorg.github.io)
  • On the flip side, EigenDA maximizes raw bandwidth--up to 100 MB/s in version 2--through horizontally scalable operators and erasure-coded sharding. It’s a great option when you need immediate “DA headroom,” but manage trust levels, operator diversity, and contract terms carefully. We treat provider throughput as an ongoing verification parameter. (blog.eigencloud.xyz)

Real-World, Hands-On Examples


1) Blob‑First Posting Policy (Ethereum L2)

  • Objective: The goal here is to keep the variance in transaction costs low while still hitting our time-to-finality (TTF) service level objective (SLO).
  • Policy:

    • We aim for at least 95% blob utilization per batch. If the blob base fee goes over X and the mempool depth exceeds Y, we’ll split across two blobs instead of just falling back to calldata.
    • Failover: If the estimated wait time for posting goes over T seconds, we’ll switch to a secondary Data Availability (DA) provider like Celestia or EigenDA. We’ll pin the on-chain hash to L1 so it’s easy to find later, and we’ll reconcile everything during the settlement window.
  • Why It Works: Thanks to the post-Pectra blob capacity (6/9) combined with PeerDAS, we can keep average blob fees lower even when demand is typical. Your set thresholds help protect against those pesky fee spikes while keeping things consistent. (blog.ethereum.org)
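Under stated assumptions, the policy above can be sketched as a small routing function; the keyword thresholds below are the X/Y/T placeholders from the text, not production values:

```python
from enum import Enum, auto

class Route(Enum):
    POST_BLOB = auto()
    SPLIT_TWO_BLOBS = auto()
    FAILOVER_ALT_DA = auto()
    WAIT = auto()

def route_batch(blob_fill: float, blob_base_fee: int, mempool_depth: int,
                est_wait_s: float, *, fee_threshold_x: int,
                depth_threshold_y: int, wait_slo_t: float) -> Route:
    """Illustrative routing for the blob-first policy. The keyword thresholds
    are the X/Y/T placeholders from the text, not production values."""
    if est_wait_s > wait_slo_t:
        # TTF SLO at risk: spill to the secondary DA provider; the commitment
        # hash gets pinned on L1 for reconciliation at settlement.
        return Route.FAILOVER_ALT_DA
    if blob_base_fee > fee_threshold_x and mempool_depth > depth_threshold_y:
        # Fee spike with a deep backlog: split across two blobs rather than
        # regressing to calldata.
        return Route.SPLIT_TWO_BLOBS
    if blob_fill >= 0.95:  # the 95% utilization target
        return Route.POST_BLOB
    return Route.WAIT
```

Keeping the routing pure (inputs in, decision out) makes it easy to replay historical fee data against candidate thresholds before changing production values.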

2) SOC2 Control Mapping for DA

  • Availability (A1): Monitor sampling failure rates (PeerDAS) and how quickly data lands on DA. Set up alerts for when sampling confidence dips below N% or blob postings exceed the time-to-finality (TTF) SLO.
  • Change Management (CC): Treat changes to EIP parameters, like blob schedules via EIP-7840, as significant configuration updates--tested in staging, with a rollback plan ready.
  • Logical Security (CC): For Celestia/EigenDA, verify honest-peer connectivity and that operators are meeting their SLOs. Log proof receipts and verification actions so you have a solid audit trail.
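As a concrete monitoring shape, the Availability control reduces to a couple of threshold checks; a sketch, where the metric names and thresholds are hypothetical defaults, not audited values:

```python
def da_alerts(sampling_confidence: float, post_latency_s: float,
              *, min_confidence: float = 0.99, ttf_slo_s: float = 60.0) -> list:
    """Availability (A1) threshold checks. Thresholds are hypothetical
    placeholders; tune them to your audited SLOs."""
    alerts = []
    if sampling_confidence < min_confidence:
        alerts.append(f"sampling confidence {sampling_confidence:.3f} "
                      f"below {min_confidence}")
    if post_latency_s > ttf_slo_s:
        alerts.append(f"blob post latency {post_latency_s:.0f}s "
                      f"exceeds TTF SLO {ttf_slo_s:.0f}s")
    return alerts
```

Emitting alerts as structured records (rather than ad-hoc logs) is what lets auditors sample them against the control matrix.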

3) Procurement Rubric (Condensed)

  • Ethereum Blobs Only (Pectra/PeerDAS): This option comes with the lowest integration risk, offers solid decentralization, and links DA costs to network demand. It's ideal when "Ethereum settlement" is a must-have, and you're okay with market-based DA pricing.
  • Ethereum + Modular DA Fallback (Celestia/EigenDA): Here, the added operational complexity gives you a handy "capacity escape hatch." This is your go-to choice when strict TTF SLOs are more important than having stable fees.
  • NEAR/Polkadot: These platforms shine with impressive throughput and parallelism, boasting different validator and coretime economics. They're perfect for product lines that need high, consistent TPS and a reliable way to budget for blockspace.

What’s new since 2024 (numbers you can plan around)

  • After Dencun, L2 fees dropped 90-99% on major rollups--Base, OP, and Starknet all benefited as calldata costs shifted to blobs. And thanks to Pectra, daily blob capacity jumped from ~5.5 GB/day to a target of ~8.1 GB/day, which should ease price pressure. Keep this in mind for FY26 budgeting, but still model for potential fee spikes. (thedefiant.io)
  • EIP‑7691 brought some changes to how blob base-fees respond (2:3 target:max). This means prices are likely to drop quicker when demand is low--which is awesome for your “quiet-hours” batch strategy. (eips.ethereum.org)
  • PeerDAS is officially live as of Nov 6, 2025 (Fusaka), and it's shaking things up by moving verification from “download all blobs” to a more efficient “sample columns” approach. This is a crucial step for safely boosting blob counts even more in 2026 and beyond. Don’t forget to keep an eye on client support and network telemetry as part of your go-live checklist. (blog.ethereum.org)

Best Emerging Practices We Recommend

  • Design for "DA elasticity": Build a two-step posting pipeline: blob-first on Ethereum, with budgets set against the 6/9 capacity, then a policy-driven spillover to modular DA with on-chain commit references. This keeps worst-case fees manageable without touching your app logic.
  • Treat sharding as capacity planning: On Polkadot, buy bulk coretime for baseline traffic and keep on-demand cores ready for peaks. On NEAR, plan around chunk producers and the visibility of hidden validator assignments when defining fraud-proof SLAs.
  • Make "availability" observable: Whether it's PeerDAS sampling rates or Celestia share-reconstruction probabilities, expose them as first-class metrics in your Network Operations Center (NOC), right alongside latency and error budgets.

How 7Block Labs Gets Things Done

  • We kick things off with a two-week assessment that gives you:

    • A DA bill of materials: this includes budgets for blobs, fallback thresholds, and expected $/txn ranges for both quiet and peak times.
    • A compliance design: think of this as a SOC2 control matrix tailored to your chosen DA/sharding path.
    • A delivery plan: a roadmap for integrating with your data platform and your current IAM/SIEM.
  • After that, we roll out your entire stack from start to finish with our blockchain development services, connect it to your app through dApp development, and if you're diving into token launches or RWA, we make sure the mechanics sync up with our asset tokenization and fundraising efforts via our fundraising practice.

GTM-Relevant Metrics You Can Stand By with Finance and Procurement

  • Cost: After EIP-4844 dropped, Layer 2 user fees fell by 90-99%. Pectra raised blob capacity from roughly 5.5 GB per day to about 8.1 GB per day, and with PeerDAS now live you can plan for safer blob increases ahead. Our models break this down into per-transaction DA bands and reserve-capacity policies.
  • Predictability: Thanks to EIP-7691's 6:9 target:max ratio and asymmetric base-fee response, prices drop quickly when demand falls below target--a real advantage for batching during off-peak hours.
  • Throughput Headroom: If Ethereum's blob lane isn't cutting it for your needs, modular DA options like EigenDA offer significantly higher raw bandwidth. We translate this as "MB/s to TPS," tie it to penalties/SLOs in contracts, and validate vendor claims over time.
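The "MB/s to TPS" translation is simple division; a sketch, where the average transaction size and compression ratio are assumptions you should measure from your own batches:

```python
def da_bandwidth_to_tps(bandwidth_mb_s: float, avg_tx_bytes: int,
                        compression_ratio: float = 1.0) -> float:
    """Rough TPS ceiling implied by a DA provider's raw bandwidth.

    compression_ratio > 1 models batch compression (e.g. 4.0 means each
    byte posted to DA carries 4 bytes of pre-compression transaction data).
    avg_tx_bytes is an assumption -- measure it from your own batches.
    """
    return bandwidth_mb_s * 1_000_000 * compression_ratio / avg_tx_bytes

print(f"{da_bandwidth_to_tps(100, 200):,.0f} TPS ceiling")  # 100 MB/s, 200 B/tx
```

Treat the result as a ceiling, not a forecast: protocol overhead, proof sizes, and operator SLO misses all eat into it, which is why we re-verify provider throughput over the contract term.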

If you’re trying to figure out “what is sharding” for your roadmap, it basically comes down to picking the data availability (DA) model that gives you the biggest cushion for error. Ethereum’s approach to data sharding (starting with blobs and moving to sampling next) is the go-to option. You can use modular DA as a safety valve, while also keeping an eye on other stacks like NEAR and Polkadot as your program-level alternatives, where you can count on predictable blockspace and parallelism to take the lead.


CTA -- Schedule Your 90-Day Pilot Strategy Call

Ready to kick-start your journey? Let's chat about how we can make it happen. Book your 90-day pilot strategy call today!

Like what you're reading? Let's build together.

Get a free 30-minute consultation with our engineering team.

7BlockLabs

Full-stack blockchain product studio: DeFi, dApps, audits, integrations.

7Block Labs is a trading name of JAYANTH TECHNOLOGIES LIMITED.

Registered in England and Wales (Company No. 16589283).

Registered Office address: Office 13536, 182-184 High Street North, East Ham, London, E6 2JA.

© 2026 7BlockLabs. All rights reserved.