7Block Labs
Cryptography

By AUJay

Which Open-Source Libraries Support BLS Aggregation for zk Proofs So I Don’t Leak Inputs During Verification?

A Buyer’s Guide to Proof Aggregation with BLS-Friendly Tooling

When diving into the world of proof aggregation, you've got some cool options, especially if you're looking at BLS-friendly tools. Let’s break down how to keep your public inputs off-chain or at least minimized, plus some concrete libraries you can easily use today.

Keeping Public Inputs Off-Chain

To protect user privacy and improve efficiency, it’s smart to keep as many public inputs off-chain as possible. Here are some strategies you can adopt:

  • Use Local Computation: Perform as much computation as you can locally before sending any data on-chain.
  • Zero-Knowledge Proofs: Consider integrating zero-knowledge proofs to validate transactions without exposing underlying data.
  • Batch Transactions: If you can, batch multiple transactions together before submitting them to the chain. This way, you’ll limit the number of public inputs.
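To make the batching idea concrete, here’s a minimal sketch of collapsing many proofs’ public inputs into one on-chain digest. The helper names are hypothetical and SHA-256 stands in for whatever hash your stack actually uses (Poseidon, Keccak):

```python
import hashlib

def input_digest(public_inputs: list[int]) -> bytes:
    # Hash one proof's public inputs into a fixed-size digest
    # (toy stand-in for Poseidon/Keccak).
    data = b"".join(x.to_bytes(32, "big") for x in public_inputs)
    return hashlib.sha256(data).digest()

def batch_commitment(all_inputs: list[list[int]]) -> bytes:
    # Hash all per-proof digests into one 32-byte commitment --
    # the only input-related value that needs to go on-chain.
    return hashlib.sha256(b"".join(input_digest(x) for x in all_inputs)).digest()

batch = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
root = batch_commitment(batch)
print(root.hex())  # 32 bytes, regardless of how many proofs you batched
```

Whatever aggregation scheme you pick later, this is the shape of the on-chain interface: one commitment goes on-chain, per-proof inputs stay off.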

BLS-Friendly Libraries You Can Ship With

If you’re ready to get hands-on, here’s a selection of libraries that are BLS-friendly and perfect for your projects:

  1. Blst
    A high-performance BLS library in C that offers fast signature verification and is suitable for serious applications.
    Check it out here
  2. BLS12381
    This library is all about the BLS12-381 curve. It’s versatile and has been integrated into a number of blockchain projects.
    Take a look here
  3. KZG Commitment Scheme
    An efficient way to create and verify polynomial commitments, which can be extremely useful for proof aggregation.
    Explore the details here
  4. Chia's BLS Library
    Perfect for lightweight use cases, this library offers a no-frills approach to BLS signatures.
    Find out more here
  5. Arkworks
    A collection of libraries for zero-knowledge proofs that also has strong support for BLS signatures.
    Visit Arkworks

With these tools and techniques at your disposal, you're all set to explore the exciting realm of proof aggregation while keeping things efficient and user-friendly. Happy coding!


First, let’s agree on terms

  • When we talk about BLS in this post, we're referring to a couple of connected ideas:

    • First, there’s the pairing-friendly curve family that's commonly used in various zk systems, like BLS12‑381.
    • Secondly, we're diving into the Boneh-Lynn-Shacham (BLS) signature schemes, which allow for non-interactive aggregation.
  • Now, what do we mean by proof aggregation? It’s all about simplifying the verification process of multiple SNARKs down to just one smaller verification. You’ll generally come across two main methods for this:

    • The first method involves aggregating the algebra of a certain SNARK (like Groth16) to create what's called an “aggregated Groth16 proof” (this is known as SnarkPack).
    • The second method is all about recursively verifying proofs within another proof (think along the lines of Halo2 KZG accumulation or zkVM recursion), which often ends up producing a single proof that can be verified on-chain. (research.protocol.ai)

Why Input Privacy is Tricky

Verifying vanilla Groth16 can be a bit of a challenge when it comes to input privacy. The reason? It explicitly uses the public inputs, which means you end up doing a multi-scalar multiplication (MSM) with the verifying key. So, if you’re verifying N proofs on-chain, you usually have to post N sets of public inputs.

But don’t worry, there’s a silver lining! With aggregation or recursion, you can actually compress this process. When done right, this approach allows you to keep those per-proof inputs under wraps. If you want to dive deeper into this topic, check out rareskills.io.


Where input leakage happens--and the two ways to avoid it

  • Leakage surfaces

    • On-chain verifiers that go through every proof’s public inputs.
    • Batched verifiers that need the actual input vector to redo MSMs.
    • Mistakes in field/modulo handling that open the door to “aliasing” exploits for public inputs. (rareskills.io)
  • Two ways to “not leak inputs”:

    1. Commit-and-compress the public inputs:

      • Here, provers or an aggregator lock in the list of inputs (think Merkle/KZG).
      • The verifier only needs a small commitment and a constant-size proof to check that the compressed value is consistent with all the inputs under a random challenge. With SnarkPack, this translates to sharing just a handful of evaluations (like z = Σ x_i r^i) along with a proof of their consistency, rather than the entire list of inputs. You can find more about this here.
    2. Skip checking inputs on-chain altogether; verify the whole batch in a recursive proof:

      • Using something like a Halo2 or zkVM circuit, you can check multiple SNARKs internally, and the chain only has to verify one final proof while only seeing a compact summary of the batch. For more details, check out this link.
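The commit-and-compress option can be sketched in a few lines. This is a toy model (SHA-256 transcript, an illustrative prime, hypothetical function names), not SnarkPack’s actual KZG machinery, but the shape is the same: commit, derive a Fiat-Shamir challenge, send one evaluation:

```python
import hashlib

P = 2**255 - 19  # illustrative prime; a real system uses the curve's scalar field

def commit(xs: list[int]) -> bytes:
    # Toy commitment to the input list (stands in for a Merkle/KZG commitment).
    return hashlib.sha256(b"".join(x.to_bytes(32, "big") for x in xs)).digest()

def challenge(com: bytes) -> int:
    # Fiat-Shamir: r is derived from the commitment, so the prover
    # cannot pick the inputs after seeing the challenge.
    return int.from_bytes(hashlib.sha256(b"rlc-challenge" + com).digest(), "big") % P

def compress(xs: list[int], r: int) -> int:
    # z = sum x_i * r^i mod p: one field element summarizing every input.
    return sum(x * pow(r, i, P) for i, x in enumerate(xs)) % P

inputs = [11, 22, 33, 44]
com = commit(inputs)
r = challenge(com)
z = compress(inputs, r)
# The verifier sees only (com, z) plus a consistency proof -- never the raw inputs.
```

Changing any input changes z with overwhelming probability (Schwartz–Zippel), which is what makes the single evaluation binding.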

The short list: libraries you can use today

Below, we’ve organized libraries based on what they bundle together and how they assist in keeping your inputs secure.

A) Groth16 aggregation (SnarkPack)

  • What it is: It’s a neat construction that combines n Groth16 proofs into a single proof that’s way smaller--think logarithmic size--and can be verified super quickly, also in logarithmic time. Plus, it doesn’t need a new trusted setup since it reuses the existing Groth16 SRS. It’s been put to good use in Filecoin. You can read more here.
  • Use when: If you’ve got Groth16 proofs for the same circuit and you're aiming for the smallest aggregated proof possible along with lightning-fast verification (either off-chain or on L2), this is your go-to. It’s also handy if you can use the input-compression trick so that the chain sees only a commitment to the inputs rather than the inputs themselves.
  • Open-source implementations:

    • Check out the filecoin-project/bellperson aggregation module in Rust. The Groth16 aggregator is tucked away under src/groth16/aggregate. You can find it here.
    • There’s also an Arkworks-based snarkpack in Rust, which is maintained by contributors and includes some proof of concept and benchmarks. Take a look here.
  • Performance you can rely on: You can aggregate 8,192 Groth16 proofs in about 8.7 seconds on a 32-core CPU, and verification only takes around 33 milliseconds. The size of the aggregated proof clocks in at under 40 KiB, which is pretty solid for the typical Filecoin miner workloads that deal with around 350 public inputs. More details can be found here.
  • Keeping inputs private with SnarkPack: Instead of sharing all N input vectors, you can share only a fixed number of “random linear combination” evaluations (z = Σ x_i r^i) along with a (KZG-style) proof that ensures these evaluations are correct in relation to a commitment to the inputs. This way, the verifier only learns the batch digest, not each individual input. This approach is discussed in the open literature on how SnarkPack handles public inputs. You can dive into that here.
  • Curves / crypto notes: SnarkPack is all about Groth16, and you can use it with either BN254 or BLS12‑381 based on your tech stack. Filecoin goes with BLS12‑381 using bellperson/blstrs, and Arkworks supports both ecosystems. Just make sure to check what your target chain supports on-chain (EVM precompiles tend to prefer BN254). More details can be found here.
  • Inputs: You’ve got n Groth16 proofs π_i for the same circuit, along with the public inputs x_i, and the verifying key VK.
  • Output: The result is π_agg, a compressed input proof (which includes z-values and the proof), plus a commitment to all x_i.
  • On-chain: You’ll need to verify π_agg just once; make sure to check that compact commitment proof too. Then, you can store only the batch root or hashes.
  • What you’ve achieved: You’ve managed to keep all x_i private on-chain while still ensuring soundness with that compact input proof. (research.protocol.ai)

B) Recursive/accumulation circuits that end in one proof

Use a circuit to check a bunch of SNARKs, and then generate just one proof. The chain will only verify that final proof. This way, it keeps the individual inputs hidden--only the batch digest is visible to everyone.

  1. Halo2 + snark-verifier (PSE)
  • What it is: We're talking about a solid set of tools and SDK that bring KZG commitments and a KZG Accumulation Scheme (KzgAs) to life for Halo2. It also includes an aggregation circuit and some handy EVM verifier tools. You can check it out here.
  • Why it helps: The beauty of this setup is that you can bundle up tons of Halo2/KZG proofs (or even nest a Groth16 verifier within Halo2 using the right gadgets), which means you only need to verify one compact proof on-chain. All the inputs get sorted and hashed in the aggregation circuit--what’s public? Just a digest! More details can be found here.
  • How it works technically: The KzgAs is clever--it transforms multiple pairing checks into one single accumulator (think lhs/rhs points) using Fiat-Shamir challenges. The aggregation circuit is designed to show only a fixed-size public digest and the accumulator. The SDK makes life easier with AggregationCircuit and helpers that let you build keys and aggregate proofs. Get the nitty-gritty here.
  • EVM pipeline: You can leverage the snark-verifier’s Solidity generator (or an audited EVM verifier from the same family). There’s also the halo2-solidity-verifier floating around, but keep in mind it hasn’t been audited yet--so it’s smarter to stick with snark-verifier whenever possible. More info is available on GitHub.

  2. Succinct SP1 (zkVM)

  • What it is: Think of it as a zkVM that can handle some serious workloads, including other proof verifiers. It takes the results and packs them into a Groth16 or PLONK-sized proof for EVM chains. Typically, SP1 verifications on EVM hover around 275-300k gas for each final proof verification, especially with mainnet-deployed gateways. You can read more about it on succinct.xyz.
  • Why it helps: Instead of getting lost in the weeds of writing an aggregation circuit by hand, you can just code in regular Rust! You can verify N proofs inside SP1 and then kick out just 1 proof. The public input? Just a batch commitment! The chain won’t even see the individual inputs. Check out the details on this over at blog.succinct.xyz.
  • Operationally: SP1 comes with built-in precompiles inside the zkVM for BN254/BLS12-381 primitives. Plus, it publishes canonical verifier gateways that you can easily reuse. For more info, swing by succinct.xyz.

Note: Both strategies (Halo2 and zkVMs) have options for GPU provers and acceleration. If you're finding that prover time is your biggest hurdle, think about integrating GPU backends like Snarkify/cuSnark or Ingonyama ICICLE into your setup. You can check out more details here: (docs.snarkify.io)


C) BLS signature aggregation as an “attest-and-wrap” layer

Sometimes your product goal is to “prove validity with zk,” but your on-chain interface only needs to confirm that a batch of off-chain verifications took place, bound to inputs that remain opaque. A solid approach here is:

  • Off-chain: For every user proof, compute a digest (d_i) of its public inputs using Poseidon or Keccak, then sign that digest with a BLS key. Aggregate all those signatures into one signature, (\sigma^*). If you want to go the extra mile, you can also verify each zk proof off-chain and produce a single recursive zk proof that binds (\sigma^*) to the whole batch.
  • On-chain: Now, when you’re ready to verify on-chain, you just need to check that the single aggregated BLS signature (\sigma^*) matches up with a compact root or digest. If you went ahead with the recursive zk proof, you’ll want to verify that too. The great part? The chain never sees the individual inputs for each proof.
  • Libraries you’d want to check out:

    • supranational/blst (C/asm) is your go-to for BLS12‑381 signatures and for aggregating those verifications--it’s been audited and is super popular out there.
    • herumi/bls (C++ on mcl) is a solid alternative if you’re looking for something with multi-verify helpers and compatibility with ETH2.
    • If you’re diving into the ERC‑4337 universe, check out the “signature aggregator” pattern (ERC‑7766). It standardizes an on-chain contract interface that helps you validate those aggregated signatures. You can totally adapt this aggregator pattern to work with zk-proof digests too. Take a look at it here.

This pattern lets you keep tight control over what gets shared publicly. The chain only picks up batch-level attestations along with a tiny zk proof, but it never gets to see the individual inputs.
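As a sketch of the “compact root” the chain checks against, here’s a minimal Merkle root over per-proof digests. SHA-256 and the helper names are illustrative; production code would use an audited tree with proper domain separation:

```python
import hashlib

def sha(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    # Fold the per-proof digests d_i into one root; odd levels
    # duplicate the last node (a common, if simplistic, padding rule).
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# d_i = H(public_inputs_i), computed off-chain per proof
digests = [sha(f"public_inputs_{i}".encode()) for i in range(5)]
root = merkle_root(digests)  # the only input-related value the chain sees
```

On-chain, a verifier holding this root can be convinced any particular d_i was attested via a logarithmic-size membership proof, without ever seeing the other digests.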


Curve and platform considerations (you’ll need to decide this up front)

  • EVM L1/L2: If you’re diving into EVM, there are native precompiles available for BN254 pairings. On the other hand, BLS12‑381 pairings can get pretty pricey unless you either roll out your own precompile or do the verification off-chain. That's why a lot of folks stick with BN254 for on-chain verification, even if their off-chain setups are using BLS12‑381. So, keep that in mind when planning! (github.com)
  • BLS12‑381 for cryptographic comfort and ecosystem: If you’re on the hunt for some audited, high-performance primitives, check out gnark-crypto. It offers BLS12‑381 and other curves, while you can find zkcrypto/bls12_381 on Rust. Both of these are solid options if you need to whip up a custom aggregator or need to handle pairing checks right in your circuit. (github.com)
  • In-circuit pairings: Got to verify BLS signatures or pairings inside a circuit (thinking about recursion or attestation)? Just a heads-up, the constraint costs can add up quickly. The gnark folks have some handy notes, and the community has shared write-ups that cover optimized “emulated pairing” gadgets for both BN254 and BLS12‑381. Seriously, if you’re doing multiple pairings in-circuit, plan for millions of constraints! (hackmd.io)
  • Non‑EVM chains: If you’re working with Solana and using BN254 syscalls, the groth16-solana crate is your friend--it comes in at under ~200k compute units per verification. This is super useful if you’re looking to port an aggregated Groth16 flow over to Solana. (lib.rs)

Practical wiring recipes

1) SnarkPack with Input Privacy (No Per-Proof Inputs On-Chain)

Here’s the deal with SnarkPack when it comes to keeping inputs private--no per-proof inputs ever need to land on-chain.

Off-Chain Aggregator:

  • Collect {π_i, x_i}: n Groth16 proofs over the same circuit, together with their public inputs.
  • Commit to the sequence of x_i, derive the challenge r from the transcript, compute the z-values (Σ x_i r^i), and prove they are consistent with the commitments.
  • Run SnarkPack to produce π_agg.

On-Chain or L2 Verifier:

  • The verifier steps in here to verify π_agg just once and checks out the compact commitment proof of those z-values.
  • All you're storing is the commitment root. You can check out more details on this here.

2) Halo2 KZG Accumulation (snark-verifier SDK)

  • First off, design an AggregationCircuit using the snark-verifier SDK. Size it for the largest batch you expect, since circuit capacity is fixed at key generation.
  • When you're aggregating, use KzgAs to combine a bunch of pairing checks into a single accumulator. This way, you can produce a neat, succinct public digest for your batch.
  • Finally, generate an EVM verifier using the available tools in the ecosystem. Keep in mind that no individual proof inputs are shared--just the batch digest! Check out the docs here for more details.

3) zkVM-based “wrap and ship” (Succinct SP1)

  • Create a Rust program that takes care of verifying N Groth16/Halo2 proofs and also checks the input commitments for the batch internally.
  • Prove it in SP1, and then go ahead and verify the resulting single SP1 proof on EVM for around 275-300k gas; make sure to only publish the batch digest as the public input for the verifier. (succinct.xyz)

4) BLS Signature Attest-and-Wrap

  • For each proof, let's compute (d_i = H(\text{public_inputs}_i)).
  • Next, we'll sign (d_i) using BLS and then aggregate everything into (\sigma^*) with the help of blst.
  • Option A: If you just need the attestations, you can stop here.
    Option B: If you want to take it a step further, add a (Halo2/zkVM) recursive proof that checks: "for every signed (d_i), there exists a valid SNARK with those inputs."
  • Finally, verify (\sigma^*) (along with the single recursive SNARK) on-chain, but only reveal the batch digest. You can find more details on this GitHub page.

Library-by-library details to help you choose

  • SnarkPack (Rust)

    • What you get: Enjoy the perks of real Groth16 aggregation with a logarithmic verifier, plus you can reuse the Groth16 SRS. Oh, and did we mention it comes with solid production benchmarks for over 8,000 proofs? Check it out here: (research.protocol.ai).
    • Input privacy: To keep your inputs on the down-low, use compressed-input proof (z-evaluations) so you don’t have to list everything out. If you’re going the EVM route, just post the commitment roots along with a bit of extra data. More details available at (ethresear.ch).
    • Repos: Don’t forget to check out the bellperson aggregate module and the arkworks-based snarkpack. You can find them here: (github.com).
  • snark-verifier + snark-verifier-sdk (Halo2, KZG accumulation)

    • What you get: You’ll get some top-notch, audited circuit gadgets and an SDK that helps you aggregate proofs. Plus, there’s a Solidity verifier generator and KZGAS modules that include succinct verifying keys and accumulators. Check out the details here: (docs.rs).
    • Input privacy: The cool part about the aggregation circuit is that it only shows a batch digest as public input. This means on-chain, no one can snoop on the individual proof inputs! For more info, take a look here: (docs.rs).
  • Succinct SP1 (zkVM)

    • What you get: You can write in Rust, check any proof systems as programs, and send off a compact on-chain proof (with verification using about 275-300k gas; the vendor takes care of maintaining the canonical verifier gateways). Check it out here: succinct.xyz.
    • Input privacy: You can enforce commitments and relations right within your program, so the chain only sees the public IO of the final output (yep, that's your batch digest). More details can be found here: blog.succinct.xyz.
  • blst and herumi/bls (BLS signatures)

    • What you get: This is all about high-performance BLS12‑381 signature aggregation that's been audited, so you know it’s legit. It features “fast aggregate verify” and is already being used by Ethereum clients. It’s especially handy when you want off-chain attestations paired with a single on-chain check, making it perfect to stack on top of zk systems. Check it out on GitHub!
    • 4337 context: This pattern has been formalized in the form of an “aggregator” contract interface in ERC‑7766. It’s super easy to adapt this to attest your zk batch digest instead of needing separate signatures for each transaction. For the nitty-gritty details, head over to EIP-7766.
  • gnark and gnark-crypto (Go)

    • What you get: A solid zk library for production use (think Groth16/PLONK) that works across BN254, BLS12‑381, and more. It's been audited, so you can trust the crypto side of things. Plus, it delivers strong performance and includes pairing/KZG primitives, which are super handy if you're looking to build custom gadgets or work on in-circuit pairings. Check it out here.
    • In-circuit pairings: The community has put together some notes that break down constraint counts and techniques for emulating BN254/BLS12‑381 pairings right inside circuits. This is essential if you're aiming to design "verify BLS signatures in-circuit." For more details, visit this link.
  • Other ecosystem tools you might want to check out

    • zkcrypto/bellman, arkworks/groth16 for Groth16; and if you’re looking to work with Solana, take a peek at Solana’s groth16-solana. While these aren’t exactly “aggregators” on their own, they lay the groundwork for your aggregation or recursion setup. (github.com)

Best emerging practices (what teams are converging on)

  • Choose your on-chain proof format based on precompiles:

    • If you’re verifying directly on EVM L1, you’ll want to stick with BN254 (altbn128) because it keeps gas costs low. At the end of the day, aggregate or recurse into a BN254-friendly proof, even if you use BLS12‑381 off-chain. (github.com)
  • When aggregating Groth16 with SnarkPack, skip the raw input vectors:

    • Instead, go for the “challenge-and-evaluate” compression approach (think polynomial evaluation at r) so that the verifier only sees O(1) field elements plus a proof, instead of O(n) inputs. (ethresear.ch)
  • If you’re racing against the clock, opt for recursion/zkVMs over building an aggregator from scratch:

    • The Halo2/snark-verifier SDK is packed with reliable building blocks, and zkVMs like SP1 allow your team to stick with regular Rust while still achieving a 1-proof on-chain flow (~275-300k gas). (docs.rs)
  • Strengthen your input handling:

    • Make sure to guard against “input aliasing” (like uint256 vs field mod q) in your verifiers. Your libraries and contracts should consistently reduce mod q and reject any out-of-range values wherever necessary. (galxe.com)
  • Keep track of your audit trail and ensure reproducibility:

    • Opt for audited primitives (like blst and gnark-crypto) and published gateways (check out SP1). Remember to publish hashes of keys/SRS and lock those versions down in CI. (github.com)
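The “input aliasing” guard from the list above is small enough to show in full. A sketch (the modulus is BN254’s scalar field order; the function name is illustrative):

```python
# BN254 (alt_bn128) scalar field order -- the q your verifier must range-check against.
Q = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def check_public_input(x: int) -> int:
    # Accept only the canonical representative of each field element.
    # Q + x encodes the same field element as x but is a different uint256;
    # letting it through creates an aliasing exploit.
    if not (0 <= x < Q):
        raise ValueError("public input out of range: possible aliasing")
    return x

check_public_input(42)        # canonical representative: accepted
# check_public_input(Q + 42)  # same field element, different integer: must be rejected
```

The same check belongs in Solidity verifiers: `require(x < q)` on every public input before it touches the MSM.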

Decision guide: which path fits your product?

  • I’m looking for a way to speed up verification for a bunch of Groth16 proofs without getting too deep into the engineering side:

    • Check out SnarkPack (via bellperson or arkworks). It adds input-compression proof to keep those per-proof inputs private. This is super handy for batch attestations and works well with systems that are already using Groth16. (research.protocol.ai)
  • I’m interested in having a general-purpose aggregator that I can tweak later, plus EVM verification:

    • You can create an aggregation circuit using snark-verifier (Halo2/KZG), which gives you control over what info is made public--typically just a batch digest. (docs.rs)
  • I want to keep my business logic in Rust and steer clear of custom circuits:

    • You can verify proofs right inside SP1 and publish a single proof on-chain, which should cost around 275-300k gas. You can also layer in BLS signature aggregation for those off-chain attestations when necessary. (succinct.xyz)
  • My main requirement is just attested membership, and I don't need to see each proof:

    • Aggregate BLS signatures over commitments to the inputs of each proof; you could also throw in one recursive proof to ensure that “every signature corresponds to a valid proof.” This approach minimizes the on-chain footprint. (github.com)

Brief, in-depth details: example snippets

1) Snark-verifier SDK Flow (Halo2, KZGAS)

The flow has three steps: keygen on the aggregation circuit, generation of one proof that internally verifies all the wrapped SNARKs, and a single verification of that proof (on-chain via the generated Solidity verifier). In outline:

// Pseudocode sketch
use snark_verifier_sdk::halo2::aggregation::{
  AggregationCircuit, aggregate_snarks, AggregationConfigParams
};
use snark_verifier_sdk::{gen_pk, Snark};

let params = AggregationConfigParams {/* sizing, lookup bits, etc. */};
let snarks: Vec<Snark> = load_my_snarks(); // Groth16/Halo2 proofs wrapped for SDK
let agg = aggregate_snarks(params.clone(), snarks);
let (pk, vk) = gen_pk(&agg); // keygen on aggregation circuit
let proof = prove(&agg, &pk); // one proof verifying all the others
// publish proof + batch_digest; no per-proof inputs revealed

This method relies on KzgAs, which cleverly combines several pairing checks into a compact accumulator along with a concise verifying key. You can dive into the details here.
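The accumulation idea--many checks folded into one with random coefficients--can be demonstrated in miniature over a prime field. This is not KzgAs itself (no pairings here), just the underlying random-linear-combination trick it relies on, with illustrative names:

```python
import hashlib

P = 2**61 - 1  # toy prime field for illustration

def coeffs(pairs: list[tuple[int, int]]) -> list[int]:
    # Derive one random coefficient per check from a transcript hash
    # (Fiat-Shamir style), so the prover can't choose them.
    return [
        int.from_bytes(hashlib.sha256(f"{i}:{a}:{b}".encode()).digest(), "big") % P
        for i, (a, b) in enumerate(pairs)
    ]

def batch_check(pairs: list[tuple[int, int]]) -> bool:
    # Instead of checking a_i == b_i for every i, check one combined
    # equation: sum r_i * (a_i - b_i) == 0 mod p. A batch containing any
    # failing check passes only with negligible probability.
    rs = coeffs(pairs)
    return sum(r * (a - b) for r, (a, b) in zip(rs, pairs)) % P == 0

all_good = [(5, 5), (7, 7), (9, 9)]
one_bad = [(5, 5), (7, 8), (9, 9)]
```

KzgAs does the same thing one level up: the “checks” are pairing equations, and the combined object is an accumulator of two curve points.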

2) BLS Attest-and-Wrap

BLS here means Boneh-Lynn-Shacham signatures, used as an attestation layer on top of your zk proofs: hash each proof’s public inputs to a digest, sign each digest with a BLS key, aggregate the signatures into one, and optionally wrap the whole batch in a single recursive proof. The flow:

for each proof i:
  d_i = Poseidon(public_inputs_i)  // or Keccak, per your stack
  sig_i = BLS.Sign(sk, d_i)

sigma = BLS.Aggregate(sig_1,...,sig_n)
on-chain: BLS.Verify(AGG_PUBKEYS, {d_1,...,d_n} or a batch root, sigma)
optional: verify one recursive SNARK that checks all (proof_i, d_i) pairs

Use blst for aggregation and verification. If you're unable to pass all ( d_i ), just commit using Merkle/KZG and verify only the root along with membership proofs. Check it out on GitHub.

3) SnarkPack Input Compression (Conceptual)

SnarkPack-style input compression is not generic data compression--it’s a cryptographic binding trick. Instead of posting every public-input vector, the aggregator commits to the inputs once and then proves a single random evaluation against that commitment:

Commit to inputs {x_i} with a vector commitment
Derive challenge r from transcript
Send z = Σ x_i r^i and a proof that z matches the committed inputs at r
Verify aggregated Groth16 + small input proof; don’t reveal all {x_i}

This is exactly what we talked about when it comes to managing public inputs effectively, using flows similar to SnarkPack. You can check out more details here.


What about curve gadgets and in-circuit pairings?

If your design involves “verifying BLS signatures or pairings within a circuit,” make sure to plan for capacity: handling millions of constraints for multiple pairings is pretty standard. The gnark community has some great write-ups that cover optimization techniques for emulated pairings on BN254 and BLS12‑381. Plus, circom has some proof-of-concept pairing circuits you can check out too. Use these tools wisely, or consider zkVM recursion if you’re looking at a lot of pairings. (hackmd.io)


Gotchas to avoid

  • Input aliasing in verifiers (especially Solidity BN254): It's super important to always mod-reduce and range-check! Don’t let multiple integer representatives of the same field element slide. Check out more on this at galxe.com.
  • Curve mismatch: If you're aggregating off-chain using BLS12‑381 but verifying on-chain with BN254, don't forget to include that wrap/recursion step (think Halo2/zkVM) to keep costs down. You can read more details here: github.com.
  • DIY cryptography: If you're putting together a custom aggregator, make sure to use audited primitives like blst and gnark-crypto, along with established constructions such as SnarkPack and KZGAS. For more guidance, check this link: github.com.

TL;DR recommendations you can act on this quarter

  • If you're already using Groth16 and want to dive into instant batching, check out SnarkPack! You'll be able to ship aggregated proofs along with a compact input-compression proof. You can find more details here.
  • Looking for flexible, EVM-first aggregation without leaking any input? Consider building on Halo2! Utilize the snark-verifier’s aggregation circuit and KZGAS, and you can just publish batch digests. Learn more about it here.
  • Want the quickest development process and portability across chains? Just recurse in SP1 and verify a single proof (it’s about 275-300k gas) on the mainnet. You can also combine this with a BLS attestation layer if you want. More info can be found here.

If you’re interested, 7Block Labs can take care of prototyping all three options for your workload (using the same circuit and data). We’ll provide you with a side-by-side comparison of costs and latency, plus a suggested migration plan.


References and Further Reading

  • Check out the SnarkPack paper along with the Filecoin implementation and benchmarks. You can find it here.
  • Dive into the snark-verifier KZG Accumulation Scheme and get the lowdown on the Halo2 aggregation SDK in the docs over here.
  • Learn about SP1 on-chain verification gas and the verifier gateways through this informative piece from succinct.xyz.
  • Explore the BLS signature libraries (like blst and herumi/bls) and the ERC‑7766 aggregator on GitHub. Check it out here.
  • Be aware of the Groth16 public-input handling and avoid those pesky input aliasing pitfalls by reading more here.
  • For a deeper understanding of gnark-crypto and pairing gadgets, plus in-circuit pairing notes, take a look at their GitHub repository over here.

Like what you're reading? Let's build together.

Get a free 30-minute consultation with our engineering team.

7BlockLabs

Full-stack blockchain product studio: DeFi, dApps, audits, integrations.

7Block Labs is a trading name of JAYANTH TECHNOLOGIES LIMITED.

Registered in England and Wales (Company No. 16589283).

Registered Office address: Office 13536, 182-184 High Street North, East Ham, London, E6 2JA.

© 2026 7BlockLabs. All rights reserved.