By AUJay
SP1 Private Proving: Integrating TEE‑Protected Inputs into Your Proof Pipeline
Succinct’s SP1 Private Proving lets you generate zero-knowledge proofs inside a hardware Trusted Execution Environment (TEE): witness data stays sealed in the enclave while your application still gets a verifiable ZK proof. In this guide, we’ll walk decision-makers and architects through connecting TEE-protected inputs to an SP1 proof pipeline, covering practical architecture patterns, attestation flows, and some handy operational tips along the way.
Summary
Private Proving with SP1 lets you run your prover within a TEE (that's both CPU and GPU), ensuring that sensitive inputs stay private down at the silicon level. Meanwhile, SP1’s ZK proof keeps everything verifiable. Here’s a straightforward blueprint to help you get started with TEE-protected witnesses, verify your attestation, and tie that proof to trustworthy hardware execution.
Why this matters now
- Succinct just dropped their new Private Proving feature! SP1 runs inside a TEE, so any app can request proofs with hardware-backed privacy--no need to rely solely on custom ZK circuits. It’s built on the SP1 zkVM and has the full support of the Succinct Prover Network. Some cool use cases already popping up include private payments, perpetual DEXs, identity verification, and confidential data analytics. Check it out here!
- So what’s going on under the hood? Private Proving utilizes SP1 running in H200-class GPU TEEs (that’s a CPU TEE paired with NVIDIA Confidential Computing), giving you privacy at scale without the hassle of rewriting your code into complicated circuits. You can read more about that here.
- When it comes to performance, the TEE overhead for the SP1 zkVM on H200 is under 20% even for more complex workloads when the GPU is in Confidential Computing mode. That’s pretty solid for rollups, zkEVMs, ZK-TLS, and zkML. For the detailed benchmarks, check out this link!
What security guarantees do you actually get?
- Keeping Inputs Confidential During Proving (Witness Privacy): We're talking hardware isolation and memory encryption all within the Trusted Execution Environment (TEE). This means that neither operators nor cloud admins can peek at the inputs. On the H100/H200, the GPU memory is encrypted and attested, while the CPU TEE (like Intel TDX/AMD SEV‑SNP or AWS Nitro) keeps the VM nice and isolated. (developer.nvidia.com)
- Verifying Computation: The SP1 proof can be verified both on-chain and off-chain, so it doesn't rely solely on the TEE. Plus, with aggregation and recursion options, you can cut down on on-chain costs and enable scalable composition. (docs.succinct.xyz)
- Platform Integrity Attestations:
- CPU TEE attestation (think AWS Nitro Enclaves) generates a CBOR/COSE attestation document. This doc includes PCR measurements--like image, kernel, app, IAM role, instance ID, and a signing cert--all signed by AWS Nitro Attestation PKI. (docs.aws.amazon.com)
- For GPU attestation on NVIDIA H100/H200, it uses device identity certificates (with ECC‑384 embedded in fuses) and the NVIDIA Remote Attestation Service (NRAS) to issue tokens that prove the GPU is legit and in Confidential Computing mode. (developer.nvidia.com)
- Intel Trust Authority steps in to unify evidence from both CPU and GPU, making it easy to verify both TEEs with a single policy. (community.intel.com)
- Optional “TEE 2FA” Integrity Signatures for SP1: You can run the SP1 executor inside a Nitro TEE, where an enclave-held key that never leaves the hardware signs the program's public outputs and verifying key. This adds a second, independent integrity check--just in case something goes wrong with the proving system. (github.com)
End‑to‑end architecture blueprint
Here’s a solid, real-world pipeline that you can tweak to fit your needs.
1) Get your SP1 program up and running
- Create your program in Rust (or any LLVM target that can compile to RISC‑V) and use the sp1‑sdk to generate proofs. If you're working with heavy pipelines, don't forget to consider SP1 aggregation/recursion to help spread out the onchain verification costs. Check out the details here: (docs.succinct.xyz)
2) Choose your proving substrate
- Option A: Go with the Succinct Prover Network and enable Private Proving (this is what most teams prefer). You’ll be able to request proofs through the network, which will then connect to clusters equipped with TEE. Just fire up the network client (ProverClient) using your private key and RPC URL to submit your jobs and, if you need to, reserve capacity for strict SLOs. Check out the details here: (docs.succinct.xyz).
- Option B: If you want a bit more control, you can bring your own confidential cluster to Phala Cloud, using their H200 or H100 GPUs with Intel TDX. You’ll get dual attestation from both Intel and NVIDIA, plus you can choose between on-demand or reserved pricing. More info can be found here: (phala.com).
- Option C: Consider an on-prem PoC using Intel TDX and SP1 TEE prover examples (like Automata’s Intel TDX PoC). This is particularly useful for regulated workloads or if you're working in air-gapped environments. You can dive into the specifics here: (github.com).
3) Provision the TEE with attestation-gated key access
- CPU TEE: If you’re using AWS Nitro Enclaves, make sure to tweak those KMS policies. They should only let decryption keys be accessed by enclaves with PCRs that match your golden image (like, say, ImageSha384 or PCR0/1/2). This way, you’re binding witness decryption to a specific enclave measurement. Check out the details here.
- GPU TEE: Don’t forget to turn on NVIDIA CC mode, and make NRAS verification a part of your admission control process. You’ll want to validate device certificates through NVIDIA’s CA and OCSP, then challenge the GPU and confirm those NRAS token claims. Learn more about it here.
- Combined: It’s a good idea to use Intel Trust Authority to gather evidence from both CPU and GPU, enforcing policies together. For example, you could set it up so that “Intel TDX PCRs = X AND GPU CC mode = ON with certified VBIOS.” Find out more here.
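The combined admission check can be sketched in plain Rust. This is a minimal sketch under stated assumptions: the struct fields and VBIOS strings are illustrative, and in production these values would come from signature-verified Intel Trust Authority and NRAS tokens, not plain structs.

```rust
// Illustrative evidence shapes; real values come from verified attestation tokens.
struct CpuEvidence {
    tdx_measurements_match: bool, // golden TDX measurements matched
}

struct GpuEvidence {
    cc_mode_on: bool,    // GPU reports Confidential Computing mode
    vbios: &'static str, // VBIOS version claimed in the attestation token
}

// Admit a prover only when CPU and GPU evidence both pass policy.
fn admit(cpu: &CpuEvidence, gpu: &GpuEvidence, approved_vbios: &[&str]) -> bool {
    cpu.tdx_measurements_match && gpu.cc_mode_on && approved_vbios.contains(&gpu.vbios)
}

fn main() {
    let cpu = CpuEvidence { tdx_measurements_match: true };
    let gpu = GpuEvidence { cc_mode_on: true, vbios: "96.00.AB.CD" };
    assert!(admit(&cpu, &gpu, &["96.00.AB.CD"]));
    // A GPU outside CC mode is rejected even when the CPU checks out.
    let bad_gpu = GpuEvidence { cc_mode_on: false, vbios: "96.00.AB.CD" };
    assert!(!admit(&cpu, &bad_gpu, &["96.00.AB.CD"]));
    println!("admission policy checks passed");
}
```

The point of the AND-combination is that a single passing TEE is never enough: both sides of the CPU+GPU boundary must attest before a job is admitted.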
4) Witness intake (plaintext never leaves the TEE)
- The client encrypts the witness using a KMS key, and the policy for this key requires specific enclave PCRs. This means only the enclave can decrypt it. If you're working with Nitro, this is a typical approach that involves KMS condition keys tied to fields in the attestation document. Check out the details in the AWS docs.
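The gating logic itself is enforced server-side by KMS, but its shape is easy to see in a std-only sketch. The PCR strings and wrapped key below are placeholders; real code would call KMS Decrypt from inside the enclave with an attestation document attached, and KMS would perform this comparison.

```rust
// Release the wrapped data key only when the attested PCR matches the golden value.
// In production, AWS KMS performs this comparison against the attestation document;
// this sketch only shows the policy shape.
fn release_data_key(attested_pcr0: &str, golden_pcr0: &str, wrapped_key: &[u8]) -> Option<Vec<u8>> {
    if attested_pcr0 == golden_pcr0 {
        Some(wrapped_key.to_vec())
    } else {
        None
    }
}

fn main() {
    let golden = "aabbcc"; // placeholder enclave measurement
    assert!(release_data_key("aabbcc", golden, b"wrapped").is_some());
    // An enclave with a different measurement never sees the key.
    assert!(release_data_key("000000", golden, b"wrapped").is_none());
    println!("key release gated on PCR match");
}
```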
5) Proving inside the TEE
- The enclave kicks off the SP1 prover, decrypts the witness right inside the TEE, and executes the program. If you’re using SP1 TEE 2FA, the enclave signs the public outputs and the verifying key using a key held by the enclave itself. (github.com)
6) Export artifacts
- The only things that should leave the enclave are the SP1 proof, the optional TEE signature, and the attestation tokens (both CPU and GPU). Make sure to save the attestation bundle together with the proof in a tamper-evident log.
7) Verification and settlement
- Onchain: You can check the SP1 proof (and even aggregate it if you want) for low gas costs. For example, SP1‑CC keeps verification around 280k gas for contract-call proofs. You can dig into the details here.
- Offchain: You might want to store or validate the TEE attestation, or even just a hash or subset of claims, to meet your policy, compliance, or customer audit requirements. You can find more info on that here.
Practical attestation flows you can implement today
A. AWS Nitro Enclaves (CPU TEE) as the trust gate
- First, the enclave asks the Nitro hypervisor for a COSE-signed attestation document. This includes the PCRs for the image, kernel, and application, along with the parent instance's IAM role and instance ID, and the signing certificate. Check this against the AWS Nitro Attestation PKI and reject any debug-mode enclaves (their PCRs read as all zeros). You can read more about this here.
- Next, make sure to enforce the rule of “decrypt only if PCRs match” using KMS condition keys like ImageSha384 and PCR0. This way, you're ensuring that your witness is only decryptable by the enclave image you’re expecting. For further details, check out this link: docs.bluethroatlabs.com.
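In policy terms, the binding looks like the fragment below. The account ID, role name, and measurement value are placeholders; the `kms:RecipientAttestation:ImageSha384` condition key is the one AWS documents for Nitro Enclaves.

```json
{
  "Sid": "AllowDecryptOnlyFromGoldenEnclave",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::111122223333:role/enclave-parent-role" },
  "Action": "kms:Decrypt",
  "Resource": "*",
  "Condition": {
    "StringEqualsIgnoreCase": {
      "kms:RecipientAttestation:ImageSha384": "<golden-enclave-image-measurement>"
    }
  }
}
```

With this statement on the key, a Decrypt call succeeds only when the attached attestation document carries the expected enclave image measurement.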
Minimal Rust verifier sketch

The sketch below parses and authenticates an attestation document received as bytes. The crate name and API are illustrative (modeled on community Nitro attestation parsers, such as the one maintained in the Veracruz project); substitute whichever audited parser you adopt:

```rust
use nitro_enclaves_attestation_document::AttestationDocument; // illustrative crate name

fn verify_attestation(doc_bytes: &[u8], aws_root_der: &[u8]) -> anyhow::Result<AttestationDocument> {
    // Authenticate the COSE signature chain up to the AWS Nitro root certificate.
    let doc = AttestationDocument::authenticate(doc_bytes, aws_root_der)?;
    // Check PCRs, nonce, and image hash against your policy here.
    Ok(doc)
}
```
This step is all about authenticating the COSE/CBOR object with the AWS Nitro root, and it helps you get structured PCRs that you can check against your policy. You can check it out here.
B. NVIDIA H100/H200 (GPU TEE) with NRAS
- First off, grab the device identity (ECC‑384) and make sure to verify it against the NVIDIA Certificate Authority. Don’t forget to check its revocation status using OCSP. After that, you can use the NRAS SDK to attest the GPU, which will give you a signed JWT (with EAT claims) that confirms the CC mode, VBIOS/driver versions, and more. You can read up on this here.
- Next, enforce the rule that the “GPU must be in CC mode with an approved firmware/driver combination.” NVIDIA has your back with a Secure AI Compatibility Matrix that lists all the supported combos (firmware, VBIOS, driver). Check it out here.
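Once the NRAS JWT's signature has been validated (with a standard JWT library against NVIDIA's published keys), the claims check reduces to an allow-list match. Here is a std-only sketch; the claim names and version strings are illustrative, not NVIDIA's actual claim schema.

```rust
// Illustrative, already-signature-verified NRAS claims.
struct NrasClaims {
    cc_enabled: bool,
    driver: String,
    vbios: String,
}

// Accept only driver/VBIOS combinations from your copy of the compatibility matrix.
fn claims_pass(c: &NrasClaims, matrix: &[(&str, &str)]) -> bool {
    c.cc_enabled && matrix.iter().any(|(d, v)| *d == c.driver && *v == c.vbios)
}

fn main() {
    let matrix = [("550.54.15", "96.00.AB.CD")]; // placeholder approved combo
    let good = NrasClaims { cc_enabled: true, driver: "550.54.15".into(), vbios: "96.00.AB.CD".into() };
    assert!(claims_pass(&good, &matrix));
    // A stale driver fails even with CC mode on.
    let stale = NrasClaims { cc_enabled: true, driver: "535.00.00".into(), vbios: "96.00.AB.CD".into() };
    assert!(!claims_pass(&stale, &matrix));
    println!("NRAS claim checks passed");
}
```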
C. One‑shot combined attestation
- The Intel Trust Authority client gathers TDX evidence and reaches out to NRAS for the GPU. After that, it spits out either a single token or a pair that you can attach to your proof submission. Your policy engine has the flexibility to require claims from both CPU and GPU to go through successfully. (community.intel.com)
Concrete example: Private identity check with SP1 + Nitro + GPU CC
Goal: Prove “KYC-Verified” Without Revealing PII
The challenge in KYC (Know Your Customer) verification is proving that someone has been verified without disclosing the underlying personal data. A few building blocks make that possible:
- Zero-knowledge proofs (ZKPs): prove a predicate like "over 18" without revealing the birthday behind it.
- Decentralized identifiers (DIDs): the user controls the identity and shares a proof of verification rather than the personal details themselves.
- Tokenization: sensitive fields are replaced with an opaque token that stands in for verified status.
- Third-party verification services: a trusted issuer vouches for the user with a signed credential instead of the raw data.
These patterns are already live in the wild--think biometric airline check-in without presenting a physical ID, or banking apps that validate identity without exposing documents. Here's how the pieces come together in an SP1 + TEE pipeline:
- Check out this Rust program for SP1 that verifies signed credentials, like passports or driver’s license signatures. The cool part? SP1 lets you handle this without having to mess with custom circuits. You can read more about it here.
- When it comes to client-side security, the client encrypts personal identifiable information (PII) using an AWS KMS key that’s linked to Nitro PCRs. The enclave takes care of fetching and decrypting this info within the Trusted Execution Environment (TEE) only after it passes attestation. For more details, check out the documentation here.
- The prover runs on an H200 with GPU Confidential Computing enabled, on Phala Cloud or a similar setup. NRAS attestation verifies that the GPU is genuine and operating in Confidential Computing mode, while TDX/Nitro attestation ensures the integrity of the CPU TEE. You can dive deeper into this here.
- After everything’s in place, the enclave generates the SP1 proof along with an optional TEE 2FA signature binding that verifies the key and outputs. The only things that get revealed are the proof itself and a simple “verified: yes/no” flag. You can find the code for this here.
- For verification, you’ll want to check the SP1 proof on-chain, and off-chain, make sure to archive those attestation tokens for future audits.
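The guest program's public surface stays deliberately minimal: a verification flag plus a commitment to the issuer, and nothing about the subject's PII. Here is a std-only sketch of that output encoding; a real guest would commit these bytes via SP1's I/O, and the FNV-1a hash is only a stand-in for SHA-256.

```rust
// Placeholder 64-bit FNV-1a hash; production code would use SHA-256.
fn fnv1a(data: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf2_9ce4_8422_2325;
    for &b in data {
        h ^= b as u64;
        h = h.wrapping_mul(0x0000_0100_0000_01b3);
    }
    h
}

// Encode exactly what the guest commits publicly: 1-byte flag + issuer commitment.
fn public_values(kyc_ok: bool, issuer_id: &[u8]) -> Vec<u8> {
    let mut out = vec![kyc_ok as u8];
    out.extend_from_slice(&fnv1a(issuer_id).to_le_bytes());
    out
}

fn main() {
    let pv = public_values(true, b"issuer-42");
    assert_eq!(pv.len(), 9); // nothing about the subject's PII is encoded
    assert_eq!(pv[0], 1);
    println!("public values: {:02x?}", pv);
}
```

Keeping the committed values this small is what lets the chain learn "verified: yes/no" and who vouched for it, and nothing else.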
Performance, capacity, and cost planning
- Overhead: The SP1 zkVM running in the H200 TEE shows under 20% overhead for realistic, long-running proving jobs, where the memory-encryption and PCIe-encryption costs amortize. Short bursts can take a larger hit to TTFT and latency, so sustained workloads see the best results. (phala.com)
- Capacity: The H200 packs quite a punch with 141 GB of HBM3e and a whopping 4.8 TB/s bandwidth. This gives you plenty of room for those large witnesses, proof aggregation, and even zkML. Nice, right? (phala.com)
- Pricing reference (Phala Cloud, 2025): The H200 on-demand runs about $3.50 per GPU-hour, dropping to as low as $2.56 per GPU-hour on a six-month commitment, with enterprise SLAs available--useful for planning budget versus throughput. (phala.com)
Deployment options in practice
- Succinct Prover Network (recommended default): Set up the network client with SP1_PROVER=network, add your payment key, and submit proof requests. With Private Proving, your tasks are routed to TEE provers, and teams with strict latency or volume needs can reserve capacity. Check out the details here.
- Phala Cloud “click‑to‑deploy”: This one's super easy! Package your SP1 prover in Docker, pick either the H200 or H100 GPU TEE, and just deploy it--no code changes needed to run inside the TEE. Plus, you’ll get dual attestation and public PCCS for quote verification. Learn more here.
- On‑prem PoC: If you’re looking to use Intel TDX hosts, you can adopt the SP1 TEE prover PoC (Automata) to run ELF + stdin in a TDX VM. This is a solid option for regulated sectors trying out confidential proofs. More info can be found here.
Binding attestations to proofs: three patterns
1) Out‑of‑band policy log (simplest)
Keep the CPU/GPU attestation tokens alongside the proof in an append-only log, such as an object store with object lock (WORM) enabled. Verifiers then check both the SP1 proof and the set of attestation tokens linked to the job ID. (docs.aws.amazon.com)
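The append-only property can be approximated with a hash chain. The sketch below is std-only (FNV-1a stands in for SHA-256, and real entries would carry job IDs plus token digests); any edit to an earlier entry breaks every later link.

```rust
// Placeholder 64-bit FNV-1a hash; use SHA-256 in production.
fn fnv1a(data: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf2_9ce4_8422_2325;
    for &b in data {
        h ^= b as u64;
        h = h.wrapping_mul(0x0000_0100_0000_01b3);
    }
    h
}

#[derive(Clone)]
struct Entry {
    prev_digest: u64,    // digest of the previous entry (0 for the first)
    payload_digest: u64, // digest of the stored proof + attestation bundle
}

fn entry_digest(e: &Entry) -> u64 {
    let mut buf = Vec::with_capacity(16);
    buf.extend_from_slice(&e.prev_digest.to_le_bytes());
    buf.extend_from_slice(&e.payload_digest.to_le_bytes());
    fnv1a(&buf)
}

fn append(log: &mut Vec<Entry>, payload: &[u8]) {
    let prev = log.last().map(entry_digest).unwrap_or(0);
    log.push(Entry { prev_digest: prev, payload_digest: fnv1a(payload) });
}

// Recompute the chain; tampering with any entry invalidates the links after it.
fn verify(log: &[Entry]) -> bool {
    let mut prev = 0u64;
    for e in log {
        if e.prev_digest != prev {
            return false;
        }
        prev = entry_digest(e);
    }
    true
}

fn main() {
    let mut log = Vec::new();
    append(&mut log, b"proof-1|attestation-1");
    append(&mut log, b"proof-2|attestation-2");
    assert!(verify(&log));
    let mut tampered = log.clone();
    tampered[0].payload_digest ^= 1; // rewrite history
    assert!(!verify(&tampered));
    println!("hash-chained log verified");
}
```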
2) Signed result envelope (TEE 2FA)
Have the enclave sign: (program hash || verifying key || public outputs || proof commitment). Verifiers then check the SP1 proof along with the TEE signature, and reject the result if either check fails. (github.com)
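The signed message must be a deterministic encoding, or verifiers cannot reproduce it byte-for-byte. Here is a std-only sketch of one such envelope; the field order and length-prefixing are illustrative choices, and the enclave-held key would sign the resulting bytes.

```rust
// Deterministic envelope: length-prefixed fields in a fixed order.
fn envelope(program_hash: &[u8], vk: &[u8], public_outputs: &[u8], proof_commitment: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    for field in [program_hash, vk, public_outputs, proof_commitment] {
        out.extend_from_slice(&(field.len() as u64).to_le_bytes());
        out.extend_from_slice(field);
    }
    out
}

fn main() {
    let msg = envelope(b"prog", b"vk", b"outputs", b"commit");
    // 4 length prefixes (8 bytes each) + 4 + 2 + 7 + 6 payload bytes.
    assert_eq!(msg.len(), 32 + 19);
    // Identical inputs always produce the same bytes to sign.
    assert_eq!(msg, envelope(b"prog", b"vk", b"outputs", b"commit"));
    println!("envelope is {} bytes", msg.len());
}
```

Length-prefixing matters: plain concatenation would let two different field splits produce the same signed bytes.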
3) Attestation-bound witness keys
To secure the witness, encrypt it with a key that’s only accessible to enclaves that match the PCR. You’ll also want to add a hash of the attestation document (or some chosen claims) as a public input to SP1. This creates a cryptographic link between “who could read the witness” and “what was proved.” (docs.aws.amazon.com)
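The binding can be sketched as hashing a selected subset of claims into one value that the SP1 program takes as a public input. This is a std-only sketch: the claim selection is illustrative and FNV-1a is only a placeholder for SHA-256.

```rust
// Placeholder 64-bit FNV-1a hash; use SHA-256 in production.
fn fnv1a(data: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf2_9ce4_8422_2325;
    for &b in data {
        h ^= b as u64;
        h = h.wrapping_mul(0x0000_0100_0000_01b3);
    }
    h
}

// Commit selected attestation claims; the SP1 program takes this as a public input,
// linking "who could read the witness" to "what was proved".
fn attestation_commitment(pcr0: &str, gpu_cc_mode: bool, nonce: &[u8]) -> u64 {
    let mut buf = Vec::new();
    buf.extend_from_slice(pcr0.as_bytes());
    buf.push(gpu_cc_mode as u8);
    buf.extend_from_slice(nonce);
    fnv1a(&buf)
}

fn main() {
    let c1 = attestation_commitment("aabbcc", true, b"nonce-1");
    // Deterministic for identical claims, sensitive to any change.
    assert_eq!(c1, attestation_commitment("aabbcc", true, b"nonce-1"));
    assert_ne!(c1, attestation_commitment("aabbcc", false, b"nonce-1"));
    println!("commitment: {:016x}", c1);
}
```

Including a per-job nonce in the commitment also prevents replaying an old attestation against a new proof.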
Best emerging practices (checklist)
- Make sure you're running in production mode: reject Nitro debug-mode enclaves (their PCRs read as all zeros). (docs.aws.amazon.com)
- Use nonces in your attestation requests to keep those pesky replay attacks at bay. (enclaver.io)
- Tie your KMS policy to the image/kernel/app measurements (PCR0/1/2). If it makes sense, also link it to the parent instance's IAM role and instance ID (PCR3/4). (docs.aws.amazon.com)
- Always validate that your NVIDIA device certificate and NRAS token are legit; double-check that the CC mode claims and driver/VBIOS versions match what's on your allow-list (take a look at the compatibility matrix). (developer.nvidia.com)
- Go for combined attestation verification whenever possible--it's a great way to simplify things (check out Intel Trust Authority). (community.intel.com)
- Consider adopting SP1 proof aggregation to bundle multiple proofs for cheaper verification on-chain. (docs.succinct.xyz)
- Don’t forget to add TEE 2FA signatures to your public outputs for that extra tamper-evident security layer. (github.com)
- Keep those attestation TTLs nice and short; re-attest for each job or when something changes (like driver or firmware updates).
- Make sure to log and hash the attestation claims and proof metadata in a WORM store for your audits.
- For multi-tenant setups, keep things separated by isolating enclaves and keys per tenant; regularly rotate your KMS keys and enclave images.
- Use reserved GPU capacity for steady-state pipelines, but go with on-demand for those burst proving needs.
- Stay on top of firmware and driver updates; remember to re-baseline PCRs and NRAS policies after any upgrades.
Example: SP1‑CC + TEE for cheap onchain consumption
When you're making EVM calls offchain and then verifying them onchain, SP1‑CC keeps the gas cost for verification pretty low, around 280k. If you pair SP1‑CC with Private Proving, you get the best of both worlds: confidentiality for your inputs thanks to TEE and verifiability through ZK. This means that the chain doesn’t have to see any secrets, but it can still trust the outcome. Check it out here: (github.com)
Pipeline:
- Offchain: First, we decrypt the sensitive calldata inside a Trusted Execution Environment (TEE), run the EVM call in SP1‑CC, and then create a proof.
- Onchain: Next, we verify the SP1 proof on-chain and keep a minimal attestation reference (that's just a hash) offchain to stay compliant.
Governance, compliance, and vendor risk
- Succinct positions Private Proving as production-ready. Keep an attestation/audit trail for each proof request so your SOC 2/GDPR evidence is covered. Check it out here.
- If you're looking to rent TEEs, it's a good idea to choose providers that offer dual attestation (Intel + NVIDIA) and a solid compliance record. Phala Cloud lays out its pricing, regions, and boasts of offering audit-friendly dual attestation. You can find more info here.
- Take some time to understand SP1's security model: the proof guarantees your program executed correctly, but the soundness and memory safety of the program logic itself remain your responsibility. For the details, visit this guide.
What to roadmap next
- Real-time proving: The SP1 Hypercube is aiming for those lightning-fast sub-12 second proofs for most Ethereum blocks. So, as this technology develops, think about designing your user experience around near-real-time confirmations. Check it out here: (blog.succinct.xyz)
- Decentralized proving: With the Succinct Prover Network, you get the scalability you need while creating a marketplace for performance. If you're dealing with critical workloads, it’s wise to set aside some reserve capacity. More details can be found here: (docs.succinct.xyz)
Implementation runbook (condensed)
- Choose a substrate: Go with either Succinct Private Proving or set up your own Phala Cloud/TDX cluster. (blog.succinct.xyz)
- Establish your policies: Think about golden PCRs, getting your GPU firmware/driver on the allow-list, and setting up a combined CPU+GPU attestation policy. (community.intel.com)
- Wire up attestation:
- Nitro: Make sure you’re parsing those COSE/CBOR attestation docs and checking them against the AWS Nitro root. (docs.aws.amazon.com)
- NVIDIA: Grab the device certificate, perform the NRAS attestation, and then verify those JWT claims. (docs.nvidia.com)
- Bind KMS: Set it up so that you need matching PCRs before you can release the witness decryption key. (docs.aws.amazon.com)
- Prove with SP1: Run the prover inside the TEE and make sure you enable proof aggregation for your batch jobs. (docs.succinct.xyz)
- Export: Get your SP1 proof along with the TEE 2FA signature and attestation tokens, then archive everything to WORM. (github.com)
- Verify: Run the on-chain SP1 verification and check the off-chain attestation policy. (docs.succinct.xyz)
- Operate: Keep an eye on firmware/driver updates, rotate your images and keys, rebaseline those PCRs, and track the GPU CC mode.
How 7Block Labs can help
- Architecture and policy design: We'll help you set up your PCR/NRAS allow-list, KMS bindings, and combine those attestation policies just right.
- Build + run: Let’s get your SP1 programs bundled up, deploy those TEEs, connect to the Prover Network, and handle proof aggregation seamlessly.
- Compliance: We’ll put together immutable proof and attestation logs, plus design workflows for retention and sealing that match up with your requirements.
Key references
- Check out this overview on Succinct Private Proving and its readiness. (blog.succinct.xyz)
- Dive into the SP1 proof aggregation docs for all the nitty-gritty details. (docs.succinct.xyz)
- Need some guidance on SP1‑CC verification costs? We've got you covered. (github.com)
- Explore the SP1 security model to understand how it all fits together. (docs.succinct.xyz)
- Learn about Nitro attestation, including PCRs, CBOR/COSE, and PKI. (docs.aws.amazon.com)
- Discover NVIDIA's take on CC and attestation, covering NRAS and device identity. (developer.nvidia.com)
- Intel’s Trust Authority helps with combined CPU and GPU attestation--check it out! (community.intel.com)
- Get the scoop on H200 TEE capacity, pricing, and dual attestation. (phala.com)
- Look into SP1 TEE integrity signatures, aka “TEE 2FA.” (github.com)
- If you’re interested in an on-prem PoC, check out the SP1 TDX prover. (github.com)
With this blueprint, your team can keep sensitive inputs safe in a TEE while still putting out an SP1 proof that anyone can check out--offering both confidentiality and trust without the hassle of rewriting into custom circuits.