By AUJay
Verifiable Surveillance Data: Privacy-Aware Architectures
Why this matters in 2025
Surveillance data from cameras, access control systems, IoT devices, and vehicle sensors is now being integrated into high-risk AI and compliance processes. Buyers of these systems are looking for two key things: solid cryptographic proof that their data is both authentic and untampered, along with robust privacy protections that minimize exposure of individuals and locations. Fortunately, the current standards landscape is stepping up to meet this dual need:
- The W3C Verifiable Credentials 2.0 hit Recommendation status on May 15, 2025, which means we now have a way to create interoperable, machine-verifiable proofs across different wallets, devices, and services. Check it out on w3.org.
- The EU AI Act is getting serious with high-risk AI systems: they must log events throughout their entire lifecycle (Article 12) and retain those automatically generated logs for at least six months (Article 19). More details can be found on ai-act-law.eu.
- Over in California, the new CPPA rules are rolling out on January 1, 2026, shaking up how risk assessments, cybersecurity audits, and Automated Decisionmaking Technology (ADMT) obligations are handled. This will directly impact video analytics and telemetry decision-making. For the scoop, visit cppa.ca.gov.
- There’s been some cool progress in device/compute attestation: the IETF’s RATS architecture (RFC 9334) and the Entity Attestation Token (EAT, RFC 9711) now let verifiers cryptographically check the state of cameras, gateways, or enclaves. You can dive into the details on ietf.org.
- Content provenance is getting a major upgrade with on-device capabilities: C2PA 2.0/2.2 has established strong cryptographic baselines. Companies like Leica and Sony are even embedding capture-time Content Credentials/C2PA in their production firmware and offerings now. Take a look at the specs on spec.c2pa.org.
- Oh, and if you’re into data availability, it just got cheaper: Ethereum’s Dencun (EIP-4844) introduced 18-day “blobs” for cost-effective rollup data. Plus, modular DA layers like Celestia/Avail and operator DA like EigenDA are expanding the options. More info can be found on ethereum.org.
Here’s a solid architecture template and some implementation patterns we use for both startups and enterprises looking into blockchain-backed surveillance systems.
Design goals (and non‑goals)
- Verifiable end-to-end provenance: We've got a solid cryptographic chain that runs all the way from the lens or sensor right through to the final report.
- Privacy by default: We're all about keeping things private with options for selective disclosure, data minimization, and analytics that are snug in their own enclave.
- Interoperability: Devices and organizations are DID-addressable, with credentials issued via VC 2.0 and OpenID4VCI, so nothing is locked to one vendor's wallet or verifier. You can check it out here: (w3.org).
- Transparent but non-leaky auditability: We’ve got public and verifiable logs that keep things transparent without spilling any raw PII.
- Cost control at scale: You’ll get to choose the data availability method (whether it's Ethereum blobs or Celestia/Avail/EigenDA) based on your retention needs and service level agreements.
Non-goals:
- We're not going for full on-chain storage of video and audio. Instead, we'll be storing hashes, manifests, and proofs--just not the raw streams.
A reference architecture for verifiable, privacy‑aware surveillance
Think of it as seven interconnected layers. Because each layer is standards-based, you can swap components in and out independently.
1) Capture and Attestation (Device and Compute)
- When you capture media, cameras and sensors use C2PA Content Credentials to sign it. If possible, they also include device attestation (EAT). You can check out more details here.
- If there’s any preprocessing or analytics happening in a trusted execution environment (TEE), like AWS Nitro Enclaves, make sure to gather the attestation documents (CBOR/COSE). These documents prove the enclave's identity and code hash, specifically looking at PCRs 0-4,8. You can find out more about it here.
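To make the key-gating idea concrete, here's a minimal Python sketch of the final policy check a KMS-style gate would run. The PCR values and the "unwrap" step are placeholders: real Nitro attestation documents are CBOR/COSE structures verified against the Nitro PKI, and key release would go through KMS policy, not local code.

```python
import hashlib
import hmac

# Illustrative "expected measurements" a KMS-style policy would pin.
# Real Nitro attestation documents are CBOR/COSE signed by the Nitro PKI;
# here we model only the final policy check on the parsed PCR values.
EXPECTED_PCRS = {
    0: "a" * 96,  # enclave image measure (SHA-384 hex, placeholder)
    1: "b" * 96,  # kernel/bootstrap measure (placeholder)
    2: "c" * 96,  # application measure (placeholder)
}

def release_key(attested_pcrs: dict, wrapped_key: bytes) -> bytes:
    """Return the unwrapped key only if every pinned PCR matches."""
    for idx, expected in EXPECTED_PCRS.items():
        actual = attested_pcrs.get(idx, "")
        # constant-time compare to avoid leaking partial matches
        if not hmac.compare_digest(actual, expected):
            raise PermissionError(f"PCR{idx} mismatch: key release denied")
    # Placeholder "unwrap": a real system would decrypt under a KMS key.
    return hashlib.sha256(wrapped_key).digest()
```

The useful property is that a tampered enclave image changes its PCRs, so decryption keys never reach it.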
2) Identity and Authorization
- Let's tackle how we deal with devices and organizations using Decentralized Identifiers (DIDs). We can issue access and processing entitlements as W3C Verifiable Credentials (VCs) through OID4VCI. It's a good idea to lean toward selective-disclosure formats like BBS+ Data Integrity or SD-JWT VC to keep the data we share to a minimum. Check this out for more details: w3.org.
3) Ingest and Privacy-Preserving Transport
- Utilize Oblivious HTTP (RFC 9458) relays to ensure that telemetry submission remains IP-unlinked (client↔relay↔gateway split). You can check it out here.
4) Analytics with Verifiability
- You can run watchlist matching or face-blur pipelines right inside Trusted Execution Environments (TEEs). Just make sure to gate those decryption keys based on successful attestation, using KMS policies tied to enclave measurements. Check out the details here.
- When you're looking at comparisons between different parties, like a mall watchlist against a tenant list, consider using Private Set Intersection (PSI). This method ensures that only the matches are revealed. You can dive into more info about it here.
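For intuition, here's a toy Diffie-Hellman-style PSI in Python. It's a sketch only: the modulus is demo-sized and a real deployment would use elliptic-curve groups via a vetted library, but the double-masking commutativity is the core idea.

```python
import hashlib
import secrets

# Demo-sized Mersenne prime; production PSI uses elliptic-curve groups.
P = 2**127 - 1

def _hash_to_group(item: str) -> int:
    h = int(hashlib.sha256(item.encode()).hexdigest(), 16) % P
    return h or 1  # avoid the zero element

def mask(items, secret):
    return [pow(_hash_to_group(x), secret, P) for x in items]

def remask(masked, secret):
    return [pow(v, secret, P) for v in masked]

def psi(set_a, set_b):
    """Return items of set_a that also appear in set_b, revealing only matches."""
    a = secrets.randbelow(P - 2) + 1  # A's secret exponent
    b = secrets.randbelow(P - 2) + 1  # B's secret exponent
    a1 = mask(set_a, a)         # A -> B: single-masked A items
    b1 = mask(set_b, b)         # B -> A: single-masked B items
    a2 = remask(a1, b)          # B -> A: double-masked A items (order kept)
    b2 = set(remask(b1, a))     # A: double-masks B's items locally
    # H(x)^(ab) == H(y)^(ba) exactly when x == y
    return {x for x, m in zip(set_a, a2) if m in b2}
```

Neither side ever sees the other's non-matching items in the clear, which is exactly the property you want for watchlist-vs-tenant comparisons.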
5) Provenance and Transparency Logs
- Keep track of signed manifests, like C2PA evidence, EAT claims digests, and pipeline configs, by storing them in an append-only transparency log (think Sigstore Rekor v2) or a service that follows IETF SCITT style. This way, you ensure everything is auditable and can be shared across different domains. Check out more on this here.
6) Data Availability and Anchoring
- To make the most of retention and cost, anchor Merkle roots of daily manifests to the right DA layer: you’ve got options like Ethereum EIP‑4844 blobs (which are around 18 days ephemeral), Celestia (with DAS + fraud proofs), Avail (that uses KZG-backed DAS), or EigenDA (focused on operator DA and high throughput). Check it out here.
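Folding a day's manifests into one Merkle root keeps the anchored payload tiny. Here's a minimal sketch, assuming the batch is a list of serialized manifests (the pairing rule for odd levels is one common convention, not a mandated one):

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(manifests: list[bytes]) -> bytes:
    """Fold a day's manifests into one 32-byte root for DA anchoring."""
    if not manifests:
        raise ValueError("empty batch")
    level = [_h(m) for m in manifests]        # hash each manifest as a leaf
    while len(level) > 1:
        if len(level) % 2:                    # duplicate last node on odd levels
            level.append(level[-1])
        level = [_h(left + right)
                 for left, right in zip(level[::2], level[1::2])]
    return level[0]
```

Any single manifest changing flips the root, so a 32-byte anchor commits to the whole day's evidence.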
7) Compliance, Governance, and Lifecycle
- Make sure to generate VC-backed audit logs to meet the AI Act record-keeping requirements, which means you’ll need lifetime logging and at least 6 months of retention. Also, don’t forget to document your risk management under the CPPA ADMT. It’s a good idea to align your governance with the latest updates from the NIST Privacy Framework 1.1. Check out more details here.
Pattern 1: “Authenticity at the lens” for video evidence
Goal
Our aim is to trace the journey of our footage, from the moment it’s captured right through to its presentation in court or before a regulatory body. We want to establish the who, what, when, and where of the footage, but we’ll keep identities hidden unless it’s absolutely essential to reveal them.
- Capture-time signing: You can turn on Content Credentials (C2PA) with compatible cameras like the Leica M11‑P, which comes with CC built-in, or Sony’s Alpha series, which activates C2PA through firmware updates along with a Camera Authenticity Solution. This nifty feature embeds cryptographic provenance, hooks for edit history, and on Sony models, sensor-based 3D depth signals, making it easier to differentiate between original scenes and photos of screens. (blog.adobe.com)
- Device attestation: Whenever you can, it’s a good idea to add an EAT during the capture session. This helps confirm the device model, firmware status, secure element details, and ensures everything is fresh with a nonce. (ietf.org)
- Provenance manifests: For each clip or photo, create a compact, signed manifest that includes hashes, a C2PA claimset digest, an EAT digest, and the capture geofence policy ID. Then, push that manifest to a transparency log like Rekor v2, and make sure to pin an inclusion proof on a DA layer checkpoint. (blog.sigstore.dev)
- Selective disclosure: When you’re sharing evidence, you can use a VC 2.0 package to show only the necessary details like the time window, location grid cell, and the device attestation OK flag. You can do this using BBS+ or SD‑JWT VC presentations. (w3.org)
- Verifier UX: Make things easy for verifiers by providing a one-click validation flow that checks the C2PA signature chain, verifies the EAT, and looks for log inclusion proofs. Don't forget to display the chain-anchored timestamps, such as the day-batch Merkle root anchored to Celestia or Ethereum. (celestiaorg.github.io)
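The selective-disclosure step doesn't need real BBS+ machinery to understand. Here's a salted-digest sketch of the SD-JWT-style idea in Python; the digest list would live inside a signed credential, and the signing itself is omitted:

```python
import hashlib
import json
import secrets

def _digest(salt: str, name: str, value) -> str:
    blob = json.dumps([salt, name, value], separators=(",", ":"))
    return hashlib.sha256(blob.encode()).hexdigest()

def issue(claims: dict):
    """Issuer: return per-claim disclosures plus the digest list to be signed."""
    disclosures = {n: (secrets.token_hex(16), v) for n, v in claims.items()}
    digests = sorted(_digest(s, n, v) for n, (s, v) in disclosures.items())
    return disclosures, digests  # `digests` would sit inside the signed VC

def verify(revealed: dict, signed_digests: list) -> bool:
    """Verifier: accept only claims whose digest appears in the signed list."""
    return all(
        _digest(salt, name, value) in signed_digests
        for name, (salt, value) in revealed.items()
    )
```

The holder hands over only the disclosures for the time window and zone; identity claims stay as opaque digests the verifier can't reverse.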
Why it Works
- The cryptographic linking from C2PA to EAT, then to the transparency log, and finally to the DA anchor creates tamper-evidence while keeping identities safe from unnecessary exposure.
- Courts and regulators can access independent verification paths and concise proofs, making their job a whole lot easier.
Gotchas:
- Don’t forget to rotate your CC/C2PA private keys and share those revocation lists! C2PA v2.2 also tightens up trust lists and time-stamping, so be sure to take advantage of it. Check it out here: (c2pa.org).
- Keep in mind that some platforms might strip away metadata. Your manifest and log proofs still need to back up the authenticity, even if the EXIF/XMP data goes missing. Get more details at: (c2pa.org).
Pattern 2: Private watchlist matching with TEEs and PSI
Scenario: a stadium operator compares live face embeddings against a law-enforcement watchlist. The security goal is real, but so are the constraints: no unnecessary exposure of personal data, no retention of non-matches, and enough transparency about how data is handled to keep the public's trust.

How it works:
- Enclave processing: Stream those embeddings straight to a Nitro Enclave. The cool part? The enclave verifies everything (using a CBOR/COSE document, signed by Nitro PKI), and only after that does it unwrap the comparison keys from KMS, based on PCR values like ImageSha384 and PCR0/1/2. Check it out here: (docs.aws.amazon.com)
- PSI for cross-party comparisons: When a third party shares a list, say, for banned patrons, we do a PSI so both sides only figure out the common items. There are some solid open-source PSI protocols like ECDH and Bloom filters/GCS that handle set cardinality and exact matches. Dive deeper here: (github.com)
- Oblivious transport: Send enclave outputs through OHTTP to separate request metadata (like IP and location) from the payload. This basically helps reduce how much tracking there is across different sessions. More info can be found here: (ietf.org)
- Minimal proofs: Generate a Verifiable Credential that says, “Face X matched watchlist policy Y at time T with enclave Z attested.” It’s smart to avoid sharing the raw face embedding or any non-matches. We recommend using BBS+ or SD-JWT VC for this. Check it out: (w3.org)
- Auditing without leakage: Only log derived proofs and enclave attestations in Rekor/SCITT--definitely skip logging the full embedding vectors. For more about this, see: (blog.sigstore.dev)
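As a sketch of "log derived proofs, not embeddings": an audit record can commit to the embedding by digest only. Field names here are illustrative, not from any standard.

```python
import hashlib
import time

def match_audit_record(embedding: bytes, policy_id: str,
                       enclave_measure: str) -> dict:
    """Build an audit entry that commits to the embedding without containing it."""
    return {
        # one-way commitment; the raw vector never reaches the log
        "embedding_digest": hashlib.sha256(embedding).hexdigest(),
        "policy_id": policy_id,
        "enclave_measure": enclave_measure,  # e.g. an attested PCR (placeholder)
        "matched_at": int(time.time()),
    }
```

Auditors can later confirm a specific embedding was the one matched (by rehashing it) without the log ever storing biometric data.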
Compliance alignment:
- The AI Act logging requirements (which call for event tracking throughout the system's life and a minimum retention of 6 months) are met through append-only logs and DA anchors. Plus, the CPPA ADMT documentation can point back to enclave policies and PSI protocol settings. (ai-act-law.eu)
Pattern 3: Cost‑right data availability and anchoring strategy
Your choice of anchor really hinges on three key factors: retention, queryability, and how independently third parties can verify it.
- Short-lived/Price-sensitive: Ethereum blobs (EIP-4844) are a game changer! They let you post daily manifest roots to blobs, which can really slash costs. And you'll love this--they get pruned after about 18 days by design. For long-term storage, consider using something like S3 or an object store, and remember to keep those independent inclusion proofs handy. (ethereum.org)
- Modular DA (Probabilistic Verification): Ever heard of Celestia? It offers data availability sampling (DAS) along with fraud proofs. Light clients can sample erasure-coded shares and toss out any bad encodings thanks to those fraud proofs. It’s a sweet setup for keeping things transparent while allowing light-client verification. (celestiaorg.github.io)
- Modular DA (Succinct Verification): Check out Avail! It uses KZG commitments with DAS, which means you get strong availability guarantees with just a handful of samples--and no need for fraud proofs. This is especially handy if you're working with devices that have limited power, like mobile phones. (docs.availproject.org)
- Operator DA for High Throughput: EigenDA is making waves with 100 MB/s write throughput and an average latency of around 5 seconds in their 2025 V2 version. This is super useful when you need to anchor tons of manifests in a minute--think about city-wide sensor grids! (blog.eigencloud.xyz)
Tip: Make sure to keep the on-chain/DA payload down to just a Merkle root and a bit of metadata like the timestamp and policy version. Your transparency log (that would be Rekor/SCITT) should stay as your main source; the DA anchors will give you those global, independently verifiable checkpoints. For more info, check out this post on blog.sigstore.dev.
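The anchored payload can be a few dozen bytes. A hypothetical packing, assuming a 32-byte Merkle root plus a Unix timestamp and a policy version (the layout is illustrative, not any chain's required format):

```python
import struct

def anchor_payload(root: bytes, timestamp: int, policy_version: int) -> bytes:
    """Pack just root + metadata for the DA layer; raw media stays off-chain."""
    if len(root) != 32:
        raise ValueError("expected a 32-byte Merkle root")
    # 32-byte root | 8-byte big-endian unix time | 2-byte policy version
    return root + struct.pack(">QH", timestamp, policy_version)

def parse_anchor(payload: bytes):
    """Invert anchor_payload: recover (root, timestamp, policy_version)."""
    root = payload[:32]
    ts, ver = struct.unpack(">QH", payload[32:])
    return root, ts, ver
```

At 42 bytes per day-batch, even aggressive anchoring stays cheap on any of the DA options above.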
Implementation details that save months
- Identity and Issuance:
- Let’s use the VC 2.0 data model for all device and organization credentials. We should get on board with standardized issuance through OID4VCI 1.0, which is set to hit the Final spec by 2025. Check it out on w3.org.
- When it comes to DID methods, pick ones that allow for offline resolution, like did:key or did:web, for those early pilots. For production, though, let's migrate to methods that have solid rotation and recovery features. More on that can be found at w3.org.
- For selective disclosure, go with BBS+ (W3C Data Integrity BBS) and SD‑JWT VC. They both come with active test suites and a good amount of implementer excitement--so let's steer clear of any proprietary formats. Details are available on w3.org.
- Capture/authenticity:
- Turn on C2PA right at the point of capture if your hardware can handle it (think Leica or Sony cameras that have those cool authenticity licenses). If you're using older cameras, make sure to have a fallback “edge signer” in place. (blog.adobe.com)
- Make sure your validators are all set up to embrace the C2PA 2.2 behaviors, like including time-stamps, revocation info in update manifests, and trust list EKU constraints. (c2pa.org)
- Attestation and TEEs:
- For model attestation, we’re using IETF RATS roles--think Attester, Verifier, and Relying Party. We’ll emit EATs as either JWT or CWT, depending on the footprint. Also, let's make sure to gate keys based on attested PCRs. Check it out here: (ietf.org)
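The relying party's checks after signature verification can be very small. Claim names below follow EAT (`eat_nonce`, `ueid`), but this sketch deliberately skips COSE/JWT signature verification, which a real verifier would do first:

```python
import hmac

def verify_eat_claims(claims: dict, expected_nonce: str,
                      trusted_ueids: set) -> bool:
    """Relying-party sanity checks on already-signature-verified EAT claims.

    Assumes COSE/JWT verification happened upstream; this only checks
    freshness (nonce) and device identity (ueid), per the RATS roles.
    """
    nonce_ok = hmac.compare_digest(claims.get("eat_nonce", ""), expected_nonce)
    device_ok = claims.get("ueid") in trusted_ueids
    return nonce_ok and device_ok
```

The nonce check is what stops replayed attestations: a stale token carries yesterday's nonce and fails even with a valid signature.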
- Transport/minimization:
- Implement OHTTP relays for telemetry and credential presentations. This helps keep requests from being linked on the server side. You can check out more about it here.
- Transparency and auditing:
- Check out Rekor v2 to get more affordable, tile-backed verifiable logs that make routine monitoring a breeze. Auditors can easily verify the append-only consistency proofs. (blog.sigstore.dev)
- If you’re dealing with multi-ecosystem assertions (like OEM to integrator to operator), you might want to look into SCITT. It’s pretty useful for signed statement transparency and interoperability patterns. (datatracker.ietf.org)
- Governance:
- Make sure your privacy governance lines up with the NIST Privacy Framework 1.1 (the draft got its latest update on April 14, 2025). This way, it’ll sync nicely with CSF 2.0 and keep in mind the AI/ADMT risk themes. (nist.gov)
Regulatory map: how the architecture proves compliance
- EU AI Act (high-risk):
- Article 12 is all about keeping track of events for the lifetime of the AI system, while Article 19 says you need to hang on to those auto-generated logs for at least 6 months. To keep things transparent, use append-only logs and DA anchors to show integrity. Plus, get your VC-backed evidence packets ready for audits or if you ever need to piece together any incidents. (ai-act-law.eu)
- California CPPA (ADMT, audits, risk assessments):
- You’ll need to keep records of how your model and data are used, proof of your computing environments, and those selective-disclosure dossiers for user rights requests. Starting January 1, 2026, new rules will lay out the timelines for audits and assessments. So, keep an eye out for those! (cppa.ca.gov)
- Chain of custody:
- With C2PA Content Credentials, EAT, and Rekor, you've got yourself some serious multi-party verifiable provenance. DA anchors will give you that extra layer of independent, time-stamped checkpoints too. So you can rest assured knowing your data's got a solid chain of custody. (c2pa.org)
Performance and cost notes (2025 reality)
- Anchoring Costs: EIP‑4844 blobs are a game changer when it comes to cutting down the costs for data posting, especially for rollups and commitment anchoring. Blobs are pruned after about 18 days by design, so plan for rotating anchors and keeping raw data off-chain. You can get more details here.
- Throughput Scaling: If you're looking to anchor a bunch of manifests each minute, the operator DA (EigenDA V2) can handle it like a pro, sustaining 100 MB/s writes with an average latency of just around 5 seconds. Just make sure to confirm your SLA needs before making a choice. Check it out here.
- Light-Client Verification: With Celestia’s DAS, light clients can verify data affordably. Plus, Avail’s KZG method provides strong security with just a handful of samples, making it a great fit for mobile verifiers. Learn more about it here.
Common pitfalls (and how to avoid them)
- Treating C2PA as “just metadata”: Make sure to enforce trust lists, time-stamping, and revocation checks as per v2.2. Don't just depend on EXIF data. You can find more details here.
- Skipping device/compute attestation: If you're not using EAT/TEE proofs, you're leaving yourself open to attacks where adversaries can replay or inject tampered streams. It’s a good idea to create policies that require verified PCRs and results from the verifier policy before allowing decryption. More info can be found here.
- Over-sharing during verification: Instead of giving away too much information, consider adopting selective disclosure techniques like BBS+/SD-JWT VC. This way, verifiers only get to see the essential facts. Check it out here.
- “On-chain everything”: Instead of trying to store everything on-chain, focus on anchoring proofs only and keep the raw media off-chain. You can use Rekor/SCITT for making your provenance searchable and DA for managing checkpoints. More details are available here.
- Misunderstanding blob retention: Just a heads up--Ethereum blobs get pruned after about 18 days! It’s smart to establish rotation or archival policies, or consider a DA layer that has different retention strategies. For more insights, check out here.
Procurement checklist (copy‑paste into your RFP)
- Cameras/sensors:
- They support C2PA capture-time signatures, which is pretty cool! Plus, they publish key rotation and revocation procedures, and there's optional device EAT support. Check it out here: c2pa.org.
- Analytics platform:
- This platform offers TEE execution with verifiable attestation. Key release is tied to PCRs, and it also includes PSI for cross-party matching and OHTTP support. For more details, visit: docs.aws.amazon.com.
- Identity and credentials:
- They issue VC 2.0 credentials through OID4VCI. You’ll find support for BBS+ and SD-JWT VC presentations, along with DID methods featuring rotation and recovery. Dive into it here: w3.org.
- Auditability:
- The system writes to a verifiable transparency log (Rekor v2/SCITT) and provides inclusion/consistency proofs. It also maps retention policies to the AI Act/CPPA. Learn more at: blog.sigstore.dev.
- Anchoring:
- There’s a configurable DA target, which includes Ethereum blobs, Celestia, Avail, and EigenDA. It comes with documented costs, retention details, and a light-client verification story. Check it out here: ethereum.org.
- Governance:
- They provide a mapping to the NIST Privacy Framework (v1.1) along with change-control for models and policies. For more info, head over to: nist.gov.
A brief example: city‑scale incident reconstruction with privacy
- Setup:
- So, here’s the deal: intersection cameras are tagging their frames with C2PA. The gateway operates in a Nitro Enclave and is attested before decrypting those frame hashes. Manifests--which include hashes and EAT digests--get sent to Rekor v2 every hour, and they also provide a daily Merkle root that's anchored to Celestia. You can check out more about it here.
- Event:
- After something goes down, investigators can ask for a VC package that proves “vehicle ABC123 was spotted between 21:10 and 21:20 in zone G.” The person holding the data will show a minimal disclosure proof that includes the time window and zone, signed by the operator and anchored on the day root. For more information, click here.
- Privacy:
- Don’t worry--faces and license plates that aren’t part of the query never leave the enclaves. They’re using PSI to cross-check sightings against a stolen vehicle list, and auditors are on it to make sure everything’s up to snuff for AI Act compliance and that logs are kept for six months. You can dive deeper into it here.
Emerging practices to adopt now
- Generate single-file "evidence bundles" that include: C2PA validations, EAT verification results, Rekor inclusion proof, DA anchor reference, and a VC with selective disclosures.
- Keep an eye on transparency logs automatically; with Rekor v2, you can cut down on operational costs--just set up those external witnesses/monitors. (blog.sigstore.dev)
- Streamline your issuance and verification processes with VC 2.0 + OID4VCI; ditch proprietary attestation formats and stick to RATS/EAT. (w3.org)
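A bundle is only useful if it's complete. A minimal completeness check, with illustrative component names matching the list above:

```python
# Components of a single-file evidence bundle (names are illustrative).
REQUIRED = (
    "c2pa_validation",
    "eat_result",
    "rekor_inclusion_proof",
    "da_anchor_ref",
    "vc_presentation",
)

def check_bundle(bundle: dict) -> list:
    """Return the names of any missing or empty evidence-bundle components."""
    return [k for k in REQUIRED if bundle.get(k) is None]
```

Running this at export time turns "we think we captured everything" into a hard gate before evidence leaves the building.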
Final take
You can finally stop stressing about whether to pick audit-grade provenance or privacy. Thanks to capture-time C2PA, RATS/EAT device and compute attestation, selective-disclosure VCs, verifiable transparency logs, and modular data availability, it’s possible to create surveillance systems that satisfy regulators and earn public trust--all without cramming sensitive footage on-chain or oversharing identities. Plus, the components are standardized, interoperable, and ready for production.
When you're getting started with piloting, keep it simple: begin with one capture device class that has C2PA enabled, one enclave-based analytic, Rekor v2 for your provenance logs, and set a daily anchor using Celestia or Ethereum blobs. Once you’ve got that down, you can grow from there by adding OID4VCI issuance, BBS+/SD-JWT VC presentations, and cross-party PSI as collaboration kicks off.
7Block Labs has got your back when it comes to crafting a solid delivery plan. We can guide you through everything from device onboarding and attestation policies to VC schema design and DA anchoring, all tailored to match your risk tolerance and budget.