7Block Labs
Blockchain and AI

By AUJay

The End of API Keys: Transitioning to Transactional Authentication for AI

As we dive into the world of AI, we’re seeing a big shift in how we handle authentication. Goodbye API keys; hello transactional authentication! Here’s what’s happening and why it matters.

What’s Wrong with API Keys?

API keys have been around forever, and while they’ve served their purpose, they come with some serious drawbacks:

  • Security Risks: If someone gets their hands on your API key, they can access your service like it’s no big deal.
  • Mismanagement: It’s really easy to lose track of who has access to what, especially if you’re working with multiple teams.
  • Limited Control: Once an API key is out there, you can’t revoke one caller’s access without rotating the key for everyone who shares it.

Transitioning to Transactional Authentication

So, what’s the alternative? Enter transactional authentication. This new approach is all about securing your AI interactions without the hassle of traditional API keys. Here’s why it’s worth considering:

Enhanced Security

With transactional authentication, you’re using temporary tokens for each session. This means that even if a token is intercepted, it’s only useful for a short time. Much safer, right?

Fine-Grained Control

You can define permissions more specifically. Instead of giving blanket access, you can customize what each user or service can do. This makes it a lot easier to manage who can access your data and how they can use it.

Better User Experience

No more fumbling around with lost API keys or permissions. Users get a smoother experience that’s easier to manage and understand. Plus, developers can focus on building and innovating instead of worrying about security protocols.

Getting Started

Transitioning to transactional authentication doesn’t have to be a headache. Here are some steps to guide you through:

  1. Assess Your Current System: Take a close look at how you’re currently using API keys and identify potential security risks.
  2. Implement Token-Based Authentication: Start integrating token-based systems that support short-lived tokens.
  3. Educate Your Team: Make sure everyone understands the new processes. Training is key to a smooth transition.
  4. Monitor and Adapt: Keep an eye on how the new system performs and be ready to make adjustments as needed.
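Step 2 above can be sketched with a minimal short-lived token issuer (TypeScript, using only Node's built-in crypto; the token format, the HMAC secret, and the 5-minute TTL are illustrative assumptions, not any particular product's API):

```typescript
import { createHmac, randomUUID } from 'node:crypto';

const SECRET = 'replace-with-a-managed-secret'; // illustrative; keep real secrets in a KMS/HSM
const TTL_SECONDS = 300;                        // short-lived: 5 minutes

// Issue a signed, expiring token instead of a permanent API key.
function issueToken(subject: string): string {
  const body = JSON.stringify({
    sub: subject,
    jti: randomUUID(),                          // unique per token
    exp: Math.floor(Date.now() / 1000) + TTL_SECONDS,
  });
  const payload = Buffer.from(body).toString('base64url');
  const sig = createHmac('sha256', SECRET).update(payload).digest('base64url');
  return `${payload}.${sig}`;
}

// Verify the signature and reject anything past its expiry.
function verifyToken(token: string): { sub: string } | null {
  const [payload, sig] = token.split('.');
  const expected = createHmac('sha256', SECRET).update(payload).digest('base64url');
  if (sig !== expected) return null;            // tampered or forged
  const claims = JSON.parse(Buffer.from(payload, 'base64url').toString());
  if (claims.exp < Math.floor(Date.now() / 1000)) return null; // expired
  return { sub: claims.sub };
}
```

Even this bare-bones shape already beats a static key: a leaked token is useless within minutes, and every token carries a unique jti you can trace in logs.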


Conclusion

As we move toward a future where AI plays an even bigger role in our lives, adapting our authentication methods is crucial. Ditching API keys for transactional authentication not only enhances security but also improves user experience. Embrace the change -- your data will thank you!

  • Your AI stack is a messy collection of API keys stashed away in .env files, LangChain notebooks, MCP servers, and CI runners. When a contractor or agent accidentally leaks a key, you’ve got no idea which call was made, who made it, or what data might have been compromised.
  • Nowadays, security questionnaires are pushing for more advanced stuff like sender‑constrained tokens (DPoP/mTLS), HTTP request signing, passkeys, and FIPS-approved key storage. But let’s be real: your team doesn’t have the bandwidth for a complete overhaul right now.
  • Plus, Model Context Protocol (MCP) toolchains just make the situation worse: servers usually fall back on bearer keys and long‑lived sessions, and the latest write-ups and patches really highlight the risks tied to agent frameworks and connectors. The bottom line? Transaction‑scoped auth--rather than those old static keys--is the only way to effectively manage this growing challenge. (modelcontextprotocol.io)
  • Compliance timers are a real thing! The EU AI Act's main obligations are set to kick in on August 2, 2026. That means high-risk use cases and transparency duties start to matter, with more milestones rolling out through 2027. Get ready for some serious scrutiny on identity, auditability, and access governance. Check it out here: (digital-strategy.ec.europa.eu)
  • If you're using FIPS 140‑2 cryptomodules, you should know they’ll be considered outdated as of September 21, 2026. If your secrets or signing keys are stored in non-validated modules, you might run into trouble with new federal contracts--lots of enterprise clients won’t be happy either. More details here: (csrc.nist.gov)
  • Heads up! Cloud defaults are shifting. Google Cloud is now pushing organizations to enforce iam.disableServiceAccountKeyCreation and transition to Workload Identity Federation. Sticking to service-account keys is going to become a hassle as organization-level controls tighten up. Get the scoop here: (cloud.google.com)
  • Passkeys and WebAuthn Level 3 are on the rise, totally changing the game for phishing-resistant human authentication. If your “admin portal” for AI controls still relies on passwords plus TOTP, you’re likely to fall short on both user experience and security metrics. Dive in for more info: (w3.org)

We’re moving away from using static keys and instead opting for a layered, standards-first approach. This setup ties every AI or tool call to a signed transaction that's been checked against our policies, all while ensuring it's running in a hardware-attested environment.

Layer 1 -- Identity Sources (Humans and Workloads)

Humans

For the human side of things, we’re looking at using Passkeys through WebAuthn. It’s crucial to have device-bound or synced FIDO credentials that come with attestation. Keep an eye on the WebAuthn Level 3 features as they’re expected to reach Candidate Recommendation in 2026. Also, let’s align with NIST SP 800-63-4, which is set to finalize by July 31, 2025, to ensure we’re employing phishing-resistant AALs. Check out more details here.

Workloads

Now, let’s tackle workloads. It's a good idea to adopt workload identities like SPIFFE/SPIRE or go with cloud-native roles. If you’re using Google Cloud, make sure to enforce the "no new service-account keys" policy and take advantage of Workload Identity Federation for your SaaS and CI integrations. Over on AWS, it’s best to stick with short-term role credentials, and only use long-lived access keys as a last resort. You can read more about that here.

Layer 2 -- Transaction Tokens (Proof-of-Possession > Bearer)

  • OAuth 2.0 DPoP (RFC 9449) is your go-to for sender-constrained access and refresh tokens. If you don’t have a valid proof, you can’t replay--that’s the deal. These days, many identity platforms like Okta and Keycloak come with DPoP support right out of the box, so you don’t have to build it from scratch. For scenarios that involve server-to-server interactions or regulated environments, it’s a good idea to pair this with OAuth mTLS (RFC 8705). You can check out more about it here.
  • HTTP Message Signatures (RFC 9421) help ensure the integrity of your targets, headers, and body for each request. This is super important, especially when it comes to MCP and tool calls that cross trust boundaries. With this setup, you get a nice cryptographic audit trail for each transaction. Dive more into the details here.
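A minimal sketch of the RFC 9421 signing side (TypeScript; HMAC keeps the example self-contained, though in the flows above you would sign with the same asymmetric key that backs your DPoP proof, and the covered-component list here is an illustrative choice):

```typescript
import { createHmac } from 'node:crypto';

// Build the RFC 9421 signature base: one line per covered component, then the
// "@signature-params" line that pins down exactly what was signed and when.
function signRequest(method: string, targetUri: string, contentDigest: string, keyId: string, key: string) {
  const created = Math.floor(Date.now() / 1000);
  const params = `("@method" "@target-uri" "content-digest");created=${created};keyid="${keyId}"`;
  const base = [
    `"@method": ${method.toUpperCase()}`,
    `"@target-uri": ${targetUri}`,
    `"content-digest": ${contentDigest}`,
    `"@signature-params": ${params}`,
  ].join('\n');
  const sig = createHmac('sha256', key).update(base).digest('base64');
  // These two headers travel with the request; the verifier rebuilds the same
  // base from the received message and checks the signature over it.
  return {
    'Signature-Input': `sig1=${params}`,
    'Signature': `sig1=:${sig}:`,
  };
}
```

Because the method, target URI, and body digest are all inside the signed base, a tampered intermediary can't swap any of them without invalidating the signature.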

Layer 3 -- Authorization and Policy (Runtime Decisions, Not Hardcoded Roles)

  • To make authorization a breeze, consider the following:

    • Use OPA/Rego right at the edge with Envoy’s ext_authz. This way, you’re making those contextual decisions close to where the traffic is. Check it out here.
    • Implement fine-grained permissions with a Policy Decision Point (PDP) like Amazon Verified Permissions (Cedar). This tool got a nice upgrade in 2025 with new language features, making it super handy for managing multi-tenant setups and per-tool permissions. You can read more about it here.
  • When it comes to sharing attributes between humans and AI, Verifiable Credentials with selective disclosure are the way to go:

    • OIDC4VCI 1.0 and OIDC4VP 1.0 have been finalized in 2025 for issuing and presenting credentials. Find out the details here.
    • SD-JWT (RFC 9901, Nov 2025) lets holders disclose only what’s necessary, like “employee in BU-X,” with the presentation bound to the request key. Dive into the specifics here.
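Mechanically, SD-JWT selective disclosure works by hashing each hidden claim: the issuer embeds only digests in the signed token, and the holder reveals the matching disclosure strings per request. A sketch of that digest step (TypeScript; the claim names are illustrative):

```typescript
import { createHash, randomBytes } from 'node:crypto';

// A disclosure is base64url([salt, claimName, claimValue]); the signed SD-JWT
// carries only its SHA-256 digest, so unrevealed claims stay hidden.
function makeDisclosure(name: string, value: unknown) {
  const salt = randomBytes(16).toString('base64url');
  const disclosure = Buffer.from(JSON.stringify([salt, name, value])).toString('base64url');
  const digest = createHash('sha256').update(disclosure).digest('base64url');
  return { disclosure, digest };
}

// Verifier side: recompute the digest of a revealed disclosure and check that
// it appears among the digests the issuer signed into the token.
function digestMatches(disclosure: string, signedDigests: string[]): boolean {
  const d = createHash('sha256').update(disclosure).digest('base64url');
  return signedDigests.includes(d);
}
```

The holder reveals only the disclosures the policy requires -- say, the “employee in BU-X” claim -- and the verifier checks each one against the issuer-signed digests without ever seeing the rest.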

Layer 4 -- Runtime Integrity and Attestations (Where Your AI Runs)

  • Keep sensitive actions in check (like accessing PII, updating your CRM, or pushing code) within trusted environments:

    • Confidential VMs powered by Intel TDX or AMD SEV‑SNP, along with H100 GPUs on GCP A3, now allow you to verify attestation workflows right in-band. You can use Google Cloud Attestation to pull EAT-based claims and enforce them through policy. Check out the details here: (docs.cloud.google.com).
  • Bonus web3 trust anchor: You can mark important approvals as on-chain attestations (like EAS) or signed EIP‑712 payload receipts. This way, you create a tamper-evident audit trail that spans across organizations for those regulated actions. Dive into it more here: (attest.org).
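The receipt idea can be sketched as an EIP-712 typed-data payload (TypeScript; the ApprovalReceipt struct, domain values, and field names are illustrative assumptions -- only a hash of the action, never PII, goes into the message):

```typescript
// Build the EIP-712 typed data for a signed approval receipt. A wallet or
// enterprise signer (e.g. ethers' signTypedData) would sign this structure;
// the signature, or an EAS attestation of its hash, becomes the receipt.
function buildApprovalReceipt(approver: string, actionHash: string, txId: string) {
  return {
    domain: { name: 'ApprovalReceipts', version: '1', chainId: 1 },
    types: {
      ApprovalReceipt: [
        { name: 'approver', type: 'address' },
        { name: 'actionHash', type: 'bytes32' }, // hash of the off-chain action, no PII
        { name: 'txId', type: 'string' },        // links back to the transaction log
        { name: 'approvedAt', type: 'uint256' },
      ],
    },
    message: {
      approver,
      actionHash,
      txId,
      approvedAt: Math.floor(Date.now() / 1000),
    },
  };
}
```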

What This Looks Like in Practice (Reference Flows)

  1. Human Approves a Sensitive Tool Action
    • The user logs in using a passkey, and your Authorization Server gives out a short-lived DPoP-bound token.
    • The client creates an RFC 9421 signature for the outgoing request to the MCP server and attaches a selective SD-JWT presentation proving something like “Manager, Region=EU” without giving away personal details like birthdate or employee ID. (rfc-editor.org)
    • Envoy makes a call to ext_authz, where OPA kicks in to evaluate Rego. It checks the token claims, SD-JWT disclosures, and whether the MCP tool is running on a verified node. Only after all that does the call actually reach the tool. (openpolicyagent.org)

  2. Agent-to-System Write-Back

  • The AI agent kicks things off by asking a Cedar-backed PDP for a one-time "write intent." This PDP then hands out a constrained capability, which could be a GNAP or an OAuth transaction token. It's tied to a specific resource and method, and it only lasts for a few minutes. You can check out more about it here.
  • Next up, the agent makes a call to the system API using mTLS (or DPoP), an HTTP Signature, and that capability token. The resource server checks all three of these components for validity, and then it logs a cryptographic receipt. For any critical changes, it also generates a signed EIP-712 acknowledgment to ensure an immutable audit trail. More details can be found here.
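The one-time "write intent" above can be sketched as a constrained capability that the resource server checks and then burns (TypeScript, in-memory; the field names and the 5-minute TTL are illustrative -- a real PDP such as Verified Permissions would mint and evaluate this against policy):

```typescript
import { randomUUID } from 'node:crypto';

interface Capability {
  jti: string;      // single-use identifier
  resource: string; // the one resource this capability is tied to
  method: string;   // the one HTTP method allowed
  exp: number;      // expires after a few minutes
}

const usedJtis = new Set<string>(); // replay cache (a shared store in production)

// PDP side: mint a capability scoped to exactly one resource + method.
function mintCapability(resource: string, method: string, ttlSeconds = 300): Capability {
  return { jti: randomUUID(), resource, method, exp: Math.floor(Date.now() / 1000) + ttlSeconds };
}

// Resource-server side: accept only an exact match, once, before expiry.
function checkCapability(cap: Capability, resource: string, method: string): boolean {
  if (cap.resource !== resource || cap.method !== method) return false;
  if (cap.exp < Math.floor(Date.now() / 1000)) return false;
  if (usedJtis.has(cap.jti)) return false; // already spent
  usedJtis.add(cap.jti);
  return true;
}
```

Compare that to a static key: even if the agent leaks this capability, it authorizes one method on one resource for a few minutes, once.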

  3. Workload Identity (CI → Model Hosting)

  • In this setup, the CI job operates under a workload identity, meaning it doesn't rely on any keys. Instead, it grabs a DPoP-bound token through OIDC federation. Then, it deploys a new MCP server build into a Confidential VM. To kick things off, there’s a startup phase that includes attestation verification before the orchestrator starts directing any requests. If you want to dive deeper, check out the documentation.

Practical Implementation Details You Can Lift

  • DPoP proof on each call (Node/TypeScript, using the jose library)

import { createHash, randomUUID } from 'node:crypto';
import * as jose from 'jose';

// Client key pair -- in production, device-bound or held in a FIPS-validated HSM
const key = await jose.generateKeyPair('ES256');
const accessToken = process.env.ACCESS_TOKEN!;

// Build a DPoP proof JWT (RFC 9449) binding this request's method and URI
// to the client's key; a fresh jti makes each proof single-use.
async function dpopProof(url: string, method: string): Promise<string> {
  const payload = {
    htm: method.toUpperCase(),          // HTTP method of the request
    htu: url,                           // target URI
    jti: randomUUID(),                  // unique per proof, so it can't be replayed
    iat: Math.floor(Date.now() / 1000), // issued-at
    // ath binds this proof to the access token being presented
    ath: jose.base64url.encode(createHash('sha256').update(accessToken).digest()),
  };
  return new jose.SignJWT(payload)
    .setProtectedHeader({
      alg: 'ES256',
      typ: 'dpop+jwt',
      jwk: await jose.exportJWK(key.publicKey), // public key travels with the proof
    })
    .sign(key.privateKey);
}

// usage -- send two separate headers with the request:
//   Authorization: DPoP <accessToken>
//   DPoP: <proof>
const proof = await dpopProof('https://api.internal/tools/mcp/job', 'POST');

Envoy → OPA External Authorization Pattern (High Level)

  • When it comes to Envoy's http_filter for ext_authz, it does a great job of forwarding important headers: Authorization, DPoP, Signature, and Signature-Input.
  • On the OPA policy side (using Rego), here’s what gets validated:

    • The DPoP proof is checked: iat freshness, jti uniqueness (no replay), htu and htm matching the request, and ath matching the access token being presented.
    • The HTTP Message Signature needs to cover the method, path, host, date, and digest.
    • A PDP decision from Cedar or Verified Permissions is used for those fine-grained actions we often need.
    • Plus, attestation claims only allow “write” actions if they come from an approved TCB. Check out more details here.
  • Verifiable Credentials in practice

    • You can issue employment or role credentials using OIDC4VCI and keep them safe in an enterprise wallet.
    • When you need to share information, use OIDC4VP with SD-JWT to show just the necessary attributes as per policy (like “Line-of-Business = Pharma, Role = Qualified Person”). (openid.net)
  • MCP Hardening Quick Wins

    • Make sure to require OAuth with DPoP for MCP connectors and ditch those bearer keys.
    • Enforce HTTP Message Signatures on tool requests and responses to block any sneaky injections from tampered intermediaries.
    • Link high-risk tools to verified runtimes; if the attestation token or GPU/TEE claim is missing or invalid, just reject those calls. (docs.anthropic.com)
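The DPoP proof checks in the first bullet above can be sketched on the verifier side (in TypeScript rather than Rego, purely for illustration; signature verification against the proof's embedded jwk is elided here, and a real gate must perform it):

```typescript
import { createHash } from 'node:crypto';

const seenJtis = new Set<string>(); // replay cache (a shared store in production)

// Validate the claims of a DPoP proof JWT (RFC 9449): proof is the compact JWT
// from the DPoP header; accessToken is the token it is supposed to bind.
function validateDpopClaims(proof: string, method: string, uri: string, accessToken: string): boolean {
  const payload = JSON.parse(Buffer.from(proof.split('.')[1], 'base64url').toString());
  const now = Math.floor(Date.now() / 1000);
  if (Math.abs(now - payload.iat) > 60) return false;     // stale or future-dated
  if (payload.htm !== method.toUpperCase()) return false; // method must match
  if (payload.htu !== uri) return false;                  // URI must match
  if (seenJtis.has(payload.jti)) return false;            // replayed proof
  const ath = createHash('sha256').update(accessToken).digest('base64url');
  if (payload.ath !== ath) return false;                  // bound to a different token
  seenJtis.add(payload.jti);
  return true;
}
```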

Why This is a Business Outcome, Not Just “More Crypto”

  • Better Conversion and Lower Support Costs: Companies that are jumping on board with passkeys are seeing some really impressive results. They report a big boost in sign-in success rates and way fewer password resets. For instance, some case studies show a drop of around 30,000 support calls each month, along with sign-in success rates several times higher. This translates to higher user activation and lower customer acquisition costs (CAC) and cost of goods sold (COGS). Check it out here.
  • Faster Security Reviews and Fewer Vendor Redlines: By aligning with NIST SP 800‑63‑4, RFC 9449/9421/8705, and FIPS-validated modules, companies can speed up procurement cycles. Risk management teams are now explicitly asking for things like “sender-constrained tokens,” “HTTP signatures,” and definitely “no long-lived keys.” More on this can be found here.
  • Compliance Runway: With the EU AI Act enforcement and the upcoming sunset for FIPS 140‑2, there’s a clear, well-defined path for compliance that auditors and boards can follow. This includes controls, timelines, and all the necessary artifacts to keep everything in check. You can read more about it here.
  • Incident Containment: With proofs for each request and all policy decisions being logged, any security incidents can be pinpointed to just a handful of transaction IDs. This is a huge step up from having to sift through a week's worth of bearer-token traffic to find the source of a problem.
  • Who: If you're part of the Heads of AI Platform, CISOs, Directors of Identity & Zero Trust, Enterprise Architects, or Procurement teams managing AI risk, this is for you!
  • Keywords to keep in mind for your internal RFCs and RFPs:

    • OAuth 2.0 DPoP (RFC 9449); OAuth mTLS (RFC 8705); HTTP Message Signatures (RFC 9421) (rfc-editor.org)
    • WebAuthn Level 3; Passkeys rollout metrics (2025) (w3.org)
    • NIST SP 800‑63‑4 (AAL/IAL/FAL mapping) (pages.nist.gov)
    • OIDC4VCI 1.0, OIDC4VP 1.0; SD‑JWT (RFC 9901) (openid.net)
    • Verified Permissions (Cedar 4.x), OPA/Rego with Envoy ext_authz (aws.amazon.com)
    • SPIFFE/SPIRE workload identity; GCP Workload Identity Federation controls (don't forget to disable key creation) (cloud.google.com)
    • Confidential Computing attestation (Intel TDX/AMD SEV‑SNP), GPU attestation on H100/A3 (docs.cloud.google.com)
    • EU AI Act timeline; FIPS 140‑3 CMVP adoption deadlines (digital-strategy.ec.europa.eu)

What “Good” Looks Like in 60-90 Days (Without Boiling the Ocean)

0-30 Days

  • Take stock of all your API keys. Group them by system/agent and start rotating them to short-lived tokens.
  • Turn on DPoP and mTLS for one high-risk integration (pick an IdP that has this built-in). (okta.com)
  • Set up Envoy ext_authz and OPA for a single MCP tool path; don’t forget to start logging those decisions. (openpolicyagent.org)

31-60 Days

  • Roll out passkeys for your AI admin and data-tools; aim for at least 90% WebAuthn coverage for admins. (fidoalliance.org)
  • Issue a Verifiable Credential (OIDC4VCI) for “Data Steward,” and make sure to require OIDC4VP presentation plus SD-JWT for any dataset exports. (openid.net)

61-90 Days

  • Enforce iam.disableServiceAccountKeyCreation on GCP folders that host your AI infrastructure; make the switch to Workload Identity Federation for CI. (cloud.google.com)
  • Control PII write-backs with Confidential VM attestation (A3/TDX); if the claims don’t align with your policy, just say no. (docs.cloud.google.com)
  • Create an audit demo: show a complete transaction proof (DPoP + HTTP Signature + PDP decision + attestation claim) in under three clicks.

Where Blockchain and ZK Add Pragmatic Value (No Hype)

  • When it comes to getting regulatory approvals--like pricing changes, supply-chain releases, or KYC overrides--why not just stamp an immutable receipt? Here’s how:

    • You can sign an EIP-712 “ApprovalReceipt” using the approver’s enterprise key. Then, create a minimal on-chain attestation (EAS) for inter-organizational verification. This method ensures that no personally identifiable information (PII) hits the chain--just hashes and timestamps. It’s a great way to enhance your off-chain logs without completely replacing them. Check it out more here.
  • If you’re looking at privacy-preserving proofs in agent flows, consider mixing SD-JWT’s selective disclosure with zk-friendly circuits only when necessary. For instance, you can verify “over-18” or “resides-in-EEA” without sharing exact dates of birth or addresses. Start with SD-JWT, which is now an RFC, and think about incorporating ZK later on for those high-assurance markets. More details can be found here.

GTM proof points you can quote in your business case

  • Passkey adoption and impact:

    • Did you know that 69% of consumers have turned on a passkey for at least one of their accounts? Businesses are noticing some real benefits too. For example, Aflac saw a drop of 30,000 calls a month related to account recovery! Microsoft reports sign-in success rates three times higher and sign-ins eight times faster. Check out more about this here.
  • Standards momentum:

    • There’s some exciting movement in standards! WebAuthn Level 3 hit Candidate Recommendation status back in January 2026. Meanwhile, OIDC4VCI/4VP was wrapped up in 2025, and SD-JWT became RFC 9901 in November 2025. Plus, DPoP is now RFC 9449 and is being rolled out by big players in mainstream Identity Providers. You can learn more about that here.
  • Cloud and compliance pressure:

    • The pressure is on when it comes to cloud and compliance. Google Cloud Platform is now defaulting to “disable service-account key creation” for all new organizations, plus they have migration guidance in place. This signals the end of static keys! On top of that, the EU AI Act is set to kick in with obligations by mid-2026, and the sunset of FIPS 140-2 is scheduled for September 2026. If you want to dive deeper, take a look at this link.

How We Engage (and Where We Fit into Your Roadmap)

Transactional Auth Accelerator (2-3 weeks)

  • What You'll Get: We’ll deliver a solid reference architecture, a starter kit for Envoy/OPA policies, and set up DPoP for one IdP/client. You'll also get libraries for HTTP Message Signatures, along with a demo for SD-JWT/OIDC4VP and a prototype for the attestation gate.
  • Mapped Controls: We’ll align with NIST SP 800‑63‑4 (AAL2/3), RFC 9449/9421/8705, VC 2.0, OIDC4VCI/4VP.

Build + Integrate (6-8 weeks)

  • We're going to enhance your MCP toolchain by implementing DPoP and HTTP signatures. We’ll also connect Cedar or OPA for those detailed scopes you need, enforce Workload Identity, add Confidential VM gating, and make sure we produce auditor-ready artifacts.

Optional: On-chain Receipts

  • We can provide minimal, regulation-aware EAS/EIP-712 receipts for seamless cross-organization approvals, all while keeping sensitive data under wraps.

Check Out These 7Block Labs Services You Can Dive Into Today

A Final Word on Sequencing

  • Don’t rush into “replacing keys everywhere” right off the bat. Instead, focus on the most high-risk area first: think about agent write-backs to critical systems or exporting datasets. Once you’ve got DPoP, HTTP Signatures, policy, and attestation set up there, validate the logs, and then you can start templating.

Personalized CTA

Hey there! If you’re a Director of AI Platform or the CISO for a Fortune 1000 company looking to roll out five or more agentic use cases by Q3 2026, we’ve got something you might find super helpful. If you need a solid, auditor-ready plan to phase out static keys without messing up your delivery timelines, why not join our 45-minute “Transactional Auth for AI” working session?

During our time together, we’ll take one of your current live flows and map it out using DPoP, HTTP Signatures, OPA/Cedar, and Confidential VM attestation. By the end, you’ll walk away with a 10-day pilot plan, RFP-grade controls, and a cost model that’s easy to work with.

And don’t worry about the usual slides--just bring along a tricky endpoint you’re dealing with, and we’ll brainstorm solutions together on the whiteboard. Can’t wait to help you out!

Like what you're reading? Let's build together.

Get a free 30-minute consultation with our engineering team.

7BlockLabs

Full-stack blockchain product studio: DeFi, dApps, audits, integrations.

7Block Labs is a trading name of JAYANTH TECHNOLOGIES LIMITED.

Registered in England and Wales (Company No. 16589283).

Registered Office address: Office 13536, 182-184 High Street North, East Ham, London, E6 2JA.

© 2026 7BlockLabs. All rights reserved.