By AUJay
Private Proving for Enterprises: Integrating TEEs Without Violating Security Policy
Short description: Private proving keeps sensitive “witness” data confidential while generating zero‑knowledge proofs at production speed. This guide shows decision‑makers how to integrate Trusted Execution Environments (TEEs) on AWS, Google Cloud, Azure, and NVIDIA GPUs—without breaking security policy—using concrete architectures, attestation‑gated key release, and enforcement patterns you can deploy today. (microsoft.com)
Why private proving now
Zero‑knowledge proofs (ZKPs) guarantee correctness, but they don’t prevent a prover operator from seeing the witness (the private inputs). That’s why enterprises exploring privacy‑preserving ledgers, identity, or compliance proofs are pairing ZK with Trusted Execution Environments (TEEs): the TEE shields witness data in use, remote attestation proves the environment, and the ZK proof exits without leaking inputs. In Microsoft’s “Confidential Computing Proofs,” the authors contrast ZKPs and hardware‑based “confidential computing proofs,” and argue they’re complementary: ZK gives public verifiability; TEEs deliver high‑throughput privacy for the prover. (microsoft.com)
What’s changed in 2024–2025:
- Cloud TEEs matured from pilots to GA across CPU and GPU: Intel TDX and AMD SEV‑SNP VMs are widely available; NVIDIA H100/H200 GPUs expose Confidential Computing with device attestation; and Google’s Confidential Space productizes container‑level attestation policies. (cloud.google.com)
- KMS/HSM stacks now accept attestation as an authorization factor (AWS KMS, Azure Secure Key Release, Google Workload Identity Federation + policy), letting you gate key unwrap strictly to measured, production‑mode workloads. (docs.aws.amazon.com)
The result: you can run provers in TEEs, gate decryption on attestation, and ship proofs on‑chain, all without policy waivers.
Executive snapshot: where TEEs fit in a ZK architecture
- Threat addressed: Prevent prover operators, cloud admins, or root on the host from inspecting the witness or intermediate proving state. (docs.cloud.google.com)
- Control: Run the prover inside a TEE and require remote attestation to release witness keys or session secrets. Deny if the measurement, image digest, or “debug” status isn’t exactly what you expect. (docs.aws.amazon.com)
- Evidence: Persist attestation tokens and KMS audit logs so risk, compliance, and customers can verify every proof run was hardware‑isolated. (docs.aws.amazon.com)
TEE options for private proving (and what each buys you)
AWS: Nitro Enclaves for strong isolation and KMS‑gated decryption
- Enclaves are separate VMs carved from a parent EC2 instance with no external networking, no persistent storage, and vsock‑only IPC back to the parent. This sharply reduces the attack surface but means the parent must proxy any egress. (docs.aws.amazon.com)
- Cryptographic attestation is first‑class: KMS policies can check attested measurements such as ImageSha384 (PCR0) and individual PCRs before handing out data keys or decrypting ciphertext. CloudTrail records include these attestation fields. (docs.aws.amazon.com)
- Practical constraints: No GPUs inside enclaves; everything must fit in enclave RAM; and debugging is not permitted (debug enclaves produce zeroed PCRs and cannot pass attestation). Good for CPU‑bound provers, key handling, or splitting work across multiple enclaves via the parent. (packetsensei.com)
When to choose it: Highly regulated workloads that want the smallest I/O surface and policy‑driven KMS gating without changing cloud provider. (aws.amazon.com)
Google Cloud: Confidential VM + Confidential Space for policy‑driven containers (with GPU support)
- CVMs run on AMD SEV‑SNP or Intel TDX; Confidential Space adds a hardened OS and an attested container launcher that issues a signed OIDC token with rich claims (container digest, support channel STABLE/USABLE, debug status, and, in preview, GPU CC mode). You can map these claims into IAM via Workload Identity Federation to gate KMS, Storage, or external resources. (cloud.google.com)
- Regional and hardware coverage keeps expanding: TDX on C3, SEV and SEV‑SNP on N2D/C3D/C4D, and NVIDIA Confidential Computing with H100 in specific zones (a3‑highgpu‑1g). Check release notes for supported zones and attestation quirks (e.g., specific OS images that temporarily break attestation). (cloud.google.com)
- Security notes you can act on today: verify dbgstat equals “disabled‑since‑boot”; pin image_digest in your policy; and be mindful of CPU errata that affected RDSEED in early 2025 (workarounds were rolled out). (docs.cloud.google.com)
When to choose it: Containerized provers (e.g., Halo2, Plonky2, SP1) with strong policy hooks and optional GPU acceleration in CC mode. (cloud.google.com)
Azure: Confidential VMs + Secure Key Release (SKR) from (Managed) HSM
- CVM families support AMD SEV‑SNP and Intel TDX; there’s also an H100‑backed confidential series. The standout capability is Secure Key Release: Azure Key Vault (Premium/Managed HSM) can release an HSM‑protected key only to an attested CVM that meets your policy (claims include TEE type and compliance status) via Microsoft Azure Attestation. (learn.microsoft.com)
- SKR policy grammar lets you pin claims (for example, “x‑ms‑isolation‑tee.x‑ms‑attestation‑type == sevsnpvm” and compliance flags) and require the target to present an encryption key that never leaves the attested runtime. Managed HSM is FIPS 140‑3 Level 3 validated. (docs.azure.cn)
When to choose it: You need FIPS‑validated HSMs, enterprise‑grade attested key export, and tight AAD/RBAC integration. (learn.microsoft.com)
NVIDIA H100/H200: GPU Confidential Computing for prover acceleration
- H100/H200 enable GPU‑side memory encryption with device identity (ECC‑384‑backed) and remote attestation. NVIDIA’s Remote Attestation Service (NRAS) verifies GPU attestation reports; CVMs should validate the GPU cert chain and revocation status before treating the GPU as trusted. (developer.nvidia.com)
- On GCP, Confidential Space tokens include an nvidia_gpu.cc_mode claim (preview) to assert that the GPU ran with CC enabled—use it to enforce “proofs must be generated only on CC GPUs.” (cloud.google.com)
When to choose it: Large provers with MSM/FFT bottlenecks that benefit from GPU acceleration and must keep model parameters or witnesses confidential end‑to‑end. (developer.nvidia.com)
Reference architectures you can deploy this quarter
A. AWS “air‑gapped” private prover with KMS‑gated secrets (no GPU)
- Build an enclave image file (EIF) that contains your prover binary and a minimal runtime.
- Boot the enclave from a Nitro‑enabled parent instance; on boot, request an attestation document (PCRs include the signed image digest).
- Call AWS KMS using cryptographic attestation: the KMS policy allows Decrypt/GenerateDataKey only when ImageSha384 (PCR0) and, optionally, PCR1/others match your production EIF. The response is encrypted to the enclave’s public key and only usable inside the enclave.
- Decrypt the witness inside the enclave; run proving; return only the proof via vsock to the parent; publish proof on‑chain. (docs.aws.amazon.com)
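Steps 2–4 can be sketched from inside the enclave. A minimal sketch, assuming boto3 is vendored into the EIF and the parent proxies HTTPS over vsock; `get_attestation_document` is a hypothetical stand‑in for the Nitro Security Module call, and the key alias is illustrative:

```python
import base64

# Hypothetical helper: in a real enclave you would obtain the signed
# attestation document from the Nitro Security Module (for example via
# the aws-nitro-enclaves-nsm-api bindings); here it is a placeholder.
def get_attestation_document() -> bytes:
    raise NotImplementedError("fetch from the NSM device inside the enclave")

def build_attested_decrypt_request(ciphertext_b64: str, key_id: str,
                                   attestation_doc: bytes) -> dict:
    """Build kwargs for kms.decrypt() so KMS seals the plaintext to the
    enclave's ephemeral public key embedded in the attestation document.
    KMS then returns CiphertextForRecipient instead of Plaintext."""
    return {
        "KeyId": key_id,
        "CiphertextBlob": base64.b64decode(ciphertext_b64),
        "Recipient": {
            "KeyEncryptionAlgorithm": "RSAES_OAEP_SHA_256",
            "AttestationDocument": attestation_doc,
        },
    }

# Usage inside the enclave (sketch, not a definitive implementation):
#   import boto3
#   kms = boto3.client("kms", region_name="us-east-1")
#   doc = get_attestation_document()
#   resp = kms.decrypt(**build_attested_decrypt_request(
#       wrapped_witness_b64, "alias/prover-witness-key", doc))
#   # resp["CiphertextForRecipient"] is decryptable only with the
#   # enclave's private key; the plaintext witness never leaves the TEE.
```

Because the response is bound to the attestation document's public key, even a compromised parent instance that relays the KMS response cannot read the witness.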
Policy nucleus (conceptual):
- kms:RecipientAttestation:ImageSha384 == your EIF digest (PCR0)
- kms:RecipientAttestation:PCR1 == your kernel/bootstrap hash
- Deny if attestation missing or invalid; audit CloudTrail for those fields. (docs.aws.amazon.com)
Operational guardrails:
- Never run debug enclaves; their PCRs are zero and will fail policy checks.
- Enforce memory sizing to keep the entire proving state in enclave RAM; avoid logging secrets over vsock. (docs.aws.amazon.com)
Where it shines: static circuits, KMS‑wrapped witnesses, regulated footprints where “no external network” is a policy advantage. (aws.amazon.com)
B. GCP GPU‑accelerated private proving with Confidential Space + H100 CC
- Package your prover as a container and sign it (e.g., with Cosign).
- Launch a Confidential Space workload on a CVM with NVIDIA H100 in CC mode (a3‑highgpu‑1g in supported zones).
- The launcher fetches an attestation OIDC token containing container image_digest, support_attributes (require STABLE), dbgstat, and in preview the nvidia_gpu.cc_mode claim.
- Configure IAM via Workload Identity Federation so KMS/Storage only accept tokens whose assertions match your attestation policy (e.g., specific image_digest and gpu.cc_mode == ON).
- Prover fetches the encrypted witness, uses a short‑lived KMS DEK gated by those conditions, generates the proof on GPU, writes out the proof and deletes the DEK. (cloud.google.com)
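As defense in depth, the workload can mirror the IAM-side policy before touching the witness. A sketch of that check against the decoded token claims; the claim layout follows the fields named above, but verify it against a real Confidential Space token before enforcing:

```python
# Placeholder digest (SHA-256 of the empty string); pin your real prover digest.
REQUIRED_IMAGE_DIGEST = (
    "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
)

def claims_acceptable(claims: dict, required_digest: str) -> bool:
    """Accept only a production-mode Confidential Space launch of the
    pinned container image running on a CC-enabled GPU."""
    sub = claims.get("submods", {})
    return (
        claims.get("swname") == "CONFIDENTIAL_SPACE"
        and "STABLE" in sub.get("confidential_space", {}).get("support_attributes", [])
        and claims.get("dbgstat") == "disabled-since-boot"
        and sub.get("container", {}).get("image_digest") == required_digest
        and sub.get("nvidia_gpu", {}).get("cc_mode") == "ON"
    )
```

Failing closed here means a misconfigured IAM binding alone cannot expose the witness.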
Why it’s enterprise‑ready:
- Claim‑rich, auditable policies on the container and platform—not just the VM family.
- Documented caveats and release notes: for example, some OS images temporarily broke remote attestation in mid‑2025; pin to supported images (e.g., Ubuntu 24.04) until resolved. (docs.cloud.google.com)
C. Azure CVM with Secure Key Release (HSM‑attested key unwrap)
- Create an exportable key in Key Vault Premium or Managed HSM and attach an SKR policy that requires a Microsoft Azure Attestation (MAA) token with specific claims (e.g., sevsnpvm and “azure‑compliant‑cvm”).
- The attested CVM requests key release; Key Vault validates the token against your policy and encrypts the key under a runtime public key presented by the TEE; only the CVM can unwrap it.
- The prover runs inside the CVM, uses the released key to decrypt the witness, generates the proof, and discards the key. Managed HSM provides FIPS 140‑3 Level 3 validation for auditors. (learn.microsoft.com)
Why it’s compelling: HSM‑backed policy grammar and enterprise compliance posture with fine‑grained “environment assertion” checks baked into the HSM release workflow. (docs.azure.cn)
Enforcing “no policy exceptions” with attestation
To deploy private proving without security waivers, design your controls so keys, data, and egress are functionally unusable unless the prover is attested:
- Bind decryption to attestation
- AWS: Set kms:RecipientAttestation:ImageSha384/PCRn conditions; KMS returns only ciphertext sealed to the enclave’s key. (docs.aws.amazon.com)
- GCP: Use Workload Identity Federation to map token claims and allow access only when assertion.swname == CONFIDENTIAL_SPACE, the image_digest matches, and support_attributes include STABLE. (docs.cloud.google.com)
- Azure: Use SKR policy to require sevsnpvm and compliance claims; release only to that environment. (learn.microsoft.com)
- Require “production mode”
- GCP tokens include dbgstat; require disabled‑since‑boot. Nitro debug enclaves cannot be used for cryptographic attestation. (docs.cloud.google.com)
- Pin supply chain artifacts
- GCP tokens carry container image_digest and image_signatures key_id; pin them in policy. AWS PCR0 covers the EIF image hash. (cloud.google.com)
- Log and prove it
- CloudTrail includes attestation fields on Nitro‑gated KMS calls. GCP logs the subject (VM selfLink) and federated identity used for each access. Archive attestation tokens next to proofs for end‑to‑end auditability. (docs.aws.amazon.com)
GPU proving with confidentiality: what to verify
If you accelerate proving on GPUs, extend the trust chain to the device:
- Verify NVIDIA device identity and attestation
- Fetch device certificate (ECC‑384) and check against NVIDIA CA; verify OCSP; validate attestation reports via NRAS; accept the GPU only then. (developer.nvidia.com)
- Use platform signals where available
- On GCP, assert nvidia_gpu.cc_mode == ON in Confidential Space token policy. (cloud.google.com)
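The two checks above combine into a single admission gate. A sketch under stated assumptions: the `GpuEvidence` field names and the `"success"` verdict string are illustrative, not the actual NRAS response schema:

```python
from dataclasses import dataclass

@dataclass
class GpuEvidence:
    nras_verdict: str       # illustrative: verdict returned by your NRAS client
    cert_chain_valid: bool  # device cert chains to the NVIDIA CA
    ocsp_ok: bool           # no certificate in the chain is revoked

def gpu_trusted(cpu_token_claims: dict, gpu: GpuEvidence) -> bool:
    """Admit the GPU into the trust boundary only when BOTH the platform
    token asserts CC mode and the device-level checks pass (dual attestation)."""
    cc_claim = (cpu_token_claims.get("submods", {})
                .get("nvidia_gpu", {}).get("cc_mode"))
    return (cc_claim == "ON"
            and gpu.nras_verdict == "success"
            and gpu.cert_chain_valid
            and gpu.ocsp_ok)
```

Treat a failure of either leg as fatal: falling back to non-CC GPU execution silently widens the trust boundary.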
Emerging practices we recommend (and implement for clients)
- Short‑lived, nonce‑bound attestation tokens
- Always request fresh attestation with a unique nonce to prevent replay; time‑limit tokens aggressively. AWS Nitro attestation includes a nonce; GCP tokens support eat_nonce claims. (docs.bluethroatlabs.com)
- Prohibit “debuggable” builds by policy
- Enforce dbgstat == disabled and deny any dev build from accessing keys. Nitro debug PCR zeros are a clear red flag. (docs.aws.amazon.com)
- Circuit‑prover separation of duties
- Keep circuit definitions/public parameters in source control; ship a prover container whose digest is pinned in policy. On rotation, update the policy digest and attest again. (cloud.google.com)
- Attestation‑aware KMS/HSM schemas
- AWS: KMS keys that can only decrypt for specific PCR measurements;
- Azure: SKR policies that tie release to MAA claims;
- GCP: IAM conditions on image_digest/STABLE, with federated identities scoped to that digest only. (docs.aws.amazon.com)
- Side‑channel hygiene in TEEs
- Prefer constant‑time cryptographic libraries; keep secrets off logs; avoid using 16/32‑bit RDSEED instructions impacted by 2025 advisories—use 64‑bit or platform RNG. (cloud.google.com)
- Multi‑vendor attestation verification
- Where you need a single verifier across AMD/Intel, evaluate Intel Trust Authority, which added preview support for attesting AMD SEV‑SNP on Azure. (docs.trustauthority.intel.com)
Concrete example: Policy snippets you can adapt
Below are minimal policy fragments you’d evolve in production. The point: make the attestation facts the gate.
- AWS KMS condition keys (conceptual summary):
- Allow Decrypt/GenerateDataKey only when kms:RecipientAttestation:ImageSha384 equals your EIF’s SHA‑384 digest, and optionally require PCR1 to match the enclave kernel hash. CloudTrail will capture these fields for audits. (docs.aws.amazon.com)
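Expressed as a key-policy statement (here built as a Python dict): the `kms:RecipientAttestation:ImageSha384` condition key is the documented AWS hook; the role ARN and digest are placeholders:

```python
import json

# Placeholder: replace with the SHA-384 hex digest (PCR0) of your production EIF.
EIF_PCR0 = "<replace-with-your-eif-sha384-hex>"

# Key-policy statement gating Decrypt/GenerateDataKey on Nitro attestation.
statement = {
    "Sid": "AllowDecryptOnlyFromAttestedEnclave",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/prover-parent"},
    "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
    "Resource": "*",
    "Condition": {
        "StringEqualsIgnoreCase": {
            "kms:RecipientAttestation:ImageSha384": EIF_PCR0
        }
    },
}
print(json.dumps(statement, indent=2))
```

Because a debug enclave reports all-zero PCRs, this condition also enforces the “no debug builds” rule for free.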
- GCP Confidential Space IAM condition ideas:
- assertion.swname == "CONFIDENTIAL_SPACE"
- "STABLE" in assertion.submods.confidential_space.support_attributes
- assertion.submods.container.image_digest == "sha256:…"
- assertion.submods.nvidia_gpu.cc_mode == "ON" (where supported) (docs.cloud.google.com)
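Those assertions compose into one CEL attribute condition on the Workload Identity Pool provider. A sketch (built as a Python string for readability); the digest is a placeholder, and the GPU clause applies only where the preview claim is available:

```python
# Placeholder: pin your real prover container digest.
PINNED_DIGEST = "sha256:<your-prover-image-digest>"

# CEL attribute condition for the Workload Identity Pool provider; claim
# paths mirror the Confidential Space token fields listed above.
attribute_condition = " && ".join([
    'assertion.swname == "CONFIDENTIAL_SPACE"',
    '"STABLE" in assertion.submods.confidential_space.support_attributes',
    f'assertion.submods.container.image_digest == "{PINNED_DIGEST}"',
    'assertion.submods.nvidia_gpu.cc_mode == "ON"',
])
print(attribute_condition)
```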
- Azure SKR policy claims to pin:
- x‑ms‑isolation‑tee.x‑ms‑attestation‑type == "sevsnpvm"
- x‑ms‑isolation‑tee.x‑ms‑compliance‑status == "azure‑compliant‑cvm"
- Only release to the presented runtime public key (x‑ms‑runtime/keys) for the session. (learn.microsoft.com)
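In the SKR release-policy grammar, those pins look roughly like the following (built as a Python dict; the MAA endpoint is illustrative, so substitute your own attestation provider):

```python
import json

# Secure Key Release policy: the key is released only when Microsoft Azure
# Attestation presents a token carrying these claims; the anyOf/allOf
# structure follows Key Vault's release-policy grammar.
release_policy = {
    "version": "1.0.0",
    "anyOf": [{
        "authority": "https://sharedeus.eus.attest.azure.net",  # your MAA endpoint
        "allOf": [
            {"claim": "x-ms-isolation-tee.x-ms-attestation-type",
             "equals": "sevsnpvm"},
            {"claim": "x-ms-isolation-tee.x-ms-compliance-status",
             "equals": "azure-compliant-cvm"},
        ],
    }],
}
print(json.dumps(release_policy, indent=2))
```

Key Vault additionally wraps the released key under the runtime public key in the attestation token, so the policy and the transport protection reinforce each other.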
“Do we lose performance?” What our clients should expect
- CPU TEEs (SEV‑SNP/TDX): modest overhead for memory encryption and attestation, typically small compared to proving time; watch release notes for patches that can temporarily affect performance. (cloud.google.com)
- GPU CC (H100/H200): NVIDIA’s CC mode keeps data encrypted in GPU memory; with proper batching we see near‑native throughput. Validate device attestation before adding the GPU to your trust boundary. (developer.nvidia.com)
Tip: Profile MSM/FFT phases and batch proof jobs to amortize TEE setup costs across larger workloads.
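The amortization math behind that tip is simple enough to work through; the timings below are illustrative numbers, not benchmarks:

```python
def amortized_overhead(setup_s: float, proof_s: float, batch: int) -> float:
    """Fraction of wall-clock time spent on TEE setup plus attestation
    when `batch` proof jobs share one attested session."""
    return setup_s / (setup_s + batch * proof_s)

# Illustrative: 20 s of setup/attestation, 60 s per proof.
# A singleton job pays 25% overhead; a batch of 20 pays under 2%.
assert round(amortized_overhead(20, 60, 1), 3) == 0.25
assert amortized_overhead(20, 60, 20) < 0.02
```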
Implementation checklist (prover teams)
- Governance
- Define attestation policy owners; record digest pinning and rotation procedures.
- Pre‑approve allowed TEEs (Nitro Enclaves, Confidential Space TDX/SNP, Azure SKR) in the security catalog. (learn.microsoft.com)
- Supply chain
- Sign containers; pin image_digest/key_id in attestation policy; fail closed on mismatch. (cloud.google.com)
- Keys and secrets
- AWS: Use KMS RecipientAttestation keys;
- GCP: Use Workload Identity Federation with attestation assertions;
- Azure: Use SKR from Managed HSM with MAA claims. (docs.aws.amazon.com)
- Attestation
- Require nonce; cache tokens only for minutes; archive tokens with proof artifacts. (docs.cloud.google.com)
- GPU: Verify NVIDIA device certs, OCSP, and NRAS reports before enabling CC mode workloads. (developer.nvidia.com)
- Operations
- Enforce “no debug”; monitor policy denials; alert on any proof generated outside TEEs.
- Keep to supported OS/kernel versions for attestation; consult release notes before upgrades. (docs.cloud.google.com)
Real‑world momentum: private proving at scale
The industry has begun converging on “ZK proof generation inside TEEs.” One notable example is Succinct’s Private Proving: SP1 zkVM runs inside TEEs with GPU CC, shielding the witness and outputting only proofs. The approach demonstrates how ZK verifiability and hardware‑level privacy can be combined without rewriting prover stacks—packaged as Docker, deployed to a TEE platform, and controlled by attestation. (blog.succinct.xyz)
For GPU‑backed workloads, insist on dual attestation (CPU TEE + GPU CC). Google exposes GPU CC status as a token claim in Confidential Space; NVIDIA provides device certs and NRAS for verification. These are the exact signals auditors now expect to see in regulated deployments. (cloud.google.com)
How 7Block Labs engages
- Architecture and vendor selection: We map your circuits and SLAs to the right TEE stack (Nitro, Confidential Space, Azure SKR, GPU CC).
- Policy engineering: We implement KMS/HSM policies that physically prevent decryption outside attested workloads and wire those checks into CI/CD.
- Attestation pipelines: We build per‑run attestation capture, verification, and archiving alongside your proofs, so you can prove your proofs were generated privately.
If you want a proof‑of‑concept, we typically deliver an end‑to‑end private prover in 3–6 weeks, including policy, attestation, and audit artifacts.
Key takeaways for decision‑makers
- ZK alone doesn’t hide witnesses from the prover operator; TEEs do. Use attestation‑gated key release to make “privacy by default” enforceable. (microsoft.com)
- You don’t need waivers: All three major clouds now support attestation‑aware authorization paths that satisfy strict security policies. (docs.aws.amazon.com)
- For performance, pair CPU TEEs with GPU Confidential Computing and pin those states in policy; validate device attestation before trusting acceleration. (developer.nvidia.com)
Private proving is ready for production and compatible with strict security policy; we can help you deploy it with confidence.
Like what you're reading? Let's build together.
Get a free 30‑minute consultation with our engineering team.

