How a Mid-Size Fintech Team Used Confidential Computing Enclaves to Finally Ship HIPAA-Compliant AI Features Their Legal Team Had Been Blocking for Two Years
For two years, the engineering team at a mid-size health-payments fintech company we'll call ClearPath Financial had the same recurring nightmare: a promising AI feature would get built, demoed, and celebrated internally, only to be quietly strangled in a legal review meeting. The culprit was never the code. It was the data. Specifically, it was the impossibility of proving to auditors, legal counsel, and compliance officers that sensitive protected health information (PHI) could be processed by a machine learning model without being exposed, logged, or leaked in a way that violated HIPAA's Security Rule.
By early 2026, that nightmare was over. ClearPath had shipped three AI-powered features, earned a clean third-party HIPAA audit, and completely transformed its relationship with its legal team. The secret weapon was not a new regulation or a new cloud vendor promise. It was confidential computing enclaves, and the architectural playbook the team built around them is something every engineer in a regulated industry needs to understand.
This is that story, told in enough technical detail to be genuinely useful.
The Two-Year Stalemate: Why Legal Kept Saying No
To appreciate the solution, you have to understand the exact shape of the problem. ClearPath processes health insurance premium payments, EOB (Explanation of Benefits) data, and HSA/FSA transaction records. That means their databases are saturated with PHI: diagnosis codes, member IDs, provider information, and payment history that is directly linked to identifiable individuals.
The AI features the engineering team wanted to ship were genuinely valuable:
- Anomalous claim flagging: An ML model that could identify likely fraudulent or erroneous claims before payment, saving millions annually.
- Personalized HSA investment nudges: A recommendation engine that analyzed a member's health spending patterns to suggest appropriate HSA investment allocations.
- Predictive cash-flow modeling: A forecasting tool for employer clients that predicted upcoming health benefit expenditures based on historical PHI-linked patterns.
Every single one of these features required training or inference on raw or minimally processed PHI. And every single one ran into the same wall. The legal team's objections were not irrational. They fell into three distinct categories that will sound painfully familiar to anyone who has worked in a regulated environment.
Objection 1: The "Inference Surface" Problem
Even when PHI was encrypted at rest and in transit, the moment it was loaded into a GPU or CPU for model inference, it existed in plaintext in memory. That memory was accessible, in principle, to the cloud hypervisor, to cloud provider support staff with elevated access, and to any sufficiently privileged process running on the same host. Legal called this the "inference surface," and they were right to be worried. HIPAA's Security Rule requires covered entities and business associates to implement technical safeguards that guard against unauthorized access to PHI transmitted over or stored in electronic information systems. Plaintext in a shared cloud host's memory was an uncomfortable gray zone.
Objection 2: The Audit Trail Gap
HIPAA requires detailed audit controls: mechanisms to record and examine activity in information systems that contain or use PHI. The engineering team's existing ML pipeline had excellent logging at the application layer, but legal pointed out a critical gap. There was no cryptographic proof that the model had only seen the data it was supposed to see, that no side-channel copy had been made during inference, or that the model weights themselves had not been tampered with in a way that caused them to memorize and exfiltrate PHI. Application-layer logs are written by the application. They can be wrong, incomplete, or forged. Legal wanted attestation, not assertions.
Objection 3: The Business Associate Agreement (BAA) Chain Problem
ClearPath used a major cloud provider for compute, a third-party MLOps platform for model management, and a separate vector database vendor for embedding storage. Each of these relationships required a valid BAA. Getting a BAA from a large cloud provider is straightforward. Getting one from a specialized MLOps vendor that actually covers GPU-based inference workloads, in language precise enough to satisfy a careful healthcare attorney, is a months-long negotiation. And the chain is only as strong as its weakest link.
The Discovery: Confidential Computing as a Compliance Architecture
The breakthrough came when ClearPath's principal infrastructure engineer, attending a cloud security conference in late 2025, sat through a session on Trusted Execution Environments (TEEs) and their application to regulated data workloads. She came back with a single slide that changed everything: a diagram showing how a confidential computing enclave creates a hardware-enforced boundary around a computation, such that even the cloud provider's hypervisor cannot read the memory inside it.
Confidential computing is not new as a concept, but by 2026, it had matured dramatically. The major building blocks the team evaluated were:
- AWS Nitro Enclaves: Isolated compute environments on EC2 instances, with no persistent storage, no interactive access, and cryptographic attestation of the enclave's identity and code integrity.
- Intel Trust Domain Extensions (TDX): A hardware-level isolation mechanism available on newer Intel Xeon processors that creates encrypted virtual machine partitions invisible to the host OS and hypervisor.
- AMD SEV-SNP (Secure Encrypted Virtualization with Secure Nested Paging): AMD's equivalent, offering memory encryption and integrity protection for virtual machines with strong attestation capabilities.
- Azure Confidential Computing: Microsoft's managed offering, spanning Intel SGX-based DCsv3-series instances and newer TDX-based instance families, layered with Azure's identity and key management infrastructure.
The core property that mattered for ClearPath's legal problem was remote attestation. Before any PHI is released into an enclave, the enclave can cryptographically prove to an external verifier (in this case, ClearPath's own attestation service) that it is running a specific, unmodified piece of code on genuine, trusted hardware. The verifier can check that proof without trusting the cloud provider at all. This was the answer to Objection 2. It was not just a log saying "we only processed what we were supposed to process." It was a hardware-signed certificate saying "this exact code, with this exact hash, ran in this exact isolated environment, and nothing outside that environment could observe the memory."
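The verifier's core check can be sketched in a few lines of Python. This is a deliberately simplified model: a real Nitro attestation document is a COSE-signed CBOR structure verified against the AWS Nitro root certificate, and the `pcrs` dictionary and pinned image name here are hypothetical stand-ins for that machinery.

```python
import hashlib

# Pinned measurement of the one approved enclave image. In a real Nitro
# deployment this is the PCR0 value from a hardware-signed attestation
# document, verified up the chain to the AWS Nitro root certificate;
# this sketch skips the COSE/CBOR signature check for brevity.
APPROVED_PCR0 = {hashlib.sha384(b"enclave-image-v1.4.2").hexdigest()}

def verify_attestation(doc: dict) -> bool:
    """Accept only an enclave whose code measurement is pinned."""
    return doc.get("pcrs", {}).get(0) in APPROVED_PCR0

good = {"pcrs": {0: hashlib.sha384(b"enclave-image-v1.4.2").hexdigest()}}
evil = {"pcrs": {0: hashlib.sha384(b"tampered-image").hexdigest()}}

print(verify_attestation(good))  # approved image: keys may be released
print(verify_attestation(evil))  # unknown measurement: no keys
```

The essential point survives the simplification: the verifier compares a hardware-signed measurement against a list it controls, so nothing the cloud provider says (or does) can substitute for a matching hash.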
The Architecture They Built: A Step-by-Step Breakdown
ClearPath chose AWS Nitro Enclaves as their primary platform, combined with Intel TDX-based instances paired with confidential-computing-capable NVIDIA GPUs for their GPU-accelerated inference workloads (Nitro Enclaves, as of 2026, still have limited GPU passthrough support, and TDX on its own only protects CPU memory, so a hybrid approach was necessary for model inference at scale). Here is how the full architecture came together.
Step 1: The Attestation-Gated Data Vault
PHI never enters the enclave directly from the application layer. Instead, ClearPath built an Attestation-Gated Data Vault using AWS Key Management Service (KMS) combined with a custom attestation broker. The flow works like this:
- The enclave boots and generates an attestation document signed by the AWS Nitro Security Module, cryptographically proving its code hash and configuration.
- The attestation document is sent to ClearPath's attestation broker, which verifies the document against a pinned list of approved enclave measurement hashes (PCR values).
- Only after successful attestation does the broker instruct AWS KMS to release the data encryption key to the enclave.
- The enclave uses that key to decrypt the PHI dataset from an encrypted S3 object, processes it entirely within the enclave's protected memory, and releases only the model output (a risk score, a recommendation, a forecast) to the outside world.
The PHI itself never exists in plaintext outside the enclave boundary. The encryption key never exists outside the enclave boundary. The cloud provider's infrastructure cannot observe either. This directly addressed Objection 1.
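The gate at the heart of Step 1 is easy to model in miniature. The sketch below is an in-process toy, not the production flow: in the real system, AWS KMS itself enforces the check via a key policy conditioned on PCR values in the Nitro attestation document, and the broker class, record shapes, and key material here are hypothetical.

```python
import hashlib
import secrets

class AttestationBroker:
    """Toy model of attestation-gated key release: the data key is
    handed out only when the enclave's measurement matches the pinned
    value. (In production, AWS KMS performs the equivalent check via
    key-policy conditions on the attestation document's PCRs.)"""

    def __init__(self, approved_measurement: str):
        self.approved = approved_measurement
        self._data_key = secrets.token_bytes(32)  # stand-in for a KMS data key

    def release_key(self, attestation: dict) -> bytes:
        if attestation.get("measurement") != self.approved:
            raise PermissionError("attestation failed: unapproved enclave image")
        return self._data_key

measurement = hashlib.sha384(b"approved-enclave-image").hexdigest()
broker = AttestationBroker(measurement)

key = broker.release_key({"measurement": measurement})  # attested: key released
try:
    broker.release_key({"measurement": "forged"})       # unattested: denied
except PermissionError as denied:
    print(denied)
```

The design choice worth noticing is that the broker holds the key and the policy; the enclave holds neither until it has proven what it is.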
Step 2: Immutable Enclave Images with Reproducible Builds
To address the audit trail gap, ClearPath implemented a strict reproducible build pipeline for their enclave images. Every enclave image is built from a locked, version-pinned dependency tree, and the build process is run in an isolated CI environment that produces a deterministic binary artifact. The SHA-256 hash of that artifact is registered in an append-only audit log stored in AWS QLDB (Quantum Ledger Database), which provides cryptographic verification that the log has not been tampered with.
When legal or an external auditor asks "what code processed this PHI batch?", the answer is a specific enclave image hash, verifiable in the QLDB ledger, reproducible from a specific Git commit, and cryptographically tied to the attestation document generated at the time of processing. This is the audit trail that application-layer logging can never provide.
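The tamper-evidence property the team leaned on can be demonstrated with a minimal hash-chained log. This is a sketch of the pattern, not of QLDB's actual implementation: each entry's digest commits to the previous entry, so any retroactive edit breaks every later digest. The record fields are hypothetical.

```python
import hashlib
import json

def artifact_hash(artifact: bytes) -> str:
    """SHA-256 of the enclave image produced by the reproducible build."""
    return hashlib.sha256(artifact).hexdigest()

class AppendOnlyLog:
    """Minimal hash-chained log standing in for a verifiable ledger."""

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        prev = self.entries[-1]["digest"] if self.entries else "0" * 64
        payload = prev + json.dumps(record, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"record": record, "digest": digest})
        return digest

    def verify(self) -> bool:
        prev = "0" * 64
        for entry in self.entries:
            payload = prev + json.dumps(entry["record"], sort_keys=True)
            if hashlib.sha256(payload.encode()).hexdigest() != entry["digest"]:
                return False
            prev = entry["digest"]
        return True

log = AppendOnlyLog()
log.append({"git_commit": "abc123", "image_hash": artifact_hash(b"enclave-image-bytes")})
assert log.verify()
log.entries[0]["record"]["image_hash"] = "forged"  # tampering...
assert not log.verify()                            # ...is detected
```

A managed ledger adds signatures, replication, and access control on top, but the auditor-facing claim is the same one this toy makes: you cannot quietly rewrite history.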
Step 3: Collapsing the BAA Chain with a "Compute-Only" Vendor Model
The third-party MLOps platform problem was solved by a deliberate architectural decision: no PHI ever touches the third-party MLOps vendor's infrastructure. Model training happens in ClearPath's own confidential compute environment. Model weights are stored encrypted in ClearPath-controlled storage. The MLOps platform is used only for experiment tracking with synthetic or fully de-identified data, for model registry metadata, and for deployment orchestration that never touches the actual PHI-processing pipeline.
This collapsed the BAA chain problem. The cloud provider (AWS) has a robust BAA. The MLOps vendor never sees PHI, so it does not need to be a business associate at all. Legal signed off on this structure in a single review meeting, which the engineering team described as "almost suspiciously fast."
Step 4: Output Sanitization and Differential Privacy Guardrails
Even though the enclave protects PHI during processing, the outputs of the model could theoretically leak information about individuals through model inversion or membership inference attacks. ClearPath added a final layer: an output sanitization module running inside the enclave that applies differential privacy noise (using the OpenDP library) to any aggregate outputs before they leave the enclave, and enforces strict output schema validation so that the enclave can only emit data in pre-approved formats. A risk score between 0 and 1. A recommendation category from an enumerated list. A forecast as a probability distribution. Never raw PHI, never a free-form string, never anything that could carry a diagnosis code or member ID out of the enclave by accident.
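A minimal sketch of that final layer follows, with two loud caveats: the Laplace sampler here is a hand-rolled stand-in for OpenDP's calibrated mechanisms (a real deployment should use a vetted library), and the category list, field names, and epsilon value are hypothetical.

```python
import math
import random

APPROVED_CATEGORIES = {"conservative", "balanced", "growth"}  # hypothetical enum

def laplace(scale: float) -> float:
    """Sample Laplace(0, scale) via inverse-CDF. Illustrative only:
    production systems should use a vetted DP library like OpenDP."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def sanitize_output(risk_score: float, category: str, count: int,
                    epsilon: float = 1.0) -> dict:
    """Enforce the pre-approved output schema, then noise the aggregate."""
    if not (0.0 <= risk_score <= 1.0):
        raise ValueError("risk score outside approved [0, 1] range")
    if category not in APPROVED_CATEGORIES:
        raise ValueError("category not in enumerated list")
    # A counting query has sensitivity 1, so Laplace scale = 1 / epsilon.
    noisy_count = count + laplace(1.0 / epsilon)
    return {"risk_score": risk_score, "category": category, "count": noisy_count}

out = sanitize_output(0.73, "balanced", count=4120)
print(out)
```

Note that schema enforcement does the heavy lifting for the "no free-form strings" guarantee; differential privacy only governs what the permitted aggregates can reveal.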
The Legal Team's Conversion: What Actually Changed Their Minds
The engineering team expected that presenting this architecture would result in another round of objections. What they got instead was a series of very specific questions that, when answered, led to genuine buy-in. Here is what legal actually cared about, and how the architecture answered each concern.
"Can the cloud provider see our patients' data?"
The answer, backed by the attestation model and AWS's published Nitro security whitepaper, was a verifiable no. Not even AWS support staff with root access to the underlying hardware can read memory inside a Nitro Enclave. This is not a policy promise. It is a hardware enforcement property. Legal had seen policy promises before. Hardware enforcement was different.
"If we get breached, what did the attacker get?"
Because PHI only exists in plaintext inside the enclave, and the enclave has no persistent storage and no network interface except a constrained vsock channel to the parent instance, a breach of the surrounding infrastructure yields encrypted blobs and model outputs, not PHI. The blast radius of a credential compromise or a misconfiguration is dramatically reduced.
"How do we prove this to OCR during an audit?"
The QLDB ledger, the reproducible build hashes, and the attestation documents together form a complete, tamper-evident chain of custody for every PHI processing event. ClearPath's compliance team worked with outside counsel to map each element of this chain to specific HIPAA Security Rule implementation specifications, producing a controls matrix that became the backbone of their HIPAA audit response package.
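The audit answer reduces to a single predicate: does the measurement in the hardware-signed attestation document for a given processing event match an image hash registered in the build ledger? A sketch, with hypothetical record shapes:

```python
import hashlib

def verify_chain_of_custody(attestation: dict, build_ledger: list) -> bool:
    """The measurement in the attestation document must match an
    enclave image hash registered in the build ledger; otherwise the
    chain of custody for that PHI processing event is broken."""
    registered = {entry["image_hash"] for entry in build_ledger}
    return attestation.get("measurement") in registered

ledger = [{"git_commit": "9f2c41a",  # hypothetical values
           "image_hash": hashlib.sha384(b"enclave-img-v2.0").hexdigest()}]
event = {"measurement": hashlib.sha384(b"enclave-img-v2.0").hexdigest()}

print(verify_chain_of_custody(event, ledger))  # custody chain intact
```

In practice each side of this comparison carries its own proof (ledger digests on one side, the Nitro signature on the other), but the join between them is exactly this lookup.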
The Results: What Shipped and What It Cost
Within six months of beginning the confidential computing implementation, ClearPath shipped all three previously blocked AI features. The anomalous claim flagging model alone identified approximately $4.2 million in suspicious claims in its first quarter of operation. The HSA investment nudge engine increased member engagement with investment options by 34 percent. The cash-flow forecasting tool became a key differentiator in enterprise sales conversations.
The implementation was not free. Here is an honest accounting of the costs:
- Engineering time: Approximately 14 weeks of work from a team of four engineers, including one dedicated to the attestation broker and one to the reproducible build pipeline.
- Infrastructure cost premium: Confidential compute instances (both Nitro Enclave-enabled and TDX-capable instances) carry a roughly 20 to 35 percent cost premium over standard equivalent compute. For ClearPath's workload scale, this translated to approximately $8,000 per month in additional cloud spend.
- Operational complexity: Enclave images must be rebuilt and re-attested whenever dependencies change. This added meaningful overhead to the release process, which the team addressed by investing in enclave-aware CI/CD tooling.
- Legal and compliance review: Outside counsel was engaged for approximately 40 hours to review the architecture and produce the controls mapping. This was a one-time cost that produced reusable documentation.
Against the backdrop of $4.2 million in identified fraud value in a single quarter, the investment calculus was not difficult.
The Playbook: What Every Regulated-Industry Engineer Can Steal
ClearPath's experience distills into a set of principles that apply far beyond health payments fintech. Whether you are working in healthcare, insurance, financial services, defense contracting, or any other domain where sensitive data and AI intersect, this playbook is directly transferable.
1. Frame Confidential Computing as a Compliance Architecture, Not a Security Feature
Engineers tend to pitch confidential computing as a security improvement. Legal and compliance teams respond to compliance architecture. The framing matters enormously. Lead with "this is how we satisfy the HIPAA Security Rule's technical safeguard requirements for our AI inference pipeline," not "this makes our system more secure." The first statement maps to a regulatory obligation. The second is vague and subjective.
2. Make Attestation the Foundation, Not an Afterthought
Remote attestation is the killer feature of confidential computing for regulated industries. If you build an enclave-based system without a robust attestation verification service, you have protected the data but you cannot prove it. Build the attestation broker first, before you build anything else, and design your audit trail around attestation documents as first-class artifacts.
3. Collapse Vendor Chains Aggressively
Every third-party vendor that touches PHI is a BAA negotiation, a compliance review, and a liability. Use the enclave architecture as an opportunity to aggressively re-evaluate which vendors actually need to touch sensitive data. In most ML pipelines, the answer is: far fewer than you think. Synthetic data and de-identified data can cover most of the MLOps toolchain. Reserve real PHI access for the enclave and the enclave alone.
4. Invest in Reproducible Builds Early
The reproducible build pipeline is unglamorous work, but it is the foundation of your audit story. If you cannot deterministically reproduce the exact binary that ran in a given enclave from a specific source commit, your audit trail has a gap. This is not a problem you want to discover during an OCR investigation or a SOC 2 audit. Build it early, automate it completely, and treat the enclave image hash as a first-class artifact in your deployment metadata.
5. Design Outputs, Not Just Inputs
Regulated-industry engineers spend enormous energy protecting data going into AI systems. They spend far less energy thinking about what comes out. Output sanitization, differential privacy for aggregates, and strict output schema enforcement are not optional extras. They are the difference between a system that is technically HIPAA-compliant and one that is legally defensible when a plaintiff's attorney asks whether your model could have leaked PHI through its outputs.
6. Bring Legal In as an Architecture Stakeholder, Not a Reviewer
ClearPath's most important process change was inviting their general counsel and outside healthcare attorney into the architecture design sessions, not just the review meetings. When legal understands what attestation documents are and why they exist, they stop asking for things the architecture cannot provide and start asking for things it can. The relationship shifts from adversarial to collaborative. This is a cultural change as much as a technical one, and it is the hardest part of the playbook to steal. But it is also the most valuable.
Looking Ahead: Where Confidential Computing Is Going in 2026
The confidential computing landscape is moving fast. Several developments are worth watching for regulated-industry engineers planning their own implementations.
GPU confidential computing is maturing rapidly. NVIDIA's Hopper architecture introduced Confidential Computing support for H100 GPUs, and by 2026, the ecosystem around attestation for GPU-based inference is significantly more mature than it was even 18 months ago. Teams that need to run large language models or other GPU-intensive workloads on PHI can now do so with hardware-enforced isolation in ways that were impractical before.
Multi-party confidential computing is emerging as a pattern. Several health systems and payers are beginning to explore architectures where multiple organizations contribute PHI to a shared confidential compute environment for federated model training, with cryptographic guarantees that no party can observe another party's raw data. This is a significant unlock for population health AI that has historically been blocked by data-sharing legal complexity.
Regulatory recognition is growing. The HHS Office for Civil Rights has begun issuing more specific technical guidance on AI and HIPAA, and confidential computing is increasingly referenced in that guidance as a recognized technical safeguard approach. This is not yet a safe harbor, but the direction of travel is clear.
Conclusion: The Two-Year Block Was Never Really About the Technology
Here is the uncomfortable truth that ClearPath's story reveals: the two-year stalemate was not caused by the absence of confidential computing. It was caused by the absence of a compliance architecture that legal could reason about clearly. The engineering team had been building technically sound systems and asking legal to trust them. Legal's job is not to trust engineers. It is to verify. Confidential computing enclaves gave the engineering team something they had never had before: a system that could prove its own behavior, in terms that mapped directly to regulatory requirements, without asking anyone to take anything on faith.
That is the real lesson. In regulated industries, the gap between "this is secure" and "this is compliant" is not a technical gap. It is an evidence gap. Confidential computing closes that gap by making the evidence hardware-enforced, cryptographically verifiable, and auditor-legible.
If your legal team has been blocking your AI features, the question worth asking is not "how do we convince them?" It is "what evidence would make this decision easy?" Build the system that produces that evidence, and the conversation changes entirely.
Are you working through a similar compliance architecture challenge in a regulated industry? The patterns described here are broadly applicable across HIPAA, PCI-DSS, FedRAMP, and GDPR contexts. The specifics differ; the core principle of hardware-attested, evidence-producing computation does not.