The ₹250 Crore Question: How One PII Leak Can Kill Your Startup
Why one careless model call can become a board-level risk in the DPDP era.
Harsh · 29 April 2026 · 5 min read
The leak is usually boring
The scary version of a data leak sounds cinematic. The real version is much quieter: a support assistant receives an Aadhaar number in a chat message, forwards it to OpenAI for summarization, and the number is now outside your data residency boundary.
Under the DPDP Act, the penalty for a data breach can be up to ₹250 crore.
The three most common AI data leak patterns
Pattern one: passthrough — forwarding user input directly to a model provider without inspection. Pattern two: context stuffing — including customer records in the context window. Pattern three: output leakage — the model generates personal data from training data or context.
All three patterns are preventable with a governance layer that inspects both inputs and outputs.
Prevention belongs in the request path
A compliance policy document does not prevent data leaks. Only code that runs in the request path can prevent data leaks. This is why CrewCheck is built as a proxy, not a library.
The proxy approach means PII detection, compliance rules, and audit logging are applied consistently across all model providers, all endpoints, and all teams.
How to operationalize this
Treat this topic as a production workflow, not a policy note. Identify the user action that starts the AI call, the personal or regulated data that can enter the prompt, the model provider that receives it, and the owner responsible for changing the route when something goes wrong. That simple inventory is often where weak AI governance becomes visible.
Once the workflow is named, put the control at the boundary. For CrewCheck, that means routing the model call through the gateway so detection, redaction, rule evaluation, provider choice, and audit logging happen consistently. The important detail is that the control should run on every request, including retries, fallback providers, demos, and internal admin tools.
What evidence a buyer should ask for
A serious buyer should ask for evidence that connects the claim to live behavior. For a privacy claim, that means redaction logs, blocked examples, sanitized payloads, and data-retention behavior. For a safety claim, that means red-team cases, circuit-breaker decisions, and output scanning results. For a compliance claim, that means the notice, purpose, rule, and provider route can be reconstructed from the audit trail.
The practical standard is whether the team can answer a specific question without manual archaeology: what happened to this request, which rule fired, what data was removed, which provider saw the final payload, and who can approve or reverse the decision? If that answer requires five tools and a memory of how the system was meant to work, the evidence layer is not ready yet.
A safe next step
Start with one high-risk path and make it boringly inspectable. Run realistic Indian examples through it, including Aadhaar-like numbers, PAN formats, UPI IDs, mixed-language prompts, and attempts to override system instructions. Check the user-facing response, the gateway event, the dashboard state, and the exportable report. The path is ready only when all four tell the same story.
That narrow verification habit matters more than a large compliance checklist. AI governance fails when teams assume controls are present because the architecture says they are. It becomes trustworthy when the live product can show the exact request, exact decision, and exact evidence behind the claim.
After that, make the check repeatable. Keep the examples in a small regression pack, rerun them before deployment, and compare the result with the public claim you are about to make. If the route, report, or dashboard no longer proves the claim, change the product or change the claim before a customer finds the gap.
The habit is deliberately plain: one workflow, one owner, one evidence trail, one live verification path. That is enough to turn a short article or launch note into something an operator can actually use.
Author
Harsh
Building CrewCheck in public from India.
Related posts
RBI FREE-AI Sutras for LLM Gateways
BFSI compliance operating note 1: a practical note for banking AI teams on policy principles stay disconnected from traffic.
SEBI AI Governance Evidence for Fintech Websites
BFSI compliance operating note 2: a practical note for fintech operators on public claims mention AI without control evidence.
BFSI Prompt Logging Without Customer Leakage
BFSI compliance operating note 3: a practical note for bank platform teams on debug logs capture account numbers and identity data.