DPDP Act 2023: What Every Indian SaaS Founder Needs to Know About AI Compliance
A practical founder-first look at consent, purpose limitation, audit trails, and why AI agents change the risk model.
Harsh · 30 April 2026 · 5 min read
AI changes the compliance surface
Indian SaaS teams are moving from human-only workflows to AI agents that draft, summarize, route, and respond. That creates a new kind of processor inside the business — one that touches personal data at machine speed, across every customer interaction, without the natural pause that a human operator provides.
The practical question is no longer just where customer data is stored. It is whether every AI interaction has purpose, minimization, redaction, and evidence attached to it. The DPDP Act does not mention AI explicitly, but its principles — consent, purpose limitation, data minimization, and accountability — apply directly to every LLM call your product makes.
If your support bot receives an Aadhaar number in a chat message and forwards it to OpenAI for summarization, you have just transferred personal data to a processor without purpose limitation. That is a compliance event, whether or not anyone notices.
What the DPDP Act actually requires
The Digital Personal Data Protection Act, 2023 establishes six core obligations for data fiduciaries: lawful purpose, consent, data minimization, storage limitation, accuracy, and accountability. For AI-powered products, three of these are especially tricky.
Purpose limitation means you cannot use personal data for a purpose the user did not consent to. Data minimization means you should only process the minimum personal data necessary. Accountability means you must be able to demonstrate compliance with immutable audit trails.
Indian PII is different from Western PII
Most AI governance tools are built for US and EU data types — Social Security numbers, EU tax IDs, GDPR categories. Indian personal data has its own patterns: Aadhaar numbers with Verhoeff checksums, PAN cards with a specific alphanumeric format, UPI IDs, IFSC codes, Indian mobile numbers starting with 6-9, and regional language text.
CrewCheck's PII detection is built from the ground up for Indian identifiers. Aadhaar detection uses Verhoeff checksum validation to reduce false positives. PAN detection validates the format and character classes. UPI ID detection handles the user@provider pattern.
Getting started without boiling the ocean
You do not need to solve every compliance problem on day one. Start with the highest-risk surface: the AI interactions that touch personal data. Route them through a governance layer that detects PII, applies compliance rules, and logs everything.
CrewCheck is designed for exactly this starting point. Change one environment variable to route your OpenAI or Anthropic traffic through the gateway. PII redaction, compliance rules, cost tracking, and audit logging are applied automatically.
How to operationalize this
Treat this topic as a production workflow, not a policy note. Identify the user action that starts the AI call, the personal or regulated data that can enter the prompt, the model provider that receives it, and the owner responsible for changing the route when something goes wrong. That simple inventory is often where weak AI governance becomes visible.
Once the workflow is named, put the control at the boundary. For CrewCheck, that means routing the model call through the gateway so detection, redaction, rule evaluation, provider choice, and audit logging happen consistently. The important detail is that the control should run on every request, including retries, fallback providers, demos, and internal admin tools.
What evidence a buyer should ask for
A serious buyer should ask for evidence that connects the claim to live behavior. For a privacy claim, that means redaction logs, blocked examples, sanitized payloads, and data-retention behavior. For a safety claim, that means red-team cases, circuit-breaker decisions, and output scanning results. For a compliance claim, that means the notice, purpose, rule, and provider route can be reconstructed from the audit trail.
The practical standard is whether the team can answer a specific question without manual archaeology: what happened to this request, which rule fired, what data was removed, which provider saw the final payload, and who can approve or reverse the decision? If that answer requires five tools and a memory of how the system was meant to work, the evidence layer is not ready yet.
A safe next step
Start with one high-risk path and make it boringly inspectable. Run realistic Indian examples through it, including Aadhaar-like numbers, PAN formats, UPI IDs, mixed-language prompts, and attempts to override system instructions. Check the user-facing response, the gateway event, the dashboard state, and the exportable report. The path is ready only when all four tell the same story.
That narrow verification habit matters more than a large compliance checklist. AI governance fails when teams assume controls are present because the architecture says they are. It becomes trustworthy when the live product can show the exact request, exact decision, and exact evidence behind the claim.
After that, make the check repeatable. Keep the examples in a small regression pack, rerun them before deployment, and compare the result with the public claim you are about to make. If the route, report, or dashboard no longer proves the claim, change the product or change the claim before a customer finds the gap.
The habit is deliberately plain: one workflow, one owner, one evidence trail, one live verification path. That is enough to turn a short article or launch note into something an operator can actually use.
Author
Harsh
Building CrewCheck in public from India.
Related posts
DPDP Consent Checklist for LLM Support Bots
DPDP operating note 1: a practical note for support teams on consent language becomes vague once messages are handed to a model.
Purpose Limitation for AI Ticket Triage
DPDP operating note 2: a practical note for SaaS operations leaders on classification prompts often reuse customer messages for unrelated analytics.
Data Minimization for Indian Customer Chats
DPDP operating note 3: a practical note for founders shipping AI support on Aadhaar, PAN, UPI IDs, and addresses leak into prompts by accident.