Cross-Border Data Transfer Rules under the DPDP Act
How India's DPDP Act regulates cross-border data transfers: trusted country lists, the consent basis for transfers, AI model providers abroad, and cloud compliance.
Section 16: The transfer framework
Section 16 of the DPDP Act governs cross-border transfers of personal data. The default position is that transfers are permitted unless the Central Government specifically restricts transfers to particular countries or territories. This is notably permissive compared to GDPR's default restriction (no transfer without an adequacy mechanism). The Government may publish a list of restricted jurisdictions based on national security, strategic, or public order concerns.
The trusted country list is the other side of this framework. The 2022 draft Bill proposed a whitelist of affirmatively permitted destinations; the enacted Act instead works through the Section 16(1) power to notify restricted countries, leaving any positive list of trusted jurisdictions to future Rules or notifications. The practical significance of a trusted country list is uncertain until one is published; it may operate as a whitelist for simplified transfers or simply as guidance.
Until the transfer Rules and lists are published, companies should rely on consent as their transfer mechanism. If your privacy notice discloses that personal data will be transferred to a named provider in a named country for a specific AI purpose, and the user consents, this constitutes lawful transfer under the consent basis. Ensure your consent notices are geographically specific about where data goes.
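As a minimal sketch of what "geographically specific" can mean in practice, a consent record might capture the named provider, the destination country, and the notice version the user actually saw. The TransferConsent fields below are illustrative, not a prescribed DPDP schema:

```python
# Illustrative consent record for a disclosed cross-border transfer.
# Field names are hypothetical, not a statutory schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class TransferConsent:
    user_id: str
    purpose: str               # the specific AI purpose named in the notice
    provider_name: str         # the named foreign provider
    provider_country: str      # where the data will be processed
    notice_version: str        # which privacy notice version the user saw
    consented_at: str
    withdrawn_at: str | None = None

consent = TransferConsent(
    user_id="u-1042",
    purpose="support-ticket summarisation",
    provider_name="ExampleLLM Inc.",   # hypothetical provider
    provider_country="US",
    notice_version="2025-01",
    consented_at=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(consent), indent=2))
```

Recording the notice version alongside the consent makes it possible to show later exactly what the user was told about where their data goes.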
AI model providers: the cross-border challenge
Every time your application sends an Indian user's personal data to an API endpoint operated by OpenAI (US), Anthropic (US), Google (US), or Mistral (France), you are making a cross-border transfer of personal data. This has been true since these providers launched, but the DPDP Act makes this transfer legally significant in a way the IT Act never did.
For most Indian AI products, cross-border transfers to US-based LLM providers are not currently restricted — the Government has not published a list of restricted countries. However, regulatory policy on data localisation has historically been more restrictive in India than the DPDP Act's current default suggests. Financial data localisation under RBI guidelines, for example, requires payment data to be stored in India. Similar sector-specific requirements may emerge post-DPDP.
The practical mitigation for cross-border AI transfer risk is two-fold: (a) apply PII redaction before data crosses borders so that the data reaching the foreign provider is not personal data, and (b) ensure your consent notices disclose the cross-border transfer and the foreign provider's name so users have the information needed to make an informed choice.
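A minimal redaction sketch for step (a), using simplified regular expressions for two common Indian identifiers. Real Aadhaar numbers carry a Verhoeff check digit, so patterns like these over-match; treat them as a first-pass filter, not validation:

```python
import re

# Simplified detectors; any 12-digit group looks like an Aadhaar
# number to this pattern, so this is a sketch, not validation.
PATTERNS = {
    "AADHAAR": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}\b"),
    "PAN": re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace detected identifiers with typed placeholders before
    the payload leaves the application boundary."""
    found = []
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label}]", text)
    return text, found

clean, found = redact("PAN ABCDE1234F, Aadhaar 1234 5678 9012")
# clean == "PAN [PAN], Aadhaar [AADHAAR]"; found == ["AADHAAR", "PAN"]
```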
Data residency and Indian cloud providers
Data residency — storing and processing data within India — is the most reliable way to avoid cross-border transfer compliance complexity. Indian cloud providers (AWS Asia Pacific Mumbai, Azure India, Google Cloud Mumbai and Delhi) offer Indian data residency for compute and storage. Domestic LLM providers (Krutrim, SarvamAI, and others) are emerging and may offer India-resident inference endpoints.
For BFSI companies under RBI supervision, data localisation may already be required. RBI's 2018 circular on storage of payment system data mandates that payment data be stored only in India. AI systems processing payment flows — fraud detection, transaction categorisation, credit scoring — must respect this requirement, which means either using India-resident model endpoints or applying strict PII redaction before cross-border inference.
Even with Indian data residency for storage, cross-border transfers can still occur for AI inference if you use foreign-hosted LLM endpoints. Consider using Indian-hosted model endpoints for the most sensitive use cases, falling back to foreign providers with PII redaction where Indian-hosted models do not perform adequately.
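A sensitivity-based routing sketch under those assumptions. The endpoint URLs, category labels, and inline redact() helper are all placeholders, not real provider APIs:

```python
import re

AADHAAR_RE = re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}\b")

def redact(text: str) -> str:
    # minimal stand-in for a fuller redaction pass
    return AADHAAR_RE.sub("[AADHAAR]", text)

SENSITIVE = {"payment", "health", "children"}
INDIA_ENDPOINT = "https://llm.example.in/v1/infer"     # hypothetical
FOREIGN_ENDPOINT = "https://llm.example.com/v1/infer"  # hypothetical

def choose_route(categories: set[str], prompt: str) -> tuple[str, str]:
    """Keep sensitive flows on the India-resident endpoint; everything
    else may cross the border only after redaction."""
    if categories & SENSITIVE:
        return INDIA_ENDPOINT, prompt
    return FOREIGN_ENDPOINT, redact(prompt)

endpoint, payload = choose_route({"payment"}, "txn note: Aadhaar 1234 5678 9012")
# endpoint == INDIA_ENDPOINT; the payload stays unredacted but in-country
```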
DPDP Act operational checklist
Treat the cross-border transfer rules on this page as an operating control, not only as a reference article. The minimum checklist is a data inventory, a stated processing purpose, owner approval, PII detection at the AI boundary, redaction or tokenisation where possible, retention limits, vendor transfer records, and a tested user-rights workflow. This checklist gives engineering and compliance teams a shared language for deciding what must be blocked, what can be allowed in shadow mode, and what needs human review before production release.
For AI systems, the review should include prompts, retrieved context, tool call arguments, model responses, logs, traces, analytics events, exports, and support attachments. Many incidents happen because teams scan only the visible form field while sensitive data moves through background context or observability tooling. CrewCheck's recommended pattern is to place the scanner at the request boundary, record the policy version, and keep audit evidence that shows which identifiers were detected and what action was taken.
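A minimal sketch of that boundary pattern, with a simple regex detector standing in for the real scanner; the record fields and POLICY_VERSION constant are illustrative:

```python
import hashlib
import re
from datetime import datetime, timezone

POLICY_VERSION = "dpdp-policy-2025-01"   # recorded with every decision
PAN_RE = re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b")

def scan(text: str) -> list[str]:
    # stand-in detector; a real scanner covers many more identifiers
    return ["PAN"] if PAN_RE.search(text) else []

def check_at_boundary(app_id: str, payload: str) -> dict:
    """Scan an outbound request and return the audit evidence for it."""
    detections = scan(payload)
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "app_id": app_id,
        "policy_version": POLICY_VERSION,
        "detections": detections,
        "action": "block" if detections else "allow",
        # store a hash rather than the raw payload, so the audit log
        # does not itself become a store of sensitive data
        "payload_sha256": hashlib.sha256(payload.encode()).hexdigest(),
    }
```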
A practical rollout starts with representative samples from production-like traffic. Run a DPDP scan, sort findings by identifier sensitivity and blast radius, fix Aadhaar, PAN, financial, health, children's, and precise-location exposure first, then move to consent wording, retention, deletion, and vendor review. Use shadow mode when false positives could disrupt users, and promote to enforcement only after the exceptions have owners and expiry dates.
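One way to encode that promotion gate, with the rule names and policy shape as assumptions rather than a product schema:

```python
from datetime import date

POLICY = {
    "mode": "shadow",   # "shadow" only logs findings; "enforce" blocks them
    "exceptions": [
        {"rule": "pan-in-support-tickets", "owner": "ops-lead",
         "expires": date(2026, 3, 31)},
    ],
}

def ready_to_enforce(policy: dict, today: date) -> bool:
    """Promote shadow -> enforce only when every exception has a named
    owner and has not expired."""
    return all(
        exc.get("owner") and exc["expires"] >= today
        for exc in policy["exceptions"]
    )

if ready_to_enforce(POLICY, date.today()):
    POLICY["mode"] = "enforce"
```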
This page is educational and should be paired with legal review for final policy interpretation. The operational proof should still come from repeatable evidence: scanner results, audit exports, pull-request checks, policy configuration, and a documented owner for the workflow. That combination is what makes the content useful during buyer diligence, board review, regulatory questions, or an incident investigation.
DPDP Act pillar implementation addendum
A pillar page should also connect the legal idea to a concrete implementation path. Start with ownership: name the product owner, engineering owner, security reviewer, and compliance reviewer for this topic. Then map the systems that can create, store, transform, or transmit the relevant personal data. The map should include frontend forms, backend APIs, queues, warehouses, LLM prompts, embedding stores, admin exports, vendor dashboards, and customer-success tooling.
Next, document the lawful purpose and the user-facing notice. The notice should be clear enough that a data principal understands what is processed, why AI may be involved, what categories of personal data are affected, and how consent or withdrawal works. If the workflow supports children, healthcare, financial services, employment, or government delivery, treat that context as higher risk and add stricter review before allowing personal data into model calls.
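A small sketch of that risk tiering, with hypothetical context labels:

```python
HIGH_RISK_CONTEXTS = {"children", "healthcare", "financial",
                      "employment", "government"}

def review_level(contexts: set[str]) -> str:
    """Route higher-risk workflows to stricter review before any
    personal data reaches a model call."""
    return "strict-review" if contexts & HIGH_RISK_CONTEXTS else "standard"

assert review_level({"children", "edtech"}) == "strict-review"
```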
The engineering control should run before data leaves the application boundary. Scan the full prompt package, not just the user's message. That means system instructions, retrieved snippets, tool outputs, attachments, OCR text, chat history, and structured JSON all need inspection. When a high-confidence identifier is found, redact, tokenise, block, or route to a safer model depending on the policy. Keep the original sensitive value out of general logs unless a protected exception is approved.
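To make "scan the full prompt package" concrete, here is a sketch that walks an arbitrarily nested payload so every string is inspected, not just the user's message; the package shape is hypothetical:

```python
from typing import Any, Iterator

def iter_text_fields(part: Any) -> Iterator[str]:
    """Yield every string in a nested prompt package (dicts, lists,
    tuples, strings) so nothing bypasses inspection."""
    if isinstance(part, str):
        yield part
    elif isinstance(part, dict):
        for value in part.values():
            yield from iter_text_fields(value)
    elif isinstance(part, (list, tuple)):
        for value in part:
            yield from iter_text_fields(value)

prompt_package = {
    "system": "You are a support assistant.",
    "history": ["user: my PAN is ABCDE1234F"],
    "retrieved": [{"snippet": "order #18, delivery address ..."}],
    "tool_output": {"ocr_text": "Aadhaar 1234 5678 9012"},
}

# run the detector over every yielded string, including OCR text and
# retrieved snippets, before the package leaves the boundary
for text in iter_text_fields(prompt_package):
    pass  # detector call goes here
```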
Audit evidence should be designed for reconstruction. A reviewer should be able to answer: when did the request happen, which application sent it, which data type was detected, which rule fired, what action was taken, which provider received the final payload, and who approved any exception. Without that trail, teams are left with policy claims rather than proof. With it, they can respond faster to buyer diligence, internal audits, breach triage, and regulator questions.
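A record shaped directly by those questions, one field per question, with hypothetical values:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AuditRecord:
    timestamp: str                   # when did the request happen
    app_id: str                      # which application sent it
    data_type: str                   # which data type was detected
    rule_id: str                     # which rule fired
    action: str                      # what action was taken
    provider: str                    # who received the final payload
    exception_approver: str | None   # who approved any exception

record = AuditRecord(
    timestamp="2025-06-01T10:22:31+00:00",
    app_id="support-bot",
    data_type="PAN",
    rule_id="pan-redact-v3",
    action="redact",
    provider="llm.example.in",       # hypothetical endpoint
    exception_approver=None,
)
print(json.dumps(asdict(record)))
```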
Finally, make the process repeatable. Add sample payloads to tests, run scheduled scans against logs and representative documents, check sitemap and page health for public guidance, and keep the DPDP scanner linked from the page so readers can move from learning to action. The goal is not to freeze the system; it is to make every future AI workflow easier to review, safer to launch, and easier to explain.
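Pinning sample payloads into the test suite is the simplest repeatability step. A pytest-style sketch, with the detector inlined so the test is self-contained:

```python
import re

AADHAAR_RE = re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}\b")
PAN_RE = re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b")

def detect(text: str) -> set[str]:
    found = set()
    if AADHAAR_RE.search(text):
        found.add("AADHAAR")
    if PAN_RE.search(text):
        found.add("PAN")
    return found

def test_sample_payloads_are_flagged():
    # representative payloads pinned so scanner changes are regression-tested
    assert "PAN" in detect("KYC doc lists PAN ABCDE1234F")
    assert "AADHAAR" in detect("Aadhaar: 1234 5678 9012")
    assert detect("no identifiers in this payload") == set()
```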
Check your own workflow
Run a free DPDP scan before this risk reaches production.
Scan prompts, logs, documents, and API payloads for Indian PII exposure, missing redaction, and audit gaps.