Compliance

Azure OpenAI with DPDP Data Residency Requirements

How to configure Azure OpenAI Service to meet DPDP Act data residency requirements for Indian companies — regions, DPAs, and governance layer setup.

11 min read · Updated 2026-05-04

DPDP and Data Residency: What the Act Actually Says

The DPDP Act 2023 does not mandate data localisation for all personal data. Section 16 takes a negative-list approach: cross-border transfers are permitted except to countries or territories that the Central Government restricts by notification. As of 2026, no restricted list has been notified — but because restrictions can arrive by notification at any time, the safe interpretation is to treat every cross-border transfer as needing a documented mechanism until the position is settled.

For AI workloads, the practical question is: when your application sends user prompts containing personal data to Azure OpenAI, does this constitute a cross-border transfer? Yes, if the data is processed on Azure infrastructure outside India. Azure's India regions (Central India — Pune, South India — Chennai, West India — Mumbai) can address this, though Azure OpenAI model availability varies by region — Central India has been the primary option, so verify current availability before committing to an architecture.

Configuring Azure OpenAI for India Data Residency

Step 1: Create your Azure OpenAI resource in the Central India region (Pune). This ensures model inference happens in India.

Step 2: Enable Azure Private Link or Virtual Network service endpoints so your traffic never leaves the Azure India network boundary.

Step 3: Confirm your Azure subscription is on an Enterprise Agreement or Microsoft Customer Agreement (MCA) that includes Microsoft's Data Processing Addendum (DPA).
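Steps 1 and 2 can be provisioned with the Azure CLI. This is a sketch only — the resource name, resource group, VNet, and subnet below are placeholders for your own environment, and you should verify current `az` syntax and regional model availability before running it.

```shell
# Step 1: create the Azure OpenAI resource in Central India (Pune).
# Names (my-openai-india, rg-dpdp-prod, vnet-india, snet-private) are placeholders.
az cognitiveservices account create \
  --name my-openai-india \
  --resource-group rg-dpdp-prod \
  --kind OpenAI \
  --sku S0 \
  --location centralindia \
  --custom-domain my-openai-india

# Step 2: attach a Private Link endpoint inside your India VNet so
# traffic stays on the Azure India network boundary.
OPENAI_ID=$(az cognitiveservices account show \
  --name my-openai-india \
  --resource-group rg-dpdp-prod \
  --query id --output tsv)

az network private-endpoint create \
  --name pe-openai-india \
  --resource-group rg-dpdp-prod \
  --vnet-name vnet-india \
  --subnet snet-private \
  --private-connection-resource-id "$OPENAI_ID" \
  --group-id account \
  --connection-name openai-private-conn
```

Step 3 is contractual rather than technical: confirm the DPA coverage with your Microsoft account team, since it is attached at the agreement level, not per resource.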

The Azure DPA is your contractual safeguard for cross-border transfer risks — it commits Microsoft to processing personal data only for the stated purpose and to providing GDPR-equivalent protections. While India's DPDP hasn't codified DPA requirements exactly as the EU has, having a strong DPA in place is the right defensive posture.

What Azure's India Region Doesn't Solve

Even with India-region deployment, Azure OpenAI has limitations for strict DPDP compliance:

(1) Fine-tuning: Azure OpenAI fine-tuning jobs may replicate model weights to Microsoft's global infrastructure during training. Check the specific fine-tuning region policy before uploading personal data.

(2) Model logs: Azure's service-level logging (distinct from your application logs) may capture input/output tokens for a limited period. Configure 'no-log' mode if it is available to your subscription.

(3) Abuse monitoring: Microsoft scans content for abuse detection. Review the Azure OpenAI abuse monitoring documentation to understand what data is retained.
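The fine-tuning risk in point (1) can be reduced with a pre-upload gate: scan every training record for Indian PII and refuse the upload if anything matches. A minimal sketch follows — the two regex patterns are illustrative assumptions, not CrewCheck's detector set; production detection needs checksum validation (e.g. the Verhoeff check digit for Aadhaar) and much broader coverage.

```python
import re

# Illustrative patterns only — real detection needs checksum validation
# and coverage beyond these two identifier types.
PATTERNS = {
    "AADHAAR": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}\b"),
    "PAN": re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),
}

def scan_training_record(text: str) -> list[str]:
    """Return the identifier types detected in one training record."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

def assert_safe_for_finetuning(records: list[str]) -> None:
    """Refuse to proceed if any record appears to contain Indian PII."""
    for i, rec in enumerate(records):
        hits = scan_training_record(rec)
        if hits:
            raise ValueError(f"record {i} contains possible PII: {hits}")
```

Running this gate in CI against the training JSONL, before any call to the fine-tuning API, keeps personal data out of the replication path entirely.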

The safest approach: deploy CrewCheck as a gateway before Azure OpenAI. All personal data is redacted before it reaches Azure — you get the benefit of Azure's India compute without the risk of personal data being processed in Microsoft's global pipeline.

The DPDP-Compliant Azure OpenAI Stack

Recommended architecture: Application → CrewCheck Gateway (India-hosted) → Azure OpenAI (India region). The CrewCheck gateway:

(1) Redacts Indian PII from prompts,
(2) Forwards the sanitised prompt to the Azure OpenAI API,
(3) Scans the response for PII regeneration,
(4) Logs the audit event.

Azure OpenAI receives zero personal data — only sanitised prompts with placeholder tokens like [AADHAAR_1] or [CUSTOMER_NAME].
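The redaction and response-scan steps of such a gateway can be sketched in a few lines. This is a minimal illustration under stated assumptions — the placeholder format and the two patterns mirror the examples above but are not CrewCheck's actual implementation.

```python
import re

# Illustrative detector set; a production gateway covers far more types.
PATTERNS = {
    "AADHAAR": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}\b"),
    "PAN": re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),
}

def redact(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace detected identifiers with [TYPE_N] tokens, keeping a
    token -> original mapping on the gateway side only."""
    mapping: dict[str, str] = {}
    counters = {name: 0 for name in PATTERNS}

    def make_repl(name):
        def repl(match):
            counters[name] += 1
            token = f"[{name}_{counters[name]}]"
            mapping[token] = match.group(0)
            return token
        return repl

    for name, pat in PATTERNS.items():
        prompt = pat.sub(make_repl(name), prompt)
    return prompt, mapping

def scan_response(response: str) -> list[str]:
    """Flag any raw identifiers the model regenerated in its output."""
    return [name for name, pat in PATTERNS.items() if pat.search(response)]
```

The mapping never leaves the gateway: only the sanitised prompt goes to Azure OpenAI, and the response is rescanned before it is returned to the application.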

This architecture satisfies: DPDP Section 16 (no personal data crosses to third-party international processing), Section 8(3) (data minimisation — LLM receives minimum necessary context), and Section 8(5) (reasonable security safeguard at the gateway layer).

Compliance operational checklist

Treat this guidance as an operating control, not only a reference article. The minimum checklist is a data inventory, a stated processing purpose, owner approval, PII detection at the AI boundary, redaction or tokenisation where possible, retention limits, vendor transfer records, and a tested user-rights workflow. This checklist gives engineering and compliance teams a shared language for deciding what must be blocked, what can be allowed in shadow mode, and what needs human review before production release.

For AI systems, the review should include prompts, retrieved context, tool call arguments, model responses, logs, traces, analytics events, exports, and support attachments. Many incidents happen because teams scan only the visible form field while sensitive data moves through background context or observability tooling. CrewCheck's recommended pattern is to place the scanner at the request boundary, record the policy version, and keep audit evidence that shows which identifiers were detected and what action was taken.

A practical rollout starts with representative samples from production-like traffic. Run a DPDP scan, sort findings by identifier sensitivity and blast radius, fix Aadhaar, PAN, financial, health, children's, and precise-location exposure first, then move to consent wording, retention, deletion, and vendor review. Use shadow mode when false positives could disrupt users, and promote to enforcement only after the exceptions have owners and expiry dates.
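The shadow-to-enforcement promotion described above amounts to a small policy decision. The sketch below illustrates the idea; the identifier tiers, mode names, and `Finding` shape are assumptions for illustration, not CrewCheck's configuration schema.

```python
from dataclasses import dataclass

# Assumed high-sensitivity tier, following the fix-first order above.
HIGH_SENSITIVITY = {"AADHAAR", "PAN", "FINANCIAL", "HEALTH",
                    "CHILD", "PRECISE_LOCATION"}

@dataclass
class Finding:
    identifier: str      # e.g. "AADHAAR"
    policy_version: str  # recorded so audit evidence ties back to a policy

def decide(finding: Finding, mode: str) -> str:
    """In shadow mode every finding is only logged; in enforce mode
    high-sensitivity identifiers are blocked and the rest go to review."""
    if mode == "shadow":
        return "log"
    if finding.identifier in HIGH_SENSITIVITY:
        return "block"
    return "review"
```

Recording the policy version on each finding is what lets an auditor reconstruct why a given request was logged rather than blocked at the time it happened.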

This page is educational and should be paired with legal review for final policy interpretation. The operational proof should still come from repeatable evidence: scanner results, audit exports, pull-request checks, policy configuration, and a documented owner for the workflow. That combination is what makes the content useful during buyer diligence, board review, regulatory questions, or an incident investigation.

Tags: Azure OpenAI, DPDP, data residency, Microsoft Azure, compliance

Check your own workflow

Run a free DPDP scan before this risk reaches production.

Scan prompts, logs, documents, and API payloads for Indian PII exposure, missing redaction, and audit gaps.