Enterprise AI Security Audit Checklist for DPDP Compliance
A complete security audit checklist for enterprise teams deploying AI in India — covering DPDP Act requirements, LLM security, PII controls, and incident response.
Pre-Audit: Define Your AI Asset Inventory
Before auditing anything, enumerate every AI system in your organisation: which LLMs are used (internal, third-party API), which teams use them, what data they process, and who has API key access. It is common for enterprises to uncover shadow AI deployments during this step — engineers who have wired up OpenAI directly without going through the approved gateway.
Document for each AI system: data classification level, data residency, retention policy, access controls, and the business purpose. This inventory is your audit baseline.
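The inventory fields above can be sketched as a simple record type. This is a minimal illustration — the field names and example values are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIAssetRecord:
    """One row in the AI asset inventory (field names are illustrative)."""
    system_name: str            # e.g. "support-chatbot"
    model: str                  # internal model or third-party API
    owning_team: str
    business_purpose: str       # the stated purpose — blank means unapproved
    data_classification: str    # e.g. "restricted", "internal", "public"
    data_residency: str         # e.g. "ap-south-1 (Mumbai)"
    retention_days: int
    key_holders: list = field(default_factory=list)

inventory = [
    AIAssetRecord("support-chatbot", "third-party API", "CX", "ticket triage",
                  "restricted", "ap-south-1 (Mumbai)", 90, ["svc-cx-bot"]),
]

# The inventory is the audit baseline: flag anything without a stated purpose.
unowned = [a.system_name for a in inventory if not a.business_purpose]
```

Keeping this as structured data (rather than a wiki page) lets later checkpoints query it — for example, listing every system whose residency is outside India.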
Checkpoint 1: PII Handling at the AI Layer
❑ All LLM API calls routed through a PII-scanning gateway.
❑ Indian PII patterns configured: Aadhaar (with Verhoeff validation), PAN, UPI ID, IFSC, Voter ID, Passport, mobile numbers, email.
❑ PII redaction applied to prompts before transmission to third-party LLM APIs.
❑ LLM responses scanned for PII regeneration (models can reproduce training data).
❑ Redaction audit log retained for a minimum of 2 years.
❑ Fine-tuning datasets audited for PII — a commonly missed surface.
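The Verhoeff validation mentioned above is what separates a real Aadhaar number from any random 12-digit string, which sharply reduces false positives in a PII scanner. A sketch of the standard Verhoeff checksum as applied to a 12-digit candidate:

```python
# Verhoeff checksum tables (the standard algorithm behind Aadhaar's check digit).
_d = [
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [1, 2, 3, 4, 0, 6, 7, 8, 9, 5],
    [2, 3, 4, 0, 1, 7, 8, 9, 5, 6], [3, 4, 0, 1, 2, 8, 9, 5, 6, 7],
    [4, 0, 1, 2, 3, 9, 5, 6, 7, 8], [5, 9, 8, 7, 6, 0, 4, 3, 2, 1],
    [6, 5, 9, 8, 7, 1, 0, 4, 3, 2], [7, 6, 5, 9, 8, 2, 1, 0, 4, 3],
    [8, 7, 6, 5, 9, 3, 2, 1, 0, 4], [9, 8, 7, 6, 5, 4, 3, 2, 1, 0],
]
_p = [
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [1, 5, 7, 6, 2, 8, 3, 0, 9, 4],
    [5, 8, 0, 3, 7, 9, 6, 1, 4, 2], [8, 9, 1, 6, 0, 4, 3, 5, 2, 7],
    [9, 4, 5, 3, 1, 2, 6, 8, 7, 0], [4, 2, 8, 6, 5, 7, 3, 9, 0, 1],
    [2, 7, 9, 3, 8, 0, 6, 4, 1, 5], [7, 0, 4, 6, 9, 1, 3, 2, 5, 8],
]
_inv = [0, 4, 3, 2, 1, 5, 6, 7, 8, 9]

def verhoeff_check_digit(number: str) -> str:
    """Compute the Verhoeff check digit for a digit string."""
    c = 0
    for i, ch in enumerate(reversed(number), start=1):
        c = _d[c][_p[i % 8][int(ch)]]
    return str(_inv[c])

def is_valid_aadhaar(number: str) -> bool:
    """True if `number` is 12 digits and its Verhoeff checksum validates."""
    if len(number) != 12 or not number.isdigit():
        return False
    c = 0
    for i, ch in enumerate(reversed(number)):
        c = _d[c][_p[i % 8][int(ch)]]
    return c == 0

# verhoeff_check_digit("236") == "3"
```

A scanner that pairs the 12-digit regex with this checksum will reject most order IDs and phone-number fragments that a pattern-only match would flag.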
Checkpoint 2: Consent and Purpose Limitation
❑ Consent records stored with: user ID, timestamp, purpose, data type, version of privacy notice, and expiry.
❑ Every AI use case mapped to a legal basis under DPDP Section 7.
❑ Purpose limitation enforced technically — data collected for Purpose A cannot be used by an AI trained for Purpose B without separate consent.
❑ Consent withdrawal mechanism tested end-to-end, including downstream LLM data removal.
❑ Children's data processing disabled or requiring verifiable parental consent.
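The consent record fields and the purpose-limitation check above can be sketched together. The field names and the `may_process` gate are illustrative assumptions, not a prescribed DPDP schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentRecord:
    """Consent evidence with the fields the checklist requires."""
    user_id: str
    granted_at: datetime
    purpose: str          # the purpose the user actually consented to
    data_type: str
    notice_version: str   # version of the privacy notice shown
    expires_at: datetime

def may_process(record: ConsentRecord, purpose: str, now: datetime) -> bool:
    """Purpose limitation, enforced technically: consent granted for
    Purpose A never covers Purpose B, and expired consent never applies."""
    return record.purpose == purpose and now < record.expires_at
```

The important design choice is that the check compares against the purpose string stored at grant time, so reusing support-ticket data to train a model fails closed unless a separate consent record exists.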
Checkpoint 3: Access Controls and Secrets Management
❑ LLM API keys stored in a secrets manager (AWS Secrets Manager, Vault, GCP Secret Manager), not in .env files or code repositories.
❑ API key rotation schedule defined and tested.
❑ Principle of least privilege: production LLM keys not accessible from development environments.
❑ Gateway admin access requires MFA and is logged.
❑ Service accounts for LLM access have defined scopes and are reviewed quarterly.
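A pattern that supports the rotation item above is fetching keys from the secrets manager with a short-lived in-memory cache, so a rotated key is picked up without a redeploy. This sketch keeps the fetch function pluggable (in production it would wrap your secrets-manager SDK call, such as AWS Secrets Manager's `GetSecretValue`); the class and TTL are illustrative assumptions:

```python
import time
from typing import Callable

class SecretCache:
    """Cache a secret in memory with a TTL so rotated API keys are
    picked up automatically. `fetch` wraps the secrets-manager SDK
    call — never read the key from a .env file or the repository."""
    def __init__(self, fetch: Callable[[], str], ttl_seconds: float = 300):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._value = None
        self._fetched_at = 0.0

    def get(self) -> str:
        now = time.monotonic()
        if self._value is None or now - self._fetched_at > self._ttl:
            self._value = self._fetch()
            self._fetched_at = now
        return self._value
```

Because the fetcher is injected, the rotation behaviour itself can be exercised in tests, which is what "rotation schedule defined and tested" asks for.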
Checkpoint 4: Data Residency and Cross-Border Transfers
❑ LLM API endpoints mapped to data residency — OpenAI processes data in the US by default; confirm whether this is permissible under DPDP Section 16 for your data categories.
❑ Restricted data categories (health, financial, biometric) explicitly prohibited from leaving India unless an adequately protected destination is confirmed.
❑ Processor agreements (DPA equivalents) in place with all third-party LLM vendors.
❑ Internal AI systems hosted in Indian data centres where required.
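The endpoint-to-residency mapping and the restricted-category prohibition above can be enforced as a policy gate at the gateway. The region mapping and category names here are illustrative assumptions — verify actual processing regions against each vendor's documentation and your DPA, and treat this as a policy sketch, not legal advice:

```python
# Illustrative mapping of vendor endpoints to processing regions.
ENDPOINT_REGION = {
    "api.openai.com": "US",
    "internal-llm.example.in": "IN",   # hypothetical in-country deployment
}

RESTRICTED_CATEGORIES = {"health", "financial", "biometric"}

def transfer_allowed(endpoint: str, data_categories: set) -> bool:
    """Block restricted categories from leaving India; block everything
    to endpoints whose processing region is unmapped (fail closed)."""
    region = ENDPOINT_REGION.get(endpoint, "UNKNOWN")
    if data_categories & RESTRICTED_CATEGORIES:
        return region == "IN"
    return region != "UNKNOWN"
```

Failing closed on unmapped endpoints is deliberate: a shadow deployment found during the inventory step would be blocked by default until someone records its residency.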
Checkpoint 5: Incident Response and Breach Notification
❑ AI-specific incident response runbook defined: what constitutes a breach, who is notified, within what timeframe.
❑ DPDP Section 25 72-hour breach notification procedure documented and rehearsed.
❑ Breach detection: anomaly alerts on unusual PII query volumes, model extraction attempts, and prompt injection patterns.
❑ Post-incident: procedure for notifying affected data principals (Section 25(3)) and the Data Protection Board.
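The PII-volume anomaly alert above can be sketched as a sliding-window alarm over the gateway's detection events. The window size and threshold here are illustrative assumptions — real deployments would tune these against a measured baseline:

```python
from collections import deque

class PIIVolumeAlarm:
    """Fire when PII detections in a sliding window of recent requests
    exceed a threshold — a minimal stand-in for the anomaly alerting
    the checklist requires (thresholds are illustrative)."""
    def __init__(self, window: int = 100, threshold: int = 10):
        self._events = deque(maxlen=window)  # True = PII found in a request
        self._threshold = threshold

    def record(self, pii_found: bool) -> bool:
        """Record one request's scan result; return True if alarming."""
        self._events.append(pii_found)
        return sum(self._events) >= self._threshold
```

A spike in this alarm is exactly the kind of signal that should start the 72-hour clock in the rehearsed runbook rather than sit unreviewed in a dashboard.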
Checkpoint 6: Vendor and Third-Party Risk
❑ All LLM API vendors assessed for DPDP compliance: do they process Indian personal data? Do they use it for training?
❑ Sub-processor list documented and shared with your DPO.
❑ LLM training opt-out configured where available (OpenAI Enterprise, Azure OpenAI).
❑ Right-to-erasure contractually enforceable with third-party vendors — confirm you can request deletion of personal data from training sets.
Compliance operational checklist
This checklist should be treated as an operating control, not only as a reference article. The minimum control set is: a data inventory, a stated processing purpose, owner approval, PII detection at the AI boundary, redaction or tokenisation where possible, retention limits, vendor transfer records, and a tested user-rights workflow. Together these give engineering and compliance teams a shared language for deciding what must be blocked, what can be allowed in shadow mode, and what needs human review before production release.
For AI systems, the review should include prompts, retrieved context, tool call arguments, model responses, logs, traces, analytics events, exports, and support attachments. Many incidents happen because teams scan only the visible form field while sensitive data moves through background context or observability tooling. CrewCheck's recommended pattern is to place the scanner at the request boundary, record the policy version, and keep audit evidence that shows which identifiers were detected and what action was taken.
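The boundary-scanner pattern described above — scan at the request boundary, record the policy version, keep evidence of what was detected and what was done — can be sketched as follows. The two regex patterns and the `POLICY_VERSION` identifier are illustrative assumptions; a production scanner would cover the full Indian identifier set with validation:

```python
import re
from datetime import datetime, timezone

POLICY_VERSION = "dpdp-scan-v1"  # illustrative policy identifier

# Illustrative patterns for two Indian identifiers (PAN and IFSC).
PATTERNS = {
    "PAN": re.compile(r"\b[A-Z]{5}[0-9]{4}[A-Z]\b"),
    "IFSC": re.compile(r"\b[A-Z]{4}0[A-Z0-9]{6}\b"),
}

def scan_at_boundary(payload: str) -> dict:
    """Scan one request payload (prompt, context, tool args, or log line),
    redact matches, and return audit evidence: which identifier types
    were found, under which policy version, and what action was taken."""
    findings, redacted = [], payload
    for name, pattern in PATTERNS.items():
        if pattern.search(redacted):
            findings.append(name)
            redacted = pattern.sub(f"[{name}-REDACTED]", redacted)
    return {
        "policy_version": POLICY_VERSION,
        "scanned_at": datetime.now(timezone.utc).isoformat(),
        "identifiers_detected": findings,
        "action": "redact" if findings else "allow",
        "redacted_payload": redacted,
    }
```

Recording the policy version with every decision is what makes the evidence usable later: an auditor can reconstruct which rules were in force when a given payload passed through.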
A practical rollout starts with representative samples from production-like traffic. Run a DPDP scan, sort findings by identifier sensitivity and blast radius, fix Aadhaar, PAN, financial, health, children's, and precise-location exposure first, then move to consent wording, retention, deletion, and vendor review. Use shadow mode when false positives could disrupt users, and promote to enforcement only after the exceptions have owners and expiry dates.
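The triage ordering above — highest-sensitivity identifiers first, then blast radius — is simple enough to encode directly. The severity tiers here are illustrative assumptions mirroring the rollout guidance, not a regulatory ranking:

```python
# Illustrative severity tiers: lower number = fix first.
SEVERITY = {
    "aadhaar": 0, "health": 0, "children": 0, "precise_location": 0,
    "pan": 1, "financial": 1,
    "email": 2, "mobile": 2,
}

def triage(findings: list[dict]) -> list[dict]:
    """Sort scan findings by identifier sensitivity, then by blast
    radius (records exposed, descending), most urgent first."""
    return sorted(findings,
                  key=lambda f: (SEVERITY.get(f["type"], 3), -f["records"]))
```

For example, an Aadhaar exposure affecting 12 records outranks an email exposure affecting 5,000, which matches the fix-order the rollout guidance prescribes.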
This page is educational and should be paired with legal review for final policy interpretation. The operational proof should still come from repeatable evidence: scanner results, audit exports, pull-request checks, policy configuration, and a documented owner for the workflow. That combination is what makes the content useful during buyer diligence, board review, regulatory questions, or an incident investigation.
Check your own workflow
Run a free DPDP scan before this risk reaches production.
Scan prompts, logs, documents, and API payloads for Indian PII exposure, missing redaction, and audit gaps.