Consent Management under DPDP: Technical Implementation Guide
How to implement DPDP-compliant consent management for AI products. Consent collection, storage, withdrawal, and linkage to AI processing pipelines explained.
What DPDP requires in a consent notice
Section 5(1) of the DPDP Act requires that a notice be given to the data principal before or at the time of requesting consent to process personal data. Section 6(3) requires that every request for consent be presented in clear and plain language, with the option to access it in English or any language listed in the Eighth Schedule of the Constitution. For AI products, the notice must itemise the personal data being collected, each purpose for which it will be processed, and the manner in which consent can be withdrawn.
For AI-specific processing, the notice must be more granular than a general privacy policy. If your product uses an LLM for customer support, the notice should state: 'Your support conversation messages will be processed by an AI system and sent to [provider name] for generating responses. We apply PII redaction before sending — Aadhaar numbers, PAN, and phone numbers are removed or masked. You can opt out of AI-assisted support and receive human-only responses at any time.' Generic language like 'we may use your data to improve our services' does not meet the Section 6 standard.
The notice must be presented before processing begins. For a chatbot feature, obtain consent before the user's first message is sent to the LLM, not after. If your product currently sends messages to an LLM before displaying a consent notice, that violates Sections 5 and 6 and needs architectural remediation.
Consent database schema design
Each consent record should capture: user identifier, consent version, consent timestamp, specific processing purpose consented to, whether the user affirmatively opted in (as opposed to failed to opt out), the channel through which consent was given (in-app toggle, email link, etc.), and an expiry or review date. For AI-specific consents, add fields for: the AI feature or agent covered, the model provider name, and whether PII redaction is applied before data reaches the provider.
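The record shape above can be sketched as an immutable Python dataclass. This is a minimal illustration, not a prescribed schema; all field names (and the `ai_support_chat`-style purpose strings used elsewhere in this guide's examples) are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional
import uuid

@dataclass(frozen=True)  # frozen: records are never mutated in place
class ConsentRecord:
    """One consent event. Field names are illustrative."""
    user_id: str
    purpose: str                  # specific processing purpose, e.g. "ai_support_chat"
    consent_version: str          # version of the notice text the user saw
    opted_in: bool                # True only for an affirmative opt-in
    channel: str                  # "in_app_toggle", "email_link", ...
    # AI-specific fields
    ai_feature: Optional[str] = None       # feature or agent covered
    model_provider: Optional[str] = None   # e.g. the LLM vendor's name
    pii_redaction: bool = False            # redaction applied before the provider?
    granted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    expires_at: Optional[datetime] = None  # expiry / review date, if any
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
```

In a relational store this maps naturally to one row per consent event, with `record_id` as the primary key.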
Store consent records separately from the application database, in a dedicated consent store that is append-only (never modify a consent record — instead, create a new record when consent status changes). This append-only design creates an immutable audit trail of consent history. When a user withdraws consent, add a withdrawal record; do not delete the original consent record. You need the full history to demonstrate to the Data Protection Board that processing was lawful at the time it occurred.
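An in-memory sketch of the append-only pattern, assuming a simple dict-per-event shape: status changes append new events, the latest event for a (user, purpose) pair wins, and the full history stays queryable.

```python
class AppendOnlyConsentStore:
    """Append-only consent store sketch. Records are never updated
    or deleted; a withdrawal is just another appended event."""

    def __init__(self):
        self._events = []  # full history, in insertion order

    def append(self, user_id, purpose, event, version):
        # event is "granted" or "withdrawn" -- never modified in place
        self._events.append({"user_id": user_id, "purpose": purpose,
                             "event": event, "version": version})

    def history(self, user_id):
        """Everything needed to show processing was lawful at the time."""
        return [e for e in self._events if e["user_id"] == user_id]

    def current_status(self, user_id, purpose):
        # Latest event for this (user, purpose) pair wins.
        for e in reversed(self._events):
            if e["user_id"] == user_id and e["purpose"] == purpose:
                return e["event"]
        return None  # no consent ever recorded
```

A production version would back this with an insert-only table or event log, but the invariant is the same: withdrawal adds a record rather than deleting one.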
Index your consent store by both user identifier and processing purpose. When a request comes in to process personal data, your system must be able to quickly look up: does this user have a current, unrevoked consent for this specific processing purpose? This lookup must happen before the data is sent to any downstream system. For high-volume AI products, this consent check needs to be cached and low-latency — a 10ms consent lookup on every LLM request is acceptable; a 200ms lookup is not.
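One way to keep the per-request check in the low-millisecond range is a short-TTL in-process cache in front of the store, invalidated by the withdrawal handler. A sketch, assuming `lookup` is any callable that answers the (user, purpose) question against the real store:

```python
import time

class CachedConsentChecker:
    """Short-TTL cache in front of the consent store so the hot-path
    check is an in-memory dict hit, not a store round-trip."""

    def __init__(self, lookup, ttl_seconds=30.0):
        self._lookup = lookup        # callable: (user_id, purpose) -> bool
        self._ttl = ttl_seconds
        self._cache = {}             # (user_id, purpose) -> (allowed, cached_at)

    def has_consent(self, user_id, purpose):
        key = (user_id, purpose)
        hit = self._cache.get(key)
        if hit is not None and time.monotonic() - hit[1] < self._ttl:
            return hit[0]            # cache hit: no store round-trip
        allowed = self._lookup(user_id, purpose)
        self._cache[key] = (allowed, time.monotonic())
        return allowed

    def invalidate(self, user_id, purpose):
        # Call this from the withdrawal handler so a stale "granted"
        # is never served after the user withdraws.
        self._cache.pop((user_id, purpose), None)
```

The TTL bounds staleness for nodes that miss an invalidation; in a multi-node deployment the invalidation would fan out over a pub/sub channel.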
Implementing consent withdrawal
Consent withdrawal must be as easy as consent provision. If your user clicked a toggle in the app to enable AI-powered features, withdrawal must be achievable by clicking the same toggle. Do not require a support email, a ticket, or a 10-day wait for withdrawal to take effect. Section 6(4) is explicit: the ease of withdrawal must be comparable to the ease with which consent was given; it is a right the data principal exercises, not a process the fiduciary controls.
When a withdrawal event is recorded, your system must immediately stop routing that user's data through the AI pipeline. If you have a message queue or event stream (Kafka, SQS, etc.) that buffers AI processing requests, you need to either: (a) check consent status at the point of AI processing (not just request ingestion), or (b) process withdrawal events as high-priority messages that purge pending requests for that user. Option (a) is generally more reliable.
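Option (a) can be sketched as a queue worker that re-checks consent at dequeue time, so a withdrawal that landed while a message sat in the buffer is honoured. `has_consent` and `call_llm` are injected stand-ins for your consent checker and LLM client; all names are illustrative.

```python
def process_ai_queue(messages, has_consent, call_llm):
    """Worker-side consent check: verify at the point of AI processing,
    not only at request ingestion."""
    results = []
    for msg in messages:
        if not has_consent(msg["user_id"], msg["purpose"]):
            # Consent missing or withdrawn while queued: drop the work,
            # never forward the payload to the LLM.
            results.append({"user_id": msg["user_id"],
                            "status": "dropped_no_consent"})
            continue
        results.append({"user_id": msg["user_id"],
                        "status": "processed",
                        "reply": call_llm(msg["text"])})
    return results
```

The same check-at-consumption pattern applies whether the buffer is Kafka, SQS, or an internal job table.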
After withdrawal, the user must still receive the core service if it is available without AI processing. If your support product has a non-AI fallback, route the user there. If your product is inherently AI-powered and cannot function without processing personal data through an LLM, consent withdrawal effectively means the user cannot use that feature — this is acceptable, but you should clearly communicate this in the consent notice.
Linking consent to AI processing events
Every AI processing event that involves personal data must be linked to the consent record that authorises it. In your audit trail, each record should include: the consent record ID (or a hash of it), the consent version, and the specific processing purpose being executed. This linkage allows you to: (a) prove to regulators that processing was consented to, and (b) quickly identify all AI processing events that were authorised by a consent record that has since been withdrawn, so you know what to delete or quarantine.
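A minimal shape for that linkage, assuming hypothetical field names: the audit entry carries a hash of the consent record ID (so raw IDs are not copied into every log line), the consent version, and the purpose being executed.

```python
import hashlib
from datetime import datetime, timezone

def make_audit_event(consent_record_id, consent_version, purpose, provider):
    """Build the audit-trail entry linking one AI processing event to
    the consent record that authorises it. Field names are illustrative."""
    return {
        "occurred_at": datetime.now(timezone.utc).isoformat(),
        # Hash rather than raw ID: still joinable against the consent
        # store, without duplicating identifiers across log systems.
        "consent_ref": hashlib.sha256(
            consent_record_id.encode("utf-8")).hexdigest(),
        "consent_version": consent_version,
        "purpose": purpose,
        "provider": provider,
    }
```

After a withdrawal, selecting audit events by `consent_ref` gives the set of processing events to delete or quarantine.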
Implement consent-aware routing in your AI gateway. Before forwarding a request to an LLM provider, the gateway should verify: is there a current, unrevoked consent record for this user for this processing purpose? If no consent record exists or it has been withdrawn, block the request and return an appropriate response to the application. Log both the consent check and the block as distinct audit events.
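The gateway check-then-forward logic can be sketched as below, with the consent checker, the LLM forwarder, and the audit sink injected as callables; the 403/`consent_required` response shape is an assumption, not a prescribed API.

```python
def gateway_forward(request, has_consent, forward_to_llm, audit_log):
    """Consent-aware routing: verify consent before forwarding to the
    LLM provider; log the check and any block as distinct audit events."""
    user, purpose = request["user_id"], request["purpose"]
    allowed = has_consent(user, purpose)
    audit_log.append({"event": "consent_check", "user_id": user,
                      "purpose": purpose, "result": allowed})
    if not allowed:
        audit_log.append({"event": "request_blocked", "user_id": user,
                          "purpose": purpose})
        return {"status": 403, "body": "consent_required"}
    return {"status": 200, "body": forward_to_llm(request)}
```

Note the two separate audit entries on the block path: the check and the block are distinct events, which makes "how often do we block, and why" answerable later.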
When the Data Protection Rules prescribe specific consent notice formats and retention periods for consent records, you will need to update your implementation. Design your consent management system to be configuration-driven — the notice text, the purposes list, the retention periods — so these can be updated without code changes when Rules are published.
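Configuration-driven here can mean as little as keeping notice text, purposes, and retention periods in a data file the code reads at startup. A sketch with entirely illustrative values (the retention periods below are placeholders, not figures from the Act or Rules):

```python
import json

# Illustrative config only: notice text, purposes, and retention live
# in data, not code, so they can change when the Rules are published.
CONSENT_CONFIG = json.loads("""
{
  "notice_version": "2025-01-v1",
  "consent_record_retention_days": 2555,
  "purposes": [
    {
      "id": "ai_support_chat",
      "notice_text": "Your support messages will be processed by an AI system.",
      "retention_days": 365
    }
  ]
}
""")

def purpose_config(config, purpose_id):
    """Look up one purpose's notice text and retention period by ID."""
    for p in config["purposes"]:
        if p["id"] == purpose_id:
            return p
    raise KeyError(f"unknown purpose: {purpose_id}")
```

In practice this would load from a versioned file or config service rather than an inline string, so a Rules-driven update is a config change plus review, not a deploy of new logic.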
DPDP Act pillar implementation addendum
A pillar page should also connect the legal idea to a concrete implementation path. Start with ownership: name the product owner, engineering owner, security reviewer, and compliance reviewer for this topic. Then map the systems that can create, store, transform, or transmit the relevant personal data. The map should include frontend forms, backend APIs, queues, warehouses, LLM prompts, embedding stores, admin exports, vendor dashboards, and customer-success tooling.
Next, document the lawful purpose and the user-facing notice. The notice should be clear enough that a data principal understands what is processed, why AI may be involved, what categories of personal data are affected, and how consent or withdrawal works. If the workflow supports children, healthcare, financial services, employment, or government delivery, treat that context as higher risk and add stricter review before allowing personal data into model calls.
The engineering control should run before data leaves the application boundary. Scan the full prompt package, not just the user's message. That means system instructions, retrieved snippets, tool outputs, attachments, OCR text, chat history, and structured JSON all need inspection. When a high-confidence identifier is found, redact, tokenise, block, or route to a safer model depending on the policy. Keep the original sensitive value out of general logs unless a protected exception is approved.
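A toy version of scanning the full prompt package is sketched below. The regexes are deliberately simple illustrations; production detection needs validation (for instance, the Verhoeff checksum for Aadhaar numbers) and broader format coverage, and the part names in the package dict are assumptions.

```python
import re

# Illustrative patterns only -- not production-grade detection.
PATTERNS = {
    "aadhaar": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}\b"),   # 12 digits, often 4-4-4
    "pan":     re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),          # PAN format AAAAA9999A
    "phone":   re.compile(r"\b(?:\+91[ -]?)?[6-9]\d{9}\b"),    # Indian mobile numbers
}

def scan_prompt_package(package):
    """Scan every part of the outbound prompt -- system instructions,
    retrieved snippets, history, tool output -- not just the user message."""
    findings = []
    for part_name, text in package.items():
        for pii_type, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                findings.append({"part": part_name, "type": pii_type,
                                 "value": match.group()})
    return findings

def redact(package):
    """Replace detected identifiers before the package leaves the boundary."""
    redacted = {}
    for part_name, text in package.items():
        for pii_type, pattern in PATTERNS.items():
            text = pattern.sub(f"[{pii_type.upper()}_REDACTED]", text)
        redacted[part_name] = text
    return redacted
```

Per the policy point above, the raw matched values in `findings` belong in a protected store, not in general application logs.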
Audit evidence should be designed for reconstruction. A reviewer should be able to answer: when did the request happen, which application sent it, which data type was detected, which rule fired, what action was taken, which provider received the final payload, and who approved any exception. Without that trail, teams are left with policy claims rather than proof. With it, they can respond faster to buyer diligence, internal audits, breach triage, and regulator questions.
Finally, make the process repeatable. Add sample payloads to tests, run scheduled scans against logs and representative documents, check sitemap and page health for public guidance, and keep the DPDP scanner linked from the page so readers can move from learning to action. The goal is not to freeze the system; it is to make every future AI workflow easier to review, safer to launch, and easier to explain.
Check your own workflow
Run a free DPDP scan before this risk reaches production.
Scan prompts, logs, documents, and API payloads for Indian PII exposure, missing redaction, and audit gaps.