DPDP Act Penalties & Fines: ₹10,000 to ₹250 Crore Breakdown
Detailed breakdown of all DPDP Act 2023 penalties, from the ₹10,000 floor to the ₹250 crore cap. Understand which violations trigger which fines and how to calculate your risk exposure.
The penalty schedule in full
The Schedule to the DPDP Act, read with Section 33, creates a tiered penalty structure. The highest penalty, up to ₹250 crore, applies when a data fiduciary fails to take reasonable security safeguards to prevent a personal data breach (Section 8(5)). The second tier, up to ₹200 crore, covers failure to notify the Data Protection Board and affected data principals of a breach (Section 8(6)) and failure to fulfil the obligations relating to children's data under Section 9.
Lower tiers cover the remaining obligations: breaching the additional obligations of a Significant Data Fiduciary under Section 10 attracts up to ₹150 crore, while breaching any other provision of the Act, which includes failing to fulfil data principal rights and a data processor failing to comply with fiduciary instructions, attracts up to ₹50 crore. A data principal who breaches the duties in Section 15 faces up to ₹10,000. In determining the amount, Section 33(2) directs the Board to consider the nature, gravity, and duration of the breach, the type and nature of the personal data affected, whether the breach is repetitive, whether the violator realised a gain or avoided a loss, and the timeliness and effectiveness of any mitigating action.
These penalties are not automatically imposed — the Data Protection Board follows an inquiry process that includes a notice to the respondent, an opportunity to be heard, and a reasoned order. However, the Board can impose interim measures during the inquiry, including directing the fiduciary to cease processing. For a SaaS company, a 'cease processing' direction while an inquiry is ongoing could be existential.
How to calculate your actual risk exposure
Penalty exposure is a function of violation type, scale, deliberateness, and mitigating factors. For a startup processing 100,000 user records with inadequate security controls that result in a breach, the maximum statutory penalty is ₹250 crore — but the Board would likely impose a smaller penalty for a first violation by a smaller company, particularly if the company cooperated fully, notified promptly, and remediated quickly.
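For internal planning, these factors can be folded into a rough spreadsheet-style estimate. The statutory caps below come from the Schedule to the Act; the multipliers are invented for this sketch, mirror the Section 33(2) considerations only loosely, and carry no legal weight.

```python
# Illustrative risk-exposure heuristic, NOT a legal calculation. The caps are
# the Schedule's statutory maxima in crore rupees; the multipliers are
# assumptions made up for planning purposes only.
STATUTORY_CAP_CRORE = {
    "security_safeguards": 250,   # Section 8(5)
    "breach_notification": 200,   # Section 8(6)
    "children_data": 200,         # Section 9
    "sdf_obligations": 150,       # Section 10
    "other": 50,                  # residual tier
}

def exposure_estimate(violation: str, scale_factor: float,
                      deliberate: bool, cooperated: bool) -> float:
    """Rough upper-bound planning figure in crore rupees."""
    base = STATUTORY_CAP_CRORE[violation] * min(max(scale_factor, 0.0), 1.0)
    base *= 1.0 if deliberate else 0.4   # inadvertent first breaches land lower
    base *= 0.7 if cooperated else 1.0   # prompt notification and remediation mitigate
    return round(base, 1)

# A small, inadvertent safeguards failure with full cooperation stays far
# below the statutory Rs 250 crore cap:
exposure_estimate("security_safeguards", 0.2, deliberate=False, cooperated=True)
# → 14.0
```

The point of the exercise is not the number itself but forcing the team to state, in writing, its assumptions about scale, deliberateness, and cooperation before an incident happens.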
Scale matters. If your AI system processes 10 million Indian users' personal data and performs no PII redaction before data reaches model providers, every API call that includes unredacted Aadhaar or PAN data is a potential violation. The Board can treat each affected individual as a separate instance of harm, but is more likely to treat similar violations arising from the same systemic failure as a single course of conduct, with a single penalty reflecting the aggregate harm.
Deliberateness is a significant aggravating factor. If your AI governance audit reveals that your engineering team knew the LLM was receiving unredacted Aadhaar numbers and chose not to implement redaction due to cost, the Board will likely treat this as deliberate rather than inadvertent. Document your remediation efforts and implement controls before regulatory scrutiny — retroactive compliance after a complaint is far less favourable than proactive compliance.
AI-specific violation scenarios and their penalties
Scenario 1: Your customer support LLM receives unredacted Aadhaar and PAN numbers in user messages and forwards them to OpenAI's API. The model provider logs these requests for 30 days for abuse monitoring. This constitutes a failure to implement adequate security safeguards (no PII redaction) and potentially a transfer to an unauthorised processor. Penalty exposure: up to ₹250 crore.
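A minimal version of the missing control in this scenario is a redaction pass before the provider call. The patterns below are deliberately simplified sketches; a production detector should validate Aadhaar candidates with the Verhoeff checksum and use surrounding context to cut false positives.

```python
import re

# Simplified patterns for illustration only: Aadhaar is 12 digits, often
# grouped 4-4-4; PAN is five letters, four digits, one letter.
PATTERNS = {
    "aadhaar": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}\b"),
    "pan": re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace detected identifiers with placeholders before any LLM call."""
    found = []
    for name, pattern in PATTERNS.items():
        if pattern.search(text):
            found.append(name)
            text = pattern.sub(f"[REDACTED_{name.upper()}]", text)
    return text, found

clean, detected = redact("My PAN is ABCDE1234F and Aadhaar 1234 5678 9012")
# detected → ["aadhaar", "pan"]; clean contains only placeholders
```

With this in place, the provider's 30-day abuse-monitoring logs hold placeholders rather than government identifiers, which changes the safeguards analysis materially.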
Scenario 2: Your EdTech platform uses an LLM to personalise learning content for students under 18 without implementing age verification or obtaining verifiable parental consent. Section 9 of the DPDP Act prohibits processing children's data without parental consent. Penalty exposure: up to ₹200 crore.
Scenario 3: A data breach exposes 50,000 users' AI interaction logs, including their health queries and financial situations. You take 10 days to notify the Data Protection Board instead of reporting within the 72-hour window contemplated by the draft DPDP Rules. You face penalties under Section 8(5) for the inadequate safeguards and Section 8(6) for the late notification; cumulative penalty exposure could reach ₹450 crore.
The enforcement timeline and what to expect
The Data Protection Board of India has not yet been constituted as of early 2026 — enforcement is tied to the Board's establishment and the publication of Rules under the Act. However, companies should not use this window as an excuse for non-compliance. The Act itself is in force, the obligations exist, and the Board once constituted can investigate historical violations.
MeitY is expected to publish the Data Protection Rules in 2025-26. These Rules will specify: the format and timeline for breach notifications, the criteria for designating Significant Data Fiduciaries, the list of trusted jurisdictions for cross-border transfers, and the specific consent notice requirements. Until Rules are published, the Act provides the framework and companies must make good-faith interpretations of the obligations.
Regulatory risk is not the only financial exposure. Data breaches can trigger class-action-style claims, customer churn, enterprise deal losses, and reputational damage that dwarf regulatory penalties. Indian enterprise customers are increasingly requiring DPDP compliance evidence as a condition of procurement. The cost of non-compliance is already real even before the Board begins enforcement.
DPDP Act operational checklist
This penalty breakdown should be treated as an operating control, not only as a reference article. The minimum checklist is a data inventory, a stated processing purpose, owner approval, PII detection at the AI boundary, redaction or tokenisation where possible, retention limits, vendor transfer records, and a tested user-rights workflow. This checklist gives engineering and compliance teams a shared language for deciding what must be blocked, what can be allowed in shadow mode, and what needs human review before production release.
For AI systems, the review should include prompts, retrieved context, tool call arguments, model responses, logs, traces, analytics events, exports, and support attachments. Many incidents happen because teams scan only the visible form field while sensitive data moves through background context or observability tooling. CrewCheck's recommended pattern is to place the scanner at the request boundary, record the policy version, and keep audit evidence that shows which identifiers were detected and what action was taken.
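The boundary pattern described above can be sketched as a thin wrapper: scan, record evidence including the policy version, then forward only the sanitised payload. The function names and record fields here are illustrative assumptions, not CrewCheck's actual API.

```python
import time
import uuid

POLICY_VERSION = "dpdp-policy-v3"  # hypothetical policy tag recorded per event

def guarded_call(payload: str, scanner, provider_call, audit_log: list) -> str:
    """Scan at the request boundary, keep audit evidence, forward sanitised text."""
    redacted, detections = scanner(payload)
    audit_log.append({
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "policy_version": POLICY_VERSION,
        "identifiers_detected": detections,   # e.g. ["pan"]
        "action": "redact" if detections else "allow",
    })
    return provider_call(redacted)            # provider never sees the raw PII

# Stand-ins for the real scanner and model call, to show the wiring:
def toy_scanner(text):
    if "ABCDE1234F" in text:
        return text.replace("ABCDE1234F", "[PAN]"), ["pan"]
    return text, []

def toy_provider(text):
    return f"echo: {text}"

log: list = []
guarded_call("PAN ABCDE1234F", toy_scanner, toy_provider, log)
```

Because the audit record is written before the provider call, evidence survives even if the downstream request fails, which matters during breach triage.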
A practical rollout starts with representative samples from production-like traffic. Run a DPDP scan, sort findings by identifier sensitivity and blast radius, fix Aadhaar, PAN, financial, health, children's, and precise-location exposure first, then move to consent wording, retention, deletion, and vendor review. Use shadow mode when false positives could disrupt users, and promote to enforcement only after the exceptions have owners and expiry dates.
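The shadow-to-enforcement promotion can be encoded as a small policy function, so flipping the mode is a configuration change rather than a code change. The modes, the confidence threshold, and the `Finding` shape are assumptions for this sketch.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    identifier: str   # e.g. "aadhaar", "pan"
    confidence: float # detector confidence in [0, 1]

def decide(findings: list[Finding], mode: str) -> str:
    """Shadow mode logs findings without disrupting traffic; enforce mode
    blocks only high-confidence detections, so tuned false positives pass."""
    if not findings:
        return "allow"
    if mode == "shadow":
        return "allow_and_log"
    high = [f for f in findings if f.confidence >= 0.9]
    return "block" if high else "allow_and_log"
```

Running both modes off the same function means the shadow-period logs predict exactly what enforcement will block, which is what makes the promotion decision defensible.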
This page is educational and should be paired with legal review for final policy interpretation. The operational proof should still come from repeatable evidence: scanner results, audit exports, pull-request checks, policy configuration, and a documented owner for the workflow. That combination is what makes the content useful during buyer diligence, board review, regulatory questions, or an incident investigation.
DPDP Act pillar implementation addendum
A pillar page should also connect the legal idea to a concrete implementation path. Start with ownership: name the product owner, engineering owner, security reviewer, and compliance reviewer for this topic. Then map the systems that can create, store, transform, or transmit the relevant personal data. The map should include frontend forms, backend APIs, queues, warehouses, LLM prompts, embedding stores, admin exports, vendor dashboards, and customer-success tooling.
Next, document the lawful purpose and the user-facing notice. The notice should be clear enough that a data principal understands what is processed, why AI may be involved, what categories of personal data are affected, and how consent or withdrawal works. If the workflow supports children, healthcare, financial services, employment, or government delivery, treat that context as higher risk and add stricter review before allowing personal data into model calls.
The engineering control should run before data leaves the application boundary. Scan the full prompt package, not just the user's message. That means system instructions, retrieved snippets, tool outputs, attachments, OCR text, chat history, and structured JSON all need inspection. When a high-confidence identifier is found, redact, tokenise, block, or route to a safer model depending on the policy. Keep the original sensitive value out of general logs unless a protected exception is approved.
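One way to make "scan the full prompt package" concrete is to treat the request as a bag of named parts and run the same detector over each, blocking the whole request if any part carries a high-risk identifier. The part names, the risk set, and the toy detector are illustrative assumptions.

```python
def scan_prompt_package(package: dict, detect) -> dict:
    """Run `detect` over every part of the request, not just the user message.

    `detect(text)` returns (sanitised_text, findings). If a high-risk finding
    appears anywhere, the whole request is blocked rather than partially sent.
    """
    HIGH_RISK = {"aadhaar", "pan", "health", "child", "precise_location"}
    sanitised, all_findings = {}, []
    for part in ("system", "user", "retrieved_context", "tool_outputs", "history"):
        if part in package:
            text, findings = detect(package[part])
            sanitised[part] = text
            all_findings.extend(findings)
    if HIGH_RISK & set(all_findings):
        return {"action": "block", "findings": all_findings}
    return {"action": "allow", "package": sanitised, "findings": all_findings}

# Toy detector: flags one hard-coded Aadhaar-like value for demonstration.
def toy_detect(text):
    if "9999 8888 7777" in text:
        return text.replace("9999 8888 7777", "[AADHAAR]"), ["aadhaar"]
    return text, []

result = scan_prompt_package(
    {"user": "hi", "retrieved_context": "Aadhaar 9999 8888 7777"}, toy_detect)
# result["action"] → "block"
```

Note that the sensitive value surfaced in retrieved context rather than the user's message, which is exactly the gap the paragraph above describes.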
Audit evidence should be designed for reconstruction. A reviewer should be able to answer: when did the request happen, which application sent it, which data type was detected, which rule fired, what action was taken, which provider received the final payload, and who approved any exception. Without that trail, teams are left with policy claims rather than proof. With it, they can respond faster to buyer diligence, internal audits, breach triage, and regulator questions.
Finally, make the process repeatable. Add sample payloads to tests, run scheduled scans against logs and representative documents, check sitemap and page health for public guidance, and keep the DPDP scanner linked from the page so readers can move from learning to action. The goal is not to freeze the system; it is to make every future AI workflow easier to review, safer to launch, and easier to explain.
Related pages
Learn hub, developer docs, pricing, and the DPDP scanner.
Check your own workflow
Run a free DPDP scan before this risk reaches production. Scan prompts, logs, documents, and API payloads for Indian PII exposure, missing redaction, and audit gaps.