Children's Data Protection under the DPDP Act: Complete Guide for EdTech

This guide covers DPDP Act Section 9 requirements for processing children's data: age verification, verifiable parental consent, prohibited processing, and EdTech compliance implementation.

12 min read · Updated 2026-05-04

Section 9: Children's data protections

Section 9 of the DPDP Act imposes the strictest protections in the entire legislation. No data fiduciary may process personal data of a child (defined as a person below 18 years) without verifiable parental consent. This applies absolutely — there are no deemed consent exceptions for children's data, no legitimate interest basis, no performance-of-contract exception.

Verifiable parental consent means that the consent must actually be given by a parent or guardian and must be verifiable — the fiduciary must be able to confirm that the person who gave consent is in fact the child's parent or guardian. This is technically and operationally challenging. Methods for verifiable parental consent include: digital signature by the parent, Aadhaar-based consent verification, video KYC of the parent, or linking the child's account to the parent's verified account.
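The record of consent matters as much as the consent flow itself: you must be able to show later which method verified the parent and what processing was consented to. A minimal sketch of such a record in Python, using hypothetical field and purpose names rather than any prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class VerificationMethod(Enum):
    """Methods discussed above for establishing that the consenting
    person really is the child's parent or guardian."""
    DIGITAL_SIGNATURE = "digital_signature"
    AADHAAR_VERIFICATION = "aadhaar_verification"
    VIDEO_KYC = "video_kyc"
    LINKED_VERIFIED_ACCOUNT = "linked_verified_account"


@dataclass
class ParentalConsentRecord:
    """One auditable record per consent event (illustrative schema)."""
    child_account_id: str
    parent_account_id: str
    method: VerificationMethod
    purposes: list[str]                      # what processing was consented to
    granted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    withdrawn_at: datetime | None = None     # consent must stay revocable

    def covers(self, purpose: str) -> bool:
        """Check whether an activity is within the consented scope."""
        return self.withdrawn_at is None and purpose in self.purposes


record = ParentalConsentRecord(
    child_account_id="student-1042",
    parent_account_id="parent-7781",
    method=VerificationMethod.LINKED_VERIFIED_ACCOUNT,
    purposes=["account_management", "progress_reports"],
)
print(record.covers("progress_reports"))       # True
print(record.covers("behavioural_analytics"))  # False: needs separate consent
```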

Section 9(3) additionally prohibits data fiduciaries from tracking or behavioural monitoring of children and from targeted advertising directed at children. For EdTech companies that use AI to personalise learning paths, this prohibition means the personalisation cannot be based on tracked behaviour across sessions; it must be based on in-session interaction without cross-session profiling.

Age verification: the technical challenge

The most difficult implementation challenge in Section 9 compliance is reliable age verification. The DPDP Act does not prescribe a specific mechanism for age verification — Rules will provide guidance, but companies must act now with reasonable good-faith measures. Options range from simple (date of birth declaration) to robust (Aadhaar eKYC to confirm age).
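As a rough illustration of how that tiering might be encoded (the method names and risk levels below are illustrative policy choices, not anything prescribed by the Act or the Rules):

```python
from datetime import date


def age_from_dob(dob: date, today: date | None = None) -> int:
    """Completed years of age from a declared date of birth."""
    today = today or date.today()
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))


def required_verification(dob: date, processing_risk: str) -> str:
    """Pick an age-assurance step: declared DOB may suffice for low-risk
    flows, while higher-risk processing warrants stronger verification
    such as Aadhaar eKYC. The mapping is a good-faith policy choice."""
    if age_from_dob(dob) < 18:
        return "parental_consent_required"
    # Declared age is 18 or over, but self-declaration alone is weak evidence.
    return "aadhaar_ekyc" if processing_risk == "high" else "dob_declaration"


print(required_verification(date(2012, 6, 1), "low"))   # parental_consent_required
print(required_verification(date(2004, 6, 1), "high"))  # aadhaar_ekyc
```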

For EdTech companies, the typical user base is students from Class 1 through 12, a population that is almost entirely under 18. Rather than attempting to identify and separately handle minor users, most EdTech companies should implement parental consent as the default for all student accounts, regardless of whether an individual student might be 18 or older. This avoids the risk of misclassifying a minor as an adult.

Build a parent-linking workflow: when a student account is created, require linking to a parent or guardian account that has been created with identity verification (phone number or email OTP at minimum, Aadhaar at best). The student account is inactive until the parent account confirms the link and consents to the processing. This two-step process provides verifiable parental consent that can withstand regulatory scrutiny.
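A minimal sketch of that two-step workflow as a state machine, with hypothetical account identifiers:

```python
from enum import Enum


class AccountState(Enum):
    PENDING_PARENT_LINK = "pending_parent_link"   # created, unusable
    PENDING_CONSENT = "pending_consent"           # parent linked, not yet consented
    ACTIVE = "active"                             # consent confirmed


class StudentAccount:
    """Student account that stays inactive until a verified parent
    confirms the link and records consent (illustrative workflow)."""

    def __init__(self, student_id: str):
        self.student_id = student_id
        self.parent_id: str | None = None
        self.state = AccountState.PENDING_PARENT_LINK

    def link_parent(self, parent_id: str, parent_verified: bool) -> None:
        # Step 1: only a parent account that passed identity verification
        # (OTP at minimum, Aadhaar at best) may be linked.
        if not parent_verified:
            raise PermissionError("parent account is not identity-verified")
        self.parent_id = parent_id
        self.state = AccountState.PENDING_CONSENT

    def record_parent_consent(self, consenting_parent_id: str) -> None:
        # Step 2: the linked parent, and nobody else, confirms consent.
        if self.state is not AccountState.PENDING_CONSENT:
            raise ValueError("no pending link to consent to")
        if consenting_parent_id != self.parent_id:
            raise PermissionError("consent must come from the linked parent")
        self.state = AccountState.ACTIVE


account = StudentAccount("student-1042")
account.link_parent("parent-7781", parent_verified=True)
account.record_parent_consent("parent-7781")
print(account.state)  # AccountState.ACTIVE
```

The important property is that no transition reaches ACTIVE without both a verified parent identity and a consent action attributable to that parent.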

Prohibited processing for children: what it means for AI

The prohibition on behavioural monitoring of children under Section 9 has significant implications for AI-powered EdTech. Cross-session learning analytics — tracking which topics a student struggles with over weeks, predicting learning trajectories, generating risk scores for dropout likelihood — all involve behavioural monitoring. If the student is under 18, this processing requires verifiable parental consent over and above the general consent for the platform.
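One way to enforce this in code is to scope the event window by consent before any personalisation runs. A sketch, assuming illustrative purpose names:

```python
from dataclasses import dataclass


@dataclass
class Event:
    session_id: str
    topic: str
    correct: bool


def analytics_scope(is_minor: bool, consented_purposes: set[str],
                    events: list[Event], current_session: str) -> list[Event]:
    """Decide which events may feed personalisation. For minors without
    specific parental consent, only the current session is used, so no
    cross-session behavioural profile is built (illustrative policy)."""
    if is_minor and "cross_session_analytics" not in consented_purposes:
        return [e for e in events if e.session_id == current_session]
    return events


history = [Event("s1", "fractions", False), Event("s2", "fractions", True)]
print(len(analytics_scope(True, set(), history, current_session="s2")))        # 1
print(len(analytics_scope(True, {"cross_session_analytics"}, history, "s2")))  # 2
```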

Targeted advertising based on children's data is absolutely prohibited. If your EdTech platform uses an AI recommendation engine to suggest premium courses or content to students, and this engine uses behavioural data, this constitutes prohibited targeted advertising. The rule applies regardless of whether the advertising is by your company or third parties — you cannot share children's behavioural data with ad networks.

Learning analytics that improve the product without profiling individual children are permissible without special consent, as long as they do not involve cross-session individual tracking. Aggregate analytics (which topics are most difficult across all students this term, which video timestamps cause the most re-watches) profile no individual child, and their outputs contain no personal data, so they are not restricted.
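For example, an aggregation job might drop identifiers before its output ever leaves the analytics layer. A toy illustration; real pipelines should also apply minimum-cohort thresholds so that very small groups cannot be re-identified from the counts:

```python
from collections import Counter

# Per-event records carry student identifiers (personal data)...
events = [
    {"student_id": "student-1042", "topic": "fractions", "passed": False},
    {"student_id": "student-2290", "topic": "fractions", "passed": False},
    {"student_id": "student-1042", "topic": "algebra", "passed": True},
]

# ...but the output is a population-level count with identifiers dropped.
# The result names no child and supports no individual profile.
failures_by_topic = Counter(e["topic"] for e in events if not e["passed"])
print(failures_by_topic.most_common(1))  # [('fractions', 2)]
```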

Implementation guide for EdTech platforms

Step 1: Audit your current data flows to identify all personal data of students under 18 being processed. This includes: name, school, class, marks, performance data, video watch history, quiz completion data, subscription status, and any AI-generated assessments or recommendations.

Step 2: Identify which processing activities require specific parental consent beyond the general account creation consent.

Step 3: Implement parental consent flows for those activities.
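The output of Steps 1 and 2 can be captured as a machine-readable map that Step 3's consent flows read from. A sketch with hypothetical flow and consent names:

```python
# Illustrative output of Steps 1-2: each audited data flow is mapped to
# the consent it needs. Flow names and categories are assumptions.
DATA_FLOW_CONSENT_MAP = {
    "account_creation":        "general_parental_consent",
    "marks_and_performance":   "general_parental_consent",
    "video_watch_history":     "specific_parental_consent",   # behavioural
    "ai_generated_assessment": "specific_parental_consent",   # profiling
    "subscription_status":     "general_parental_consent",
}


def consent_needed(flow: str) -> str:
    """Step 3 consults this map to decide which consent flow to trigger."""
    try:
        return DATA_FLOW_CONSENT_MAP[flow]
    except KeyError:
        # Unmapped flows fail closed: audit them before processing.
        raise LookupError(f"data flow {flow!r} missing from the audit map")


print(consent_needed("video_watch_history"))  # specific_parental_consent
```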

Step 4: Identify AI features that involve behavioural monitoring of children and either (a) obtain explicit parental consent for these features with a clear explanation, or (b) disable these features for users under 18.

Step 5: Implement technical controls to ensure that children's data is never shared with advertising networks, and is not shared with third-party analytics providers without separate parental consent.
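A sketch of the Step 5 egress control, with illustrative destination names. Note that advertising destinations are hard-blocked for minors here, because Section 9 offers no consent override for targeted advertising directed at children:

```python
def allow_egress(destination: str, subject_is_minor: bool,
                 separate_parental_consent: bool) -> bool:
    """Checkpoint applied before any outbound data share (illustrative)."""
    if not subject_is_minor:
        return True
    if destination == "ad_network":
        return False  # targeted advertising to children: no consent override
    if destination == "third_party_analytics":
        return separate_parental_consent
    return True


assert allow_egress("ad_network", subject_is_minor=True,
                    separate_parental_consent=True) is False
assert allow_egress("third_party_analytics", subject_is_minor=True,
                    separate_parental_consent=False) is False
assert allow_egress("crash_reporting", subject_is_minor=True,
                    separate_parental_consent=False) is True
```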

Step 6: Create a 'child-safe' processing environment for your AI systems. Route student data through an AI governance layer that blocks processing not covered by parental consent, applies stricter PII redaction (remove even pseudonymous identifiers before sending to model providers), maintains separate audit logs for children's data processing, and flags any behavioural tracking attempts for compliance review.
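A compressed sketch of such a gate, using a toy regex and hypothetical purpose names; a production layer would use a fuller detection library and persistent audit storage:

```python
import re

# Strip even pseudonymous identifiers before text reaches a model provider.
PSEUDONYMOUS_ID = re.compile(r"\bstudent-\d+\b")


def gate_child_request(prompt: str, consented_purposes: set[str],
                       purpose: str, audit_log: list[dict]) -> str:
    """Block processing outside consented scope, redact, and audit."""
    if purpose not in consented_purposes:
        audit_log.append({"purpose": purpose, "action": "blocked"})
        raise PermissionError(f"no parental consent covers {purpose!r}")
    redacted = PSEUDONYMOUS_ID.sub("[REDACTED_ID]", prompt)
    # Separate audit trail for children's data processing.
    audit_log.append({"purpose": purpose, "action": "allowed",
                      "redactions": prompt != redacted})
    return redacted


log: list[dict] = []
safe = gate_child_request("Summarise progress for student-1042",
                          {"progress_reports"}, "progress_reports", log)
print(safe)  # Summarise progress for [REDACTED_ID]
```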

DPDP Act operational checklist

This guide should be treated as an operating control, not only as a reference article. The minimum checklist is a data inventory, a stated processing purpose, owner approval, PII detection at the AI boundary, redaction or tokenisation where possible, retention limits, vendor transfer records, and a tested user-rights workflow. This checklist gives engineering and compliance teams a shared language for deciding what must be blocked, what can be allowed in shadow mode, and what needs human review before production release.

For AI systems, the review should include prompts, retrieved context, tool call arguments, model responses, logs, traces, analytics events, exports, and support attachments. Many incidents happen because teams scan only the visible form field while sensitive data moves through background context or observability tooling. CrewCheck's recommended pattern is to place the scanner at the request boundary, record the policy version, and keep audit evidence that shows which identifiers were detected and what action was taken.
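A minimal sketch of that boundary decision, with toy detectors. Real deployments should validate matches (for example, with the Verhoeff checksum for Aadhaar numbers) rather than rely on bare regexes:

```python
import re
from datetime import datetime, timezone

POLICY_VERSION = "2026-05-child-data-v1"   # recorded with every decision

# Illustrative detectors only; production patterns need validation.
DETECTORS = {
    "aadhaar_like": re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"),
    "pan_like": re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),
}


def scan_at_boundary(payload: str) -> dict:
    """Scan an outbound payload and return an auditable decision record
    showing which identifiers were detected and what action was taken."""
    hits = [name for name, rx in DETECTORS.items() if rx.search(payload)]
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "policy_version": POLICY_VERSION,
        "identifiers_detected": hits,
        "action": "block" if hits else "allow",
    }


print(scan_at_boundary("Student PAN is ABCDE1234F"))
```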

A practical rollout starts with representative samples from production-like traffic. Run a DPDP scan, sort findings by identifier sensitivity and blast radius, fix Aadhaar, PAN, financial, health, children's, and precise-location exposure first, then move to consent wording, retention, deletion, and vendor review. Use shadow mode when false positives could disrupt users, and promote to enforcement only after the exceptions have owners and expiry dates.
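A sketch of an exception register that enforces owners and expiry dates, so a suppressed finding fails closed again once its exception lapses (field names are illustrative):

```python
from datetime import date

# Each enforcement exception needs a named owner and an expiry date.
EXCEPTIONS = [
    {"rule": "pan_like", "app": "support-tooling",
     "owner": "compliance@example.com", "expires": date(2026, 9, 30)},
]


def is_suppressed(rule: str, app: str, today: date | None = None) -> bool:
    """An exception suppresses a finding only while owned and unexpired."""
    today = today or date.today()
    return any(e["rule"] == rule and e["app"] == app and
               e["owner"] and e["expires"] >= today
               for e in EXCEPTIONS)


print(is_suppressed("pan_like", "support-tooling", date(2026, 6, 1)))   # True
print(is_suppressed("pan_like", "support-tooling", date(2026, 10, 1)))  # False
```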

This page is educational and should be paired with legal review for final policy interpretation. The operational proof should still come from repeatable evidence: scanner results, audit exports, pull-request checks, policy configuration, and a documented owner for the workflow. That combination is what makes the content useful during buyer diligence, board review, regulatory questions, or an incident investigation.

DPDP Act pillar implementation addendum

A pillar page should also connect the legal idea to a concrete implementation path. Start with ownership: name the product owner, engineering owner, security reviewer, and compliance reviewer for this topic. Then map the systems that can create, store, transform, or transmit the relevant personal data. The map should include frontend forms, backend APIs, queues, warehouses, LLM prompts, embedding stores, admin exports, vendor dashboards, and customer-success tooling.

Next, document the lawful purpose and the user-facing notice. The notice should be clear enough that a data principal understands what is processed, why AI may be involved, what categories of personal data are affected, and how consent or withdrawal works. If the workflow supports children, healthcare, financial services, employment, or government delivery, treat that context as higher risk and add stricter review before allowing personal data into model calls.

The engineering control should run before data leaves the application boundary. Scan the full prompt package, not just the user's message. That means system instructions, retrieved snippets, tool outputs, attachments, OCR text, chat history, and structured JSON all need inspection. When a high-confidence identifier is found, redact, tokenise, block, or route to a safer model depending on the policy. Keep the original sensitive value out of general logs unless a protected exception is approved.
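A sketch of package-wide inspection, with a deliberately crude stand-in detector to keep the example self-contained:

```python
# Inspect every part of the prompt package, not only the user message.
def scan_text(text: str) -> list[str]:
    """Stub detector: flags any digits. A real scanner goes here."""
    return ["possible_identifier"] if any(c.isdigit() for c in text) else []


def scan_prompt_package(package: dict[str, str]) -> dict[str, list[str]]:
    """Return findings keyed by part; an empty dict means clean."""
    findings = {}
    for part, text in package.items():
        hits = scan_text(text)
        if hits:
            findings[part] = hits
    return findings


package = {
    "system_instructions": "You are a tutoring assistant.",
    "retrieved_snippets": "Report card: roll no 4521 8800 1234",
    "user_message": "Explain fractions please",
    "tool_output": "{}",
}
print(scan_prompt_package(package))
# {'retrieved_snippets': ['possible_identifier']}
```

The sensitive value here arrived through retrieval, not the visible form field, which is exactly the gap that message-only scanning misses.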

Audit evidence should be designed for reconstruction. A reviewer should be able to answer: when did the request happen, which application sent it, which data type was detected, which rule fired, what action was taken, which provider received the final payload, and who approved any exception. Without that trail, teams are left with policy claims rather than proof. With it, they can respond faster to buyer diligence, internal audits, breach triage, and regulator questions.
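One way to make those questions answerable by construction is to give every request an audit record whose fields mirror them. A sketch with illustrative field names:

```python
from dataclasses import dataclass, asdict


@dataclass(frozen=True)
class AuditRecord:
    """One record per AI request, one field per reconstruction question."""
    timestamp: str                          # when did the request happen
    application: str                        # which application sent it
    data_types_detected: tuple[str, ...]    # which data types were detected
    rule_fired: str                         # which rule fired
    action_taken: str                       # redact / tokenise / block / route
    provider: str                           # who received the final payload
    exception_approver: str | None = None   # who approved any exception


record = AuditRecord(
    timestamp="2026-05-04T10:15:00Z",
    application="tutor-chat",
    data_types_detected=("pan_like",),
    rule_fired="pan_block_v1",
    action_taken="redact",
    provider="model-provider-a",
)
print(asdict(record))
```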

Finally, make the process repeatable. Add sample payloads to tests, run scheduled scans against logs and representative documents, check sitemap and page health for public guidance, and keep the DPDP scanner linked from the page so readers can move from learning to action. The goal is not to freeze the system; it is to make every future AI workflow easier to review, safer to launch, and easier to explain.
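A sketch of the scheduled-scan idea as a pytest-style regression test, with a stub detector standing in for the real scanner:

```python
import re

AADHAAR_LIKE = re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b")


def scan_action(payload: str) -> str:
    """Stub of the boundary scanner's decision (see the earlier sketch)."""
    return "block" if AADHAAR_LIKE.search(payload) else "allow"


# Known-sensitive sample payloads must keep failing closed in every
# future release; add one entry per incident or audit finding.
SAMPLES = [
    ("Aadhaar 1234 5678 9012 in support ticket", "block"),
    ("Which topics were hardest this term?", "allow"),
]


def test_boundary_scanner_regressions():
    for payload, expected in SAMPLES:
        assert scan_action(payload) == expected, payload
```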

#dpdp #children #edtech #minor-data #parental-consent #section-9

Check your own workflow

Run a free DPDP scan before this risk reaches production.

Scan prompts, logs, documents, and API payloads for Indian PII exposure, missing redaction, and audit gaps.