AI Governance for BFSI in India: RBI, DPDP, and LLM Gateway Controls
Complete guide to AI governance for Indian banks, NBFCs, and insurance companies — RBI FREE-AI, DPDP Act, and LLM gateway architecture for BFSI compliance.
The BFSI AI Governance Landscape in India
Indian financial services firms deploying AI in 2026 face a layered regulatory environment: the Digital Personal Data Protection Act governs personal data processing, RBI's Responsible AI and Governance Frameworks apply to banks and NBFCs, SEBI's guidelines cover AI in securities, and IRDAI has issued AI governance circulars for insurance. Each layer adds obligations, and they often overlap in ways that create compliance complexity.
The common thread across all frameworks: auditability, fairness, data minimisation, and human oversight for AI decisions. A well-implemented AI governance stack addresses all four simultaneously rather than building separate compliance layers for each regulator.
RBI's AI and Data Governance Expectations
RBI has signalled AI governance expectations through multiple channels: the Digital Lending Guidelines, the IT Governance Circular, and the FREE-AI (Framework for Responsible and Ethical Enablement of AI) recommendations. Key RBI expectations: AI models affecting customer outcomes must be explainable, credit decisions cannot be made by opaque AI without human oversight, and AI systems must be auditable by RBI supervisors on demand.
For LLM-based applications in BFSI: if an LLM assists credit officers in decision-making, the prompts and responses are part of the audit trail. If an LLM generates customer communications (loan approval/rejection letters), the content must be accurate and compliant with consumer protection guidelines.
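One way to treat prompts and responses as first-class audit evidence is to wrap each LLM-assisted interaction in a structured record. The sketch below is illustrative, not a CrewCheck API: the field names (`actor`, `decision_context`, and so on) are assumptions to be aligned with your own audit schema, and the SHA-256 digests are stored alongside the text so later tampering is detectable.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def build_llm_audit_record(user_id: str, prompt: str, response: str,
                           model: str, decision_context: str) -> dict:
    """Build one audit-trail entry for an LLM-assisted decision.

    Field names are illustrative; align them with your own audit schema.
    """
    return {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": user_id,                      # the credit officer, not the customer
        "decision_context": decision_context,  # e.g. "credit_underwriting"
        "model": model,
        # Digests let an auditor verify the stored text was not altered.
        "prompt": prompt,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response": response,
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }

record = build_llm_audit_record(
    "officer_042", "Summarise repayment history for application A-991.",
    "12 of 12 EMIs paid on time over the last year.",
    "model-v1", "credit_underwriting")
print(json.dumps(record, indent=2))
```

Writing these records as JSON lines, one per interaction, keeps them easy to stream into whatever audit store the supervisor can query.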
DPDP Obligations Specific to BFSI
BFSI firms are among the most likely candidates for Significant Data Fiduciary designation under DPDP Section 10 — they process high volumes of highly sensitive financial personal data. This means DPO appointment, Data Auditor engagement, and DPIA processes may be mandatory, not optional.
BFSI-specific DPDP challenges: KYC data (Aadhaar, PAN, Voter ID) collected during account opening must be purpose-limited to KYC — using it for marketing AI models requires separate consent. Loan repayment data (positive and negative) is personal financial data — using it in models beyond credit assessment requires documented legal basis.
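Purpose limitation can be enforced in code as well as in policy. A minimal sketch, assuming a consent store keyed by data principal (the store shape and purpose strings here are hypothetical): every model pipeline that touches KYC or repayment data asks this gate before processing.

```python
# Hypothetical consent store: data principal -> purposes they have consented to.
# In production this would be backed by your consent-management system.
CONSENT_PURPOSES: dict[str, set[str]] = {
    "cust-001": {"kyc_verification", "credit_assessment"},
}

def may_use(data_principal: str, purpose: str) -> bool:
    """Return True only if this processing purpose is covered by consent."""
    return purpose in CONSENT_PURPOSES.get(data_principal, set())

may_use("cust-001", "kyc_verification")           # True: within the KYC purpose
may_use("cust-001", "marketing_model_training")   # False: needs separate consent
```

The point of the gate is that a marketing model trainer cannot silently reuse KYC data: the denial is explicit, loggable, and attributable to a missing consent record.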
Architecture: Kafka + CrewCheck for BFSI Compliance
The production BFSI AI governance architecture: all LLM traffic routes through CrewCheck, which applies the India PII policy pack (Aadhaar, PAN, UPI, IFSC, bank account, biometric indicators). Audit events are streamed to Kafka with 2-year retention. Downstream consumers include the SIEM for real-time alerts and the compliance reporting warehouse.
For core banking LLM integrations: CrewCheck connects to your LLM-as-a-tool workflow, ensuring that customer data fetched from core banking for LLM context is sanitised before the actual LLM API call. The customer service representative sees the full data; the LLM receives redacted placeholders.
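The redacted-placeholder pattern can be sketched with two illustrative patterns (a production policy pack covers many more identifiers, with stronger validation such as the Aadhaar Verhoeff checksum). The placeholder-to-value mapping stays inside the bank's boundary so the representative's view can be re-hydrated; the LLM only ever sees the placeholders.

```python
import re

# Illustrative patterns for two Indian identifiers; real policy packs also
# cover UPI handles, IFSC codes, account numbers, and biometric indicators.
PATTERNS = {
    "AADHAAR": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}\b"),
    "PAN":     re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),
}

def redact_for_llm(text: str) -> tuple[str, dict[str, str]]:
    """Replace identifiers with placeholders before the LLM call.

    Returns the sanitised text plus a placeholder->original mapping that
    never leaves the bank's boundary.
    """
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text), start=1):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = match
            text = text.replace(match, placeholder, 1)
    return text, mapping

sanitised, mapping = redact_for_llm(
    "Customer PAN ABCDE1234F, Aadhaar 1234 5678 9012, requests a loan top-up.")
# sanitised: "Customer PAN <PAN_1>, Aadhaar <AADHAAR_1>, requests a loan top-up."
```

Because the placeholders are stable within a conversation, the LLM can still reason about "the customer's PAN" without ever holding the value itself.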
Fair Lending and AI Bias Governance
RBI's fairness expectations for credit AI require monitoring for demographic bias. While DPDP doesn't explicitly mandate bias testing, using personal data in a way that results in discriminatory outcomes likely violates Section 8's accuracy and fairness principles.
Practical steps: (1) Audit credit models for disparate impact across gender, religion, and geography, (2) Document the fairness metrics and thresholds, (3) Log model version with each credit decision — when a model is updated, you need to trace which version made which decision, (4) Maintain a model card for each credit AI model as part of your DPDP documentation.
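Step (1) above is often operationalised with the four-fifths rule: each group's approval rate is divided by the best-performing group's rate, and ratios below 0.8 are flagged for investigation. The sketch below is a minimal version of that check; a flag is a trigger for review, not a legal conclusion.

```python
def disparate_impact_ratios(decisions: list[dict], attribute: str,
                            threshold: float = 0.8) -> dict:
    """Four-fifths-rule check: each group's approval rate divided by the
    highest group's rate. Ratios below the threshold flag potential
    disparate impact for investigation."""
    approvals: dict[str, int] = {}
    totals: dict[str, int] = {}
    for d in decisions:
        group = d[attribute]
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + (1 if d["approved"] else 0)
    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: {"rate": r, "ratio": r / best, "flagged": r / best < threshold}
            for g, r in rates.items()}

# Synthetic example: 60% approval for group F vs 80% for group M.
sample = (
    [{"gender": "F", "approved": True}] * 60 +
    [{"gender": "F", "approved": False}] * 40 +
    [{"gender": "M", "approved": True}] * 80 +
    [{"gender": "M", "approved": False}] * 20
)
result = disparate_impact_ratios(sample, "gender")
# F: rate 0.60, ratio 0.75 -> flagged; M: rate 0.80, ratio 1.00 -> not flagged
```

Running this per model version, and storing the output alongside the model card from step (4), gives supervisors a dated fairness trail rather than a one-off audit.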
Industry operational checklist
This guide should be treated as an operating control, not only as a reference article. The minimum checklist is a data inventory, a stated processing purpose, owner approval, PII detection at the AI boundary, redaction or tokenisation where possible, retention limits, vendor transfer records, and a tested user-rights workflow. This checklist gives engineering and compliance teams a shared language for deciding what must be blocked, what can be allowed in shadow mode, and what needs human review before production release.
For AI systems, the review should include prompts, retrieved context, tool call arguments, model responses, logs, traces, analytics events, exports, and support attachments. Many incidents happen because teams scan only the visible form field while sensitive data moves through background context or observability tooling. CrewCheck's recommended pattern is to place the scanner at the request boundary, record the policy version, and keep audit evidence that shows which identifiers were detected and what action was taken.
A practical rollout starts with representative samples from production-like traffic. Run a DPDP scan, sort findings by identifier sensitivity and blast radius, fix Aadhaar, PAN, financial, health, children's, and precise-location exposure first, then move to consent wording, retention, deletion, and vendor review. Use shadow mode when false positives could disrupt users, and promote to enforcement only after the exceptions have owners and expiry dates.
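The shadow-to-enforcement promotion above can be expressed as a small decision function. This is a sketch under assumed data shapes (the exception record with `owner` and `expires` fields is illustrative, not a CrewCheck schema): shadow mode only logs, while enforce mode blocks unless a finding matches a live exception that still has an owner and a future expiry date.

```python
from datetime import date

def decide_action(finding: dict, mode: str, exceptions: dict) -> str:
    """Shadow mode logs everything; enforce mode blocks unless the finding
    matches an exception with a named owner and an unexpired deadline."""
    if mode == "shadow":
        return "log_only"
    exc = exceptions.get(finding["identifier"])
    if exc and exc.get("owner") and exc["expires"] >= date.today():
        return "allow_with_exception"
    return "block"

# Hypothetical exception: the payments team owns a time-boxed IFSC allowance.
exceptions = {"IFSC": {"owner": "payments-team", "expires": date(2099, 1, 1)}}

decide_action({"identifier": "AADHAAR"}, "shadow", exceptions)   # "log_only"
decide_action({"identifier": "IFSC"}, "enforce", exceptions)     # "allow_with_exception"
decide_action({"identifier": "AADHAAR"}, "enforce", exceptions)  # "block"
```

Requiring an owner and an expiry on every exception is what prevents shadow-mode tolerances from quietly becoming permanent policy.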
This page is educational and should be paired with legal review for final policy interpretation. The operational proof should still come from repeatable evidence: scanner results, audit exports, pull-request checks, policy configuration, and a documented owner for the workflow. That combination is what makes the content useful during buyer diligence, board review, regulatory questions, or an incident investigation.
Check your own workflow
Run a free DPDP scan before this risk reaches production.
Scan prompts, logs, documents, and API payloads for Indian PII exposure, missing redaction, and audit gaps.