
How to Evaluate AI Vendor Compliance Before You Sign - A 10-Point Buyer's Checklist

A practical checklist for Indian buyers evaluating whether an AI vendor can prove DPDP readiness before procurement, integration, or data sharing.

Harsh · 7 May 2026 · 8 min read

#buyer-checklist #vendor-risk #dpdp #ai-procurement

Start with evidence, not claims

AI vendor compliance is hard to judge from a pitch deck because the riskiest behavior often happens inside prompts, integrations, analytics scripts, and provider routes. A vendor can say they are privacy-conscious while their public site loads trackers before consent or their product sends raw Indian identifiers to a model provider.

Before you sign, ask for evidence that is recent, specific, and tied to the workflow you will use. A public CrewCheck proof page is one useful starting point because it shows scan date, score, findings, and buyer-facing artifacts in one place. You can browse examples in the CrewCheck Trust Marketplace and compare the proof behind each listing.

The 10-point buyer checklist

Use this checklist before procurement approval:

  1. Public DPDP notice
  2. Consent collection before processing
  3. Data principal rights
  4. Retention language
  5. Grievance contact
  6. Third-party data sharing
  7. Cross-border transfer disclosure
  8. AI usage disclosure
  9. Shadow AI endpoint visibility
  10. Evidence that Indian PII is detected before model calls

A strong vendor should be able to show how Aadhaar, PAN, UPI, ABHA, mobile numbers, addresses, employee IDs, student IDs, and health data are handled. They should also explain whether data reaches OpenAI, Anthropic, Gemini, Azure OpenAI, analytics providers, or internal queues, and what audit event is created when a risky request is blocked or redacted.

  • Ask for the latest public scan or proof page.
  • Check whether the scan was verified recently, not last year.
  • Open the proof page and review critical or high findings.
  • Ask how the vendor redacts Indian PII before model calls.
  • Ask which model providers and regions process your data.
  • Confirm consent and withdrawal language is visible to users.
  • Confirm third-party trackers do not fire before consent.
  • Ask for one blocked request and one redacted request example.
  • Check whether data retention and deletion requests are operational.
  • Require a named owner for AI compliance changes.
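Several of these items come down to whether Indian identifiers are actually caught before a model call. As a reference point for that conversation, here is a minimal detection sketch in Python. The patterns and type names are illustrative assumptions, not CrewCheck's production rule set; a real detector also needs checksum validation (Aadhaar uses a Verhoeff check digit) plus spaced, word-digit, and multilingual variants.

```python
import re

# Illustrative patterns only -- not CrewCheck's production rules.
# Real detectors add checksum validation and word-digit variants.
INDIAN_PII_PATTERNS = {
    # 12 digits, optionally grouped 4-4-4 with spaces or hyphens
    "aadhaar": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}\b"),
    # 5 letters, 4 digits, 1 letter, e.g. ABCDE1234F
    "pan": re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),
    # handle@bank virtual payment address (will also catch emails)
    "upi_id": re.compile(r"\b[\w.\-]{2,}@[A-Za-z]{2,}\b"),
    # 10 digits starting 6-9, with optional +91 prefix
    "mobile": re.compile(r"\b(?:\+91[ -]?)?[6-9]\d{9}\b"),
    # 14-digit ABHA health account number, optionally grouped
    "abha": re.compile(r"\b\d{2}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),
}

def detect_indian_pii(text: str) -> list[tuple[str, str]]:
    """Return (pii_type, matched_value) pairs found in a prompt."""
    hits = []
    for pii_type, pattern in INDIAN_PII_PATTERNS.items():
        hits.extend((pii_type, m.group()) for m in pattern.finditer(text))
    return hits

print(detect_indian_pii("Refund to 9876543210, PAN ABCDE1234F, UPI ravi@okbank"))
# [('pan', 'ABCDE1234F'), ('upi_id', 'ravi@okbank'), ('mobile', '9876543210')]
```

A vendor who can walk you through their equivalent of this list, including the formats it misses, is usually far easier to audit than one who only offers a policy PDF.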

How to use CrewCheck in procurement

Start with the Trust Marketplace when you want to discover vendors that already have public proof. If a vendor is not listed, ask them to run the free DPDP scan and share the proof page. That gives you a quick, comparable baseline before a deeper security review.

The scan is not a substitute for legal diligence or a full architecture review. It is a first filter. It helps buyers avoid spending weeks with a vendor that cannot answer basic AI data-safety questions, and it helps good vendors show proof before the buyer asks for a long questionnaire.

How to operationalize this in an Indian AI workflow

Treat vendor compliance as a production workflow, not a policy note. Identify the user action that starts the AI call, the personal or regulated data that can enter the prompt, the model provider that receives it, and the owner responsible for changing the route when something goes wrong. For an Indian product, the data inventory should explicitly cover Aadhaar-like identifiers, PAN, UPI IDs, account numbers, ABHA IDs, mobile numbers, addresses, and mixed-language free text, because those are the values that most often slip past generic Western scanners.
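One lightweight way to hold that inventory is as a reviewable artifact in the repository, one record per AI route. The structure below is an assumption for illustration, not a CrewCheck schema; the route names, providers, regions, and owners are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AIRouteInventory:
    """One record per AI call path. An assumed review structure,
    not an official CrewCheck schema; all values below are hypothetical."""
    route: str              # user action that triggers the model call
    data_fields: list[str]  # personal/regulated data that can enter the prompt
    provider: str           # who receives the final payload
    region: str             # where that provider processes it
    owner: str              # who changes the route when something goes wrong

ROUTES = [
    AIRouteInventory(
        route="support_ticket_summarize",
        data_fields=["name", "mobile", "upi_id", "mixed_language_free_text"],
        provider="azure_openai",
        region="Central India",
        owner="platform-eng",
    ),
    AIRouteInventory(
        route="kyc_document_extract",
        data_fields=["aadhaar", "pan", "address"],
        provider="internal_model",
        region="in-country",
        owner="ai-platform",
    ),
]
```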

Once the workflow is named, put the control at the boundary. For CrewCheck, that means routing the model call through the gateway so detection, redaction, rule evaluation, provider choice, and audit logging happen consistently. The important detail is that the control should run on every request, including retries, fallback providers, demos, internal admin tools, and queue workers that call models outside the main web path.
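As a minimal sketch of what that boundary looks like in code, the function below reuses detect_indian_pii() from the earlier sketch; the provider registry, route name, and blocking rule are hypothetical stand-ins for a real gateway SDK, not CrewCheck's API.

```python
import hashlib
import logging

log = logging.getLogger("ai_gateway")

# Stub provider registry; a real gateway holds actual API clients here.
PROVIDERS = {"primary": lambda payload: f"<stub response to {len(payload)} chars>"}

def call_model(route: str, prompt: str, provider: str = "primary") -> str:
    """Single choke point for every model call: web, workers, retries,
    demos. Reuses detect_indian_pii() from the earlier sketch."""
    request_hash = hashlib.sha256(prompt.encode()).hexdigest()[:16]

    # 1. Detect and redact Indian PII before anything leaves the boundary.
    findings = detect_indian_pii(prompt)
    redacted = prompt
    for pii_type, value in findings:
        redacted = redacted.replace(value, f"[{pii_type.upper()}]")

    # 2. Rule evaluation: some routes must block rather than redact.
    if route == "kyc_document_extract" and findings:
        log.info("blocked route=%s hash=%s types=%s",
                 route, request_hash, sorted({t for t, _ in findings}))
        raise PermissionError("raw identifiers not allowed on this route")

    # 3. Only the redacted payload ever reaches the provider.
    response = PROVIDERS[provider](redacted)

    # 4. Audit event with enough detail to reconstruct the decision later.
    log.info("allowed route=%s hash=%s provider=%s redactions=%d",
             route, request_hash, provider, len(findings))
    return response
```

The design point is the single entry: if queue workers or retry paths can import a provider client directly, the boundary does not exist.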

Control point    | Evidence to retain                                    | Operational owner
Pre-prompt scan  | PII type, rule ID, request hash, redacted payload     | Platform engineering
Provider routing | Selected provider, region notes, fallback reason      | AI platform owner
Post-output scan | Blocked text class, replacement copy, reviewer status | Product owner
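The evidence column above maps directly onto a structured audit event. The schema below is a hypothetical illustration, with field names chosen for this article rather than taken from any product, but each control point should be able to emit something equivalent as machine-readable JSON.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class GatewayAuditEvent:
    """Hypothetical audit record; field names are illustrative assumptions."""
    timestamp: str
    route: str
    request_hash: str      # hash of the original prompt, never the raw text
    control_point: str     # "pre_prompt_scan" | "provider_routing" | "post_output_scan"
    pii_types: list[str]   # e.g. ["mobile", "upi_id"]
    rule_id: str           # which rule fired
    decision: str          # "allowed" | "redacted" | "blocked"
    provider: str          # who saw the final payload
    retention_days: int    # how long this record is kept
    approved_by: str | None = None  # who approved the exception, if any

event = GatewayAuditEvent(
    timestamp=datetime.now(timezone.utc).isoformat(),
    route="support_ticket_summarize",
    request_hash="9f2c4a1b7e3d5f08",
    control_point="pre_prompt_scan",
    pii_types=["mobile", "upi_id"],
    rule_id="IN-PII-REDACT-01",
    decision="redacted",
    provider="azure_openai",
    retention_days=365,
)
print(json.dumps(asdict(event), indent=2))
```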

What evidence a buyer should ask for

A serious buyer should ask for evidence that connects the claim to live behavior. For a privacy claim, that means redaction logs, blocked examples, sanitized payloads, and data-retention behavior. For a safety claim, that means red-team cases, circuit-breaker decisions, and output scanning results. For a compliance claim, that means the notice, purpose, rule, and provider route can be reconstructed from the audit trail without waiting for an engineer to open production logs.

The practical standard is whether the team can answer a specific question without manual archaeology: what happened to this request, which rule fired, what data was removed, which provider saw the final payload, who approved the exception, and how long the record will be retained. If that answer requires five tools and a memory of how the system was meant to work, the evidence layer is not ready yet.
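That standard can be made testable: a single lookup against the audit trail should answer every one of those questions for a given request hash. A sketch, assuming events are stored one JSON object per line in the shape of the GatewayAuditEvent above; the file name and field names are placeholders.

```python
import json

def explain_request(request_hash: str, audit_path: str = "gateway_audit.jsonl") -> dict:
    """Answer 'what happened to this request?' from the audit trail alone.
    Assumes one JSON event per line, shaped like GatewayAuditEvent above."""
    events = []
    with open(audit_path) as f:
        for line in f:
            event = json.loads(line)
            if event["request_hash"] == request_hash:
                events.append(event)
    if not events:
        return {"request_hash": request_hash,
                "status": "no audit record found -- itself a finding"}
    final = events[-1]
    return {
        "request_hash": request_hash,
        "decision": final["decision"],
        "rules_fired": sorted({e["rule_id"] for e in events}),
        "data_removed": sorted({t for e in events for t in e["pii_types"]}),
        "provider_saw_payload": final["provider"],
        "exception_approved_by": final.get("approved_by"),
        "retention_days": final["retention_days"],
    }
```

If this function cannot be written against your audit store, the "five tools and a memory" problem is already present.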

  • Keep one sample allowed request, one redacted request, and one blocked request for each high-risk AI route.
  • Link every public compliance claim to a live page, report export, gateway event, or scanner finding.
  • Review DPDP notice language whenever the AI feature changes its purpose, provider, or data fields.
  • Retest Hindi, Hinglish, spaced, hyphenated, and word-digit personal-data variants before release; a minimal regression sketch follows this list.
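The variant retest in the last item is where generic scanners fail most quietly: the clean format passes while the spaced or word-digit form slips through. A minimal regression sketch, assuming pytest and a hypothetical import path for whichever detector the gateway ships.

```python
import pytest

# Hypothetical import -- point this at the detector your gateway actually ships.
from gateway_sketch import detect_indian_pii

# Indian personal-data variants that generic scanners often miss.
VARIANT_CASES = [
    ("clean_mobile",   "Call me at 9876543210",                   "mobile"),
    ("spaced_aadhaar", "Aadhaar 1234 5678 9123 shared on call",   "aadhaar"),
    ("hyphen_aadhaar", "number 1234-5678-9123 is on the account", "aadhaar"),
    ("hinglish_pan",   "mera PAN ABCDE1234F hai",                 "pan"),
    ("word_digit",     "mobile nine eight seven six five four three two one zero", "mobile"),
]

@pytest.mark.parametrize("label,prompt,expected_type", VARIANT_CASES)
def test_variant_is_detected(label, prompt, expected_type):
    found = {pii_type for pii_type, _ in detect_indian_pii(prompt)}
    assert expected_type in found, f"{label} variant slipped through undetected"
```

Run against the earlier regex sketch, the word_digit case fails, which is exactly the point: the gap should surface as a red test before release, not as a customer report.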

A safe next step

Start with one high-risk path and make it boringly inspectable. Run realistic Indian examples through it, including Aadhaar-like numbers, PAN formats, UPI IDs, mixed-language prompts, and attempts to override system instructions. Check the user-facing response, the gateway event, the dashboard state, and the exportable report. The path is ready only when all four tell the same story.
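That four-way agreement can itself be checked in code. A sketch under heavy assumptions: the four getter functions are placeholders for however your product exposes the user response, gateway event, dashboard state, and report export; only the comparison logic is the point.

```python
# Placeholder getters -- stand-ins for your product's real lookups.
def get_user_facing_outcome(request_hash): return "redacted"
def get_gateway_event(request_hash):       return {"decision": "redacted"}
def get_dashboard_state(request_hash):     return {"decision": "redacted"}
def get_report_row(request_hash):          return {"decision": "blocked"}  # deliberate drift

def verify_story(request_hash: str) -> bool:
    """True only when all four surfaces agree on the request's decision."""
    decisions = {
        "user_response": get_user_facing_outcome(request_hash),
        "gateway_event": get_gateway_event(request_hash)["decision"],
        "dashboard":     get_dashboard_state(request_hash)["decision"],
        "report_export": get_report_row(request_hash)["decision"],
    }
    if len(set(decisions.values())) != 1:
        print(f"MISMATCH for {request_hash}: {decisions}")
        return False
    return True

verify_story("9f2c4a1b7e3d5f08")  # prints the mismatch: the report drifted
```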

That narrow verification habit matters more than a large compliance checklist. AI governance fails when teams assume controls are present because the architecture says they are. It becomes trustworthy when the live product can show the exact request, exact decision, exact redaction, exact provider route, and exact evidence behind the claim.

After that, make the check repeatable. Keep the examples in a small regression pack, rerun them before deployment, and compare the result with the public claim you are about to make. If the route, report, or dashboard no longer proves the claim, change the product or change the claim before a customer finds the gap.

The habit is deliberately plain: one workflow, one owner, one evidence trail, one live verification path. That is enough to turn a short article, launch note, or procurement answer into something an operator can actually use when a bank, insurer, hospital, or enterprise SaaS buyer asks for proof.

Internal reference path

Use this article with the DPDP consent management implementation, the Indian PII types reference, and the LLM gateway for DPDP compliance. Those three pages give the legal, data-type, and runtime-control context needed to turn the article into an implementation review.

If the workflow touches banking, lending, insurance, healthcare, education, employment, or public-sector records, add one more internal review step before shipping. Ask whether the prompt uses the minimum data needed, whether the user-facing notice matches the route, whether output scanning runs before the response is shown, and whether an exportable event exists for the buyer, auditor, regulator, or incident commander who will eventually ask for proof during a real review.


Author

Harsh

Building CrewCheck in public from India.