Issue #5 The Stack + The Briefing Saturday, 10 May 2026

You’re giving your client’s
data to OpenAI for free.

What Indian professionals don’t know about their AI tools is actively working against them. The ChatGPT toggle that’s on by default. What your employer already tracks. And 5 steps to fix it before your next session.

62% · Indians using GenAI at work (EY Work Reimagined 2025)
72% · learning AI on their own (no company training available)
~0% · have checked their AI privacy settings
— The Story  ·  Part 1 of 2

The free tier isn’t free.
You’re paying with your work data.

On a Tuesday afternoon in 2023, a Samsung semiconductor engineer pasted proprietary source code into ChatGPT to help debug a problem. He got his answer. Samsung got a data breach. Within weeks, Samsung banned ChatGPT company-wide. Apple, Amazon, JPMorgan and Verizon followed with similar restrictions.

The setting that caused it is still switched on in your ChatGPT account right now. It has been on since the day you signed up.

Go to ChatGPT Settings → Data Controls → “Improve the model for everyone.” For the vast majority of users, that toggle is ON. What it means in plain language: every prompt you send, every client brief you summarise, every contract clause you ask it to explain, every financial detail you paste in — all of it can be used by OpenAI to train future versions of ChatGPT. Not just stored. Trained on. Meaning human reviewers may read samples of your conversations.

OpenAI’s own privacy policy — verified May 2026

“We may use Content you provide us to improve our Services, for example to train the models that power ChatGPT.”

ChatGPT Free and Plus ($20/month): Training data ON by default. You must opt out manually.

ChatGPT Team and Enterprise: Training OFF by default. Contractual data processing agreement included.

Critical: Paying $20/month for Plus does NOT protect your data. The privacy settings are identical to the free tier.

The opt-out exists. It takes 30 seconds. But most users have never found it, because most assume that a product as widely used as ChatGPT would have privacy-protective defaults. It does not — at least not on the plans most Indian professionals use.

What Indian professionals are typically pasting into ChatGPT on free accounts: client names and company financials. GST invoice data. Legal contract clauses with counterparty details. Internal strategy documents. PAN numbers and Aadhaar details when asking for tax advice.

There is also a subtler problem. Even after you opt out, OpenAI retains your data for 30 days for what it calls “abuse monitoring.” This cannot be disabled on consumer plans. Zero data retention — the setting where nothing is stored — is only available on Enterprise contracts. The same Enterprise contracts that most Indian freelancers, CAs and lawyers will never have.

Every one of those categories represents data that India’s Digital Personal Data Protection Act 2023 considers sensitive. Every one of them may currently be sitting in OpenAI’s training pipeline because the toggle was never turned off.

— The Story  ·  Part 2 of 2

Your employer has a policy
about your AI use. You probably haven’t read it.

In January 2025, the Indian Ministry of Finance issued a formal directive banning the use of ChatGPT and DeepSeek on official government devices. The circular cited concerns that AI applications “could jeopardise sensitive government information.” It went to Revenue, Economic Affairs, Expenditure and Financial Services departments.

That was the government. But private sector India is not far behind — it is just less public about it. A 2026 survey found that 72% of Indian professionals are learning AI independently because structured company training is unavailable. Most Indian organisations do not yet have formal AI governance frameworks — which means employees are using AI tools in a policy vacuum.

The companies monitoring AI tool use are not doing so to penalise their staff. They’re doing it because a data breach caused by an employee pasting client data into a public AI tool is a liability the company carries — not the employee.

Why enterprise AI monitoring is growing in Indian firms

The monitoring reality varies by sector and size. Large IT firms and Big 4 consultancies are most advanced — several have deployed tools that flag when client data categories are pasted into external AI tools and log AI tool usage for compliance audit trails.

For most Indian SME employees and freelancers, the monitoring risk is lower — but the data risk is higher, because small businesses are less likely to have negotiated enterprise agreements that protect their data. A solo CA pasting client GST data into ChatGPT free has no corporate IT team to catch the mistake and no data processing agreement with OpenAI to provide legal recourse.

Platform · Free/Personal default · Business/Enterprise · India-relevant note

ChatGPT Free/Plus · Training ON by default · Training OFF (Enterprise) · No data processing agreement on free tier — DPDPA risk
Claude (Anthropic) · Check your settings (see note below) · Training OFF (Team/Enterprise) · Oct 2025: Anthropic gave every user an explicit choice. If you opted out then, you’re safe. If unsure: Settings → Privacy → “Help improve Claude.”
Gemini (Google) · Training ON by default · Excluded (Workspace) · Uses data to improve Google’s models on personal accounts
Sarvam AI · India-hosted, API-based · Enterprise contracts · DPDPA-compliant path available; data processed in India

Sources: Platform privacy policies (OpenAI, Anthropic, Google, Sarvam AI), verified May 2026. Claude note: Anthropic policy update October 2025.

India AI Brief · Free · Every Saturday
If this was useful — get every issue.
New AI workflows and India AI intelligence every Saturday.
— The Stack

How to use AI at work without burning yourself.

You do not need to stop using AI. You need to use it correctly.

5 steps. 10 minutes. Works for any profession. Free.

1
Turn off training data in ChatGPT. Right now.
Go to: Profile icon → Settings → Data Controls → “Improve the model for everyone” → switch OFF.
Takes 30 seconds. Do it before your next session.
Important: this opt-out is not retroactive. Data already submitted before today remains in their system.
✓ Do this first
2
Check your Claude settings — then use it for sensitive work.
In October 2025, Anthropic required every user to make an explicit training choice. If you opted out then — or declined during signup — Claude is the most private major AI platform available. Check now: Claude.ai → Settings → Privacy → “Help improve Claude” → should be OFF. Once confirmed, Claude is the right tool for anything involving client names, financial data or legal content. Free tier available.
➔ Free at claude.ai
3
Never paste these into any free AI tool without checking first.
Client names + financial figures  ·  GST invoice data with GSTIN  ·  PAN or Aadhaar numbers  ·  Internal strategy or pricing documents  ·  Legal contracts with counterparty details.
If you must use AI on sensitive content, redact identifiers first. Replace real names with [CLIENT A]. Replace specific numbers with [AMOUNT].
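The redaction step above can be sketched in a few lines of Python. This is a minimal illustration, not a complete anonymiser: the PAN, GSTIN, Aadhaar and amount patterns below are simplified assumptions, and real documents will need more careful handling before anything is pasted into an AI tool.

```python
import re

# Simplified patterns — illustrative only, not exhaustive validators.
PATTERNS = [
    (re.compile(r"\b[A-Z]{5}[0-9]{4}[A-Z]\b"), "[PAN]"),                   # PAN: 5 letters, 4 digits, 1 letter
    (re.compile(r"\b\d{2}[A-Z]{5}\d{4}[A-Z]\d[A-Z\d]{2}\b"), "[GSTIN]"),  # 15-character GSTIN
    (re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"), "[AADHAAR]"),              # Aadhaar: 12 digits
    (re.compile(r"₹\s?[\d,]+(?:\.\d+)?"), "[AMOUNT]"),                    # rupee amounts like ₹4,50,000
]

def redact(text: str, client_names: tuple[str, ...] = ()) -> str:
    """Replace known identifiers with placeholder tokens before sharing text."""
    # Named clients become [CLIENT A], [CLIENT B], ... in the order given.
    for i, name in enumerate(client_names):
        text = text.replace(name, f"[CLIENT {chr(65 + i)}]")
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

prompt = "Acme Pvt Ltd (PAN ABCDE1234F) owes ₹4,50,000 under GSTIN 27ABCDE1234F1Z5."
print(redact(prompt, client_names=("Acme Pvt Ltd",)))
# → [CLIENT A] (PAN [PAN]) owes [AMOUNT] under GSTIN [GSTIN].
```

You keep a local copy with the real values; only the redacted version leaves your machine.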
4
Check if your company has an AI policy.
Ask your HR or IT team: “Do we have a policy on which AI tools can be used for work, and what data can be shared?”
If they say no: assume the restrictive interpretation and stick to privacy-by-default tools like Claude.
If they say yes: read it before your next session. Under DPDPA 2023, your employer carries liability for how client data is handled — and so might you.
5
For Indian-language or India-specific content: use Sarvam AI.
Sarvam AI processes data in India, offers an INR-priced API (starting at ₹1,000 free credits), and is building toward DPDPA compliance. For voice content, regional language text, or anything that must stay within Indian infrastructure: Sarvam is currently the most credible India-first option. Bhashini (government, free API) is the alternative for legal and official government-domain content.
➔ Free at sarvam.ai
— Two Things This Week
📉 India: MeitY AI governance guidelines, November 2025

India has a policy framework for AI at work. Most organisations haven’t operationalised it.

MeitY’s guidelines require organisations to: define approved AI tools by name, classify what data categories can be shared with AI systems, and mandate human review before AI outputs are used in regulated domains (legal, financial, medical). These are not suggestions — they are the framework DPDPA enforcement will reference when it comes.
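As a rough sketch of what those three controls look like once operationalised, here is a hypothetical pre-flight check in Python. The tool names, data categories and domains are invented for illustration; they are not taken from the MeitY guidelines themselves.

```python
# Hypothetical operationalisation of the three MeitY-style controls:
# an approved-tool list, restricted data categories, and a human-review
# flag for regulated domains. All names below are illustrative assumptions.
APPROVED_TOOLS = {"claude-enterprise", "sarvam-api"}
RESTRICTED_CATEGORIES = {"client-financials", "pan-aadhaar", "legal-contract"}
REGULATED_DOMAINS = {"legal", "financial", "medical"}

def check_ai_use(tool: str, data_categories: set[str], domain: str) -> dict:
    """Return whether the tool and data are allowed, and if human review is mandatory."""
    return {
        "tool_approved": tool in APPROVED_TOOLS,
        "data_allowed": not (data_categories & RESTRICTED_CATEGORIES),
        "human_review_required": domain in REGULATED_DOMAINS,
    }

print(check_ai_use("chatgpt-free", {"client-financials"}, "financial"))
# → {'tool_approved': False, 'data_allowed': False, 'human_review_required': True}
```

Even a check this simple forces a company to write down the three decisions the guidelines ask for: which tools, which data, and where a human must sign off.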

The gap: guidelines exist, implementation does not. 72% of Indian professionals are learning AI on their own because structured company guidance is unavailable. The companies that build governance frameworks now will have a compliance advantage that takes years to replicate.

🌎 Global: Canada just found OpenAI violated privacy law — 6 May 2026

Canada became the first country to formally rule that OpenAI broke privacy laws in building ChatGPT. The findings apply globally.

On 6 May 2026, Canada’s Privacy Commissioner and three provincial counterparts published the results of a joint investigation into OpenAI. The verdict: OpenAI violated federal and provincial privacy laws. The investigation found OpenAI collected significant amounts of personal information to train ChatGPT — including sensitive data like health details, political opinions, and information about minors — without obtaining valid consent from the individuals whose data was used.

Why this matters for Indian professionals: The legal principle established in Canada will apply pressure globally. India’s DPDPA creates a similar consent requirement. The ruling validates exactly what this issue covers — the data you paste into AI tools is not simply “processed and forgotten.” It may have built the model you’re using today.

➔ Source: PIPEDA Findings #2026-002

— The Ask

Forward this to one colleague
who uses AI at work.

Not because it’s interesting. Because the toggle in their ChatGPT is probably still on. And their client data is probably in OpenAI’s training pipeline right now. One forward is how India AI Brief grows.

Bunny  ·  Founder, India AI Brief  ·  hello@llmtools.in
India AI Brief is a free weekly newsletter by LLMTools.in — India’s curated AI tools directory.
India AI Brief  ·  Free  ·  Every Saturday
Don’t miss Issue #6.

AI tools, workflows and what’s actually happening in Indian AI — every Saturday. Free forever. Written by a human, for Indian professionals.

No spam  ·  Unsubscribe anytime  ·  Every Saturday