What Indian professionals don’t know about their AI tools is actively working against them. The ChatGPT toggle that’s on by default. What your employer already tracks. And 5 steps to fix it before your next session.
On a Tuesday afternoon in 2023, a Samsung semiconductor engineer pasted proprietary source code into ChatGPT to help debug a problem. He got his answer. Samsung got a leak of confidential data. Within weeks, Samsung banned ChatGPT company-wide. Apple, Amazon, JPMorgan and Verizon imposed similar restrictions.
The setting that caused it is still switched on in your ChatGPT account right now. It has been on since the day you signed up.
Go to ChatGPT Settings → Data Controls → “Improve the model for everyone.” For the vast majority of users, that toggle is ON. What it means in plain language: every prompt you send, every client brief you summarise, every contract clause you ask it to explain, every financial detail you paste in — all of it can be used by OpenAI to train future versions of ChatGPT. Not just stored. Trained on. Meaning human reviewers may read samples of your conversations.
- ChatGPT Free and Plus ($20/month): training data ON by default. You must opt out manually.
- ChatGPT Team and Enterprise: training OFF by default. A contractual data processing agreement is included.

Critical: paying $20/month for Plus does NOT protect your data. The privacy settings are identical to the free tier.
The opt-out exists. It takes 30 seconds. But most users have never found it, because they assume a product as widely used as ChatGPT would have privacy-protective defaults. It does not — at least not on the plans most Indian professionals use.
What Indian professionals are typically pasting into ChatGPT on free accounts:

- Client names and company financials
- GST invoice data
- Legal contract clauses with counterparty details
- Internal strategy documents
- PAN numbers and Aadhaar details when asking for tax advice
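The last three items are the easiest to catch automatically, because PAN, Aadhaar and GSTIN all follow published formats. Below is a minimal sketch of a local pre-paste check — the `redact` function and its placeholder labels are illustrative, not any platform's API, and the regexes are format-only checks, not checksum validation, so expect occasional false positives:

```python
import re

# Format-only patterns for common Indian identifiers: PAN is 5 letters,
# 4 digits, 1 letter; Aadhaar is 12 digits (often written 4-4-4); GSTIN
# is 15 characters (state code + PAN + entity code + 'Z' + check digit).
# These match shape only, not checksums.
PATTERNS = {
    "PAN":     re.compile(r"\b[A-Z]{5}[0-9]{4}[A-Z]\b"),
    "AADHAAR": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}\b"),
    "GSTIN":   re.compile(r"\b\d{2}[A-Z]{5}\d{4}[A-Z][A-Z0-9]Z[A-Z0-9]\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace identifier-shaped strings with placeholders before the
    prompt leaves your machine. Returns the cleaned text plus a list of
    what was caught, so you can review it."""
    found = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            found.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, found

clean, caught = redact("Client GSTIN 27ABCDE1234F1Z5, PAN ABCDE1234F, "
                       "needs help with a late filing notice.")
print(caught)  # ['PAN', 'GSTIN']
print(clean)   # identifiers replaced with [PAN REDACTED] / [GSTIN REDACTED]
```

Run anything client-related through a check like this before it touches a prompt box. Client names and strategy documents still need human judgment — no regex catches those.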
There is also a subtler problem. Even after you opt out, OpenAI retains your data for 30 days for what it calls “abuse monitoring.” This cannot be disabled on consumer plans. Zero data retention — the setting where nothing is stored — is only available on Enterprise contracts. The same Enterprise contracts that most Indian freelancers, CAs and lawyers will never have.
Every category on that list is personal data that India’s Digital Personal Data Protection Act 2023 was written to protect. Every one of them may currently be sitting in OpenAI’s training pipeline because the toggle was never turned off.
In January 2025, the Indian Ministry of Finance issued a formal directive banning the use of ChatGPT and DeepSeek on official government devices. The circular cited concerns that AI applications “could jeopardise sensitive government information.” It went to the Departments of Revenue, Economic Affairs, Expenditure and Financial Services.
That was the government. But private sector India is not far behind — it is just less public about it. A 2026 survey found that 72% of Indian professionals are learning AI independently because structured company training is unavailable. Most Indian organisations do not yet have formal AI governance frameworks — which means employees are using AI tools in a policy vacuum.
The companies monitoring AI tool use are not doing so to penalise their staff. They’re doing it because a data breach caused by an employee pasting client data into a public AI tool is a liability the company carries — not the employee.
Why enterprise AI monitoring is growing in Indian firms

The monitoring reality varies by sector and size. Large IT firms and Big 4 consultancies are the most advanced — several have deployed tools that flag when client data categories are pasted into external AI tools and log AI tool usage for compliance audit trails.
For most Indian SME employees and freelancers, the monitoring risk is lower — but the data risk is higher, because small businesses are less likely to have negotiated enterprise agreements that protect their data. A solo CA pasting client GST data into ChatGPT Free has no corporate IT team to catch the mistake and no data processing agreement with OpenAI to provide legal recourse.
| Platform | Free / Personal default | Business / Enterprise | India-relevant note |
|---|---|---|---|
| ChatGPT Free/Plus | Training ON by default | Training OFF (Enterprise) | No data processing agreement on free tier — DPDPA risk |
| Claude (Anthropic) | Check your settings — see note below | Training OFF (Team/Enterprise) | Oct 2025: Anthropic gave every user an explicit choice. If you opted out then, you’re safe. If unsure: Settings → Privacy → “Help improve Claude.” |
| Gemini (Google) | Training ON by default | Excluded (Workspace) | Uses data to improve Google’s models on personal accounts |
| Sarvam AI | India-hosted, API-based | Enterprise contracts | DPDPA-compliant path available. Data processed in India. |
Sources: Platform privacy policies (OpenAI, Anthropic, Google, Sarvam AI), verified May 2026. Claude note: Anthropic policy update October 2025.
You do not need to stop using AI. You need to use it correctly.
5 steps. 10 minutes. Works for any profession. Free.

1. ChatGPT: Settings → Data Controls → switch OFF “Improve the model for everyone.”
2. Claude: Settings → Privacy → check “Help improve Claude” and opt out if it is on.
3. Gemini: on personal accounts, Google uses your data to improve its models. Review your activity settings before pasting anything sensitive.
4. Redact before you paste: strip client names, PAN, Aadhaar and GST numbers from every prompt (the sketch above automates part of this).
5. Assume 30-day retention: even after opting out, consumer plans keep your data for abuse monitoring. Never paste what you cannot afford to have stored.
MeitY’s guidelines require organisations to: define approved AI tools by name, classify what data categories can be shared with AI systems, and mandate human review before AI outputs are used in regulated domains (legal, financial, medical). These are not suggestions — they are the framework DPDPA enforcement will reference when it comes.
The gap: guidelines exist, implementation does not. 72% of Indian professionals are learning AI on their own because structured company guidance is unavailable. The companies that build governance frameworks now will have a compliance advantage that takes years to replicate.
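For a team starting from zero, the three MeitY requirements translate naturally into a small, machine-checkable policy. A minimal sketch — the field names and the `is_allowed` helper below are hypothetical illustrations, not anything drawn from the guidelines themselves:

```python
# A hypothetical policy structure mirroring the three MeitY requirements:
# named approved tools, classified data categories, and mandatory human
# review for regulated domains. All field names are illustrative.
POLICY = {
    "approved_tools": ["ChatGPT Enterprise", "Claude Team", "Sarvam AI"],
    "data_categories": {
        "public":     {"ai_allowed": True,  "human_review": False},
        "internal":   {"ai_allowed": True,  "human_review": True},
        "client_pii": {"ai_allowed": False, "human_review": True},  # PAN, Aadhaar, GST
        "regulated":  {"ai_allowed": False, "human_review": True},  # legal, financial, medical
    },
}

def is_allowed(tool: str, category: str) -> bool:
    """True only if the tool is on the approved list AND the data
    category is cleared for AI use."""
    rules = POLICY["data_categories"].get(category)
    return tool in POLICY["approved_tools"] and bool(rules and rules["ai_allowed"])

print(is_allowed("ChatGPT Enterprise", "internal"))    # True
print(is_allowed("ChatGPT Enterprise", "client_pii"))  # False: category blocked
print(is_allowed("ChatGPT Free", "public"))            # False: not an approved tool
```

Even a toy table like this forces the two decisions most firms have never made explicitly: which tools are approved by name, and which data categories are off-limits.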
On 6 May 2026, Canada’s Privacy Commissioner and three provincial counterparts published the results of a joint investigation into OpenAI. The verdict: OpenAI violated federal and provincial privacy laws. The investigation found OpenAI collected significant amounts of personal information to train ChatGPT — including sensitive data like health details, political opinions, and information about minors — without obtaining valid consent from the individuals whose data was used.
Why this matters for Indian professionals: The legal principle established in Canada will apply pressure globally. India’s DPDPA creates a similar consent requirement. The ruling validates exactly what this issue covers — the data you paste into AI tools is not simply “processed and forgotten.” It may have built the model you’re using today.
Forward this issue to one colleague. Not because it’s interesting. Because the toggle in their ChatGPT is probably still on. And their client data is probably in OpenAI’s training pipeline right now. One forward is how India AI Brief grows.
AI tools, workflows and what’s actually happening in Indian AI — every Saturday. Free forever. Written by a human, for Indian professionals.