The honest answer is: yes, with some specific caveats that are worth understanding clearly.
AI tools are not surveillance tools. They're not secretly selling your data to competitors. They're not going to get you hacked. But they do handle your inputs in ways that most people haven't thought through — and a few of those ways matter enough to change some habits before you start using them regularly for business.
This article covers what actually happens to what you type, what the current privacy settings are for Claude and ChatGPT, what you should never put into either tool, and two setting changes you should make today if you haven't already. No alarmism. Just the facts as they actually stand in 2026.
What actually happens when you type something into Claude or ChatGPT
When you send a message to either tool, a few things happen:
Your message gets processed on the company's servers to generate a response. That processing happens in real time and involves actual computation — your text is not just stored, it's actively read by the model to generate a reply.
Your conversation gets stored. How long depends on your account type and settings, covered below. But assume that anything you type is stored on their servers for at least 30 days, often longer.
Depending on your settings, your conversation may be used to train future versions of the model. This is the part most people don't know about and the part that changed significantly in late 2025.
Human employees can review conversations in specific circumstances — primarily to investigate safety violations, respond to legal requests, or fix bugs. This isn't routine, but it is possible. Neither company operates a system where no human can ever read your chats.
None of this means your data is being sold, publicly exposed, or handed to competitors. It means your conversations are not completely private in the way a conversation with a lawyer or doctor is legally protected. That's the distinction that matters.
The Claude privacy situation: what changed in 2025 and where things stand now
Anthropic spent years positioning Claude as the privacy-forward AI assistant. That positioning shifted in late 2025, when Anthropic updated its consumer terms: users now choose whether their conversations can be used to improve Claude. Those who opt in have their data retained for up to five years, versus 30-day retention for those who opt out.
The default for new and existing consumer accounts was set to "on," meaning training was enabled unless you actively turned it off. The opt-out toggle was present but easy to miss, set in smaller print below a prominent "Accept" button.
The current situation for Claude users:
These updates apply to users on Claude Free, Pro, and Max plans. If you opt out, you continue with the existing 30-day data retention period. Deleted conversations will not be used for model training under any circumstance.
The critical distinction is between consumer and business accounts. Claude for Work (Team and Enterprise plans) and API usage are explicitly excluded — those plans prohibit data training without exception and provide stronger confidentiality protections.
In plain English: if you're on a free or personal paid Claude plan and haven't changed your settings, your conversations may be used for training and stored for up to five years. If you use Claude for Work (the business plan), your conversations are not used for training by default.
How to check and change your Claude training setting:
Open Claude, go to Settings, open Privacy, then Privacy Settings, and turn off the model training control. This applies to future chats — past conversations cannot be retroactively excluded if training already occurred.
One important note: Incognito chats in Claude are not used to improve the model, even if you have model improvement enabled in your settings. If you need to discuss something sensitive without changing your account settings, use Incognito mode.
The ChatGPT privacy situation
ChatGPT is not private by default. For users on personal accounts (Free, Plus, and Pro), OpenAI uses conversations to train future versions of its models. Business plans offer better privacy — Enterprise and Business tiers exclude data from model training by default.
To opt out of training on a personal account: navigate to Your Profile → Settings → Data Controls → Improve the model for everyone → switch off the toggle. Once you opt out, new conversations will not be used to train the models.
If you turn off Chat History and Training, or use Temporary Chat mode, chats won't be saved in your visible history or used for training, but a copy is retained for up to 30 days for abuse monitoring before being permanently deleted.
The key distinctions for ChatGPT:
For personal plans (Free, Plus, Pro): training is on by default. Turn it off in Settings → Data Controls.
For ChatGPT Team, Enterprise, and API: OpenAI does not use business data from ChatGPT Team, Enterprise, or the API for model training or improvement unless you explicitly opt in.
Temporary Chat mode is the closest thing to a private conversation on personal plans — chats don't appear in history and aren't used for training, though a 30-day retention window for safety monitoring still applies.
The two settings to change right now
If you're using Claude or ChatGPT on a personal or free plan and you haven't changed any settings since signing up, do these two things before your next business conversation:
In Claude: Settings → Privacy → Privacy Settings → Turn off model training
In ChatGPT: Profile → Settings → Data Controls → "Improve the model for everyone" → Toggle off
Both take under two minutes. Both significantly reduce what either company can do with your conversations going forward. They don't retroactively affect past conversations, but they protect everything from this point on.
What you should never type into either tool — regardless of settings
Even with training turned off, your conversations pass through company servers, can be reviewed by employees under certain conditions, and may be retained for safety monitoring. That means some categories of information carry real risk regardless of your privacy settings.
Client personal information. Full names combined with sensitive details, contact information, financial situations, health conditions, or anything your clients shared with you in confidence. If a client data breach occurred because their information was in an AI conversation, the fact that you had training turned off would not be a meaningful defense. The safer habit: refer to clients by first name only or by role ("my retail client," "the founder I'm working with"), and strip identifying details before pasting anything.
Financial credentials and account details. Bank account numbers, payment details, tax identification numbers, your own or anyone else's. There is no legitimate reason to put these in an AI conversation. If you need to discuss a financial situation, describe it in general terms.
Passwords, API keys, and access credentials. These should never leave your password manager. Typing them into any web interface — AI or otherwise — creates unnecessary exposure.
Legally sensitive material. A February 2026 federal court ruling in the Southern District of New York established that prompts and outputs created using a public AI tool are not protected by attorney-client privilege, because the AI platform is a third party whose privacy policy permits using inputs for training and disclosing them to regulatory authorities. If you're dealing with a legal dispute, contract negotiation under NDA, or anything your lawyer has told you to keep confidential — keep it out of consumer AI tools entirely. Use the enterprise plan or don't use AI for that conversation.
Confidential client deliverables under NDA. If your client contract includes confidentiality clauses — common in consulting, design, strategy, and legal work — pasting the client's proprietary information into a consumer AI tool may technically violate that contract even if the tool never exposes the data. Check your agreements. When in doubt, anonymize before pasting.
Regulated data. If your business handles health information (HIPAA in the US), financial data under specific regulations, or you operate in the EU under GDPR, consumer AI tools are not appropriate for processing that data. You need enterprise plans with proper data processing agreements, or you need to keep that data out of AI workflows entirely.
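If you're comfortable with a few lines of code, the "strip identifying details before pasting" habit can be partly automated. The sketch below is purely illustrative: the patterns, labels, and `redact` helper are assumptions for this example, not part of any official tool, and pattern matching catches formats, not meaning, so names and context-specific details still need a manual pass.

```python
import re

# Illustrative patterns only; real-world identifiers come in many more formats,
# and a human should still review anything before it is pasted into an AI tool.
# Order matters: the SSN/TIN pattern must run before the broader phone pattern,
# or a tax ID would be labeled as a phone number.
PATTERNS = [
    ("EMAIL", re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")),
    ("SSN/TIN", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),
    ("PHONE", re.compile(r"\+?\d[\d\s().-]{8,}\d")),
]

def redact(text: str) -> str:
    """Replace obvious identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS:
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach Dana at dana@client.com or 555-867-5309."))
# -> Reach Dana at [EMAIL] or [PHONE].
```

Notice that "Dana" survives the pass: a script like this handles the mechanical formats (emails, phone numbers, tax IDs), while the first-name-only and role-based habits described above handle the rest.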
What's actually fine to use AI for — which is most things
The categories above cover a real but limited subset of what most solo founders actually use AI for. The vast majority of typical use cases carry no meaningful privacy risk:
Drafting your own outreach emails, proposals, and follow-ups — where the content is about your services and your pitch, not private client data.
Improving your own writing, summarizing your own notes, preparing for meetings using publicly available information about the other party.
Building templates, SOPs, and internal documents that don't contain client-specific sensitive data.
Research, brainstorming, content creation, pricing analysis — anything involving your business's approach to the market rather than your clients' private information.
Using context about your business, your process, and your voice to get calibrated outputs — none of which is sensitive data.
For most solo founders, the daily AI workflow involves exactly this kind of work. The privacy considerations are real but narrow: they apply specifically to client data, legal matters, financial credentials, and regulated industries. For everything else, the tools are fine to use on a personal plan with training turned off.
If your business handles genuinely sensitive data regularly
If your work involves regular access to the categories above — if you're a consultant who regularly works with clients' confidential financials, a designer working under strict NDAs, a healthcare-adjacent service, or a legal professional — the right answer is not to avoid AI. It's to use the right tier.
Claude for Work and ChatGPT Team/Enterprise provide contractual protections against training on your data, stronger confidentiality commitments, and data processing agreements that personal plans don't offer. Both are $25–30/month per user — a straightforward cost if confidentiality is a regular requirement of your work.
The simple decision rule: if you regularly handle information that would create a problem if it appeared in a future AI training dataset — use a business plan. If you mostly handle your own business context and anonymized versions of client work — a personal plan with training turned off is fine.
The bottom line
AI tools are safe for typical solo founder business use, with two conditions: change the training settings on whatever tools you use, and don't paste confidential client data, credentials, or legally sensitive material into consumer-tier tools.
The risks that are real are specific and avoidable. The tools themselves are not surveillance systems, data brokers, or inherently unsafe. They're cloud services with data handling policies that you should understand and configure before relying on them for business work.
Do this today: Check your training settings in Claude and ChatGPT using the steps above. It takes two minutes total, and that single action covers the main privacy risk for most solo founder use cases.
Next in Navigate the Friction: 7 Beginner AI Mistakes Solo Founders Make (And the Simple Fixes) →
Or step back to: AI Gave Me a Bad Output. Now What? →
Or go back to the main article: ← Navigate the Friction