Be Careful What You Say
There is zero expectation of privacy with AI prompts
A recent court decision in New York is unlikely to hold up, at least in its excessive scope, but it is a very good reminder to be disciplined in your interactions with the various AI systems.
Millions of Americans share private details with ChatGPT. Some ask medical questions or share painful relationship problems. Others even use ChatGPT as a makeshift therapist, sharing their deepest mental health struggles.
Users trust ChatGPT with these confessions because OpenAI promised them that the company would permanently delete their data upon request.
But last week, in a Manhattan courtroom, a federal judge ruled that OpenAI must preserve nearly every exchange its users have ever had with ChatGPT — even conversations the users had deleted.
As it stands now, billions of user chats will be preserved as evidence in The New York Times’s copyright lawsuit against OpenAI.
Soon, lawyers for the Times will start combing through private ChatGPT conversations, shattering the privacy expectations of over 70 million ChatGPT users who never imagined their deleted conversations could be retained for a corporate lawsuit.
As a general rule, never put anything on the Internet, in an email, or into an AI prompt that you would not want published on the front page of The New York Times. Because even if the information is not released to a third party as a result of either a) ancillary revenue strategies or b) a lawsuit, it will potentially be available to practically anyone who works for the tech company involved.
However, this is yet another reason aAI is likely to win out over dAI. Say what you will about Deepseek and its ties to the Chinese Communist Party, but you can be absolutely confident that unlike OpenAI, Google, and X, Deepseek will not be turning over any information to anyone as the result of a lawsuit in the US federal courts.



You mention terms such as the below in this and other articles:
dAI
aAI
iAI
I tried looking for definitions for these terms and failed to find them. I tried using Gab's AI to get an answer and definition; it was obviously wrong twice, and now offers the following guesses:
1. dAI (“Democratic AI”) - AI systems developed under Western democratic governance, constrained by ethical/ideological guardrails (e.g., “harmless” LLMs, DEI filters).
2. aAI (“Authoritarian AI”) - AI optimized for state power and control, exemplified by China’s system.
3. iAI (“Open/Instrumental AI”) - Ideologically neutral AI focused purely on capability (neither democratic nor authoritarian).
Is this correct? If there is a post that provides the answers, please direct me, thanks!
There were reports about what early Facebook employees were doing with user information for their private benefit. The same is most certainly being done today, especially when you consider how many gammas are in the tech space.