
ChatGPT and other AI chatbots have become remarkably capable conversational partners, able to engage on nearly any topic. However, their ability to simulate human interaction can make users feel overly comfortable and lead them to share sensitive personal information. That sense of safety is misleading: anything shared with an AI chatbot is stored on the company's servers, where it poses potential privacy risks.

The primary concern stems from how the companies running large language models (LLMs) use your data. These platforms feed user interactions back into their AI systems, effectively "training" them on whatever is provided. A comparison can be drawn to Terminator 2: Judgment Day, where John Connor teaches the Terminator catchphrases to make it seem more human. Chatbots "learn" from the data they collect in much the same way, but instead of amusing one-liners, they may absorb sensitive personal details and expose users to risk.

For example, OpenAI’s terms of service explicitly state that it may use user data to improve its models. Unless users disable chat history in the chatbot’s privacy settings, everything shared, from passwords to uploaded files, can be used for training. Even anonymized data can pose risks if accessed improperly. A May 2023 data breach underscored the danger: hackers exploited ChatGPT’s infrastructure and leaked sensitive information, including Social Security numbers and email addresses, affecting more than 100,000 users.

Businesses face similar risks. Samsung banned employee use of AI chatbots after engineers inadvertently uploaded proprietary code to ChatGPT, and other corporate giants, including Bank of America and JPMorgan, have imposed similar restrictions to protect their data.

On a broader scale, awareness of AI privacy issues is growing. U.S. President Joe Biden’s Executive Order on AI, issued in October 2023, emphasizes the need for privacy and data protection. However, the absence of definitive laws governing AI training practices leaves consumers vulnerable: companies frequently argue that training on publicly collected data qualifies as "fair use," creating a gray area for data security.

Until stronger regulations emerge, users must protect themselves by limiting the information shared with AI chatbots. Treating these systems as algorithms, not confidants, is key to safeguarding personal information, regardless of how engaging or supportive they may seem.
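
For developers who build tools on top of chatbot APIs, the same advice can be applied in code by stripping obvious identifiers before a prompt ever leaves the machine. The snippet below is a minimal sketch in Python: the regular expressions and the redact_sensitive function are illustrative assumptions, not part of any vendor's SDK, and a production system would rely on a dedicated PII-detection library rather than a handful of patterns.

```python
import re

# Illustrative patterns only: regexes miss many formats and contexts,
# so real PII detection should use a purpose-built library or service.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact_sensitive(text: str) -> str:
    """Replace obvious identifiers with placeholder tags before the text
    is sent to any third-party chatbot or LLM API."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = (
        "My SSN is 123-45-6789 and you can reach me at jane.doe@example.com "
        "or 555-867-5309. Can you draft a complaint letter for me?"
    )
    # Only the redacted version should ever be passed to the chatbot.
    print(redact_sensitive(prompt))
```

Redacting locally, before the request is made, means the sensitive values never reach the provider's servers at all, regardless of how its retention or training policies change later.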