The brand-new Chinese generative AI app, DeepSeek, shot to the top of the App Store charts just days after launching this January. But despite its growing popularity, serious concerns have emerged about its security and privacy practices.
A report from NowSecure, a mobile security firm based in Chicago, has exposed major flaws in DeepSeek’s iOS app. According to its analysis, the app collects and transmits a worrying amount of iPhone user data directly to servers in China.
NowSecure also found that DeepSeek relies on outdated encryption, specifically 3DES (Triple DES), a cipher that was officially deprecated back in 2016 because of known weaknesses. So even where the app does encrypt data, the protection falls short of modern standards, making it far easier for bad actors to exploit user data.
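For comparison, here is a minimal sketch (not taken from DeepSeek's code) of what current, standards-compliant encryption looks like on an iOS device: Apple's own CryptoKit framework provides AES-GCM, an authenticated cipher, in a handful of lines.

```swift
import Foundation
import CryptoKit

// Illustrative only: modern authenticated encryption with Apple's CryptoKit,
// the kind of primitive you would expect in place of deprecated 3DES.
let key = SymmetricKey(size: .bits256)        // fresh 256-bit key
let payload = Data("user query".utf8)         // placeholder data

do {
    // Sealing bundles the nonce, ciphertext, and authentication tag together.
    let sealedBox = try AES.GCM.seal(payload, using: key)

    // Opening verifies integrity and decrypts in a single step.
    let decrypted = try AES.GCM.open(sealedBox, using: key)
    assert(decrypted == payload)
    print("Sealed message is \(sealedBox.combined?.count ?? 0) bytes")
} catch {
    print("Encryption failed: \(error)")
}
```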

Apple’s App Transport Security (ATS) is designed to enforce encrypted data transmission for iOS apps, ensuring user data remains secure. However, NowSecure discovered that DeepSeek has deliberately disabled ATS in its app.
This means that instead of securing user data, the app can send it over unprotected channels, exposing it to potential interception. (Separately, DeepSeek's models are reported to run on Huawei chips, reducing the company's reliance on technology from outside its native China.)
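As for what "disabling ATS" actually involves: ATS is not a runtime switch but a policy set in an app's Info.plist. The sketch below, which uses a hypothetical endpoint, shows how the default policy treats a plaintext request that an opted-out app would happily send.

```swift
import Foundation

// Illustrative only: with ATS left at its defaults, a plaintext HTTP request like
// this one is rejected by the system before it ever leaves the device. An app can
// only send such traffic by opting out in its Info.plist, e.g. setting
// NSAppTransportSecurity -> NSAllowsArbitraryLoads to true -- which is what
// "disabling ATS" means in practice.
let insecureURL = URL(string: "http://api.example.com/telemetry")!   // hypothetical endpoint

let task = URLSession.shared.dataTask(with: insecureURL) { _, _, error in
    if let error {
        // Under the default policy this fails with an error along the lines of
        // "App Transport Security policy requires the use of a secure connection."
        print("Blocked by ATS: \(error.localizedDescription)")
    }
}
task.resume()
```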
While some of the collected data might seem harmless on its own, security experts warn that attackers can use this information to de-anonymise users. The report highlights that combining multiple data points over time makes it easy to identify individuals.
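To illustrate the idea, the sketch below (the fields chosen are assumptions for the example, not drawn from NowSecure's findings) shows how a few individually generic values combine into a fairly stable fingerprint that can be matched across sessions and services.

```swift
import Foundation
import CryptoKit

// Hypothetical sketch: a handful of individually "harmless" signals, concatenated
// and hashed, yield a device fingerprint that stays constant from one session to
// the next and can be correlated with other data sources.
let signals = [
    ProcessInfo.processInfo.operatingSystemVersionString,   // e.g. "Version 18.3 (Build ...)"
    Locale.current.identifier,                               // e.g. "en_GB"
    TimeZone.current.identifier,                             // e.g. "Europe/London"
    Locale.preferredLanguages.joined(separator: ",")         // e.g. "en-GB,fr-FR"
]

let digest = SHA256.hash(data: Data(signals.joined(separator: "|").utf8))
let fingerprint = digest.map { String(format: "%02x", $0) }.joined()

print("Device fingerprint: \(fingerprint)")
```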
In an era where data privacy is more important than ever, using an app that deliberately bypasses security measures and transmits sensitive data unencrypted is a serious red flag. Even when using an AI chatbot you trust, never share personal information, as it can be used to identify you.
Remember not to download apps you aren't sure are legitimate, and take the time to learn how to stay safe online.