
DeepSeek AI Under Fire: Privacy Risks and Data Leaks Exposed

DeepSeek, the AI chatbot competing with Gemini, ChatGPT, and Copilot, is now at the center of a major privacy controversy. Reports suggest that the iOS version of the DeepSeek app may be sending unencrypted user data to servers controlled by ByteDance, the parent company of TikTok, raising serious security concerns. Cybersecurity experts warn that this unprotected transmission could be intercepted by malicious actors, exposing users to potential threats.
DeepSeek’s Privacy Risks: Unencrypted Data Exposed
According to NowSecure, a leading cybersecurity firm specializing in mobile app security, the DeepSeek iOS app sends sensitive user data over unprotected channels. This means that anyone with access to network traffic monitoring tools could read that data in transit, in real time. Worse yet, this weakness makes it easier for hackers and cybercriminals to intercept and exploit personal information.
Apple’s Security Protocol Bypassed
Apple enables App Transport Security (ATS) by default and strongly recommends that developers keep it on to protect data in transit. However, cybersecurity analysts found that DeepSeek has disabled ATS, leaving user data exposed to security breaches. Even where data is encrypted in transit using transport layer security (TLS), once it is decrypted on the server side it can be accessed and correlated with other collected data points, potentially identifying individual users and compromising their privacy.
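For context, ATS is Apple's platform-level policy (on by default since iOS 9) that forces apps to use HTTPS for network connections; an app typically opts out by setting the NSAllowsArbitraryLoads key under NSAppTransportSecurity in its Info.plist. The sketch below is purely illustrative, assumes the code runs inside an iOS app where ATS applies, and uses a hypothetical endpoint URL; it is not taken from DeepSeek's code. It shows the practical difference: with ATS on, a plain http:// request is rejected before it leaves the device, while with ATS off the same request goes out in cleartext.

```swift
import Foundation

// Minimal sketch (assumes it runs inside an iOS app, where ATS applies).
// The endpoint below is hypothetical and used purely for illustration.
guard let url = URL(string: "http://example.com/api/telemetry") else {
    fatalError("invalid URL")
}

let task = URLSession.shared.dataTask(with: url) { data, _, error in
    if let error = error as NSError? {
        // Under the default ATS policy, a plain-HTTP request is rejected
        // before any bytes leave the device, failing with error code -1022
        // (NSURLErrorAppTransportSecurityRequiresSecureConnection).
        print("Blocked or failed: \(error.code): \(error.localizedDescription)")
    } else if let data = data {
        // With ATS disabled (NSAllowsArbitraryLoads set in Info.plist),
        // the request and response travel in cleartext and can be read by
        // anyone positioned on the network path.
        print("Received \(data.count) bytes over an unencrypted connection")
    }
}
task.resume()
```

If the analysts' finding holds, re-enabling ATS, or at least scoping any exceptions to specific domains via NSExceptionDomains, would force such requests onto TLS; disabling it app-wide removes that safeguard for every connection the app makes.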
Government Crackdowns and Growing Security Concerns
DeepSeek’s data handling practices have already drawn scrutiny from multiple governments. South Korea, Australia, and Taiwan have all raised alarms over the app’s potential national security risks. The National Intelligence Service (NIS) of South Korea recently warned that DeepSeek is excessively collecting personal data and storing it on Chinese servers, where it could be accessed under China’s data laws.
Manipulating Information? AI Bias in DeepSeek’s Responses
In addition to privacy concerns, DeepSeek has also been accused of providing politically biased responses. Reports indicate that the chatbot tailors its answers based on the language used. For example, when asked in Korean about the origin of kimchi, DeepSeek states that it is a Korean dish. However, when asked the same question in Chinese, the chatbot claims kimchi originated in China. These inconsistencies have sparked heated debates between South Korean and Chinese social media users.
Additionally, DeepSeek has been accused of censoring discussions about sensitive political events. When users inquire about the 1989 Tiananmen Square crackdown, the AI allegedly redirects the conversation, stating, “Let’s talk about something else.” This kind of behavior raises concerns over potential censorship and misinformation.
What’s Next? DeepSeek’s Response and Industry Implications
As security concerns grow, some South Korean government ministries have already blocked access to DeepSeek. However, the company has yet to issue an official response addressing these privacy and security allegations.
Meanwhile, China’s Foreign Ministry has defended the country’s data practices, stating on February 6 that the Chinese government protects data privacy and security in accordance with the law.
Should You Stop Using DeepSeek?
With mounting concerns over privacy breaches, unencrypted data transmission, and AI censorship, users should think twice before engaging with DeepSeek. If data security is a priority, opting for AI platforms with transparent privacy policies and robust encryption measures is essential.
As AI continues to shape the future, ensuring digital security and ethical AI development will be critical. Users, regulators, and tech companies must work together to create a safer and more accountable AI ecosystem.