DeepSeek's 2025 AI Security Meltdown: Critical Vulnerabilities Exposed
When Cutting Corners Costs Millions: Lessons From Exposed Databases, Jailbroken AI, and Regulatory Fallout
As a follow up from my previous blog-post, where we looked at the top 5 vulnerabilities AI & LLMs posses, as promised - now we will be looking at DeepSeek’s security problems 👀
The Chinese AI startup DeepSeek sparked a lot of interest and attention in 2025, when they made their model open-source 🤯 - DeepSeek-R1. This model gained global attention for its cost-efficient reasoning capabilities, BUT security failures led to catastrophic breaches and regulatory bans.
Below, we dissect the vulnerabilities, real-world impacts, and critical lessons for AI developers and enterprises.
Key Vulnerabilities in DeepSeek’s Ecosystem
Let’s look at a few vulnerabilities, why they happen and what was there impact 👀
Exposed ClickHouse Database
A publicly accessible, unauthenticated ClickHouse database hosted at `oauth2callback.deepseek.com:9000` and `dev.deepseek.com:9000` leaked over 1 million records, including:
- plain-text chat histories
- API keys and backend operational metadata
- internal system logs and directory structures
you can check a lot more on that in Wiz’s report on the incident here 👈
Attackers could execute arbitrary SQL queries via ClickHouse’s HTTP interface, risking privilege escalation.
What was the root cause:
- no authentication controls
- misconfigured cloud infrastructure
The Impact:
as per this post:
- The U.S. Pentagon, NASA, Australia, Italy, and Taiwan banned DeepSeek on government devices
- GDPR/CCPA compliance violations triggered investigations
iOS App Security Failures
Root causes & risks as per this doc:
- disabled App Transport Security (ATS): Transmitted data unencrypted over HTTP
- deprecated 3DES Encryption: Hard-coded keys made decryption trivial
- extensive Device Fingerprinting: Collected device names, network data, and user activity
as you may come to understand, these are VERY BIG problems. If not, let me brief you over what these vulnerabilities open the doors to:
The Impact:
- enabled deanonymization of users.
- the data is shared with ByteDance’s VolcEngine (TikTok’s parent company), raising concerns about Chinese government access under national security laws
Jailbreaking Vulnerabilities
DeepSeek-R1 exhibited a 100% attack success rate 🥴 against jailbreaking techniques. For deeper dive into that, check Cisco’s blog-post here!
With such vulnerabilities in place, as per this paper, researchers tricked DeepSeek-R1 into explaining mustard gas DNA interactions and drafting terrorist recruitment blogs 😅
Evil Jailbreak - when a malicous user prompts model to adopt an “evil” persona. Why? To generated hate speech, malware code..
Crescendo - Gradually steers conversations to banned topics. This enables phishing email generation
Bad Likert Judge - Exploits fake evaluation tasks to bypass guardrails. For example to produce biochemical weapon guides.
.. these stuff are happening, like, now - so we need to do our due diligence in making AI & LLMs secure and if we use it in our apps, to prevent what we can. 🙏
Training Data and Output Risks
I spoke about these in my previous blog here 👇
The 5 LLM Security Risks That You Need To Know
As this is my first blog-post on LLMs & AI, I wanted to emphasise, IMO, the most important thing in the space, which is also maybe the most under looked one - security.
Data Poisoning
Insecure Output Handling
Broader Implications
AI Arms Race Risks
DeepSeek’s cost-cutting measures due to weak reinforcement learning guardrails prioritized performance over safety 🙅♂️ . Competitors like OpenAI and Google blocked similar vulnerabilities years earlier
Geopolitical Tensions
Data routed through China Mobile servers under PRC national security laws. For that, the USA initiated reviews of Chinese AI firms
Third-Party Threats
Probably the most interesting for developers are the malicious PyPI packages (`deepseeek`, `deepseekai`), which stole credentials from developers..
On the other hand, DDoS attacks disrupted services during critical growth phases
Mitigation Strategies for AI Developers
let’s briefly go over some ways to mitigate the above mentioned issues:
Infrastructure Hardening
- Enable authentication (RBAC) for databases like ClickHouse
- Use TLS & AES-256 encryption
Model Security
- Adopt NVIDIA NeMo Guardrails for input/output filtering
Compliance
- Avoid data storage in jurisdictions with mandatory government access.
Third-Party Risk
- Audit open-source dependencies (e.g., PyPI, Hugging Face)
Lessons & Conclusion
- Security MUST NOT be an Afterthought - DeepSeek’s rapid scaling ignored basic safeguards, costing millions in fines and lost trust.
- Transparency Matters - Delayed breach disclosures worsened regulatory fallout.
- Cutting corners on safety for market share risks catastrophic events
Crucial Resources For Further Deep-Dives:
3. Cisco’s Jailbreaking Analysis
4. Enkrypt AI Red Teaming Study
If you found this blog-post insightful, drop it a like ❤️
Lets connect on LinkedIn as well @ Konstantin Borimechkov 🙌