What You Need to Know: Digging Deep into DeepSeek

June 3, 2025
DeepSeek-R1, an advanced AI reasoning model, has gained significant attention for its cost-effectiveness and capabilities. Despite its promise and lower development costs, security leaders must carefully weigh the risks against the benefits.
IANS Faculty

Is DeepSeek really cheaper, faster, and more secure? This question is at the forefront of any security leader’s mind when considering the adoption of DeepSeek-R1, the advanced AI reasoning model developed by Chinese start-up DeepSeek.

As with any new technology, security leaders must carefully weigh the risks of adopting innovative tools against the potential benefits they could bring to their environment. DeepSeek-R1 focuses primarily on complex problem-solving tasks and uses techniques such as reinforcement learning to produce accurate outputs across various domains. It is released as an open-weight model, making its model weights accessible to the public. However, unlike open-source software, open AI models cannot be effectively audited for vulnerabilities, backdoors, or alignment choices that introduce specific biases. Open AI models are just data, not code.

While the premise of DeepSeek-R1 sounds promising, several factors associated with the model require closer examination. DeepSeek’s origins raise data privacy concerns among information security leaders, and its cost-savings and capability claims have also raised eyebrows.


How Does DeepSeek Work?

DeepSeek burst onto the scene in January, claiming its latest model cost less than $6 million to develop. Compare that to the $100 million or more American competitors have reported spending on their models, and one must wonder how that is possible. DeepSeek quickly became the most downloaded app in the U.S., surpassing ChatGPT. OpenAI has also speculated that DeepSeek harvested data from OpenAI’s models to build R1, which, if true, would exaggerate R1’s cost savings and capabilities.

DeepSeek challenges the accepted idea that AI companies need leading-edge computer chips to train the best systems. Many AI companies have justified spending billions on advanced chips on the premise that greater computing power is needed for sophisticated large language models to function.

 

Find Guidance: Tips to Build an AI Governance Team

 

Early adopters describe R1’s writing and problem-solving skills as impressive, but some have noted that the model performed worse on specific types of problem solving. IANS Faculty Jake Williams reports that R1’s performance is “actually really good for code completion, and it’s great for code comprehension, based on limited testing.”

The DeepSeek app is free, and its R1 model is now being widely tested and used across North America. R1 has fewer guardrails than competitors such as ChatGPT, meaning it can be more easily jailbroken and used for malicious purposes. Organizations are now defining policies around its use, much as they did with ChatGPT and Microsoft Copilot. Still, security leaders hope models such as R1 will spur lower-cost AI usage worldwide.

 

READ MORE: AI Governance: Tech Problems Without Tech Solutions

DeepSeek Security: Data Privacy Best Practices

Security leaders must be aware that DeepSeek poses data privacy concerns due to its collection of sensitive user data, including chat history, keystroke patterns, and IP addresses, all of which are stored on servers in China.

Not only does the lower investment worry industry watchers; DeepSeek’s origins also raise concerns. Storing data in China creates risks of data exposure and non-compliance with cybersecurity standards. Combine this with DeepSeek’s jailbreaking vulnerabilities, in which malicious actors can manipulate the model to generate harmful content such as malware or phishing templates, and security leaders must move forward with caution. DeepSeek’s open release could also introduce vulnerabilities, with potential for poisoned training data or backdoors.

While using DeepSeek’s API does introduce data privacy issues, those risks are mitigated by running the model locally or on dedicated infrastructure. Data submitted to a local copy of the model is never sent back to DeepSeek, reducing the risk of the Chinese government accessing the data submitted and generated.
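A local deployment can be sketched as follows. This minimal example assumes a locally running Ollama server on its default port (http://localhost:11434) with the `deepseek-r1` model already pulled; the runtime, endpoint, and model tag are assumptions about one common local setup, not something the article prescribes:

```python
import json
import urllib.request

# Default local Ollama endpoint; no traffic leaves this machine.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(prompt: str, model: str = "deepseek-r1") -> dict:
    """Build the JSON payload for a single, non-streaming generation call."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate_local(prompt: str, model: str = "deepseek-r1") -> str:
    """Send a prompt to the local model and return its text response."""
    payload = json.dumps(build_request(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the prompt and completion only travel over localhost, this pattern keeps submitted data off DeepSeek’s servers entirely.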

Security leaders looking into adopting DeepSeek-R1 should proceed with caution, following a few best practices before diving in.

  • Before deploying DeepSeek, conduct a comprehensive risk assessment considering potential data privacy concerns and security vulnerabilities.
  • Consider downloading open-parameter models like DeepSeek-R1 and making them available for internal use; this facilitates experimentation while avoiding the data-exposure risks of the hosted service.
  • Understand the differences between open-source AI models and open-source software because there is very limited visibility into open-source AI models, which are data and not code.
  • Test whether any biases present are harmful for your use case; bias is a concern in any model, but especially in a foreign-developed one.
  • While adopting DeepSeek, limit the amount of sensitive data sent to the model and implement strong data anonymization techniques where possible.
  • Actively monitor DeepSeek outputs for signs of malicious activity, specifically generated content that could be used for phishing and malware distribution.
  • Evaluate other AI language models with stronger security features and data privacy practices.
  • Expect the fear, uncertainty, and doubt (FUD) that Western organizations and governments attach to Chinese-developed models.
  • Separate the real risks from the hype surrounding AI and the companies developing innovative apps.
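Two of the practices above, anonymizing sensitive data before it reaches the model and monitoring outputs for signs of malicious content, can be sketched with simple pattern matching. The patterns and phrases below are illustrative placeholders for this example, not a vetted production filter:

```python
import re

# Illustrative PII patterns; a real deployment would extend these.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}


def redact(text: str) -> str:
    """Replace recognizable PII with placeholder tokens before sending a prompt."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


# Example phrases that might warrant review (phishing lures, tooling names).
SUSPICIOUS = re.compile(r"(?i)\b(powershell -enc|invoke-mimikatz|verify your account)\b")


def flag_output(text: str) -> bool:
    """Return True if model output contains phrases worth a manual review."""
    return bool(SUSPICIOUS.search(text))
```

Redaction runs on prompts before they leave your control; flagging runs on completions before they reach users, routing hits to a human reviewer rather than blocking outright.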

 

DOWNLOAD THE CHECKLIST: AI Vendor Questionnaire

 

DeepSeek-R1 offers compelling cost savings and capabilities, but security leaders must approach its adoption with a critical eye. By conducting comprehensive risk assessments, deploying locally, and actively monitoring the model’s outputs, security leaders can harness the benefits of DeepSeek-R1 while mitigating its associated risks. The model’s origins and potential vulnerabilities, such as data exposure, require a cautious approach and thorough evaluation. A common-sense balance of innovation and heightened security practices will help ensure that DeepSeek doesn’t compromise your organization’s data privacy or system safety.

Gain Access to Leadership Data: Take the 2025-2026 CISO Comp & Budget Survey

Take our CISO Comp and Budget Survey in less than 10 minutes and receive career-defining data and other valuable insights.

Although reasonable efforts will be made to ensure the completeness and accuracy of the information contained in our blog posts, no liability can be accepted by IANS or our Faculty members for the results of any actions taken by individuals or firms in connection with such information, opinions, or advice.


Subscribe to IANS Blog

Receive a wealth of trending cyber tips and how-tos delivered weekly, directly to your inbox.