Potential Risks of Using the DeepSeek AI Assistant


Cybersecurity Advisory

Published: January 31, 2025

Key Concerns

There is strong evidence to suggest that DeepSeek, along with its AI models, applications, and data collection processes, operates under significant influence and oversight from the Chinese government. Key concerns include:

  • The DeepSeek AI Assistant app is likely designed to produce responses that align with the strategic narratives and objectives of the Chinese Communist Party (CCP).

  • The app collects personal user data, including device details and user inputs, storing this information on servers located in China.

To mitigate potential risks, organisations, particularly those handling sensitive or critical information, should:

  • Evaluate restrictions on the use of DeepSeek applications on enterprise devices (an illustrative detection sketch follows this list).

  • Educate employees about privacy concerns and security risks associated with the app.

  • Implement policies regarding the use of generative AI applications to prevent unauthorised data exposure.

  • Include training on responsible AI usage as part of cybersecurity awareness programs.
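
To support the first step above, a minimal Python sketch is shown below for checking whether DeepSeek traffic is already present on the enterprise network. It assumes an exported web-proxy or DNS query log with one whitespace-separated record per line and the queried hostname in the third field; the indicator list, log layout, and field position are illustrative assumptions rather than a definitive detection rule.

    # Illustrative sketch only: scan an exported proxy/DNS log for hostnames
    # associated with the DeepSeek service. The indicator list and the log
    # layout (hostname in the third whitespace-separated field) are assumptions
    # that must be adapted to the organisation's own logging pipeline.
    import sys

    SUSPECT_DOMAINS = ("deepseek.com",)  # assumed indicator list; extend as required

    def is_deepseek_host(hostname: str) -> bool:
        # True if the hostname equals, or is a subdomain of, a suspect domain.
        hostname = hostname.lower().rstrip(".")
        return any(hostname == d or hostname.endswith("." + d) for d in SUSPECT_DOMAINS)

    def scan_log(path: str) -> None:
        # Print every log line whose queried hostname matches the indicator list.
        with open(path, encoding="utf-8", errors="replace") as log:
            for line in log:
                fields = line.split()
                if len(fields) >= 3 and is_deepseek_host(fields[2]):
                    print(line.rstrip())

    if __name__ == "__main__":
        scan_log(sys.argv[1] if len(sys.argv) > 1 else "proxy.log")

Matches can feed an existing SIEM alert or simply confirm whether the app is in use before a formal restriction is rolled out.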

Background

On January 10, 2025, Chinese AI developer DeepSeek launched a free AI Assistant app for iOS and Android. The app quickly became one of the most downloaded apps globally, surpassing well-established AI tools such as ChatGPT in app store download rankings.

Government officials from multiple countries have raised security and privacy concerns regarding the DeepSeek AI Assistant.

  • Australian officials have cautioned users about data security risks.

  • The UK government acknowledged concerns over potential censorship within the app.

  • In the United States, the National Security Council is reviewing the security implications, while certain government agencies have restricted use of the app on official devices.

Security Assessment

DeepSeek and its applications are subject to CCP influence, largely because Chinese national security and intelligence laws require companies to cooperate with state intelligence agencies. This influence extends to:

  • AI model development, regulatory compliance, and government oversight.

  • Potential state access to private-sector data.

  • The risk of remote access to user devices.

Unlike many other China-based applications, DeepSeek explicitly states that its services are governed by the laws of mainland China, which raises additional concerns about data privacy and security.

Censorship and Biased Outputs

DeepSeek’s AI model appears to be aligned with CCP-approved narratives. Examples include:

  • Refusing to discuss politically sensitive topics, such as the 1989 Tiananmen Square incident, while offering detailed responses on comparable issues concerning other countries, such as the January 6, 2021 Capitol riot in the United States.

  • Describing Taiwan’s territorial status in a manner consistent with official Chinese government positions.

User Data Collection and Storage

The DeepSeek AI Assistant app collects and stores user data in China, including:

  • Detailed device and network information.

  • User interactions and inputted data.

  • Data that may be retained indefinitely under Chinese data laws, even after the app is deleted.

Recommendations

Organisations handling critical infrastructure, commercial IP, or personal data should consider restricting access to the DeepSeek AI Assistant app. Additional steps to protect data integrity include:

  • Providing employees with guidance on the risks associated with using DeepSeek and similar applications.

  • Establishing clear policies on generative AI usage within the organisation.

  • Prohibiting the input of sensitive, proprietary, or personally identifiable information into AI tools (an illustrative filter sketch follows this list).

  • Training staff on how to validate AI-generated content before relying on it for decision-making.
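
As a complement to the policies above, some organisations place a lightweight check in front of any gateway that forwards prompts to external AI services. The following Python sketch illustrates a minimal pre-submission filter; the patterns and the check_prompt helper are hypothetical examples, not a substitute for a proper data loss prevention (DLP) control.

    # Illustrative sketch only: flag prompts containing likely sensitive markers
    # before they are sent to any external generative-AI service. The patterns
    # below are assumed examples; real rules should come from the organisation's
    # own DLP policy.
    import re

    BLOCK_PATTERNS = {
        "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "classification marker": re.compile(r"\b(confidential|proprietary)\b", re.IGNORECASE),
    }

    def check_prompt(prompt: str) -> list[str]:
        # Return the names of any blocked patterns found in the prompt.
        return [name for name, pattern in BLOCK_PATTERNS.items() if pattern.search(prompt)]

    if __name__ == "__main__":
        sample = "Summarise this CONFIDENTIAL report for jane.doe@example.com"
        hits = check_prompt(sample)
        print("Blocked:" if hits else "Allowed", ", ".join(hits))

Prompts that trigger a match can be rejected outright or routed for manual review before any data leaves the network.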

By implementing these measures, organisations can reduce exposure to potential privacy risks and security threats associated with unregulated AI applications.
