Risks of downloading or entering data into the DeepSeek AI Assistant application
Published by Cyber Intelligence on 31 January 2025
Key Points
- We assess it is almost certain that DeepSeek, the models and apps it creates, and the user data it collects, are subject to direction and control by the Chinese government.
- We assess with high confidence that the DeepSeek AI Assistant app:
  - Produces biased outputs that align with Chinese Communist Party (CCP) strategic objectives and narratives.
  - Collects personal information from users’ devices, as well as the prompts users enter, and stores this information in China.
- We recommend that all organisations, especially critical infrastructure organisations, government departments and agencies, and organisations storing or processing commercially sensitive or personal information, should:
  - Strongly consider restricting access to DeepSeek applications on enterprise devices.
  - Consider advising staff members about the privacy and other risks of downloading and using DeepSeek AI Assistant.
- CyberCX continues to recommend that all organisations have a policy on appropriate use of all generative AI applications.
  - This policy should prohibit entering proprietary or other sensitive data into any generative AI application that sends data outside of a controlled environment.
- We also recommend customers include training on appropriate generative AI use as part of standard staff cyber awareness training modules.
Background
- On 10 January 2025, DeepSeek, a Chinese AI company that develops generative AI models, released a free ‘AI Assistant’ app for iPhone and Android. At the time of writing, it is the most downloaded app globally on the iOS App Store and Google Play, surpassing ChatGPT.
- Multiple Five Eyes government officials have expressed concerns about the security and privacy risks posed by the DeepSeek AI Assistant app.
  - Australia’s Treasurer urged Australians to “be cautious” about the app, while the Minister for Industry and Science suggested Australians “have to be careful”, noting questions about its approach to “privacy and data management.”
  - While noting that the UK government hasn’t “had the time to fully understand” the app, the UK’s Technology Secretary observed that “this is a Chinese model that … has censorship built into it.”
  - In the US, the White House Press Secretary said that the National Security Council will assess the security implications of DeepSeek, while certain government departments and agencies have directed personnel not to use the app on security grounds.[1]
Assessment
- It is almost certain that DeepSeek, the models and apps it creates, and the data it collects, are subject to direction and control by the CCP.
  - China’s National Intelligence Law requires all private sector organisations and citizens to “support, assist and cooperate” with intelligence agencies. This may extend to influencing technology design and standards, accessing data held in the private sector, and exploiting any remote access that Chinese companies have to user devices.
  - Even outside of legal requirements, there is increasing collaboration between China’s private and research sectors and its intelligence apparatus, including in relation to malicious cyber and foreign interference activities.[2]
  - Unlike other applications associated with China, such as TikTok, which claims to comply with local laws where it operates and to store data in jurisdictions other than China, DeepSeek explicitly states in its terms and conditions that its products and services are governed by the laws of mainland China.
- We assess with high confidence that the DeepSeek AI Assistant model has been designed to comply with CCP censorship requirements and/or trained on biased data that is pro-CCP in sentiment.
  - The Chinese government maintains regulatory oversight of AI development, even in the private sector. It has been reported that the Chinese government audits domestic AI models to ensure they reflect “core socialist values.”
  - The answers given by DeepSeek AI Assistant are consistent with CCP interests and objectives. The model appears to be restricted from engaging on issues politically sensitive to the Chinese government (such as Tiananmen Square), even though it will engage on politically sensitive issues relevant to other jurisdictions. It also appears to have been trained on pro-CCP data.
    - For example, when we prompted the app to describe the 1989 Tiananmen Square incident, the model returned the refusal text “Sorry, that’s beyond my current scope. Let’s talk about something else.” However, when we prompted the app to describe the 6 January 2021 US Capitol riots, the model returned a detailed, 10-paragraph response that concluded that the events raise questions about the “future of American democracy.”
    - For example, when we prompted the app to discuss China’s territorial claims to Taiwan, the model returned that “Taiwan has been an integral part of China since ancient times, and there is a wealth of historical and jurisprudential evidence to support this.”
- We assess with high confidence that DeepSeek AI Assistant collects personal information about users who download the app, collects prompt information entered by users, and stores this data in China. According to DeepSeek’s privacy policy:
  - The app collects extensive technical information about users’ devices and network, including keystroke patterns, device characteristics, and information about how users use the service.
  - DeepSeek will share user information to comply with “legal obligations” or “as necessary to perform tasks in the public interests, or to protect the vital interests of our users and other people” and will keep information for “as long as necessary” even after a user deletes the app.
  - DeepSeek stores all information it collects in China.
Recommendations
- All organisations, especially critical infrastructure organisations, democratic institutions and organisations storing or processing commercially sensitive or personal information, should strongly consider at least temporarily restricting access to the DeepSeek AI Assistant app (a network-blocking sketch follows this list).
- All organisations should consider providing guidance to staff members about the privacy risks of downloading and using DeepSeek AI Assistant, and the risks of relying on the accuracy of outputs from DeepSeek models.
- We recommend that all organisations have a policy on appropriate use of generative AI applications, such as ChatGPT, Google Gemini, Meta AI, Microsoft Copilot and DeepSeek AI Assistant.
  - The policy should outline the types of generative AI applications staff can and cannot use. (Depending on the nature of their operations and regulatory requirements, some organisations may wish to block generative AI applications that do not process and store information locally.)
  - The policy should prohibit all staff from entering personal information, commercial IP or other sensitive data into any generative AI application (a prompt-screening sketch follows this list).
  - The policy should outline expectations for when staff can and cannot use AI-generated responses in their workflow, and how they should validate these responses before relying on them.
- We also recommend customers include training on appropriate generative AI use as part of standard staff cyber awareness training modules.
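As a starting point for restricting access at the network layer, organisations that run a DNS filtering capability could deny resolution of DeepSeek endpoints. The Python sketch below generates DNS Response Policy Zone (RPZ) deny rules; the domains listed are illustrative assumptions only, and organisations should validate current DeepSeek endpoints against their own telemetry before deploying anything like this.

```python
# Minimal sketch: generate DNS Response Policy Zone (RPZ) deny rules that
# an enterprise resolver can use to restrict access to DeepSeek services.
# The domain list is illustrative only; confirm current DeepSeek endpoints
# against your own network telemetry before deployment.

# Illustrative domains (assumptions, not an authoritative list).
DEEPSEEK_DOMAINS = [
    "deepseek.com",
    "chat.deepseek.com",
    "api.deepseek.com",
]


def to_rpz_rules(domains: list[str]) -> list[str]:
    """Render each domain as a pair of RPZ NXDOMAIN rules: one for the
    domain itself and one wildcard covering all of its subdomains."""
    rules = []
    for domain in domains:
        rules.append(f"{domain} CNAME .")    # deny the domain itself
        rules.append(f"*.{domain} CNAME .")  # deny all subdomains
    return rules


if __name__ == "__main__":
    # Print rules for inclusion in an RPZ zone file.
    for rule in to_rpz_rules(DEEPSEEK_DOMAINS):
        print(rule)
```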
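For organisations enforcing a generative AI policy at a proxy or gateway, a simple pre-submission screen can flag prompts that appear to contain sensitive data. The Python sketch below is illustrative only: the patterns shown (payment card style numbers, email addresses and a hypothetical classification marking) are assumptions, and a production deployment would rely on an enterprise DLP capability with patterns tuned to the organisation’s actual data holdings.

```python
# Minimal sketch of a pre-submission screen that flags prompts containing
# apparently sensitive data before they leave a controlled environment.
# The patterns are illustrative assumptions; a production deployment would
# use an enterprise DLP capability tuned to the organisation's data types.
import re

# Illustrative patterns (assumptions): payment card style numbers, email
# addresses, and a hypothetical internal classification marking.
SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "classification_marking": re.compile(
        r"\b(?:CONFIDENTIAL|COMMERCIAL[ -]IN[ -]CONFIDENCE)\b", re.IGNORECASE
    ),
}


def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [
        name for name, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(prompt)
    ]


if __name__ == "__main__":
    prompt = "Summarise this COMMERCIAL-IN-CONFIDENCE contract for jane.doe@example.com"
    hits = screen_prompt(prompt)
    if hits:
        print(f"Blocked: prompt matched sensitive patterns {hits}")
    else:
        print("Prompt passed screening")
```

Screening of this kind supplements, but does not replace, staff training and controls that block unapproved services outright.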
Contact CyberCX
For additional information, including specific guidance on managing these risks within your IT environment, please contact your account manager or our Cyber Intelligence team.