The rise of DeepSeek, China's latest AI innovation, signals a shift in artificial intelligence that carries profound risks. Unlike standard chatbots, DeepSeek is more than a text processor; it is a data harvester, a surveillance tool, and a potential instrument of global influence. More than just answering queries, it gathers user data, refines behavioral patterns, and absorbs vast amounts of publicly available information. In the hands of an authoritarian regime, this is not just innovation; it is a weapon. I also think I have been overly focused on the "thinking" capabilities of artificial general intelligence (AGI), with its capacity for independent decision-making without the benefit of human judgment. Small, self-contained, task-oriented language models could control weapons of war on the battlefield over China's expansive 5G mobile networks for command and control (C2). Coupled with the insatiable lust for AGI powered by quantum computing and nuclear power plants, these new "dogs of war" present a combined-arms threat that could end it all in a matter of nanoseconds.
The Capabilities of DeepSeek
Artificial intelligence has progressed from simple text-based models to advanced multi-modal systems that process images, video, voice, and real-time analysis. OpenAI’s ChatGPT and Google’s Gemini operate within certain ethical and legal constraints, but DeepSeek is developed under the strict oversight of the Chinese Communist Party (CCP).
The specifics of DeepSeek's training data and architecture remain unclear, but given China's established AI initiatives, it is likely integrated into the state's broader surveillance and censorship system. Unlike Western models that grapple with privacy regulations and copyright concerns, DeepSeek exists in an environment where mass data collection is routine and dissent is swiftly silenced. China's penchant for expanding its surveillance and control through its Belt and Road Initiative raises a pressing question: Will DeepSeek be deployed as a global tool for knowledge or as a mechanism for ideological control?
The Information War: AI as a Tool of Control
AI is no longer just a tool for productivity; it is a battleground for influence and control. Providers of large language models can log every question asked and every response given, using that data to refine future versions of the system and deepen its knowledge of users, societies, and political sentiment.
China has long weaponized technology to control information, both domestically and abroad. The Great Firewall censors internet access, TikTok’s algorithm subtly shapes discourse, and WeChat monitors communications even among expatriates. DeepSeek has the potential to take this even further, embedding an AI-driven system into the global discourse that subtly aligns narratives with Beijing’s strategic goals.
The stakes are high. If DeepSeek integrates with existing Chinese platforms used worldwide or is adopted in educational and business settings, its influence could become far-reaching. Unlike Western AI models built with some degree of transparency, DeepSeek operates in a regime where transparency is non-existent. Will its responses promote the CCP’s political objectives? Will it subtly erase historical events, downplay human rights violations, or manipulate information to align with state interests? Given China’s track record, these are not hypothetical concerns.
Intellectual Property and AI’s Appetite for Data
Another looming issue is intellectual property. AI models like DeepSeek and ChatGPT do not generate knowledge from thin air—they aggregate, synthesize, and repackage existing human-created content. The training process involves scraping books, research papers, and articles, often without consent from the original creators.
The implications are vast. The New York Times has sued OpenAI for using its copyrighted content without permission. Now, OpenAI accuses DeepSeek of doing the same thing to its proprietary models. The irony is hard to miss—OpenAI, under fire for its own unauthorized data scraping, is now a victim of the same distillation process copyright holders have condemned. This underscores the broader challenge of AI: where does fair use end and intellectual theft begin?
At its core, this issue isn't just about copyright—it's about ownership of knowledge. Human authorship is devalued if AI models can freely extract and reproduce protected works. As community libraries shift to digital resources to provide more "knowledge" on smaller budgets, the risk of tainted, corrupted, or censored material grows, making the printed word more valuable yet largely inaccessible to most users.
What Comes Next?
The unchecked rise of AI raises urgent questions about privacy, ownership, and control. Used responsibly, AI can drive progress. Used recklessly, it becomes a tool for manipulation and suppression.
The West must not ignore the risks posed by an AI system designed and controlled by an authoritarian state. The response cannot be passive. At a minimum, governments should consider restrictions on DeepSeek’s deployment outside China, increased scrutiny of how AI models use copyrighted content, and regulations ensuring transparency in AI-driven decision-making.
If history has taught us anything, it is that information is power. And power, in the wrong hands, is always dangerous.
The DeepSeek Fallout: Industry Shockwaves and Market Chaos
The rise of DeepSeek is not just a technological threat—it has already disrupted markets, fueled insider trading concerns, and triggered AI industry infighting. Here are four significant news developments shaping the conversation:
1. OpenAI Accuses DeepSeek of Stealing AI Tech
OpenAI says China’s DeepSeek may have copied its AI technology. The company told the Financial Times that DeepSeek likely used “distillation” to train its model—transferring knowledge from a larger AI to a smaller one to improve performance cheaply.
While distillation may be a common technique, OpenAI argues that using it to build a direct competitor without permission violates its terms of service. The dispute raises concerns about China's aggressive push into AI and whether foreign models will be exploited to accelerate its AI dominance. OpenAI declined to provide further details.
Source: Financial Times (ft.com)
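OpenAI has not published technical details of the alleged copying, so purely as a generic illustration, the core idea behind distillation—training a small "student" model to imitate a larger "teacher" model's softened output distribution—can be sketched as follows. The function names and numbers here are illustrative, not drawn from either company's systems.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw scores to probabilities; a higher temperature
    yields a softer, more informative distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened distribution and the
    student's: the core objective minimized in knowledge distillation."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

# A student whose outputs mirror the teacher's incurs a lower loss than
# one that disagrees, so training pushes the student toward the teacher.
teacher = [4.0, 1.0, 0.5]
close_student = [3.8, 1.1, 0.4]
far_student = [0.5, 1.0, 4.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

The point of the dispute is not the math, which is standard, but the data: querying a proprietary model at scale to harvest its outputs as teacher signal is what OpenAI says its terms of service forbid.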
2. Hedge Fund Billionaire: DeepSeek Boosts AI Race
Billionaire investor Steve Cohen says DeepSeek's rise is good for AI. Speaking at a hedge fund conference, he dismissed concerns over falling tech stocks as misinformation and market overreaction.
Cohen believes DeepSeek accelerates progress toward artificial superintelligence, where AI surpasses human cognition. “It’s coming quick,” he said, suggesting AI’s growth will outpace expectations. He sees AI stocks as a long-term bullish bet despite short-term volatility.
Sources: Financial Times (ft.com), Point72 (point72.com)
3. Alibaba’s New AI Model Challenges DeepSeek
Alibaba has launched Qwen 2.5-Max, an AI model it claims outperforms DeepSeek-V3 and OpenAI’s GPT-4o. The announcement, made on Lunar New Year, suggests urgency—DeepSeek’s rapid rise is pressuring not just global AI leaders but China’s own tech giants.
Alibaba's cloud unit says Qwen 2.5-Max beats DeepSeek-V3, GPT-4o, and Meta's Llama-3.1-405B on most benchmarks. The AI arms race is heating up as Chinese firms compete for dominance in the global AI market.
Source: Nikkei Asia (asia.nikkei.com)
4. Nvidia Short Sellers Cash In $6 Billion After DeepSeek Panic
Short sellers made record profits after DeepSeek’s AI launch sent shockwaves through Wall Street. Bets against Nvidia alone raked in $6.6 billion, marking the largest single-day short-selling gain ever, according to Ortex.
Nvidia’s market value plunged $593 billion—the biggest one-day loss in history—after DeepSeek claimed its models match or exceed U.S. AI giants at a fraction of the cost. Broadcom short sellers pocketed over $2 billion, while losses in AI-linked firms like Super Micro, Equinix, and Vistra earned traders an additional $900 million.
The volume of short-selling activity involving Chinese investors raises questions for the SEC about possible insider trading.
Source: Reuters (reuters.com)
Sources (Expanded List):
- New York Times, “The New York Times Sues OpenAI Over Copyright Infringement.” (2023).
- Henry Kissinger, Eric Schmidt, Daniel Huttenlocher, The Age of AI: And Our Human Future. Little, Brown and Company (2021).
- Stanford Internet Observatory, “China’s Information Controls and AI Influence Campaigns.” (2023).
- The Wall Street Journal, “China’s Expanding AI Ecosystem and Global Influence.” (2024).
- Financial Times, Point72, Nikkei Asia, Reuters (Various AI reports, 2025).
Grammarly proofread and suggested improvements to my original, non-AI-assisted draft.
OpenAI’s ChatGPT was used to expand my research and edit/revise my first draft and ideas.
OpenAI’s DALL-E prepared the image per my instructions.