What is DeepSeek? China’s $45 Billion Bet Threatening OpenAI


China’s chip fund is in talks to lead a $45 billion funding round for DeepSeek. The talks follow the launch of the Chinese AI startup’s V4 model, which costs a fraction of what OpenAI charges while matching ChatGPT’s performance.

DeepSeek is a Chinese AI startup founded in 2023 that builds large language models (LLMs). On April 24, 2026, it launched DeepSeek V4 with a 1 million token context window priced at $1.74 per million input tokens compared to $5 for OpenAI’s GPT-5.5.

But the pricing disruption is just the beginning. OpenAI accused DeepSeek of stealing its technology. U.S. officials say DeepSeek used banned Nvidia chips in Inner Mongolia. And now DeepSeek is running inference on Chinese chips, signaling it may not need American hardware at all.

The $45 billion valuation announced on May 6 suggests China is betting big that DeepSeek can challenge American AI dominance.

The $45 Billion Bet

The China Integrated Circuit Industry Investment Fund, known as the Big Fund, is in discussions to lead the funding round. The Big Fund backs China’s largest chip players including Semiconductor Manufacturing International Corp. Its involvement signals DeepSeek is now a strategic national priority, not just another startup.

A $45 billion valuation would place DeepSeek among the most valuable AI companies globally. Anthropic raised funding at a reported $18 billion valuation in 2024. DeepSeek got here in three years while operating under U.S. chip export restrictions tightened three times since 2022.

V4’s Pricing Disruption

DeepSeek V4 comes in two versions: Pro and Flash. Both handle 1 million token context windows.

The pricing is what matters. V4 Pro costs $1.74 per million input tokens through May 31. OpenAI’s GPT-5.5 costs $5. That’s a 65% discount.
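The 65% figure follows directly from the two list prices. A quick check of the arithmetic:

```python
# Discount implied by list prices ($ per million input tokens).
deepseek_v4_pro = 1.74
openai_gpt_55 = 5.00

discount = 1 - deepseek_v4_pro / openai_gpt_55
print(f"{discount:.0%}")  # → 65%
```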

For developers running AI coding agents, the savings are dramatic. An 8-hour coding session that costs $50 to $200 on OpenAI now costs $1.50 to $6 on V4 Pro.

Switching is simple. For developers already using OpenAI, it’s a one-line code change.
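Because DeepSeek exposes an OpenAI-compatible API, the "one line" is typically just repointing the client’s base URL. A minimal sketch of that switch; the model identifier here (`deepseek-chat`) is an assumption, not a confirmed V4 name:

```python
def chat_client_config(provider: str) -> dict:
    """Select base URL and model for an OpenAI-compatible chat client.

    With the official `openai` SDK, the switch amounts to one changed line:
        client = OpenAI(base_url="https://api.deepseek.com", api_key=key)
    """
    if provider == "openai":
        return {"base_url": "https://api.openai.com/v1", "model": "gpt-5.5"}
    # Point the same client at DeepSeek instead (model name is illustrative).
    return {"base_url": "https://api.deepseek.com", "model": "deepseek-chat"}

print(chat_client_config("deepseek")["base_url"])
```

Everything else — request format, response parsing, streaming — stays the same, which is what makes the migration nearly frictionless.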

Moving to Chinese Chips

DeepSeek V4 uses Chinese chips for inference. The company may still rely on Nvidia hardware for training, but the shift for inference is significant.

In a 1 million token context, V4 Pro uses only 27% of the computing power required by V3.2 while cutting memory use to 10%. That efficiency enables the switch to less powerful Chinese chips. And it reduces DeepSeek’s dependence on American hardware that can be cut off at any moment.

DeepSeek’s Controversies

In February 2026, U.S. officials alleged DeepSeek used banned Nvidia chips in Inner Mongolia, violating export restrictions to China.

OpenAI accused DeepSeek of stealing its technology by routing around access restrictions. Anthropic made similar claims about DeepSeek copying Claude’s capabilities. The irony is rich, considering OpenAI faces lawsuits from The New York Times for allegedly building ChatGPT using content it didn’t have rights to.

DeepSeek claimed in January 2025 that training its model cost $5.6 million. That claim triggered Nvidia’s $589 billion single-day market loss. The real number was closer to $1.3 billion, according to reports.

In January 2025, researchers discovered an exposed DeepSeek database with no password required. The breach exposed API keys, chat logs, and user passwords. Italy pulled the app from its stores, while South Korea ordered changes after finding DeepSeek transferred data to China without permission.

Additionally, DeepSeek’s models have been criticized for refusing to answer questions about Tiananmen Square or Taiwan, a result of built-in censorship required by China’s government.

What DeepSeek Built

DeepSeek was founded in 2023 by Liang Wenfeng, co-founder of High-Flyer, a quantitative hedge fund.

DeepSeek V3 (December 2024): 671 billion parameters, but only 37 billion activate per token. This Mixture-of-Experts architecture cuts per-token compute by more than 90%.

DeepSeek R1 (January 2025): The reasoning model that triggered the market shock. Matched OpenAI’s o1 performance while claiming $5.6 million training costs.

DeepSeek V4 (April 2026): The current flagship. Two variants with 1 million token context windows, 35× cheaper than Claude Opus, and the first to use Chinese chips for inference.
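The "only 37 billion activate per token" claim comes from Mixture-of-Experts routing: a gating network scores many expert sub-networks for each token and runs only the top-k of them. A toy sketch of that selection step; the expert count and k here are illustrative, not DeepSeek’s actual configuration:

```python
def top_k_route(gate_scores: list[float], k: int) -> list[int]:
    """Pick the k experts with the highest gate scores for one token."""
    ranked = sorted(range(len(gate_scores)),
                    key=lambda i: gate_scores[i], reverse=True)
    return ranked[:k]

# Toy example: 8 experts, each token routed to the top 2.
# Only 2/8 of the expert parameters run per token, so per-token
# compute scales with the active fraction, not the full model size.
scores = [0.10, 0.90, 0.50, 0.70, 0.05, 0.30, 0.20, 0.15]
active = top_k_route(scores, k=2)
print(f"active experts: {sorted(active)}, {2 / 8:.0%} of experts used")
```

With 37B of 671B parameters active, DeepSeek’s effective fraction is about 5.5% per token, which is where the 90%+ compute reduction comes from.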

Why This Matters

The $45 billion funding round sends a clear signal. China views DeepSeek as critical infrastructure for AI sovereignty, not just another startup.

For years, frontier AI development was dominated by Western companies operating under the assumption that building competitive models required billions in capital and unrestricted access to cutting-edge chips. DeepSeek challenged both assumptions.

When V4 Pro costs $1.74 per million tokens compared to $5 for GPT-5.5 and $15 for Claude Opus, it forces every AI company to justify why their models cost 3× to 10× more for comparable performance.

DeepSeek has achieved what it set out to prove. A Chinese company can compete with American AI giants. Alternative architectures can challenge dense models. And the AI race is no longer an all-American contest.

The $45 billion bet China just made says Beijing believes that shift is permanent.

Frequently Asked Questions
What is the DeepSeek $45 billion funding round?

China’s main chip investment fund is in talks to lead a funding round for DeepSeek at a $45 billion valuation. The Big Fund’s involvement signals DeepSeek is now a national priority.

What is DeepSeek V4?

DeepSeek V4 launched April 24, 2026, with 1 million token context windows. V4 Pro costs $1.74 per million input tokens, 35× cheaper than Claude Opus 4.7. It runs inference on Chinese chips.

Did DeepSeek cost $6 million to train?

No. DeepSeek claimed $5.6 million, but that only covered GPU pre-training. SemiAnalysis estimates total infrastructure spend at $1.3 billion, including 50,000 Hopper GPUs.

What controversies does DeepSeek face?

Alleged use of banned Nvidia chips, theft accusations from OpenAI and Anthropic, exposed databases with plaintext passwords, unauthorized data transfers to China, and built-in censorship of politically sensitive topics.

How does DeepSeek V4 compare to ChatGPT?

V4 Pro matches GPT-5.5 and Claude Opus performance on coding benchmarks at 35× lower cost. Tradeoffs include censorship and privacy issues.

Is DeepSeek using Chinese chips?

Partially. V4 uses Chinese chips for inference but likely still needs Nvidia hardware for training.

See Also:

Can Huawei Replace NVIDIA in China’s AI Sovereignty Race?

China’s Semiconductor Industry No Longer Needs NVIDIA

The MATCH Act: ASML’s China Lifeline Just Got a Kill Switch
