When Chinese labs like DeepSeek trained their models on heavily restricted, older hardware, what was the actual result?

They fell 3 to 5 years behind the current U.S. frontier models.

They resorted to stealing U.S. model weights via corporate espionage.

They achieved near-parity by writing hyper-efficient code, at the cost of burning up to 4 times more electricity.

Correct. Export restrictions meant to slow them down instead forced an efficiency breakthrough, a spectacular policy backfire.
