DeepSeek-R1-0528: New Update Brings Top-Tier Reasoning, Logic, and Coding Skills
May 29, 2025

DeepSeek has just announced a powerful upgrade to its flagship AI model, DeepSeek R1, unveiling version DeepSeek-R1-0528. This new release represents a leap forward in reasoning depth, coding ability, and math-solving skills, pushing the model closer than ever to matching giants like OpenAI's o3 and Gemini 2.5 Pro.


Sharper Brain, Deeper Thought

Thanks to enhanced post-training algorithms and additional computing power, DeepSeek-R1-0528 has made major progress on complex reasoning tasks. In the notoriously tough AIME 2025 math test, for example, the model's accuracy jumped from 70% to 87.5%. It now spends nearly twice as many reasoning tokens per question, averaging 23,000 versus 12,000 previously, a sign of deeper and more nuanced reasoning.

Better in Code, Logic, and Real-World Reasoning

The model shines across a wide range of benchmarks:

  • Mathematics: AIME 2024 accuracy rose to 91.4%, and HMMT 2025 scores nearly doubled.
  • Programming: In LiveCodeBench, performance leaped from 63.5% to 73.3%.
  • Logic & QA: GPQA-Diamond saw a 9.5% boost, and "Humanity's Last Exam" accuracy more than doubled.

Beyond raw performance, users will notice a lower hallucination rate, smoother function calling, and a more intuitive experience when using the model for coding support (also known as "vibe coding").

DeepSeek Benchmark Comparison

[Interactive chart: DeepSeek R1 vs. DeepSeek-R1-0528 across General, Code, Math, and Tools benchmarks, showing 12 improvements, 1 regression, and 2 new features]

Qwen3-8B Learns to Think

The update also brings something for open-source fans: a distilled version called DeepSeek-R1-0528-Qwen3-8B. It transfers the thinking patterns of DeepSeek-R1 into the lightweight Qwen3 8B model, achieving state-of-the-art results among models in its class. It even rivals much larger models like Qwen3-235B on tests like AIME 2024.
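Distillation here means training the small model to imitate the large one. DeepSeek reports doing this by fine-tuning Qwen3 8B on R1's outputs; the toy sketch below, with made-up numbers, illustrates the classic soft-label objective that underlies the idea (not DeepSeek's exact recipe):

```python
import math

# Toy illustration of knowledge distillation: the student is trained to
# match the teacher's next-token probability distribution. This is the
# classic soft-label (KL) objective; DeepSeek's actual recipe (fine-tuning
# Qwen3 8B on R1-generated reasoning traces) differs, but the idea of
# transferring a large model's "thinking patterns" is the same.
def kl_divergence(teacher_probs, student_probs):
    """KL(teacher || student) for one token position; lower is better."""
    return sum(
        t * math.log(t / s)
        for t, s in zip(teacher_probs, student_probs)
        if t > 0
    )

teacher = [0.7, 0.2, 0.1]  # large model's distribution over 3 tokens
student = [0.5, 0.3, 0.2]  # small model's distribution before training
loss = kl_divergence(teacher, student)  # positive: distributions differ
```

Minimizing this loss pulls the student's distribution toward the teacher's; it reaches zero only when the two distributions match exactly.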

How to Try It

You can interact with DeepSeek-R1-0528 at chat.deepseek.com by activating the "DeepThink" mode, or access it via API at platform.deepseek.com. For developers, running it locally is also easier than ever. The latest version now supports system prompts and no longer requires special formatting to trigger advanced reasoning.
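The open-weight R1 checkpoints emit their chain of thought wrapped in <think>...</think> tags before the final answer. When post-processing completions from a local deployment, a small helper (an illustrative sketch, not part of any official SDK) can separate the two:

```python
import re

# Illustrative helper, not an official DeepSeek utility: split an R1-style
# completion into its chain-of-thought and final answer. Assumes the
# reasoning is wrapped in <think>...</think> tags, as the open-weight
# R1 checkpoints emit.
def split_reasoning(completion: str) -> tuple[str, str]:
    match = re.match(r"\s*<think>(.*?)</think>\s*(.*)", completion, re.DOTALL)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    return "", completion.strip()  # no reasoning block found

example = "<think>5 + 7 = 12</think>The answer is 12."
thoughts, answer = split_reasoning(example)
print(answer)  # The answer is 12.
```

This keeps the visible reply clean while preserving the reasoning trace for logging or debugging.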

Open-Source and Ready to Use

All DeepSeek-R1 models, including this latest version, are released under the MIT License, allowing for full commercial use and distillation. This means startups, researchers, and developers can build on top of the model freely.