Recently, DeepSeek unveiled a preview version of its new reasoning model, DeepSeek-R1-Lite-Preview. DeepSeek claims the model matches the performance of OpenAI's o1 model and offers a distinctive feature: it displays its detailed reasoning steps to users, a level of transparency that sets it apart from comparable products on the market.
However, DeepSeek has not yet released detailed technical documentation for R1-Lite-Preview, such as a model card, performance benchmarks, or information about the training architecture. Nevertheless, interested users can explore the model's capabilities through DeepSeek's online chat interface, DeepSeek Chat, with a limit of 50 messages per day.
The release of this model comes just two months after OpenAI launched its o1-preview reasoning model. Unlike mainstream models such as Claude 3.5 or Llama 3, reasoning models tackle more complex tasks by scaling up compute at inference time, spending more effort deliberating before answering, which yields more accurate responses. DeepSeek has successfully implemented this approach in R1-Lite-Preview.
Looking ahead, DeepSeek plans to open-source the full R1 series of models and release the accompanying APIs, continuing its support for the development of the open-source artificial intelligence community.