Researchers created an open rival to OpenAI’s o1 ‘reasoning’ model for under $50
Stanford and University of Washington researchers built s1, a rival model, for under $50 by distilling from Google's Gemini, raising questions about AI commoditization.

Stanford and University of Washington researchers have trained an AI model named s1 for under $50 in cloud compute credits. Using distillation, they fine-tuned s1 from Google's Gemini 2.0 Flash Thinking Experimental; the resulting model demonstrates mathematics and coding abilities similar to OpenAI's o1 and DeepSeek's R1.
Their approach involved creating a dataset of 1,000 questions with answers that included the 'thought process' from Google's AI, then training s1 on 16 Nvidia H100 GPUs in under 30 minutes. The researchers highlighted their use of supervised fine-tuning (SFT), which is more cost-effective than methods like DeepSeek's reinforcement learning approach.
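To make the dataset idea concrete, here is a minimal sketch of how one such distillation training example might be assembled. The field names, the `<think>` delimiter, and the prompt layout are illustrative assumptions, not s1's actual format; the point is simply that the student model is trained on text that interleaves the question, the teacher's reasoning trace, and the final answer.

```python
# Hypothetical sketch: packaging a question, a teacher model's reasoning
# trace, and the final answer into one SFT training string. The tags and
# layout are assumptions for illustration, not the s1 team's exact format.

def build_sft_example(question: str, thinking: str, answer: str) -> str:
    """Concatenate question, reasoning trace, and answer so the student
    model learns to emit a 'thought process' before its final answer."""
    return (
        f"Question: {question}\n"
        f"<think>\n{thinking}\n</think>\n"
        f"Answer: {answer}"
    )

example = build_sft_example(
    question="What is 12 * 7?",
    thinking="12 * 7 = (10 * 7) + (2 * 7) = 70 + 14 = 84.",
    answer="84",
)
print(example)
```

Repeating this over roughly 1,000 question/trace/answer triples would yield a small SFT corpus of the kind described above, which a standard fine-tuning pipeline could then consume.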
Such efforts challenge the financial barriers typically associated with AI innovation and have prompted discussion of AI commoditization. OpenAI has accused DeepSeek of improperly using its data for model distillation, highlighting tensions within the field. And while Google offers limited free access to Gemini 2.0, its terms prohibit using it to develop competing services.