GPT-5 has reportedly solved a previously unsolved mathematical problem on its own for the first time, according to Schmidt's post on X; the result was achieved without human intervention. The model delivered an elegant solution that, surprisingly, drew on techniques from disparate areas of algebraic geometry rather than conventional approaches. Peer review is still pending. The announcement follows recent anecdotal reports from prominent mathematicians such as Terence Tao, who have highlighted AI's growing utility in mathematical research.
The final paper exemplifies diverse forms of human-AI collaboration: proofs generated by GPT-5 (the base model, not Pro) and Gemini 3 Pro, prose sections produced by Claude, and formal Lean proofs implemented via Claude Code and ChatGPT 5.2. As part of an experiment in transparent AI attribution, each section is labeled as either human- or AI-authored, with links to the corresponding prompts and conversation logs.
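For readers unfamiliar with formalization, a Lean proof is machine-checkable: the compiler itself verifies every inference step, which is why it is attractive for auditing AI-generated mathematics. The toy Lean 4 theorem below is purely illustrative and is not taken from the paper:

```lean
-- Illustrative only: a trivial Lean 4 theorem, not from the paper.
-- Lean's kernel checks that `Nat.add_comm a b` really proves the stated goal.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

If such a file compiles, the proof is correct by construction, regardless of whether a human or a model wrote it.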
Transparency matters—but not at the cost of bureaucracy
Schmidt’s approach ensures high transparency and traceability, allowing anyone to verify whether ideas originated from humans or AI. However, this method comes with drawbacks: meticulously annotating every paragraph is time-consuming and may become impractical as AI tools become seamlessly integrated into everyday workflows. While transparency is essential, it should not devolve into administrative overhead.
Moreover, the boundary between human and AI contributions isn’t always clear-cut: who crafted the initial prompt, and who selected, refined, or validated the output? Such attribution models may also prove difficult to transfer to other scientific disciplines.
Perhaps science must first confront a more fundamental question: What defines a contribution—solely human effort, human-guided AI, or fully autonomous AI? And can genuine contributions truly emerge from AI alone, independent of human intent?