Ask HN: The Proof or Bluff paper. Can "AI" do math?
Over the past 12 months I've seen lots of comments all over the place (here, X, legacy media, blogs, etc.) arguing that "AI" performance on the IMO (International Mathematical Olympiad) is evidence of continued rapid improvement in LLM capabilities. My friends who work in AI safety quote these results pretty often whenever they encounter scepticism about the coming AI singularity.
It seems to me that these comments stem from the DeepMind results from last summer[0] and February this year[1]. As I understand it, the models used for those tasks are highly specialised and accept only formal-language input (i.e. not the textual or visual representations that a general multi-modal model could work with).
I was reading the Proof or Bluff paper[2] this morning, and while I don't think its results have been reproduced yet, the authors found that none of the tested SOTA LLMs made meaningful progress on the problems in their test set (none scored over 5%). That matches my own limited experience using LLMs for similar tasks. Needless to say, I've not heard a peep about this paper from my AI safety friends.
My question is: how should I interpret all of this? Maybe it's too cynical, but my current thesis is that the DeepMind results are convenient headline-grabbers for the AI safety crowd, who are conflating the performance of task-specific models with that of general-purpose LLMs in order to make an unsubstantiated claim about progress towards generalisable AI. Is that reasonable? What am I missing?
If the authors of Proof or Bluff are in here, I'd also like to say thanks for doing the work on this. I can imagine work like this isn't the sexiest, but it's refreshing to see people take the time and care to generate hard data on how good these models actually are. As someone considering a career switch at the moment, data like this is really useful context when trying to evaluate what the next few decades might look like.
[0] https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level
[1] https://techcrunch.com/2025/02/07/deepmind-claims-its-ai-performs-better-than-international-mathematical-olympiad-gold-medalists
[2] https://arxiv.org/abs/2503.21934v1