Discussion about this post

Neural Foundry

This interview beautifully captures a distinction I haven't seen articulated elsewhere: the difference between AI imitating logical steps and actually possessing the intuitive 'folklore knowledge' that experienced mathematicians develop. Jitomirskaya's observation about the '3-example problem' is profound—humans can form abstractions and see patterns from minimal data, while models need vastly more training examples. This echoes Kahneman's System 1 vs. System 2 thinking: machines excel at System 2 (deliberate reasoning) but struggle with the rapid pattern-matching intuition of System 1. Her optimism about Lean verification creating an error-free mathematical corpus is compelling, though I wonder whether the 7x translation overhead will bottleneck adoption until an automatic translator arrives. The chess/running analogy for a post-AGI math world is also fascinating—perhaps mathematics will become more about formulating beautiful questions than grinding through proofs.
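For readers who haven't seen Lean, here is a minimal sketch (my own illustration, not from the interview) of what a machine-checked statement looks like. The kernel verifies the proof term itself, which is why a corpus built this way is free of logical errors by construction:

```lean
-- A trivial theorem, stated and proved in Lean 4.
-- `Nat.add_comm` is a lemma from Lean's core library; the compiler
-- rejects the file unless the proof actually establishes the claim.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The "7x overhead" point is that translating an informal paper proof into this fully explicit form currently takes far longer than writing the informal proof did.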

suman suhag

Common sense says that "yes, we are closer to solving something," because, as time goes by, more and more "machinery" is being developed that may become part of the solution to one or more of those problems. Which problem, I do not know; and exactly what machinery is another unknown.

If you think about Fermat’s Last Theorem, the solution that Andrew Wiles eventually produced could not have been done by Fermat for the simple reason that much of the machinery used in the proof hadn’t been developed in Fermat’s day.

