The machines are fine. I'm worried about us.
On AI agents, grunt work, and the part of science that isn't replaceable.
Preslav Rachev

The [AI] models are already powerful enough to produce publishable results under competent supervision. That's not the bottleneck. The bottleneck is the supervision. Stronger models won't eliminate the need for a human who understands the physics; they'll just broaden the range of problems that a supervised agent can tackle. The supervisor still needs to know what the answer should look like, still needs to know which checks to demand, still needs to have the instinct that something is off before they can articulate why. That instinct doesn't come from a subscription. It comes from years of failing at exactly the kind of work that people keep calling grunt work. Making the models smarter doesn't solve the problem. It makes the problem harder to see.