The [AI] models are already powerful enough to produce publishable results under competent supervision. That’s not the bottleneck. The bottleneck is the supervision. Stronger models won’t eliminate the need for a human who understands the physics; they’ll just broaden the range of problems that a supervised agent can tackle. The supervisor still needs to know what the answer should look like, still needs to know which checks to demand, still needs to have the instinct that something is off before they can articulate why. That instinct doesn’t come from a subscription. It comes from years of failing at exactly the kind of work that people keep calling grunt work. Making the models smarter doesn’t solve the problem. It makes the problem harder to see.
The machines are fine. I'm worried about us.
On AI agents, grunt work, and the part of science that isn't replaceable.
Preslav Rachev