Perfection Is Poison for Learning AI
Law trained you to fear every error. AI rewards the lawyers who seek them out. Some tips.
“We expect you to be perfect.”
It was first-year orientation in New York. I was 27, just arrived from Berkeley, sitting alongside eager new associates in a conference room lined with orchids and paneled in blonde wood. The partner speaking was a luminary of the field.
I still remember the feeling of hearing that message. Every mistake was a mark against me, I thought.
The firm was successful because it did whatever it took for its clients. And doing whatever it took started with being perfect.
We had self-selected into this. Law school and firms didn’t plant the perfectionism; they drew it out of us. Cold calls picked apart our reasoning, so we’d better know our stuff. Final exams rewarded the scorched earth of spotting every issue, every branch of the analysis.
The profession recruits people who already treat errors as personal failings, then calls the instinct professionalism.
For years, that training served us well. The emotional toll aside, it delivered results.
It is also poison for learning AI.
Eat your own dog food
In tech, the term dogfooding means using your own product day-to-day before anyone else does. This is not one-off testing in some clinical setting. It’s using the product the way your customers would in their everyday lives.
So you install the apps, carry the devices, use the services.
The name is based on an old industry saying, popularized in tech by Microsoft employees in the 1980s: eat your own dog food.
The point is to break things. Push the tool to its edges, feed it bad inputs, find the cracks before users do. In tech, the cadence is to prototype, ship, iterate. You build a rough version, put it out, watch it break, fix.
Learning AI is the same. Dogfooders learn AI better than perfectionists.
Philosophically, this can turn lawyers off. But expecting perfection at this stage means never even starting.
Let’s be honest. AI breaks a lot. It forgets my context. It misses a skill command I built in the same session. It sometimes fails to work out that today, April 23, is actually a Thursday.
But here’s the wrong mindset: run AI, get a bad output and decide that it’s not for you.
Nobody picks up a guitar, plays one bad chord and throws it away. Yet lawyers load an agent, get one irrelevant response and quit.
This is why I don’t see AI as just software to learn. Instead, I encourage lawyers to build AI intuition: the skill to direct AI agents the way you would manage a team of associates.
And just as people aren’t perfect, neither is AI.
Building AI intuition
One way to build intuition: give feedback to AI the way you would to an associate.
Early on, I asked ChatGPT to come up with new recipes mixing different cuisines I loved. It suggested dumplings with hummus.
Eh, not to my taste. The lawyer’s reflex is to scoff. The builder’s reflex is to respond: “That was bad. What led you there, and how can we tune for more grounded answers?”
You wouldn’t take mediocre work from an associate, grumble and ignore them. You’d invest in the relationship if you wanted it to last.
Another way: build your own curated arsenal of AI tools, then ignore the rest. Find the tools that actually do real work for you, not what others hype.
If you could pick your own team of associates, you’d know each one’s strengths and weaknesses. Same with AI.
Claude’s web search misses what Gemini finds in seconds, so I avoid Claude for research. Gemini writes like a committee; I stopped handing it prose. ChatGPT is strong at voice and images.
At the frontier
AI noise floods the world. Every week brings new models or features. I follow along because I love this tech, but I don’t change my AI usage lightly. I keep a small collection of agents I trust: I ruthlessly test them, break them, then personalize them to my liking.
I wrote in my very first Substack that AI will break, and breaking is the point. I’ve watched it play out in every bootcamp I’ve run: the lawyers who learn AI fastest are those who are comfortable finding AI’s limits.
Every wall you hit is a map of where the frontier is right now.
In fact, if you find your AI performing poorly on a task: congratulations, you’re now a lawyer working at the frontier of this fast-moving, transformative technology.