(Reposted from https://www.iamcharliegraham.com/ai-coding-and-the-peanut-butter-jelly-problem/)
Over the past year, I’ve been fully immersed in the AI-rena: building products at warp speed with tools like Claude Code and Cursor, and watching the space evolve daily. In the last six months alone, I’ve used these tools to develop:
- 🧠 Betsee.xyz: a prediction market aggregator that can even surface relevant prediction markets for tweets.
- 📝 TellMel.ai: an empathetic personal biographer that helps you share life stories and lessons
- 📞 GetMaxHelp.com: a family-powered tech support line built on AI and voice
- 💬 YipYap.xyz: a thread-based community chat app
Even my son has joined the AI-rena, playing with tools like Lovable, Replit, and Bolt to build a learn-to-type game styled after Brawl Stars (which I’ll post about later). It’s been energizing and eye-opening. Six months ago, I wouldn’t have trusted AI to do much beyond autocomplete. Now, I don’t want to code without it.
But despite all that progress, I keep running into the same issue—one that takes me back to my very first computer science class.
Way back in college, I took one of the earliest iterations of the now-famous CS50 course at Harvard, taught by the fantastic Margo Seltzer. Today, CS50 is taught online around the world and is one of the most popular computer science courses anywhere. But back then, it wasn’t famous, and we got to do a classroom exercise that still sticks with me (yes, pun intended) and that they still teach to this day.
On the first day of class, Margo walked in carrying a loaf of bread, a jar of peanut butter, and a jar of jelly.
“I’m a computer,” she told us. “You are the programmers. Give me instructions—one step at a time—on how to make a peanut butter and jelly sandwich.”
And then came the chaos.
The first student said, “Take some bread out of the bag.” Instead of neatly removing two slices, Margo ripped a hole in the bag and crushed five or six slices into a mashed lump in her fist, because that, too, counts as taking “some bread out of the bag.”
The next person said, “Put the bread down.” Margo dropped the clump onto the floor. After all, “down” could mean the ground.
Then someone tried: “Put jelly on the bread.” You know where this is going… Margo dumped the entire jar of jelly directly onto the pile. No spoon. No spreading. Just one catastrophic glob of sugar.
By the end, there was peanut butter, jelly, and bread everywhere. No sandwich. But the point had landed: a computer does exactly what you say, not what you mean, so unless your instructions are unambiguous, you won’t get what you want.
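For fun, here’s roughly what the exercise is asking for: a minimal sketch in Python of instructions with the ambiguity squeezed out. The steps are my own reconstruction, not Margo’s actual rubric.

```python
# A tongue-in-cheek sketch of "unambiguous": every object, quantity, and
# destination is pinned down, because the "computer" will exploit anything
# left unsaid. (These steps are my invention, not Margo's.)

def make_pbj_sandwich():
    steps = [
        "Untwist the tie and open the top of the bread bag.",
        "Remove exactly two slices of bread, one at a time, without tearing them.",
        "Place both slices flat on the counter, side by side.",
        "Unscrew the lid of the peanut butter jar and set the lid on the counter.",
        "Using a butter knife, scoop one tablespoon of peanut butter from the jar.",
        "Spread the peanut butter evenly across the top face of the left slice.",
        "Repeat the scoop-and-spread steps with one tablespoon of jelly on the right slice.",
        "Lift the right slice, flip it over, and press it onto the left slice so the spreads meet.",
    ]
    for number, step in enumerate(steps, start=1):
        print(f"Step {number}: {step}")

make_pbj_sandwich()
```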
LLMs are undeniably more advanced than the computers of 20 years ago. Honestly, today’s AI could probably make a decent peanut butter and jelly sandwich. It would likely infer that you want two slices of bread, placed on a counter, spread neatly with a normal amount of jelly and peanut butter.
But the problem reappears when you move past familiar territory. Using AI tools often feels like working with a junior developer from across the globe—someone fast, capable, and willing, but who lacks your product context, customer insight, or nuance.
If your “sandwich” is a product without an obvious recipe (a novel app, an unfamiliar UX, a unique set of features), LLMs struggle. They’re great at copying what’s been done before and remixing code that already exists. But ask for something new? Something creative, or specific to your vision? Something that “just works” for your particular use case? Now you’re back in that classroom, giving vague or ambiguous instructions to someone who doesn’t know your customer, your context, or what “done right” actually looks like.
Living on the frontlines of the AI-rena has taught me that prompt engineering isn’t the real bottleneck. The real differentiator is clarity and communication. And I don’t mean the kind of prompt engineering where you trick the AI into doing something by telling it its grandmother is going to die or pretending it’s an expert in a field. This is something much harder: having a clear, precise vision of what you want built, knowing what “right” looks like, and being able to explain it step by step—and course-correct when the AI veers off track.
This ability to define your desired outcome in crisp, complete terms is one of the most important superpowers of the AI era. AI can only infer so much—you still need to give it context and clear instructions.
Most people won’t do that. They’ll wave their hands, type vague “best practices” prompts, and hope the AI figures it out. More often than not, they’ll end up with a gooey mess.
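To make the contrast concrete, here’s a hypothetical pair of prompts for an invented feature. The feature, field names, and wording are all illustrative, not from any real project of mine.

```python
# Hypothetical illustration: two prompts asking for the same (invented) feature.
# The vague one leaves the AI guessing; the precise one spells out the trigger,
# the audience, the constraints, and what "done" actually means.

vague_prompt = "Add a notifications feature. Use best practices."

precise_prompt = """\
Add email notifications for new replies in a thread.
- Trigger: a user posts a reply in a thread another user has participated in.
- Audience: every participant in the thread except the reply's author.
- Batching: at most one email per thread per recipient per hour.
- Content: thread title, the first 200 characters of the reply, a link to the thread.
- Opt-out: respect the existing per-user "email_notifications" setting.
Done means: unit tests cover the batching rule, and no email is ever sent
to the reply's author.
"""
```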
In the AI-rena, success won’t go to the fastest coders, but to those who can both clearly understand and explain how to turn a fuzzy idea into something that actually works… and maybe even walk away with a sandwich that didn’t end up on the floor.
Footnotes:
- This was hard to write: while the class was about making a PB&J, I personally hate peanut butter and jelly sandwiches.
- You can see videos of the PB&J lecture here and here. I remember it being far more chaotic and messy in our year, though!