I Dismissed Prompting as a Gimmick. I Was Wrong.
What I thought was a temporary hack turned out to be the core skill.
This week, we’re talking:
Last year I rolled my eyes at “prompt engineers.” This year I’m eating crow—and rewriting workflows. 🧠⌨️🛠️
A new AI model makes weather look less like prediction and more like divine foresight. 🌀📡
To find dark matter, physicists are suspending magnets in silence and waiting for the universe to flinch. 🌌🧲
Radiohead and Erasure have better privacy advice than most CMOs. 🪩🎛️
What happens when Jony Ive meets Sam Altman? A $6.5B design fever dream that could change how we touch tech. 🧠🛠️
Congress wants to block state AI laws for a decade. The reason they’re giving isn’t the real one. 🔒🏛️
$11.6B, 50,000 chips, and one giant Texas data center. This is what an AI arms race actually looks like. ⚡💰
DeepMind built an AI that evolves its own algorithms—then got out of the way. Math may never be the same. 📄🧬
A major newspaper printed a summer reading list full of books that don’t exist. And no one noticed. 📚🗞️
My Take:
Last year, if someone told me they were a “prompt engineer,” I’d have laughed. Or rolled my eyes. Or—if I was feeling generous—gently suggested they pick a more durable hustle.
To me, prompt engineering sounded like one of those job titles you invent on the walk to your coworking space: part Mad Libs, part LinkedIn cosplay. It reeked of AI-adjacent opportunism.
So I dismissed it.
I said—confidently and repeatedly—that prompt engineering wasn’t a real profession. That LLMs would soon be smart enough to understand what you meant, not what you typed. That the whole thing was a short-lived grift.
I was wrong.
What I dismissed as a gimmick turned out to be craft. Prompting matters. More than I expected. Sometimes more than fine-tuning. Sometimes more than model choice.
Here’s what changed:
Over the past 12 months, I’ve watched teams get dramatically better results not by switching models, not by fine-tuning… but by improving their prompts.
Same LLM. Same infrastructure. Different outcome. All because someone treated the prompt like a key input that deserved rigor, iteration, and intention—not something you dash off like a text message.
Turns out prompt engineering isn’t training wheels. It’s not scaffolding until the model learns to ride solo.
Prompting is the translation layer between what we want and what the model understands.
Why It Works
Prompting is high-leverage differentiation.
Most models are trained on the same data. But the prompts? That’s where things get interesting. The prompt is your point of view. Your tone. Your strategy. Your guardrails. It’s where you encode your edge.
Prompting often outperforms fine-tuning.
Fine-tuning is expensive. It’s rigid. It’s slow. Prompting is agile, iterative, and surprisingly effective—especially when paired with a system designed to scaffold and evaluate responses. When done right, it feels like fine-tuning… without the overhead.
Prompting is a product skill.
It’s not just for devs or engineers—it’s for product managers, marketers, domain experts. Anyone who wants to coax better thinking from a model needs to understand how to frame, sequence, and adapt prompts. It’s less about clever wordplay and more about building AI-native workflows.
Context matters.
Strategic prompts that include examples, constraints, clear objectives, and instructions lead to much better results than terse prompts that fail to paint the target. Don’t say “give me three taglines for an exercise app.” Instead, say something like, “I want a playful, confident tagline with no more than five words for an exercise app that appeals to Millennials and Gen-Z’ers. Examples of other taglines I like are ‘Tastes great, less filling’ and ‘I’m lovin’ it.’”
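The “context matters” point can be sketched in a few lines. This is purely illustrative, not a real library: `build_prompt` and its parameters are hypothetical names for the idea of assembling a prompt from an objective, constraints, and examples instead of dashing off a one-liner.

```python
# Illustrative sketch: the same request as a terse prompt vs. a structured
# prompt with an objective, audience, constraints, and style examples.
# All function and parameter names here are made up for this example.

def build_prompt(task, audience=None, constraints=None, examples=None):
    """Assemble a structured prompt from its parts; omitted parts are skipped."""
    parts = [f"Task: {task}"]
    if audience:
        parts.append(f"Audience: {audience}")
    if constraints:
        parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    if examples:
        parts.append("Examples of the style I like:\n" +
                     "\n".join(f"- {e}" for e in examples))
    return "\n\n".join(parts)

terse = "Give me three taglines for an exercise app."

structured = build_prompt(
    task="Write three taglines for an exercise app.",
    audience="Millennials and Gen-Z",
    constraints=["playful, confident tone", "no more than five words each"],
    examples=["Tastes great, less filling", "I'm lovin' it"],
)
```

Both strings go to the same model; only the second encodes your point of view, your guardrails, and your edge.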
In your zeal to get it right, though, you don’t want to overshoot the mark with too much context. There’s a balance to be struck between too much and too little. First, if you overload the context window, your compute costs can climb unexpectedly. At one of our companies at super{set}, we saw how a naive approach to managing context in an application we were building could’ve led to a per-customer monthly cost-to-serve of $2,000. Engineering a slimmer, still effective context window let us cut that cost by 100x.
The other challenge to be mindful of is what’s called the needle-in-a-haystack problem. With too much context, it’s hard for the model to discern what’s truly important. And if you have a specific idea or hook in mind – the needle – you want to call it out and guide the model toward the outcome you have in mind. If the AI doesn’t know where to focus, it might miss the gist of your question, answer the wrong part, or give vague results.
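The “slimmer context window” idea above can be sketched as a simple filter: score candidate context chunks against the question and keep only the most relevant few, rather than stuffing everything in. The keyword-overlap scoring here is a deliberately naive stand-in (real systems typically use embeddings), and `select_context` is a hypothetical name, not an API from any of the tools mentioned.

```python
# Naive sketch of trimming context to fight cost and the needle-in-a-haystack
# problem: rank chunks by word overlap with the question, keep the top few.
# Purely illustrative; production systems would use embedding similarity.

def select_context(question, chunks, max_chunks=3):
    """Return the chunks that share the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:max_chunks]

chunks = [
    "quarterly weather data for the region",
    "exercise app taglines aimed at millennials",
    "notes from last week's board meeting",
]
picked = select_context("taglines for an exercise app", chunks, max_chunks=1)
```

Keeping only the chunks the question actually needs shrinks the context window (and the bill) while pointing the model straight at the needle.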
The Real Admission
So yes, I was wrong about prompt engineering. But the deeper lesson here isn’t just about prompts. It’s about how quickly we—especially in tech—reach for the dismissive take.
When we don’t understand something, we call it a gimmick.
When we can’t measure it easily, we call it hype.
When something feels too simple to be powerful, we assume it’s a fad.
So once again, I was wrong, but I’m here now—less smug, more curious.
My Media Diet:
📡 AI Forecasting Leaves Weathermen in the Dust
The Aurora model is making traditional meteorologists look like amateur astrologers. With faster, more accurate weather predictions than anything humans can manage, AI is now controlling the skies.
🌀 Science News
🧲 Levitating Magnet Joins the Hunt for Dark Matter
Physicists are floating magnets in vacuum chambers hoping to detect dark matter. No signs of the cosmic bogeyman yet—but the Force is strong with this one.
🌌 Science News
🎧 Radiohead and Erasure Know What Adtech Forgot
Allison Schiff drops a synth-pop takedown of consent theater in adtech. When “You do it to yourself” meets “Give a little respect,” you know the privacy apocalypse has a soundtrack.
🪩 AdExchanger
🛠️ Jony Ive Joins the AI Borg
OpenAI just bought Jony Ive’s hardware startup in a $6.5B all-stock deal. Altman wants to move us beyond screens—possibly to a future where AI whispers in your ear and LoveFrom designs your reality.
🧠 NYT
🏛️ Congress Pushes a 10-Year Ban on State AI Laws
House Republicans are pushing a 10-year ban on state AI laws. Why? To avoid a “patchwork” of regulation. Translation: Let’s preemptively kneecap any attempt to hold AI companies accountable at the state level. Forty state AGs are calling BS.
🔒 WSJ
💰 Crusoe Hauls in $11.6B to Power OpenAI’s Chip Dreams
In what might be the most Texas thing ever, Crusoe is building a giant AI data center in the Lone Star State to house 50,000 Nvidia chips—funded by a casual $11.6B raise. This is OpenAI’s biggest compute hub yet, and the latest salvo in the AI arms race.
⚡ Reuters
🧬 AlphaEvolve Is Teaching Itself to Write the Rules
DeepMind just dropped a paper on AlphaEvolve—a self-improving AI system that evolves algorithms like Darwin in a datacenter. It’s using LLMs, simulations, and evaluation loops to discover better math faster than most grad students. Today: matrix multiplication. Tomorrow: who knows, quantum pizza recipes?
📄 PDF
📖 Fiction by AI, Published as Fact
The Chicago Sun-Times ran a summer reading list packed with entirely fictional books—hallucinated by AI, unedited by humans, and syndicated nationwide. What makes it worse? The author admitted they “sometimes use AI for background” but didn’t bother checking this time. Lincoln Michel’s piece is a sobering reflection on how editorial shortcuts, once unthinkable in print journalism, are becoming the norm in the age of generative slop.
📚 Substack – Lincoln Michel