What LLMs and Stoned Jazz Musicians Have in Common
Genius, chaos, and absolutely no memory of what just happened
This week, we’re talking:
What LLMs and stoned jazz musicians have in common
China can’t out-innovate the US, so they’re writing the rulebook instead. 📝🌏
Nvidia’s chips might have surveillance backdoors. Or this is a diplomatic feint. Either way, yikes. 💻🚨
Trump’s executive order makes model alignment a culture war weapon. ❌🇺🇸
Big Tech is buying nuclear plants to feed the AI beast. Hope the grid’s ready. ☢️⚡
Amazon wants you to star in your own cartoon. AI-generated fanfiction goes mainstream. 📺🎮
My Take:
I used to play music with a guy named Uriah who was an absolute freak on the bass. Rarely played the same thing twice, and often played pure magic. One night he ripped through a solo so good the whole room stopped.
“Holy shit,” I said. “Play that again.”
He looked up, totally blank: “I can’t.” He had already forgotten what he played.
That’s kind of what working with LLMs feels like.
You get brilliance. But try to recreate it? Pffft. Gone.
Same prompt. Same model. *Wildly different results.*
Turns out, LLMs have a lot in common with a stoned jazz master. Improvising wildly, then staring at you like you’re the weird one for asking what the hell just happened.
Sometimes that’s a feature. Get a mix of great ideas and let a thousand flowers bloom. Awesome.
But for a business user? The second you move into numbers and forecasts, it’s a huge f***ing problem. Business users don’t want improv. If the numbers flicker, trust is gone.
So the nondeterminism problem is a doozy for people (like me!) building enterprise AI. And it’s one of those problems that lives below the waterline.
We’re building around it – snapshotting outputs, forcing consistency, caching answers – because we have to. (Bonus: it saves $$$ because you’re not hammering the model again and again, paying all the extra gas fees, trying to get a stable answer.)
But more than the money, it’s a clunky workaround for a deeper problem. It keeps us from losing our minds every time the model decides to improvise.
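For the curious, here’s roughly the shape of that workaround. This is a minimal sketch, assuming a Python stack: `call_model` is a hypothetical stand-in for whatever LLM client you actually use, and the in-memory dict stands in for Redis or a database table.

```python
import hashlib
import json

# Snapshot-and-cache sketch: pin whatever determinism knobs the API
# exposes, then serve the first answer for every repeat of the same ask.

_snapshots: dict[str, str] = {}

def call_model(model: str, prompt: str, temperature: float, seed: int) -> str:
    """Hypothetical placeholder for a real LLM client call."""
    raise NotImplementedError

def stable_answer(model: str, prompt: str, seed: int = 42) -> str:
    # Key the snapshot on everything that should make the answer identical.
    key = hashlib.sha256(
        json.dumps({"model": model, "prompt": prompt, "seed": seed},
                   sort_keys=True).encode()
    ).hexdigest()

    if key not in _snapshots:
        # temperature=0 and a fixed seed narrow the variance but don't
        # eliminate it, so the first answer gets snapshotted and reused.
        # Reuse is also what saves the repeated inference cost.
        _snapshots[key] = call_model(model, prompt, temperature=0.0, seed=seed)
    return _snapshots[key]
```

The cache key is the real design decision: anything that should change the answer (model version, prompt, retrieval context) belongs in it, or stale snapshots leak across inputs.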
But I know we’re not the only ones banging our heads against the keyboard. Curious how others are handling nondeterminism in their stack. What’s working? What’s breaking? Who’s losing their mind?
And remember, if your LLM ever does play the perfect solo? Don’t blink. Screenshot it. Because it won’t remember a thing in the morning.
My Stack:
China wants to write the rules for AI 🇨🇳
At the World AI Conference in Shanghai, China proposed a 13-point global governance plan: a UN-led oversight body, open-source coordination, and international frameworks for “trustworthy AI.” They know they won’t out-innovate the U.S. on foundational models or global brand power. But they also know the West is in chaos — fragmented regulation, stalled alignment debates, reactive diplomacy.
So the CCP fills the vacuum and proposes structure, because they understand something Silicon Valley often forgets:
What wins isn’t necessarily what works best but what gets adopted first.
📍 Source: Wired
Nvidia gets grilled over chip backdoors 💻
China’s Cyberspace Administration summoned Nvidia execs this week over claims that their H20 AI chips include remote tracking or shutoff features. These are the same chips the U.S. just un-banned for export. Either Nvidia’s tech has surveillance capabilities it shouldn’t OR the accusation itself is a new diplomatic weapon. Neither option is great.
📍 Source: Reuters
Trump bans “woke AI” in federal systems ❌
A new executive order blocks government contractors from deploying AI that reflects DEI values or critical race theory. It’s the first major attempt I’m aware of to legislate model alignment at the ideological level. What OpenAI feared from China is now happening here: state-aligned models.
📍 Source: AP News
Big Tech is going nuclear ☢️
Microsoft, Amazon, and Meta are snapping up nuclear plants to power their next-gen data centers. Some are even investing in small modular reactors. Watch this space for how power grid dysfunction and AI demands collide. Spoiler: it’s going to get complicated quickly.
📍 Source: Washington Post
Amazon backs AI-generated streaming startup Showrunner 📺
Fable Studio just opened its AI-powered streaming service to the public, and Amazon is backing the play. For $10–$40/month, users can generate animated shows, remix licensed IP, and even insert themselves into scenes using Fable’s SHOW-2 model. Showrunner isn’t trying to compete with Hollywood. It’s trying to become a new medium, closer to video games than television. Think: fanfiction meets Fortnite. TL;DR: AI shows may not win Oscars, but they might win fandom. And fandom prints money.
📍 Source: Variety
> Curious how others are handling nondeterminism in their stack.
I have an acquaintance who has been writing about this topic, and I’ve been finding the discussion interesting (e.g., https://www.varungodbole.com/p/why-do-companies-struggle-shipping).
Personally, I don’t believe that LLMs on their own will ever achieve anything approaching sufficiently reliable, predictable, non-hallucinatory behavior. What this means for software that must harness their new potential, yet must also be depended upon, is what we’re all grappling with at the moment.