This week, we’re talking:
The OpenAI files paint a picture, and it's one that's hard to unsee. ֎📂
TV is out, TikTok is in. For the first time, more Americans get their news from social media than from television. 📺👋📱
Execs bet big on AI. Now they’re backpedaling. 🚲🔄
Rubin Observatory will scan the cosmos every night for 10 years. What it finds could reshape our understanding of the universe. 📸🌌🪐💫☄️
AI with a public service mission. Pano AI raised $44M to expand its wildfire detection system—already used by 250+ agencies across 30 million acres. 🦾👨‍🚒
My Take:
I was not planning on reading legal documents into the wee hours of the morning after a long day on the road, but here we are. I'm popping my head out of the rabbit hole to tell you what I found most interesting.
And damn. Some of this stuff is wild. None of it is new, per se. It has all been tweeted, whispered, debated in group chats. But seeing it laid out—documented, sourced, corroborated—makes the pattern impossible to ignore.
1. The NDAs were insane
Which, okay, fine. Most companies use NDAs. That’s not the issue. But OpenAI's were something else entirely: Can’t criticize the company publicly. Can’t even mention that the NDA exists. Break either rule? Kiss your vested equity goodbye.
If your lawyers are drafting equity docs right now, please don’t do this. This is how you destroy trust with your team.
Sam Altman publicly claimed he didn’t know this was happening. But the incorporation documents authorizing the clawback provisions include his actual signature.
2. The profit cap was always fake
Remember their whole "capped profit" thing? How they'd make some money and then everything else would go to humanity? Yeah. In 2023, they quietly added a clause that lets the cap grow 20% per year. Anyone who's done basic compounding math knows that makes it meaningless.
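For scale, here's a minimal sketch of that compounding in Python. The 100x figure is the return multiple OpenAI originally described for first-round investors; everything else is just 1.2 raised to a power:

```python
# How fast does a "profit cap" that grows 20% per year stop being a cap?
# Illustrative arithmetic only; 100x is the multiple OpenAI originally
# described for first-round investors.
base_cap = 100  # original cap, as a multiple of investment

for years in (5, 10, 20, 35):
    effective_cap = base_cap * 1.20 ** years
    print(f"after {years:>2} years: ~{effective_cap:,.0f}x")

# after  5 years: ~249x
# after 10 years: ~619x
# after 20 years: ~3,834x
# after 35 years: ~59,067x
```

A cap that roughly doubles every four years isn't a cap. It's a formality.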
Now they’re just... removing it entirely. This wasn’t some big pivot announcement. They just kind of hoped nobody would notice.
3. Board conflicts? Only a problem when convenient
Several current board members have financial ties to companies that directly benefit from OpenAI's success. That's not uncommon, and probably fine on its own. But Sam previously asked other board members to step down over lesser conflicts. Now similar (or worse) ones are suddenly not an issue. The rules didn't change. The incentives did.
4. They hid a major hack for over a year
In 2023, someone broke into OpenAI's systems and stole sensitive info. They didn't tell anyone. Not the public, not regulators. It only came out more than a year later, after somebody leaked it to the press.
It’s even more egregious when you remember that this is the same company that signed public pledges to share information about AI safety and security risks with the U.S. government and international regulators.
5. Safety team got starved, then pushed out
OpenAI promised 20% of its compute would go to long-term safety research. Spoiler: it didn’t.
Safety researchers asked for resources and got nowhere. As product development ramped up, safety was deprioritized. Some team members resigned. Others were let go.
6. Guy warns about security risks, gets fired for it
Leopold Aschenbrenner, a researcher on OpenAI's Superalignment team, wrote a memo warning that the company's internal security was so weak that someone could walk out with its most powerful models, a serious national security risk. He shared the memo with colleagues and eventually with the board. A few days later, he was fired, and OpenAI explicitly cited the memo as the reason.
That’s not how you handle internal risk reporting.
The thing is...
There’s no single smoking gun here. It’s more like death by a thousand cuts. Page after page, the pattern emerges: a company drifting further and further from the mission that inspired its founding.
Not all at once. Not loudly. Just a slow, menacing creep towards something soulless, with existential implications for the rest of us.
My Stack:
Reuters Dropped Its Digital News Report. Here’s What You Should Know: 📺👋📱
Social media overtook TV as America’s main news source for the first time ever.
TikTok is the fastest-growing news source globally, especially among Gen Z.
Over 70% of Americans say they struggle to tell real news from fake.
Influencers and politicians are now viewed as top sources of misinformation.
Source: Reuters Digital News Report
AI Agents Aren’t Ready for Primetime 🚲🔄
Companies that downsized for the AI takeover are now reversing course.
Gartner predicts that half of orgs will abandon plans to significantly cut their customer service workforce.
Klarna is rehiring after cutting 22% of its workforce.
Another indicator that the future is augmented intelligence.
Sources: Gartner, Futurism, Business Insider
Astronomy’s Biggest Camera Starts Rolling 📸🌌🪐💫☄️
The Vera C. Rubin Observatory just switched on a 3.2-billion-pixel camera—the largest ever built. It'll scan the entire southern sky every few nights for 10 years, generating about 20 terabytes of data a night.
In addition to the many unknown unknowns it might surface, scientists are hoping that by capturing the subtle bending of light around galaxies (gravitational lensing), Rubin will map the invisible scaffolding of the universe.
One photo = 400 4K screens.
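Back-of-envelope, those numbers check out. A quick sanity-check sketch in Python, using standard 4K UHD resolution and the published specs above:

```python
# Sanity-checking the Rubin stats. Pixel counts and data rate are the
# published specs; the rest is plain arithmetic.
camera_pixels = 3.2e9        # LSST camera: 3.2 gigapixels
uhd_pixels = 3840 * 2160     # one 4K UHD screen: ~8.3 megapixels
print(f"~{camera_pixels / uhd_pixels:.0f} 4K screens per image")

nightly_tb = 20                            # ~20 TB of data per night
total_pb = nightly_tb * 365 * 10 / 1000    # naive 10-year total, ignoring downtime
print(f"~{total_pb:.0f} PB over the full survey")

# ~386 4K screens per image (hence the rounded "400")
# ~73 PB over the full survey
```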
AI Gets Into Firefighting 🦾👨‍🚒
Pano AI just raised $44 million to expand its wildfire detection network. The system uses smart cameras and real-time alerts to help emergency crews respond faster.
It now covers 30 million acres and supports over 250 agencies across three countries.
It’s one of the clearest use cases for AI in public safety—and it’s scaling fast.
Sources: Wall Street Journal, Pano AI
P.S. There's an interview with Aschenbrenner on the Dwarkesh Podcast, where he talks about what happened, what OpenAI was like from the inside, and what he learned. It's pretty interesting.