The next software failure will kill people
CrowdStrike bricked the world. One year later, we still don’t have guardrails
This week, we’re talking…
One year since CrowdStrike bricked the world, we’ve learned… nothing? 🧱🌍
Teenagers are confiding in AI bots instead of people. Parents, good luck. 👧 ❤️ 🤖
Crypto got its first big legislative win, and bought it fair and square. ₿🤑
Surveillance pricing isn’t theoretical anymore. It’s boarding at Gate 32. 🕵🏼 🏷️
Medicaid records handed to ICE. Health data used as a deportation tool. If you ever wondered how trust in public systems dies, this is it. ⚕️🛂
My Take:
A regular-looking guy piggybacked on a card swipe behind an employee. Like an absent-minded engineer, he looked too distracted to do his own swipe. No biggie. He entered the office, plugged in a device that connected to the WiFi network, and gained fast access to sensitive customer and financial data.
Another guy spotted an employee's badge hanging from their belt buckle, snapped a pic on his cell phone, and used a badge copier to clone it. Counterfeit badge in hand, he walked into the office, found some unmanned desks with open laptops, and installed malware that downloaded sensitive data in 15 seconds.
It wasn’t Mission Impossible. It was Salesforce's internal red team, a floor of security operatives (all sporting the same black v-neck or black crewneck uniform), trained to break your software and challenge your security procedures. Social engineers. Ethical spies. They tested not just our code, but our human firewall.
Some weeks later, they handed us and another recently acquired company some worrisome reports. Secrets exposed. Flawed vaulting. Inadequate key rotation.
My baby, my product, called ugly in crisp bullet points with supporting appendices.
At first I was pissed. Eventually, I realized: this is what grown-up security looks like.
At Salesforce, for all its frustrations, they’ve institutionalized paranoia in a way that drives results. You couldn't ship code without bulletproofing it first. You couldn't hand-wave flaws away. The culture was less "move fast and break things." It was more: move carefully, because the things we're building are consequential AF.
In contrast, Saturday will mark one year since the CrowdStrike outage.
One update. Tens of millions of machines, bricked.
Since then, CrowdStrike has added safeguards: phased rollouts, better testing, red-team layers. Good. Necessary. The right things, if only a year too late.
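A phased rollout is a simple idea in code: ship to a small ring of machines first, watch a health signal, and halt before the blast radius grows. Here's a minimal toy sketch of that gate; the ring sizes, error budget, and function names are all invented for illustration, not CrowdStrike's actual pipeline:

```python
# Toy sketch of a phased (canary) rollout gate.
# Ring sizes, the error budget, and the crash_rate check are hypothetical.

ROLLOUT_RINGS = [0.01, 0.05, 0.25, 1.00]  # fraction of the fleet per phase
ERROR_BUDGET = 0.001                      # halt if >0.1% of hosts report crashes

def deploy_phased(fleet, push_update, crash_rate):
    """Push an update ring by ring, halting on an elevated crash rate."""
    deployed = 0
    for ring in ROLLOUT_RINGS:
        target = int(len(fleet) * ring)
        batch = fleet[deployed:target]
        push_update(batch)
        deployed = target
        if crash_rate(batch) > ERROR_BUDGET:
            return ("halted", deployed)  # stop before the next, larger ring
    return ("complete", deployed)
```

The whole point: a bad update that would have bricked everything instead stalls at 1% of the fleet.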
The trouble is, it shouldn’t take a global failure to develop a grown-up risk posture.
When Boeing's bolts come loose, the FAA doesn't "wait and see." They ground planes and crawl into every crevice of the supply chain until the risk is understood and addressed.
That's what happens when a company builds critical infrastructure and fails.
Or look at what happens when there IS no regulatory oversight.
The British Post Office spent 16 years prosecuting over 900 innocent subpostmasters for "theft" caused by software bugs in Fujitsu's Horizon system. Fujitsu knew about the flaws from day one — even had engineers secretly altering branch accounts to cover up discrepancies. They said nothing. For over a decade, they let innocent people go to prison rather than admit their software was broken. Thirteen people died by suicide.
And why? Because unlike airlines or pharmaceuticals, critical software infrastructure has no mandatory oversight.
We let software companies patch first and explain later. We let them write their own rules. But why?
We don't let airlines decide how safe is safe enough. We don't let pharmaceutical companies self-regulate clinical trials. We don't let nuclear plants set their own safety standards.
But somehow, we're fine letting software companies — who control our hospitals, our power grids, our financial systems — decide for themselves how seriously to take security.
This isn't about punishing CrowdStrike. It's about recognizing that the stakes are too high to leave this to individual company culture.
Because this wasn't just a CrowdStrike problem. It was a market-wide blind spot: no stress tests, no mandatory standards, no centralized oversight.
The systems are patched. The regulatory framework? Still missing.
It's been a year. We're still treating critical software infrastructure like a hobby project where companies get to choose their own adventure. What's needed is a two-pronged regulatory framework: one that addresses the technical, screen-based risks, the other that attacks the human risks. As my team and I learned first-hand, they're *both* critical. And there's no partial credit for addressing one to the exclusion of the other.
The stakes keep rising. Every year, more of our critical infrastructure moves to software. More hospitals. More power grids. More emergency systems. The next failure won't just brick computers — it will kill people.
And we'll still be asking the same question: Why did we let them regulate themselves?
The Intimacy Economy Is Here. Good Luck, Parents 👧 ❤️ 🤖
A new report released yesterday by Common Sense Media has some appalling findings:
73% of teens have used AI companions.
45% use them regularly.
1 in 3 has turned to an AI companion instead of a person for a serious conversation.
1 in 4 has shared personal information with one.
Intimacy simulators built by companies that profit from attention, not well-being. The line between connection and manipulation is vanishing fast. Hard to overstate the impact here.
Source: Common Sense Media
Crypto’s Big Win Was Paid for in Advance ₿🤑
The crypto industry just scored its first major legislative win… and it definitely didn’t happen by accident. The GENIUS Act and its companion bills were the product of years of lobbying, campaign donations, and backroom dealmaking. This is Citizens United in action: a policy written by the people who stand to profit most. It paves the way for banks and big players to dominate crypto under the banner of “clarity,” while decentralization and consumer protection are left behind.
Sources: NYTimes, Bloomberg, WSJ
Surveillance Pricing Takes Off 🕵🏼 🏷️
Delta’s experimenting with ticket prices that shift based on who you are. Not your destination. Not the seat. You. This is surveillance pricing in action—where your personal data becomes the variable in a private algorithmic auction you don’t know you’re in. Loyalty programs, device type, even your browsing behavior can be used to decide how much pain you’ll tolerate. The price isn’t the price anymore. It’s a guess at your breaking point.
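Mechanically, this kind of pricing is trivially easy to build, which is part of the problem. Here's a toy illustration; every signal, weight, and number below is invented, and no airline's actual model is known or implied:

```python
# Toy illustration of personalized ("surveillance") pricing.
# All features and weights are invented for illustration only.

BASE_FARE = 300.00

def personalized_fare(profile):
    """Nudge a base fare using behavioral signals instead of route or seat."""
    multiplier = 1.0
    if profile.get("device") == "new_flagship_phone":
        multiplier += 0.10   # pricier device, assume higher willingness to pay
    if profile.get("searches_last_week", 0) >= 3:
        multiplier += 0.15   # repeated searches read as urgency
    if profile.get("loyalty_tier") == "elite":
        multiplier -= 0.05   # small discount to retain a high-value flyer
    return round(BASE_FARE * multiplier, 2)
```

Same seat, same route: an anxious repeat searcher on a new phone pays 25% more than a blank profile. That's the "guess at your breaking point" in ten lines.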
Sources: Fortune, Ars Technica, Gizmodo
Medicaid Data Used for Immigration Enforcement ⚕️🛂
The Trump administration gave ICE access to state Medicaid records, quietly turning a health program into an enforcement tool. It’s a reminder that data collected for one purpose can be repurposed without consent, oversight, or warning.
Pretty infreakingcredible, n'est-ce pas?